Perceiving Systems, Computer Vision

From Variational to Deterministic Autoencoders

19 April 2020

04:56

paper: https://arxiv.org/abs/1903.12436

code: https://github.com/ParthaEth/Regularized_autoencoders-RAE-

abstract: Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce "blurry" images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.

citation:

@inproceedings{ghosh2020from,
  title={From Variational to Deterministic Autoencoders},
  author={Partha Ghosh and Mehdi S. M. Sajjadi and Antonio Vergari and Michael Black and Bernhard Scholkopf},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=S1g7tpEYDS}
}
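The ex-post density estimation idea from the abstract can be sketched in a few lines: after training a deterministic autoencoder, fit a density model to the latent codes produced by the encoder, then sample from that model and push the samples through the decoder. The paper fits a Gaussian mixture over the latents; the minimal sketch below uses a single full-covariance Gaussian and synthetic latent codes (both are simplifying assumptions, not the paper's exact setup) to keep it dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in latent codes: in the actual method these come from running
# the trained deterministic encoder over the training set.
z = rng.normal(loc=1.0, scale=0.5, size=(5000, 2))

# Ex-post density estimation: fit a density model to the latents.
# The paper uses a GMM; a single full-covariance Gaussian is the
# simplest instance of the same idea.
mu = z.mean(axis=0)
cov = np.cov(z, rowvar=False)

# Sample new latent codes; these would then be fed to the decoder
# to generate new data points.
z_new = rng.multivariate_normal(mu, cov, size=100)
print(z_new.shape)  # (100, 2)
```

Because the density model is fit after training, the same step can be applied to the latents of an already-trained VAE to improve its sample quality, as the abstract notes.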
