The variational autoencoder (VAE) and the generative adversarial network (GAN) are two classic generative
models that generate realistic data from a predefined prior distribution, such as a Gaussian distribution.
One advantage of VAE over GAN is its ability to simultaneously generate high-dimensional data and learn
latent representations that are useful for data manipulation. However, it has been observed that a trade-off exists between reconstruction and generation in VAEs: forcing the latent
representations to match the prior distribution can destroy the geometric structure of the data manifold. To address this issue, we propose an autoencoder-based generative model in which the prior learns the embedding distribution,
rather than the latent variables being forced to fit a predefined prior. To preserve the geometric structure of the data
manifold as faithfully as possible, the embedding distribution is learned with a simple regularized autoencoder
architecture. An adversarial strategy is then employed to learn the latent mapping. We provide both theoretical and experimental support for the effectiveness of our method, which resolves the conflict
between preserving the geometric structure of the data manifold and matching the prior distribution in the latent
space. The code is available at https://github.com/gengcong940126/GMIEL.
Title: Solving the reconstruction-generation trade-off: Generative model with implicit embedding learning
Author(s): Cong Geng, Jia Wang, Li Chen, Zhiyong Gao
Series: 549
Publisher: Elsevier
Year: 2023
1. Introduction
2. Problem definition
3. Method
4. Experiments
5. Conclusion
CRediT authorship contribution statement
Data availability
Declaration of Competing Interest
References