Solving the reconstruction-generation trade-off: Generative model with implicit embedding learning


The variational autoencoder (VAE) and the generative adversarial network (GAN) are two classic generative models that generate realistic data from a predefined prior distribution, such as a Gaussian distribution. One advantage of the VAE over the GAN is its ability to simultaneously generate high-dimensional data and learn latent representations that are useful for data manipulation. However, it has been observed that a trade-off exists between reconstruction and generation in the VAE, as matching the prior distribution for the latent representations may destroy the geometric structure of the data manifold. To address this issue, we propose an autoencoder-based generative model in which the prior learns the embedding distribution, rather than forcing the latent variables to fit the prior. To preserve the geometric structure of the data manifold as faithfully as possible, the embedding distribution is trained using a simple regularized autoencoder architecture. An adversarial strategy is then employed to learn the latent mapping. We provide both theoretical and experimental support for the effectiveness of our method, which eliminates the contradiction between preserving the geometric structure of the data manifold and matching the distribution in latent space. The code is available at https://github.com/gengcong940126/GMIEL.
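The abstract describes a two-stage design: first train a regularized autoencoder so the embedding preserves the geometry of the data manifold, then adversarially map a simple prior onto the learned embedding distribution instead of forcing the embedding toward the prior. The sketch below illustrates that idea in PyTorch; the network sizes, the squared-norm regularizer, the standard GAN losses, and all names (Encoder, Decoder, LatentGenerator, LatentDiscriminator, train_step) are illustrative assumptions, not the authors' implementation, which is available at the linked repository.

```python
# Minimal sketch of the two-stage idea in the abstract (assumed details, not the paper's code).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, z_dim=16, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
    def forward(self, z):
        return self.net(z)

class LatentGenerator(nn.Module):
    # Maps samples from a simple prior (e.g. Gaussian noise) to the learned embedding space.
    def __init__(self, noise_dim=16, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
    def forward(self, eps):
        return self.net(eps)

class LatentDiscriminator(nn.Module):
    # Distinguishes encoder embeddings from mapped prior samples (adversarial latent matching).
    def __init__(self, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, z):
        return self.net(z)

def train_step(x, enc, dec, gen, disc, opt_ae, opt_gen, opt_disc, reg_weight=1e-3):
    bce = nn.BCEWithLogitsLoss()
    batch = x.size(0)

    # Stage 1: regularized autoencoder -- reconstruction plus a light penalty on the
    # embedding norm, so the data-manifold geometry is disturbed as little as possible.
    z = enc(x)
    ae_loss = nn.functional.mse_loss(dec(z), x) + reg_weight * z.pow(2).mean()
    opt_ae.zero_grad()
    ae_loss.backward()
    opt_ae.step()

    # Stage 2: adversarially push mapped prior noise toward the embedding distribution,
    # keeping the encoder fixed for this step.
    eps = torch.randn(batch, 16)          # 16 matches the default noise_dim above
    with torch.no_grad():
        z_real = enc(x)
    z_fake = gen(eps)

    d_loss = (bce(disc(z_real), torch.ones(batch, 1)) +
              bce(disc(z_fake.detach()), torch.zeros(batch, 1)))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    g_loss = bce(disc(z_fake), torch.ones(batch, 1))
    opt_gen.zero_grad()
    g_loss.backward()
    opt_gen.step()
    return ae_loss.item(), d_loss.item(), g_loss.item()
```

Under these assumptions, sampling amounts to drawing noise, passing it through the latent generator, and decoding, so the autoencoder's reconstruction quality is never traded against prior matching.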

Author(s): Cong Geng, Jia Wang, Li Chen, Zhiyong Gao
Series: 549
Publisher: Elsevier
Year: 2023

Language: English

Solving the reconstruction-generation trade-off: Generative model with implicit embedding learning
1. Introduction
2. Problem definition
3. Method
4. Experiments
5. Conclusion
CRediT authorship contribution statement
Data availability
Declaration of Competing Interest
References