Abstract
The Bidirectional Generative Adversarial Network (BiGAN) is a generative model with an invertible mapping between the latent space and the image space. This mapping allows us to encode real images into latent representations and to reconstruct input images. However, our preliminary experiments showed that the joint probability distributions learned by the generator and the encoder are inconsistent, which leads to poor-quality mappings. To address this issue, we propose an architecture-agnostic additional learning method that brings the two joint probability distributions closer together. In our experiments, we evaluated reconstruction quality on synthetic and natural image datasets and found that the proposed additional learning improves the invertible mapping of the BiGAN.
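For readers unfamiliar with the setup the abstract refers to, the sketch below illustrates the BiGAN structure in simplified form: an encoder E maps images to latent codes, a generator G maps latent codes to images, a discriminator D sees joint pairs (x, E(x)) and (G(z), z), and reconstruction is obtained as G(E(x)). This is a minimal toy illustration only; the network sizes, the names E/G/D, and the reconstruction penalty shown as a stand-in for the "additional learning" term are assumptions, not the paper's actual architecture or loss.

```python
import torch
import torch.nn as nn

# Toy dimensions (assumptions for illustration, e.g. flattened 28x28 images).
latent_dim, image_dim = 64, 784

# Encoder E: image -> latent code.
E = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

# Generator G: latent code -> image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim), nn.Tanh())

# Discriminator D: operates on joint pairs (image, latent code).
D = nn.Sequential(nn.Linear(image_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, 1))

x = torch.randn(8, image_dim)   # a batch of (toy) real images
z = torch.randn(8, latent_dim)  # latent codes drawn from the prior

# BiGAN discriminates between two joint distributions:
# (x, E(x)) induced by the encoder and (G(z), z) induced by the generator.
d_real = D(torch.cat([x, E(x)], dim=1))
d_fake = D(torch.cat([G(z), z], dim=1))

# Reconstruction through the learned inverse mapping: x_hat = G(E(x)).
x_hat = G(E(x))

# Illustrative extra objective (an assumption, not the paper's exact method):
# penalise reconstruction error so that G(E(x)) stays close to x, which
# encourages the two joint distributions to agree.
recon_loss = nn.functional.mse_loss(x_hat, x)
print(recon_loss.item())
```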
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zheng, J., Aizawa, H., Kurita, T. (2023). Additional Learning for Joint Probability Distribution Matching in BiGAN. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Lecture Notes in Computer Science, vol 13623. Springer, Cham. https://doi.org/10.1007/978-3-031-30105-6_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-30104-9
Online ISBN: 978-3-031-30105-6