
FVAE: a regularized variational autoencoder using the Fisher criterion

Published in Applied Intelligence

Abstract

As a deep generative model, the variational autoencoder (VAE) is widely applied to problems of insufficient samples and imbalanced labels. In a VAE, the distribution of the latent variables affects the quality of the generated samples. To obtain discriminative latent variables and generated samples, this study proposes a Fisher variational autoencoder (FVAE) based on the Fisher criterion. The FVAE introduces the Fisher criterion into the VAE by adding a Fisher regularization term to the loss function, which maximizes the between-class distance and minimizes the within-class distance of the latent variables. Unlike the unsupervised VAE, the FVAE requires class labels to compute the Fisher regularization loss, so the learned latent variables and generated samples carry enough category information to support classification tasks. Experiments on benchmark datasets show that the latent variables learned by the FVAE are more discriminative, and that its generated samples improve the performance of various classifiers more effectively, than those of the VAE, β-variational autoencoder (β-VAE), conditional variational autoencoder (CVAE), denoising variational autoencoder (DVAE) and information maximizing variational autoencoder (IM-VAE).
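The Fisher regularization described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the exact scatter formulation (ratio of within-class to between-class scatter) and the weighting factor `lam` are assumptions, and the ELBO terms are taken as precomputed scalars.

```python
import numpy as np

def fisher_regularizer(z, labels):
    """Fisher-criterion penalty on latent codes z of shape (n, d):
    small when classes are compact and far apart in latent space."""
    mu = z.mean(axis=0)                               # global mean of latents
    s_w, s_b = 0.0, 0.0
    for c in np.unique(labels):
        zc = z[labels == c]
        mu_c = zc.mean(axis=0)
        s_w += ((zc - mu_c) ** 2).sum()               # within-class scatter
        s_b += len(zc) * ((mu_c - mu) ** 2).sum()     # between-class scatter
    # Minimizing this ratio maximizes between-class distance
    # while minimizing within-class distance.
    return s_w / (s_b + 1e-8)

def fvae_loss(recon_loss, kl_loss, z, labels, lam=1.0):
    """Total FVAE loss: standard VAE ELBO terms plus the Fisher term.
    The weight lam is a hypothetical hyperparameter."""
    return recon_loss + kl_loss + lam * fisher_regularizer(z, labels)
```

With two well-separated classes in latent space the penalty is near zero, while scrambled labels inflate it, which is what drives the encoder toward class-discriminative codes.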



Data availability statement

The data used to support the findings of this study are available from the corresponding author upon request.

Funding

The research leading to these results has received funding from the National Natural Science Foundation of China (61876189, 61273275, 61806219 and 61703426) and the Natural Science Basic Research Plan in Shaanxi Province (No. 2021JM-226).

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization, J.L. and X.W.; Methodology, J.L.; investigation, J.L. and Q.X.; writing—original draft preparation, J.L.; writing—review and editing, R.L. and Y.S. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Xiaodan Wang.

Ethics declarations

Conflict of interest

The authors declare no conflicts of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Lai, J., Wang, X., Xiang, Q. et al. FVAE: a regularized variational autoencoder using the Fisher criterion. Appl Intell 52, 16869–16885 (2022). https://doi.org/10.1007/s10489-022-03422-6

