
Self-supervised Discriminative Representation Learning by Fuzzy Autoencoder

Published: 09 November 2022

Abstract

Representation learning based on autoencoders has received considerable attention for its ability to capture valuable latent information. Conventional autoencoders pursue minimal reconstruction error, but in most machine learning tasks, such as classification and clustering, the discriminability of the feature representation is also important. To address this limitation, a self-supervised discriminative fuzzy autoencoder (FAE) is proposed, which explores information within the data to guide the unsupervised training process and enhances feature discrimination in a self-supervised manner. In FAE, fuzzy membership provides the means of self-supervision, which allows FAE not only to exploit the autoencoder's strong representation learning capability but also to transform the original data into a new space with improved discrimination. First, the objective function of FAE is formulated from a reconstruction loss and a clustering-oriented loss simultaneously. Subsequently, mini-batch gradient descent is applied to optimize the objective function, and the detailed procedure is illustrated step by step. Finally, empirical studies on clustering tasks demonstrate the superiority of FAE over the state of the art.
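
Below is a minimal, hypothetical PyTorch sketch of the general idea the abstract describes, not the authors' implementation: an autoencoder trained by mini-batch gradient descent on a reconstruction loss plus a fuzzy-membership-based, clustering-oriented loss computed in the latent space. The class name FuzzyAE, the layer sizes, the fuzzifier value, and the loss weight lam are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FuzzyAE(nn.Module):
    """Sketch of an autoencoder with a fuzzy clustering-oriented latent loss."""
    def __init__(self, in_dim, latent_dim, n_clusters, fuzzifier=2.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))
        # Trainable cluster centers in the latent space.
        self.centers = nn.Parameter(torch.randn(n_clusters, latent_dim))
        self.m = fuzzifier  # fuzzifier controlling how soft the memberships are

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

    def clustering_loss(self, z):
        # Fuzzy-c-means-style memberships derived from latent-to-center distances.
        d2 = torch.cdist(z, self.centers).pow(2) + 1e-8      # (batch, n_clusters)
        u = d2.pow(-1.0 / (self.m - 1.0))
        u = u / u.sum(dim=1, keepdim=True)                   # memberships sum to 1 per sample
        return (u.pow(self.m) * d2).sum(dim=1).mean()        # weighted within-cluster scatter

def train_step(model, x, optimizer, lam=0.1):
    # One mini-batch gradient-descent step on reconstruction + clustering loss.
    z, x_hat = model(x)
    loss = nn.functional.mse_loss(x_hat, x) + lam * model.clustering_loss(z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage on random data.
model = FuzzyAE(in_dim=784, latent_dim=10, n_clusters=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = train_step(model, torch.rand(64, 784), opt)
```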




      Published In

      ACM Transactions on Intelligent Systems and Technology, Volume 14, Issue 1
      February 2023
      487 pages
      ISSN: 2157-6904
      EISSN: 2157-6912
      DOI: 10.1145/3570136
      Editor: Huan Liu

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 09 November 2022
      Online AM: 02 September 2022
      Accepted: 19 July 2022
      Revised: 26 June 2022
      Received: 02 October 2021
      Published in TIST Volume 14, Issue 1


      Author Tags

      1. Autoencoders
      2. discriminative representation learning
      3. fuzzy clustering
      4. self-supervised learning

      Qualifiers

      • Research-article
      • Refereed

      Funding Sources

      • National Natural Science Foundation of China

      Article Metrics

      • Downloads (last 12 months): 80
      • Downloads (last 6 weeks): 5
      Reflects downloads up to 02 Mar 2025

      Cited By

      • (2025) Multidimensional Scaling Orienting Discriminative Co-Representation Learning. IEEE Transactions on Human-Machine Systems 55, 1 (Feb. 2025), 71–82. https://doi.org/10.1109/THMS.2024.3483848
      • (2024) Dandelion optimization based feature selection with machine learning for digital transaction fraud detection. AIMS Mathematics 9, 2 (2024), 4241–4258. https://doi.org/10.3934/math.2024209
      • (2024) Credit Card Fraud Detection via Intelligent Sampling and Self-supervised Learning. ACM Transactions on Intelligent Systems and Technology 15, 2 (Mar. 2024), 1–29. https://doi.org/10.1145/3641283
      • (2024) Evolving Knowledge Graph Representation Learning with Multiple Attention Strategies for Citation Recommendation System. ACM Transactions on Intelligent Systems and Technology 15, 2 (Mar. 2024), 1–26. https://doi.org/10.1145/3635273
      • (2024) One-Step Joint Learning of Self-Supervised Spectral Clustering With Anchor Graph and Fuzzy Clustering for Land Cover Classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 17 (2024), 11178–11193. https://doi.org/10.1109/JSTARS.2024.3408817
      • (2024) Discriminative Regularized Input Manifold for multilayer perceptron. Pattern Recognition 151 (Jul. 2024), 110421. https://doi.org/10.1016/j.patcog.2024.110421
