
Similarity-based second chance autoencoders for textual data


Abstract

Applying conventional autoencoders to textual data often results in learning trivial and redundant representations because of the high dimensionality and sparsity of text and the power-law distribution of word frequencies. To address these challenges, we propose two novel autoencoders, SCAT (Second Chance Autoencoder for Text) and SSCAT (Similarity-based SCAT). Both models use competitive learning among the k winner neurons in the bottleneck layer; these neurons become specialized in recognizing specific patterns, leading to more semantically meaningful representations of textual data. In addition, SSCAT introduces a novel competition based on a similarity measure to eliminate redundant features. Our experiments show that SCAT and SSCAT achieve high performance on several tasks, including classification, topic modeling, and document visualization, compared to LDA, k-Sparse, KATE, ProdLDA, NVCTM and ZeroShotTM. The experiments were conducted on the 20 Newsgroups, Wiki10+, and Reuters datasets.
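
To make the bottleneck competition concrete, below is a minimal sketch of a k-winner-take-all bottleneck layer in TensorFlow/Keras. It illustrates only the generic top-k masking idea that SCAT and SSCAT build on (in the spirit of k-Sparse and KATE), not the authors' exact second-chance or similarity-based competition; the layer name `KWinnerBottleneck` and all sizes and hyperparameter values are assumptions introduced here for illustration.

```python
# Minimal illustrative sketch (not the authors' released SCAT/SSCAT code):
# a k-winner-take-all bottleneck for an autoencoder over bag-of-words input.
# The layer name, layer sizes, and k value below are assumptions.
import tensorflow as tf


class KWinnerBottleneck(tf.keras.layers.Layer):
    """Keep only the k largest activations per example; zero out the rest.

    Only the surviving "winner" neurons receive gradient for that example,
    so over training different neurons specialize on different input patterns.
    """

    def __init__(self, k, **kwargs):
        super().__init__(**kwargs)
        self.k = k

    def call(self, z):
        # Value of the k-th largest activation in each row, shape (batch, 1).
        kth = tf.math.top_k(z, k=self.k).values[:, -1:]
        mask = tf.cast(z >= kth, z.dtype)  # 1 for winners, 0 for losers
        return z * mask


# Toy autoencoder wiring: bag-of-words in, reconstruction out.
vocab_size, hidden_dim, k = 2000, 128, 6  # hypothetical sizes
inputs = tf.keras.Input(shape=(vocab_size,))
h = tf.keras.layers.Dense(hidden_dim, activation="tanh")(inputs)
h = KWinnerBottleneck(k)(h)
outputs = tf.keras.layers.Dense(vocab_size, activation="sigmoid")(h)
autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
```

The additional similarity-based competition in SSCAT, which the abstract describes as eliminating redundant features by penalizing winners that are too similar to one another, is intentionally not reproduced in this sketch.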



Notes

  1. The ProdLDA source code is available at https://github.com/akashgit/autoencoding_vi_for_topic_models.

  2. The ZeroShotTM source code is available at https://github.com/MilaNLProc/contextualized-topic-models.

  3. The KATE source code is available at https://github.com/hugochan/KATE.

  4. We used the wilcoxon function in the SciPy library: https://scipy.org/ (a minimal usage sketch follows these notes).
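
As a side note on the statistical comparison mentioned in note 4, the following is a minimal sketch of a paired Wilcoxon signed-rank test using scipy.stats.wilcoxon; the score arrays are hypothetical placeholders, not results from the paper.

```python
# Illustrative only: paired Wilcoxon signed-rank test comparing two models
# on per-run scores. The numbers below are placeholders, not paper results.
from scipy.stats import wilcoxon

scat_scores = [0.71, 0.69, 0.72, 0.70, 0.73]      # hypothetical per-run accuracy
baseline_scores = [0.68, 0.67, 0.70, 0.69, 0.71]  # hypothetical per-run accuracy

stat, p_value = wilcoxon(scat_scores, baseline_scores)
print(f"statistic={stat:.2f}, p-value={p_value:.4f}")
```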

References

  1. Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M, Ghemawat S, Goodfellow I, Harp A, Irving G, Isard M, Jia Y, Jozefowicz R, Kaiser L, Kudlur M, Levenberg J, Mané D, Monga R, Moore S, Murray D, Olah C, Schuster M, Shlens J, Steiner B, Sutskever I, Talwar K, Tucker P, Vanhoucke V, Vasudevan V, Viégas F, Vinyals O, Warden P, Wattenberg M, Wicke M, Yu Y, Zheng X (2015) TensorFlow: Large-scale machine learning on heterogeneous systems https://www.tensorflow.org/. Software available from tensorflow.org

  2. Bahrani M, Sameti H (2010) A new bigram-plsa language model for speech recognition. EURASIP Journal on Advances in Signal Processing 2010(1):308437


  3. Benavoli A, Corani G, Mangili F, Zaffalon M, Ruggeri F (2014) A bayesian wilcoxon signed-rank test based on the dirichlet process. In: International conference on machine learning. PMLR, pp 1026–1034

  4. Bengio Y (2009) Learning deep architectures for AI. Now Publishers Inc

  5. Bengio Y, Lamblin P, Popovici D, Larochelle H (2007) Greedy layer-wise training of deep networks. In: Advances in neural information processing systems, pp 153–160

  6. Bianchi F, Terragni S, Hovy D, Nozza D, Fersini E (2020) Cross-lingual contextualized topic models with zero-shot learning. arXiv: 2004.07737

  7. Biju VG, Prashanth C (2017) Friedman and wilcoxon evaluations comparing svm, bagging, boosting, k-nn and decision tree classifiers. Journal of Applied Computer Science Methods 9

  8. Blei DM, Griffiths TL, Jordan MI (2010) The nested chinese restaurant process and bayesian nonparametric inference of topic hierarchies. Journal of the ACM (JACM) 57(2):7


  9. Blei DM, Ng AY, Jordan MI (2003) Latent dirichlet allocation. Journal of Machine Learning Research 3(Jan):993–1022

  10. Canini K, Shi L, Griffiths T (2009) Online inference of topics with latent dirichlet allocation. In: Artificial Intelligence and Statistics, pp. 65–72

  11. Chen Y, Zaki MJ (2017) Kate: K-competitive autoencoder for text. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pp 85–94

  12. Chollet F, et al (2015) Keras. https://github.com/fchollet/keras

  13. Dieng AB, Ruiz FJ, Blei DM (2020) Topic modeling in embedding spaces. Transactions of the Association for Computational Linguistics 8:439–453


  14. Eisenstein J, Ahmed A, Xing EP (2011) Sparse additive generative models of text. In: Proceedings of the 28th International Conference on Machine Learning (ICML)

  15. Fouladvand S, Mielke MM, Vassilaki M, Sauver JS, Petersen RC, Sohn S (2019) Deep learning prediction of mild cognitive impairment using electronic health records. In: 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, pp 799–806

  16. Gharibi G, Walunj V, Alanazi R, Rella S, Lee Y (2019) Automated management of deep learning experiments. In: Proceedings of the 3rd International Workshop on Data Management for End-to-End Machine Learning, pp 1–4

  17. Gharibi G, Walunj V, Rella S, Lee Y (2019) Modelkb: towards automated management of the modeling lifecycle in deep learning. In: 2019 IEEE/ACM 7th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE). IEEE, pp 28–34

  18. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT press

  19. Goudarzvand S, Gharibi G, Lee Y (2020) Scat: Second chance autoencoder for textual data. arXiv:2005.06632

  20. Goudarzvand S, Sauver JS, Mielke MM, Takahashi PY, Lee Y, Sohn S (2019) Early temporal characteristics of elderly patient cognitive impairment in electronic health records. BMC medical informatics and decision making 19(4):149


  21. Goudarzvand S, Sauver JS, Mielke MM, Takahashi PY, Sohn S (2018) Analyzing early signals of older adult cognitive impairment in electronic health records. In: 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, pp 1636–1640

  22. Hofmann T (2001) Unsupervised learning by probabilistic latent semantic analysis. Machine learning 42(1–2):177–196


  23. Hosseini M, Maida AS, Hosseini M, Raju G (2020) Inception lstm for next-frame video prediction (student abstract). Proceedings of the AAAI Conference on Artificial Intelligence 34:13809–13810


  24. Jiang H, Rao Y (2005) Axon formation: fate versus growth. Nature neuroscience 8(5):544–546


  25. Kuhn M, Johnson K (2013) Applied predictive modeling. Springer

  26. Lang K (1995) Newsweeder: Learning to filter netnews. In: Machine Learning Proceedings 1995. Elsevier, pp 331–339

  27. Larochelle H, Lauly S (2012) A neural autoregressive topic model. In: Advances in Neural Information Processing Systems, pp 2708–2716

  28. Lau JH, Newman D, Baldwin T (2014) Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In: Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pp 530–539

  29. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436–444


  30. Lewis DD, Yang Y, Rose TG, Li F (2004) Rcv1: A new benchmark collection for text categorization research. Journal of Machine Learning Research 5(Apr):361–397

  31. Liu J, Chang WC, Wu Y, Yang Y (2017) Deep learning for extreme multi-label text classification. In: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp 115–124

  32. Liu L, Huang H, Gao Y, Zhang Y, Wei X (2019) Neural variational correlated topic modeling. In: The World Wide Web Conference, pp. 1142–1152

  33. Lu X, Tsao Y, Matsuda S, Hori C (2013) Speech enhancement based on deep denoising autoencoder. Interspeech 2013:436–440


  34. Maaloe L, Arngren M, Winther O (2015) Deep belief nets for topic modeling. arXiv:1501.04325

  35. van der Maaten L, Hinton G (2008) Visualizing data using t-sne. Journal of Machine Learning Research 9(Nov):2579–2605

  36. Makhzani A, Frey B (2013) K-sparse autoencoders. arXiv:1312.5663

  37. Miao Y, Yu L, Blunsom P (2016) Neural variational inference for text processing. In: International conference on machine learning, pp 1727–1736

  38. Mingorance-Le Meur A (2006) Jnk gives axons a second chance. Journal of Neuroscience 26(47):12104–12105


  39. Nan F, Ding R, Nallapati R, Xiang B (2019) Topic modeling with wasserstein autoencoders. arXiv:1907.12374

  40. O’Mahony N, Campbell S, Carvalho A, Harapanahalli S, Hernandez GV, Krpalkova L, Riordan D, Walsh J (2019) Deep learning vs. traditional computer vision. In: Science and Information Conference. Springer, pp 128–144

  41. Reimers N, Gurevych I (2019) Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv:1908.10084

  42. Rubenstein PK, Schoelkopf B, Tolstikhin I (2018) On the latent space of wasserstein auto-encoders. arXiv:1802.03761

  43. Schneider J, Vlachos M (2018) Topic modeling based on keywords and context. In: Proceedings of the 2018 SIAM International Conference on Data Mining. SIAM, pp 369–377

  44. Srivastava A, Sutton C (2017) Autoencoding variational inference for topic models. arXiv:1703.01488

  45. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15(1):1929–1958


  46. Tolstikhin I, Bousquet O, Gelly S, Schoelkopf B (2017) Wasserstein auto-encoders. arXiv:1711.01558

  47. Vincent P, Larochelle H, Lajoie I, Bengio Y, Manzagol PA (2010) Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research 11(Dec):3371–3408

  48. Wang R, Zhou D, He Y (2019) Atm: Adversarial-neural topic model. Information Processing & Management 56(6):102098


  49. Wang X, Yang Y (2020) Neural topic model with attention for supervised learning. In: International Conference on Artificial Intelligence and Statistics. PMLR, pp 1147–1156

  50. Wang X, Zhao Y, Pourpanah F (2020) Recent advances in deep learning

  51. Wani MA, Bhat FA, Afzal S, Khan AI (2020) Advances in deep learning, vol. 57. Springer

  52. Wei X, Croft WB (2006) Lda-based document models for ad-hoc retrieval. In: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pp 178–185

  53. Xu Y, Goodacre R (2018) On splitting training and validation set: a comparative study of cross-validation, bootstrap and systematic sampling for estimating the generalization performance of supervised learning. Journal of Analysis and Testing 2(3):249–262


  54. Zhai S, Zhang ZM (2016) Semisupervised autoencoder for sentiment analysis. In: Thirtieth AAAI Conference on Artificial Intelligence

  55. Zhang C, Butepage J, Kjellstrom H, Mandt S (2018) Advances in variational inference. IEEE transactions on pattern analysis and machine intelligence

  56. Zhang Z, Geiger J, Pohjalainen J, Mousa AED, Jin W, Schuller B (2018) Deep learning for environmentally robust speech recognition: An overview of recent developments. ACM Transactions on Intelligent Systems and Technology (TIST) 9(5):1–28


  57. Zhu J, Xing EP (2012) Sparse topical coding. arXiv:1202.3778

  58. Zhu Z, Wang X, Bai S, Yao C, Bai X (2016) Deep learning representation using autoencoder for 3d shape retrieval. Neurocomputing 204:41–50


  59. Zubiaga A (2012) Enhancing navigation on wikipedia with social tags. arXiv:1202.5469


Author information


Corresponding author

Correspondence to Saria Goudarzvand.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Goudarzvand, S., Gharibi, G. & Lee, Y. Similarity-based second chance autoencoders for textual data. Appl Intell 52, 12330–12346 (2022). https://doi.org/10.1007/s10489-021-03100-z

