A Comparison Among Significance Tests and Other Feature Building Methods for Sentiment Analysis: A First Study

  • Conference paper
  • Computational Linguistics and Intelligent Text Processing (CICLing 2017)

Abstract

Words that participate in the sentiment (positive or negative) classification decision are known as significant words for sentiment classification. Identifying such significant words in the corpus and using them as features reduces the amount of irrelevant information in the feature set under supervised sentiment classification settings. In this paper, we conceptually study and compare various types of feature building methods, viz., unigrams, TFIDF, Relief, Delta-TFIDF, the \(\chi ^2\) test and Welch’s t-test, for the sentiment analysis task. Unigrams and TFIDF are the classic ways of building features from the corpus. Relief, Delta-TFIDF and the \(\chi ^2\) test have recently attracted much attention for their potential use as feature building methods in sentiment analysis. In contrast, the t-test is the least explored method for identifying significant words in the corpus as features.

We show the effectiveness of significance tests over the other feature building methods for three types of sentiment analysis tasks, viz., in-domain, cross-domain and cross-lingual. Delta-TFIDF, the \(\chi ^2\) test and Welch’s t-test compute the significance of a word for classification in the corpus, whereas unigrams, TFIDF and Relief do not observe the significance of a word for classification. Furthermore, significance tests can be divided into two categories: bag-of-words-based tests and distribution-based tests. A bag-of-words-based test observes the total count of a word in the different classes to find the significance of the word, while a distribution-based test observes the distribution of the word. In this paper, we substantiate that the distribution-based Welch’s t-test is more accurate than the bag-of-words-based \(\chi ^2\) test and Delta-TFIDF in identifying significant words in the corpus.
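To make the distribution-based idea concrete, the following is a minimal Python sketch of significant-word selection with Welch's t-test. It is not the authors' implementation: the function names, the use of per-document relative frequencies as the two samples, and the fixed \(|t|\) threshold standing in for the P-value cut-off are all illustrative assumptions.

```python
import math

def welch_t(xs, ys):
    """Welch's t statistic for two samples with unequal variances."""
    n1, n2 = len(xs), len(ys)
    m1, m2 = sum(xs) / n1, sum(ys) / n2
    v1 = sum((x - m1) ** 2 for x in xs) / (n1 - 1)  # sample variance
    v2 = sum((y - m2) ** 2 for y in ys) / (n2 - 1)
    den = math.sqrt(v1 / n1 + v2 / n2)
    return 0.0 if den == 0 else (m1 - m2) / den

def significant_words(pos_docs, neg_docs, threshold=2.0):
    """Keep words whose per-document relative-frequency distributions
    differ between the positive and negative classes, i.e. |t| exceeds
    a threshold (a hypothetical stand-in for the P < 0.05 cut-off)."""
    vocab = {w for d in pos_docs + neg_docs for w in d}
    selected = []
    for w in vocab:
        pos = [d.count(w) / len(d) for d in pos_docs]  # one sample per doc
        neg = [d.count(w) / len(d) for d in neg_docs]
        if abs(welch_t(pos, neg)) > threshold:
            selected.append(w)
    return selected
```

On a toy corpus where "great" occurs only in positive reviews and "bad" only in negative ones, those two words get large \(|t|\) values and survive the filter, while class-neutral words such as "movie" are dropped.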


Notes

  1.

    We also evaluated unigrams with in-document frequency as the feature value, but found no improvement in SA accuracy over unigram presence.

  2.

    Available at: http://java-ml.sourceforge.net/.

  3.

    More details about the implementation of Relief can be found in Liu and Motoda [23].

  4.

    The \(\chi ^2\) value and the P-value are inversely correlated; hence, a high \(\chi ^2\) value corresponds to a low P-value. The correlation table is available at: http://sites.stat.psu.edu/~mga/401/tables/Chi-square-table.pdf.
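As a sketch of how a bag-of-words-based test scores a word, the following computes the \(\chi ^2\) statistic from a 2 × 2 contingency table of document counts (word present/absent × positive/negative class). The function name and the document-level counting are illustrative assumptions, not the authors' exact formulation.

```python
def chi_square(word_pos, word_neg, total_pos, total_neg):
    """Chi-square statistic for one word over a 2x2 contingency table.

    word_pos/word_neg: documents in each class containing the word.
    total_pos/total_neg: total documents per class (assumed nonzero)."""
    observed = [
        [word_pos, word_neg],                            # word present
        [total_pos - word_pos, total_neg - word_neg],    # word absent
    ]
    col = [total_pos, total_neg]
    row = [sum(observed[0]), sum(observed[1])]
    n = total_pos + total_neg
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n  # count expected under independence
            chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2
```

A statistic above 3.841, the \(\chi ^2\) critical value for one degree of freedom, corresponds to \(P<0.05\); a word split evenly across the classes scores 0 and is discarded.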

  5.

    The t value and the P-value are inversely correlated; hence, a high t value corresponds to a low P-value. The correlation table is available at: http://www.sjsu.edu/faculty/gerstman/StatPrimer/t-table.pdf.

  6.

    The threshold of 0.05 on the P-value is a standard value in statistics, as it gives \(95\%\) confidence in the decision.

  7.

    Available at: http://www.cs.cornell.edu/people/pabo/movie-review-data/.

  8.

    Available at: http://www.cs.jhu.edu/~mdredze/datasets/sentiment/index2.html. This dataset has one more domain, the DVD domain. The reviews in the DVD domain are very similar to those in the movie domain; hence, to avoid redundancy, we have not reported results for the DVD domain.

  9.

    A threshold on the score is set empirically to filter out the words about which the tests are not very confident; the low confidence is visible from the low score assigned by Relief.

  10.

    Available at: https://commons.apache.org/proper/commons-math/download_math.cgi.

  11.

    We use the SVM package LIBSVM, which is available in the Java-based WEKA toolkit for machine learning. Available at: http://www.cs.waikato.ac.nz/ml/weka/downloading.html.

  12.

    Applying a significance test (Delta-TFIDF, the \(\chi ^2\) test or the t-test) reduces the feature set size substantially, which yields a less computationally expensive SA system in comparison to unigrams, TFIDF and Relief.

  13.

    Since the movie domain has the highest average document (review) length, we have selected the movie domain to show the variation among the confusion matrices obtained with different feature building methods.

  14.

    CLSA results are reported for four different languages, viz., English (en), French (fr), German (de) and Russian (ru). More details about the dataset are given in Table 5.

  15.

    In all CLSA experiments, the training data is obtained by translating the source language data, while the test data is taken from the available manually tagged, non-translated data.

  16.

    Available at: http://crunchbang.org/forums/viewtopic.php?id=17034.

  17.

    For the pairs en\(\rightarrow \)en, fr\(\rightarrow \)fr, de\(\rightarrow \)de and ru\(\rightarrow \)ru, the source and target languages are the same, and the training data is not translated data; it is the original manually tagged dataset in that language.

  18.

    In the case of in-language pairs, for example en\(\rightarrow \)en, we assumed a BLEU score of 100, considering that such a pair has \(100\%\) correct translation as there is no translation process involved.

  19.

    Here, the P-value for the t value is less than 0.05. Significance of the difference in accuracy is observed at \(P<0.05\), which gives \(95\%\) confidence in the decision.

References

  1. Oakes, M., Gaizauskas, R., Fowkes, H., Jonsson, A., Wan, V., Beaulieu, M.: A method based on the chi-square test for document classification. In: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 440–441. ACM (2001)

  2. Jin, X., Xu, A., Bie, R., Guo, P.: Machine learning techniques and chi-square feature selection for cancer classification using SAGE gene expression profiles. In: Li, J., Yang, Q., Tan, A.-H. (eds.) BioDM 2006. LNCS, vol. 3916, pp. 106–115. Springer, Heidelberg (2006). https://doi.org/10.1007/11691730_11

  3. Moh’d, A., Mesleh, A.: Chi square feature extraction based SVMs Arabic language text categorization system. J. Comput. Sci. 3, 430–435 (2007)

  4. Kilgarriff, A.: Comparing corpora. Int. J. Corpus Linguist. 6, 97–133 (2001)

  5. Paquot, M., Bestgen, Y.: Distinctive words in academic writing: a comparison of three statistical tests for keyword extraction. Lang. Comput. 68, 247–269 (2009)

  6. Lijffijt, J., Nevalainen, T., Säily, T., Papapetrou, P., Puolamäki, K., Mannila, H.: Significance testing of word frequencies in corpora. Digit. Scholarsh. Humanit. (2014). fqu064

  7. Glorot, X., Bordes, A., Bengio, Y.: Domain adaptation for large-scale sentiment classification: a deep learning approach. In: Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 513–520 (2011)

  8. Zhou, J.T., Pan, S.J., Tsang, I.W., Yan, Y.: Hybrid heterogeneous transfer learning through deep learning. In: AAAI, pp. 2213–2220 (2014)

  9. Pang, B., Lee, L., Vaithyanathan, S.: Thumbs up?: sentiment classification using machine learning techniques. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 79–86 (2002)

  10. Meyer, T.A., Whateley, B.: SpamBayes: effective open-source, Bayesian based, email classification system. In: CEAS. Citeseer (2004)

  11. Kanayama, H., Nasukawa, T.: Fully automatic lexicon expansion for domain-oriented sentiment analysis. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 355–363 (2006)

  12. Cheng, A., Zhulyn, O.: A system for multilingual sentiment learning on large data sets. In: Proceedings of the International Conference on Computational Linguistics, pp. 577–592 (2012)

  13. Leskovec, J., Rajaraman, A., Ullman, J.D.: Mining of Massive Datasets. Cambridge University Press, Cambridge (2014)

  14. Oakes, M.P., Farrow, M.: Use of the chi-squared test to examine vocabulary differences in English language corpora representing seven different countries. Lit. Linguist. Comput. 22, 85–99 (2007)

  15. Al-Harbi, S., Almuhareb, A., Al-Thubaity, A., Khorsheed, M., Al-Rajeh, A.: Automatic Arabic text classification (2008)

  16. Rayson, P., Garside, R.: Comparing corpora using frequency profiling. In: Proceedings of the Workshop on Comparing Corpora, pp. 1–6. Association for Computational Linguistics (2000)

  17. Sharma, R., Bhattacharyya, P.: Detecting domain dedicated polar words. In: Proceedings of the International Joint Conference on Natural Language Processing, pp. 661–666 (2013)

  18. Kira, K., Rendell, L.A.: The feature selection problem: traditional methods and a new algorithm. AAAI 2, 129–134 (1992)

  19. Martineau, J., Finin, T.: Delta TFIDF: an improved feature space for sentiment analysis. ICWSM 9, 106 (2009)

  20. Martineau, J., Finin, T., Joshi, A., Patel, S.: Improving binary classification on text problems using differential word features. In: Proceedings of the 18th ACM Conference on Information and Knowledge Management, pp. 2019–2024. ACM (2009)

  21. Wu, H.C., Luk, R.W.P., Wong, K.F., Kwok, K.L.: Interpreting TF-IDF term weights as making relevance decisions. ACM Trans. Inf. Syst. (TOIS) 26, 13 (2008)

  22. Čehovin, L., Bosnić, Z.: Empirical evaluation of feature selection methods in classification. Intell. Data Anal. 14, 265–281 (2010)

  23. Liu, H., Motoda, H.: Computational Methods of Feature Selection. CRC Press, Boca Raton (2007)

  24. Pang, B., Lee, L.: A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. In: Proceedings of the Association for Computational Linguistics, pp. 271–279 (2004)

  25. Blitzer, J., Dredze, M., Pereira, F., et al.: Biographies, Bollywood, boom-boxes and blenders: domain adaptation for sentiment classification. In: Proceedings of the Association for Computational Linguistics, pp. 440–447 (2007)

  26. Balamurali, A.R., Khapra, M.M., Bhattacharyya, P.: Lost in translation: viability of machine translation for cross language sentiment analysis. In: Gelbukh, A. (ed.) CICLing 2013. LNCS, vol. 7817, pp. 38–49. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-37256-8_4

  27. Tong, S., Koller, D.: Support vector machine active learning with applications to text classification. J. Mach. Learn. Res. 2, 45–66 (2001)

  28. Sharma, R., Bhattacharyya, P.: Domain sentiment matters: a two stage sentiment analyzer. In: Proceedings of the International Conference on Natural Language Processing (2015)

  29. Pan, S.J., Ni, X., Sun, J.T., Yang, Q., Chen, Z.: Cross-domain sentiment classification via spectral feature alignment. In: Proceedings of the 19th International Conference on World Wide Web, pp. 751–760. ACM (2010)

  30. Wan, X.: Co-training for cross-lingual sentiment classification. In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, vol. 1, pp. 235–243. Association for Computational Linguistics (2009)

  31. Wei, B., Pal, C.: Cross lingual adaptation: an experiment on sentiment classifications. In: Proceedings of the ACL 2010 Conference Short Papers, pp. 258–262. Association for Computational Linguistics (2010)

  32. Koehn, P.: Europarl: a parallel corpus for statistical machine translation. MT Summit 5, 79–86 (2005)

  33. Ng, V., Dasgupta, S., Arifin, S.: Examining the role of linguistic knowledge sources in the automatic identification and classification of reviews. In: Proceedings of the COLING/ACL Main Conference Poster Sessions, pp. 611–618. Association for Computational Linguistics (2006)

  34. Salton, G., Buckley, C.: Term-weighting approaches in automatic text retrieval. Inf. Process. Manag. 24, 513–523 (1988)

  35. Lin, Y., Zhang, J., Wang, X., Zhou, A.: An information theoretic approach to sentiment polarity classification. In: Proceedings of the 2nd Joint WICOW/AIRWeb Workshop on Web Quality, pp. 35–40. ACM (2012)

  36. Demiroz, G., Yanikoglu, B., Tapucu, D., Saygin, Y.: Learning domain-specific polarity lexicons. In: 2012 IEEE 12th International Conference on Data Mining Workshops, pp. 674–679. IEEE (2012)

  37. Habernal, I., Ptácek, T., Steinberger, J.: Sentiment analysis in Czech social media using supervised machine learning. In: Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pp. 65–74 (2013)

Author information


Correspondence to Raksha Sharma.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Sharma, R., Mondal, D., Bhattacharyya, P. (2018). A Comparison Among Significance Tests and Other Feature Building Methods for Sentiment Analysis: A First Study. In: Gelbukh, A. (eds) Computational Linguistics and Intelligent Text Processing. CICLing 2017. Lecture Notes in Computer Science(), vol 10762. Springer, Cham. https://doi.org/10.1007/978-3-319-77116-8_1


  • DOI: https://doi.org/10.1007/978-3-319-77116-8_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-77115-1

  • Online ISBN: 978-3-319-77116-8

  • eBook Packages: Computer Science (R0)
