
Semantic Oppositeness Embedding Using an Autoencoder-Based Learning Model

  • Conference paper
Database and Expert Systems Applications (DEXA 2019)

Abstract

Semantic oppositeness is the natural counterpart of the popular natural language processing concept of semantic similarity. Just as semantic similarity measures the degree to which two concepts are similar, semantic oppositeness measures the degree to which two concepts oppose each other. This complementary nature has led most applications and studies to incorrectly assume that semantic oppositeness is simply the inverse of semantic similarity. In another common trivialization, “semantic oppositeness” is used interchangeably with “antonymy”, which is as inaccurate as replacing semantic similarity with simple synonymy. These erroneous assumptions and over-simplifications persist mainly because of either a lack of information or the computational complexity of calculating semantic oppositeness. The objective of this research is to show that the idea of word vector embedding can be extended to incorporate semantic oppositeness, so that an effective mapping of semantic oppositeness can be obtained in a given vector space. In the experiments presented in this paper, the proposed method achieves a training accuracy of 97.91% and a test accuracy of 97.82%, demonstrating its applicability even in potentially highly sensitive applications and dispelling doubts of over-fitting. Further, this work introduces a novel unanchored vector embedding method and a novel inductive transfer learning process.
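The abstract does not detail the model itself, but the general mechanism it names, learning an embedding through an autoencoder, can be sketched. The following is a minimal, hypothetical illustration only: the toy data, layer sizes, and training loop are assumptions for demonstration and are not the paper's actual architecture or data. An encoder compresses input vectors into a low-dimensional code, a decoder reconstructs the input from that code, and the learned code serves as the embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: 8 one-hot vectors standing in for word representations.
X = np.eye(8)
d_in, d_code = 8, 3          # bottleneck smaller than the input dimension

W1 = rng.normal(0.0, 0.5, (d_in, d_code))   # encoder weights
b1 = np.zeros(d_code)
W2 = rng.normal(0.0, 0.5, (d_code, d_in))   # decoder weights
b2 = np.zeros(d_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    code = sigmoid(X @ W1 + b1)       # the learned embedding
    recon = sigmoid(code @ W2 + b2)   # reconstruction of the input
    return code, recon

_, recon = forward(X)
loss_before = np.mean((recon - X) ** 2)

# Plain gradient descent on the mean-squared reconstruction error.
lr = 1.0
for _ in range(4000):
    code, recon = forward(X)
    d_recon = (recon - X) * recon * (1 - recon)    # grad at decoder pre-activation
    d_code = (d_recon @ W2.T) * code * (1 - code)  # grad at encoder pre-activation
    W2 -= lr * code.T @ d_recon / len(X)
    b2 -= lr * d_recon.mean(axis=0)
    W1 -= lr * X.T @ d_code / len(X)
    b1 -= lr * d_code.mean(axis=0)

code, recon = forward(X)
loss_after = np.mean((recon - X) ** 2)
print(loss_before, loss_after)   # reconstruction error drops as codes become informative
```

In the full method, the training signal would come from semantic oppositeness rather than pure reconstruction; this sketch only shows the encode-decode mechanics on which such a model is built.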


Notes

  1. https://goo.gl/yV57W3.

  2. /usr/share/dict/words.


Acknowledgement

This research is partially supported by the NSF grant CNS-1747798 to the IUCRC Center for Big Learning.

Author information

Corresponding author

Correspondence to Nisansa de Silva.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

de Silva, N., Dou, D. (2019). Semantic Oppositeness Embedding Using an Autoencoder-Based Learning Model. In: Hartmann, S., Küng, J., Chakravarthy, S., Anderst-Kotsis, G., Tjoa, A., Khalil, I. (eds) Database and Expert Systems Applications. DEXA 2019. Lecture Notes in Computer Science, vol. 11706. Springer, Cham. https://doi.org/10.1007/978-3-030-27615-7_12


  • DOI: https://doi.org/10.1007/978-3-030-27615-7_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-27614-0

  • Online ISBN: 978-3-030-27615-7

  • eBook Packages: Computer Science (R0)
