Dynamic Task-Specific Factors for Meta-Embedding

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11776)

Abstract

Meta-embedding is a technique for creating a new embedding by combining different existing embeddings, thereby capturing complementary aspects of lexical semantics. Supervised learning of task-specific meta-embeddings is a convenient way to exploit readily available pre-trained word embeddings. However, appropriate weights for the different word embeddings are hard to determine. We introduce dynamic task-specific factors into meta-embedding (DTFME), which are used to compute appropriate weights for the different embedding sets without increasing complexity. We then evaluate DTFME on sentence representation tasks. Experiments show that our method outperforms prior work on several benchmark datasets.
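
The paper builds on dynamic meta-embeddings (DME, see Note 1), in which each pre-trained embedding set is projected into a common space and the projections are combined with learned, context-dependent weights. The following is a minimal, hypothetical PyTorch-style sketch of that general weighting mechanism, not the authors' exact DTFME formulation; the class and parameter names (DynamicMetaEmbedding, proj_dim) are illustrative assumptions.

    # Hypothetical sketch: combine several pre-trained embedding sets with
    # learned, per-token softmax weights (the general dynamic meta-embedding
    # idea, not the paper's exact DTFME factors).
    import torch
    import torch.nn as nn

    class DynamicMetaEmbedding(nn.Module):
        def __init__(self, embedding_dims, proj_dim):
            super().__init__()
            # One linear projection per embedding set, mapping into a shared space.
            self.projections = nn.ModuleList(
                nn.Linear(d, proj_dim) for d in embedding_dims
            )
            # Scalar score per projected vector; a softmax over sets gives the weights.
            self.scorer = nn.Linear(proj_dim, 1)

        def forward(self, embeddings):
            # embeddings: list of tensors, each of shape (batch, seq_len, dim_i)
            projected = torch.stack(
                [proj(e) for proj, e in zip(self.projections, embeddings)], dim=2
            )  # (batch, seq_len, n_sets, proj_dim)
            weights = torch.softmax(self.scorer(projected), dim=2)
            return (weights * projected).sum(dim=2)  # (batch, seq_len, proj_dim)

    # Example usage with two 300-dimensional embedding sets (e.g. GloVe and fastText):
    glove = torch.randn(2, 5, 300)
    fasttext = torch.randn(2, 5, 300)
    dme = DynamicMetaEmbedding([300, 300], proj_dim=256)
    meta = dme([glove, fasttext])  # shape (2, 5, 256)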

Notes

  1. Based on facebookresearch/DME: https://github.com/facebookresearch/DME.

  2. This is a common distinction; see, e.g., the SNLI leaderboard at https://nlp.stanford.edu/projects/snli/.

Author information

Corresponding author

Correspondence to Yue Hu.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Xie, Y., Hu, Y., Xing, L., Wei, X. (2019). Dynamic Task-Specific Factors for Meta-Embedding. In: Douligeris, C., Karagiannis, D., Apostolou, D. (eds) Knowledge Science, Engineering and Management. KSEM 2019. Lecture Notes in Computer Science, vol 11776. Springer, Cham. https://doi.org/10.1007/978-3-030-29563-9_7

  • DOI: https://doi.org/10.1007/978-3-030-29563-9_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-29562-2

  • Online ISBN: 978-3-030-29563-9

  • eBook Packages: Computer Science (R0)
