
SemSeq: A Regime for Training Widely-Applicable Word-Sequence Encoders

  • Conference paper
Computational Linguistics (PACLING 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1215)


Abstract

A sentence encoder that can be readily employed in many applications, or effectively fine-tuned to a specific task or domain, is in high demand. Such an encoding technique would cover a broader range of applications if it could handle almost arbitrary word-sequences. This paper proposes a training regime that yields encoders able to deal effectively with word-sequences of various kinds, including complete sentences as well as incomplete sentences and phrases. The proposed regime differs from existing methods in that it first extracts word-sequences of arbitrary length from an unlabeled corpus of ordered or unordered sentences; an encoding model is then trained to predict whether two such word-sequences are adjacent. Here, an unordered sentence denotes an individual sentence without neighboring contextual sentences; in some NLP tasks, such as sentence classification, the semantic content of such an isolated sentence must be encoded properly. Furthermore, because training uses relatively unconstrained word-sequences extracted from a large corpus, rather than relying heavily on complete sentences, linguistic expressions of many kinds are covered during training, which enhances the applicability of the resulting word-sequence/sentence encoders. Experimental results on supervised evaluation tasks show that the trained encoder achieves performance comparable to existing encoders, while it performs better on unsupervised evaluation tasks that involve incomplete sentences and phrases.
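The abstract describes the core of the regime: word-sequences of arbitrary length are extracted from an unlabeled corpus, and an encoder is trained to predict whether two such sequences are adjacent. The following is a minimal sketch of how such training pairs could be constructed; it is an illustration only, and all names (sample_spans, make_adjacency_pairs, max_len) and details such as the negative-sampling scheme are assumptions, not the authors' implementation.

# A minimal, illustrative sketch (not the authors' code) of the training-data
# construction described in the abstract: extract word-sequences of arbitrary
# length from an unlabeled corpus and label each pair by whether the two
# sequences are adjacent. Function names and parameters are assumptions.
import random
from typing import List, Tuple


def sample_spans(tokens: List[str], max_len: int = 10) -> Tuple[List[str], List[str]]:
    """Cut a tokenized sentence at a random point into two adjacent
    word-sequences, each truncated to at most max_len tokens."""
    cut = random.randint(1, len(tokens) - 1)
    left = tokens[max(0, cut - max_len):cut]
    right = tokens[cut:cut + max_len]
    return left, right


def make_adjacency_pairs(corpus: List[List[str]], n_pairs: int = 1000):
    """Build (seq_a, seq_b, label) triples: label 1 when the two word-sequences
    were adjacent in the same sentence, 0 when seq_b comes from another sentence."""
    pairs = []
    while len(pairs) < n_pairs:
        sent = random.choice(corpus)
        if len(sent) < 2:
            continue
        left, right = sample_spans(sent)
        if random.random() < 0.5:
            pairs.append((left, right, 1))            # positive: truly adjacent
        else:
            other = random.choice(corpus)
            if other is sent or len(other) < 2:       # avoid a noisy "negative"
                continue
            _, distractor = sample_spans(other)
            pairs.append((left, distractor, 0))       # negative: not adjacent
    return pairs


# Toy usage: two tokenized "sentences" stand in for an unlabeled corpus.
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["encoders", "map", "word", "sequences", "to", "fixed", "size", "vectors"]]
for a, b, label in make_adjacency_pairs(corpus, n_pairs=4):
    print(label, a, b)

In an actual setup, the two sequences would presumably be embedded by the encoder being trained (e.g. an RNN or Transformer), and a binary classifier over the pair of embeddings would supply the adjacency training signal.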


Notes

  1. http://swoogle.umbc.edu/umbc_corpus.tar.gz

  2. https://github.com/lajanugen/S2V

  3. https://dl.fbaipublicfiles.com/fasttext/vectors-english/crawl-300d-2M.vec.zip


Acknowledgment

The present work was partially supported by JSPS KAKENHI Grant Number 17H01831.

Author information

Corresponding author

Correspondence to Yoshihiko Hayashi.



Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Tsuyuki, H., Ogawa, T., Kobayashi, T., Hayashi, Y. (2020). SemSeq: A Regime for Training Widely-Applicable Word-Sequence Encoders. In: Nguyen, LM., Phan, XH., Hasida, K., Tojo, S. (eds) Computational Linguistics. PACLING 2019. Communications in Computer and Information Science, vol 1215. Springer, Singapore. https://doi.org/10.1007/978-981-15-6168-9_4


  • DOI: https://doi.org/10.1007/978-981-15-6168-9_4

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-6167-2

  • Online ISBN: 978-981-15-6168-9

  • eBook Packages: Computer Science, Computer Science (R0)
