
MixCL: Mixed Contrastive Learning for Relation Extraction

  • Conference paper
  • In: Advances in Knowledge Discovery and Data Mining (PAKDD 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14648)

Abstract

Entity representation plays a fundamental role in modern relation extraction models. Previous efforts usually distinguish entities from contextual words explicitly, e.g., by introducing position embeddings relative to entities or by surrounding entities with special tokens. Inspired by this observation, we propose improving relation extraction via a novel entity-level contrastive learning scheme, which contrasts an entity with both the other entities and its contextual words in a mini-batch. To generate high-quality negatives for contrast, we equip our entity-level contrastive learning with an innovative Mixup strategy, which interpolates the feature representations of negative entities and contextual words to create new, diversified negative examples. Extensive experiments on TACRED, TACRED-revisited, and SemEval2010 show that our method delivers robust performance improvements over a strong relation extraction baseline. Furthermore, we propose a new metric that measures the overall hardness of the negative examples by considering both their dissimilarities with the anchor instance and their diversities, explaining the superiority of our method in depth.

J. Zhang and B. Li contributed equally.
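
The abstract names two concrete mechanisms: an entity-level contrast that treats other entities and contextual words as negatives, and a Mixup step that interpolates negative representations. The following is a minimal sketch of both, assuming PyTorch; the function names, the Beta-distributed mixing coefficient, and the InfoNCE-style loss form are illustrative assumptions, not the paper's actual implementation.

    import torch
    import torch.nn.functional as F

    def mixcl_loss(anchor, positives, negatives, contexts,
                   alpha=0.2, temperature=0.07):
        """Entity-level contrastive loss with Mixup-interpolated negatives.

        anchor:    (d,)   representation of the anchor entity
        positives: (p, d) entities sharing the anchor's relation label
        negatives: (n, d) other entities in the mini-batch
        contexts:  (n, d) contextual-word representations
        """
        # Interpolate negative entities with contextual words to create
        # new, diversified negatives. Sampling lambda from Beta(alpha, alpha)
        # follows the original mixup recipe [30]; the paper may differ.
        lam = torch.distributions.Beta(alpha, alpha).sample((negatives.size(0), 1))
        mixed = lam * negatives + (1.0 - lam) * contexts

        # Cosine similarity between the anchor and every candidate.
        pos_sim = F.cosine_similarity(anchor.unsqueeze(0), positives) / temperature
        neg_sim = F.cosine_similarity(
            anchor.unsqueeze(0), torch.cat([negatives, contexts, mixed])) / temperature

        # InfoNCE-style objective: pull positives toward the anchor while
        # pushing entities, contextual words, and mixed negatives away.
        logits = torch.cat([pos_sim, neg_sim])
        return -(pos_sim - torch.logsumexp(logits, dim=0)).mean()

    def overall_hardness(anchor, negatives):
        """Illustrative hardness score: higher when negatives lie close to
        the anchor (hard to separate) and are spread out among themselves
        (diverse). The paper's exact metric may be defined differently."""
        closeness = F.cosine_similarity(anchor.unsqueeze(0), negatives).mean()
        pairwise = F.cosine_similarity(
            negatives.unsqueeze(1), negatives.unsqueeze(0), dim=-1)
        return closeness * (1.0 - pairwise.mean())

In this sketch the anchor is pulled toward same-relation entities while the original negatives, the contextual words, and their interpolations all act as repelling terms, mirroring the mini-batch contrast described in the abstract.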


References

  1. Alt, C., Gabryszak, A., Hennig, L.: TACRED revisited: a thorough evaluation of the TACRED relation extraction task. In: ACL 2020 (2020)

  2. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: NAACL-HLT 2019 (2019)

  3. Gao, T., Yao, X., Chen, D.: SimCSE: simple contrastive learning of sentence embeddings. CoRR abs/2104.08821 (2021). https://arxiv.org/abs/2104.08821

  4. Gunel, B., Du, J., Conneau, A., Stoyanov, V.: Supervised contrastive learning for pre-trained language model fine-tuning. In: ICLR 2021 (2021)

  5. Guo, Z., Zhang, Y., Lu, W.: Attention guided graph convolutional networks for relation extraction. In: Korhonen, A., Traum, D.R., Màrquez, L. (eds.) ACL 2019 (2019)

  6. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.B.: Momentum contrast for unsupervised visual representation learning. In: CVPR 2020 (2020)

  7. Hendrickx, I., et al.: SemEval-2010 task 8: multi-way classification of semantic relations between pairs of nominals. In: SemEval@ACL 2010 (2010)

  8. Inoue, H.: Data augmentation by pairing samples for images classification. CoRR abs/1801.02929 (2018). http://arxiv.org/abs/1801.02929

  9. Joshi, M., Chen, D., Liu, Y., Weld, D.S., Zettlemoyer, L., Levy, O.: SpanBERT: improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguist. (2020)

  10. Kalantidis, Y., Sariyildiz, M.B., Pion, N., Weinzaepfel, P., Larlus, D.: Hard negative mixing for contrastive learning. In: NeurIPS 2020 (2020)

  11. Khosla, P., et al.: Supervised contrastive learning. In: NeurIPS 2020 (2020)

  12. Li, B., Ye, W., Huang, C., Zhang, S.: Multi-view inference for relation extraction with uncertain knowledge. In: AAAI 2021 (2021)

  13. MacDonald, E., Barbosa, D.: Neural relation extraction on Wikipedia tables for augmenting knowledge graphs. In: CIKM 2020 (2020)

  14. Mandya, A., Bollegala, D., Coenen, F.: Graph convolution over multiple dependency sub-graphs for relation extraction. In: COLING 2020 (2020)

  15. Miwa, M., Bansal, M.: End-to-end relation extraction using LSTMs on sequences and tree structures. In: ACL 2016 (2016)

  16. Pan, X., Wang, M., Wu, L., Li, L.: Contrastive learning for many-to-many multilingual neural machine translation. In: ACL/IJCNLP 2021 (2021)

  17. Peters, M.E., et al.: Knowledge enhanced contextual word representations. In: EMNLP-IJCNLP 2019 (2019)

  18. Qin, Y., et al.: ERICA: improving entity and relation understanding for pre-trained language models via contrastive learning. In: ACL/IJCNLP 2021 (2021)

  19. Ren, F., et al.: Neural relation classification with text descriptions. In: COLING 2018 (2018)

  20. Robinson, J.D., Chuang, C., Sra, S., Jegelka, S.: Contrastive learning with hard negative samples. In: ICLR 2021 (2021)

  21. Soares, L.B., FitzGerald, N., Ling, J., Kwiatkowski, T.: Matching the blanks: distributional similarity for relation learning. In: ACL 2019 (2019)

  22. Su, P., Peng, Y., Vijay-Shanker, K.: Improving BERT model using contrastive learning for biomedical relation extraction. In: BioNLP@NAACL-HLT 2021 (2021)

  23. Tian, Y., Chen, G., Song, Y., Wan, X.: Dependency-driven relation extraction with attentive graph convolutional networks. In: ACL/IJCNLP 2021 (2021)

  24. Xiong, L., et al.: Approximate nearest neighbor negative contrastive learning for dense text retrieval. In: ICLR 2021 (2021)

  25. Xu, P., Barbosa, D.: Connecting language and knowledge with heterogeneous representations for neural relation extraction. In: NAACL-HLT 2019 (2019)

  26. Xue, F., Sun, A., Zhang, H., Chng, E.S.: GDPNet: refining latent multi-view graph for relation extraction. In: AAAI 2021 (2021)

  27. Yamada, I., Asai, A., Shindo, H., Takeda, H., Matsumoto, Y.: LUKE: deep contextualized entity representations with entity-aware self-attention. In: EMNLP 2020 (2020)

  28. Yu, M., Yin, W., Hasan, K.S., dos Santos, C.N., Xiang, B., Zhou, B.: Improved neural relation detection for knowledge base question answering. In: ACL 2017 (2017)

  29. Zeng, D., Liu, K., Lai, S., Zhou, G., Zhao, J.: Relation classification via convolutional deep neural network. In: COLING 2014 (2014)

  30. Zhang, H., Cissé, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: beyond empirical risk minimization. In: ICLR 2018 (2018)

  31. Zhang, Y., Zhong, V., Chen, D., Angeli, G., Manning, C.D.: Position-aware attention and supervised data improve slot filling. In: EMNLP 2017 (2017)

  32. Zhou, W., Chen, M.: An improved baseline for sentence-level relation extraction. CoRR abs/2102.01373 (2021). https://arxiv.org/abs/2102.01373


Acknowledgment

This work is supported by the Research and Application of Intelligent Regional Industrial Brain Platform project.

Author information


Corresponding author

Correspondence to Xixin Cao.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Zhang, J., Li, B., Cao, X., Zhang, M., Zhao, W. (2024). MixCL: Mixed Contrastive Learning for Relation Extraction. In: Yang, D.N., Xie, X., Tseng, V.S., Pei, J., Huang, J.W., Lin, J.C.W. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science, vol. 14648. Springer, Singapore. https://doi.org/10.1007/978-981-97-2238-9_7


  • DOI: https://doi.org/10.1007/978-981-97-2238-9_7

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-2240-2

  • Online ISBN: 978-981-97-2238-9

  • eBook Packages: Computer Science, Computer Science (R0)
