Research article · DOI: 10.1145/3639631.3639665 · ACAI Conference Proceedings

A Review: Data and Semantic Augmentation for Relation Classification in Low Resource

Published: 16 February 2024

ABSTRACT

Relation Classification (RC) is a significant research area in Natural Language Processing (NLP) that aims to identify the semantic relation between pairs of entities in natural-language utterances. Both traditional methods, which rely on rule matching and statistical features, and more recent methods built on deep learning and Pre-trained Language Models (PLMs) depend heavily on large quantities of labeled data. In practice, however, many domains and subjects suffer from a scarcity of accessible data. Consequently, many researchers have shifted their attention to low-resource settings, exploring approaches such as semi-supervised learning and weakly supervised learning. Both of these approaches introduce a significant amount of noisy input into the model, and methods based on metric learning may fail when an inappropriate metric is chosen. Prompt Learning (PL) has extended its success in few-shot learning to RC tasks. Studies have investigated using PL to enhance a model's capacity to comprehend and learn from textual content, including augmenting sample data with prompt templates so that the model can learn effectively from a small amount of labeled data. This study presents a comprehensive overview of the latest research advances in low-resource RC and summarizes few-shot RC techniques based on prompt learning with pre-trained language models. Finally, the present challenges in this line of research are examined, and future directions for few-shot RC based on prompt learning are envisioned.


Published in:
ACAI '23: Proceedings of the 2023 6th International Conference on Algorithms, Computing and Artificial Intelligence
December 2023, 371 pages
ISBN: 9798400709203
DOI: 10.1145/3639631
Copyright © 2023 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Overall acceptance rate: 173 of 395 submissions (44%)