Research article. DOI: 10.1145/3459637.3482280

Multi-view Interaction Learning for Few-Shot Relation Classification

Published: 30 October 2021

Abstract

Conventional deep learning-based Relation Classification (RC) methods rely heavily on large-scale training datasets and fail to generalize to unseen classes when training data is scarce. This work concentrates on RC in few-shot scenarios, in which models classify unlabelled samples given only a few labeled samples. Existing few-shot RC models treat the dataset as a series of individual instances and do not fully utilize the interaction information among them, even though such information helps indicate the important areas of an instance and produces discriminating representations. This paper therefore proposes a novel interactive attention network (IAN) that uses inter-instance and intra-instance interactive information to classify relations. Inter-instance interactive information is first introduced to alleviate the low-resource problem by capturing the semantic relevance between an instance pair. Intra-instance interactive information is then introduced to address the ambiguous relation classification issue by extracting the entity information within an instance. Extensive experimental results demonstrate that the proposed method improves the accuracy of the downstream task.
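
As an informal aid to the abstract (the paper's code and exact attention formulation are not reproduced on this page), the following is a minimal PyTorch sketch of the two kinds of interaction described above. The class name ToyInteractiveAttentionRC, the hidden size, and the use of nn.MultiheadAttention, mean prototypes, and Euclidean scoring are all assumptions for illustration, not the authors' IAN implementation. Intra-instance interaction is approximated by attention-weighted pooling over the tokens of each sentence, and inter-instance interaction by cross-attention of query instances over the whole support set before prototype matching in an N-way K-shot episode.

    # Illustrative sketch only -- not the authors' IAN code.
    import torch
    import torch.nn as nn

    class ToyInteractiveAttentionRC(nn.Module):
        def __init__(self, hidden=128, heads=4):
            super().__init__()
            self.token_scorer = nn.Linear(hidden, 1)           # intra-instance token scorer
            self.cross_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)

        def pool(self, tokens):                                # (B, T, H) -> (B, H)
            # intra-instance interaction: attention-weighted pooling over tokens,
            # intended to emphasize entity-related positions inside a sentence
            weights = torch.softmax(self.token_scorer(tokens), dim=1)
            return (weights * tokens).sum(dim=1)

        def forward(self, support_tokens, query_tokens, n_way, k_shot):
            # support_tokens: (N*K, T, H) encoder outputs; query_tokens: (Q, T, H)
            s = self.pool(support_tokens)                      # (N*K, H)
            q = self.pool(query_tokens)                        # (Q, H)
            # inter-instance interaction: each query attends over all support instances
            q_ctx, _ = self.cross_attn(q.unsqueeze(0), s.unsqueeze(0), s.unsqueeze(0))
            q_ctx = q_ctx.squeeze(0)                           # (Q, H)
            protos = s.view(n_way, k_shot, -1).mean(dim=1)     # (N, H) class prototypes
            return -torch.cdist(q_ctx, protos)                 # (Q, N) logits

    # Example 5-way 1-shot episode with random "encoder outputs" (T=32 tokens, H=128)
    model = ToyInteractiveAttentionRC()
    logits = model(torch.randn(5, 32, 128), torch.randn(10, 32, 128), n_way=5, k_shot=1)
    print(logits.shape)  # torch.Size([10, 5])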

Supplementary Material

MP4 File (Presentation video for CIKM21-rgfp0401.mp4)
We propose a novel interactive attention network (IAN) that captures both intra-instance and inter-instance interactive information for recognizing the correct relations in few-shot relation classification tasks. This video presents our work in five parts: the background of the task, the motivations behind the model design, the model details, the experimental results, and the conclusion.




    Published In

    CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management
    October 2021
    4966 pages
    ISBN:9781450384469
    DOI:10.1145/3459637

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 30 October 2021


    Author Tags

    1. few-shot learning
    2. interaction information
    3. relation classification

    Qualifiers

    • Research-article

    Funding Sources

    • National Key Research & Development Program of China
    • National Natural Science Foundation of China

    Conference

    CIKM '21

    Acceptance Rates

    Overall Acceptance Rate 1,861 of 8,427 submissions, 22%



    Cited By

    • (2024) Adaptive class augmented prototype network for few-shot relation extraction. Neural Networks 169:C, 134-142. DOI: 10.1016/j.neunet.2023.10.025. Online publication date: 4-Mar-2024.
    • (2024) Sample Feature Enhancement Model Based on Heterogeneous Graph Representation Learning for Few-shot Relation Classification. Information Sciences, 121583. DOI: 10.1016/j.ins.2024.121583. Online publication date: Oct-2024.
    • (2023) Interaction Information Guided Prototype Representation Rectification for Few-Shot Relation Extraction. Electronics 12:13, 2912. DOI: 10.3390/electronics12132912. Online publication date: 3-Jul-2023.
    • (2023) Hybrid Enhancement-based prototypical networks for few-shot relation classification. World Wide Web 26:5, 3207-3226. DOI: 10.1007/s11280-023-01184-w. Online publication date: 3-Jul-2023.
    • (2023) Learning Discriminative Semantic and Multi-view Context for Domain Adaptive Few-Shot Relation Extraction. Neural Information Processing, 283-296. DOI: 10.1007/978-981-99-8184-7_22. Online publication date: 26-Nov-2023.
    • (2022) Taxonomy-Aware Prototypical Network for Few-Shot Relation Extraction. Mathematics 10:22, 4378. DOI: 10.3390/math10224378. Online publication date: 21-Nov-2022.
