Self-Supervised Task Augmentation for Few-Shot Intent Detection

  • Regular Paper

Journal of Computer Science and Technology

Abstract

Few-shot intent detection is a practically challenging task, because new intents emerge frequently and collecting large-scale data for them can be costly. Meta-learning, a promising technique for leveraging data from previous tasks to enable the efficient learning of new tasks, has been a popular way to tackle this problem. However, existing meta-learning models have been shown to overfit when the meta-training tasks are insufficient. To overcome this challenge, we present STAM, a novel framework that combines self-supervised task augmentation with meta-learning. First, we introduce task augmentation, which explores two different strategies and combines them to extend the meta-training tasks. Second, we devise two auxiliary losses that integrate self-supervised learning into meta-learning to learn more generalizable and transferable features. Experimental results show that STAM achieves consistent and considerable performance improvements over existing state-of-the-art methods on four datasets.
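
To make the two ingredients in the abstract concrete, the following is a minimal PyTorch sketch of episodic meta-training combined with a task-augmentation step and a self-supervised auxiliary loss. It is an illustration under stated assumptions, not the authors' implementation: the prototypical-network episode loss, the dropout-perturbed second view (a cheap stand-in for text-level augmentations such as back-translation), the SimCLR-style InfoNCE term, and all names (`Encoder`, `episode_loss`, `contrastive_loss`, `train_step`) are hypothetical choices.

```python
# Hypothetical sketch: episodic meta-training + a contrastive auxiliary loss.
import torch
import torch.nn.functional as F

class Encoder(torch.nn.Module):
    """Stand-in sentence encoder; a real system would embed utterances with BERT."""
    def __init__(self, in_dim=768, out_dim=128):
        super().__init__()
        self.proj = torch.nn.Sequential(
            torch.nn.Linear(in_dim, out_dim),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=0.1),  # dropout also yields stochastic views
            torch.nn.Linear(out_dim, out_dim),
        )

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)

def episode_loss(encoder, support, support_y, query, query_y, n_way):
    """Prototypical-network loss for one N-way episode (an assumed meta-learner)."""
    z_s, z_q = encoder(support), encoder(query)
    # Class prototypes: mean support embedding per intent class.
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    logits = -torch.cdist(z_q, protos)  # negative distance as similarity
    return F.cross_entropy(logits, query_y)

def contrastive_loss(z1, z2, tau=0.07):
    """InfoNCE between two views: each example's positive is its other view."""
    logits = (z1 @ z2.t()) / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def train_step(encoder, opt, support, support_y, query, query_y, n_way, lam=0.1):
    # Two stochastic forward passes (dropout views) of the same utterances give
    # a perturbed copy of the task and a positive pair for the auxiliary loss.
    z1, z2 = encoder(query), encoder(query)
    loss = episode_loss(encoder, support, support_y, query, query_y, n_way) \
        + lam * contrastive_loss(z1, z2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

A smoke test under the same assumptions, with random vectors standing in for 768-dimensional BERT embeddings of a 5-way, 5-shot episode:

```python
enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
support, query = torch.randn(25, 768), torch.randn(25, 768)
labels = torch.arange(5).repeat_interleave(5)  # 5 intents x 5 utterances each
print(train_step(enc, opt, support, labels, query, labels, n_way=5))
```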

Author information

Corresponding author

Correspondence to Xin-Yu Dai.

Supplementary Information

ESM 1 (PDF 96 kb)


About this article

Cite this article

Sun, PF., Ouyang, YW., Song, DJ. et al. Self-Supervised Task Augmentation for Few-Shot Intent Detection. J. Comput. Sci. Technol. 37, 527–538 (2022). https://doi.org/10.1007/s11390-022-2029-5
