
An Early Prediction and Label Smoothing Alignment Strategy for User Intent Classification of Medical Queries

  • Conference paper
  • First Online:

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1637))

Abstract

Deep learning models such as RoBERTa and Bi-LSTM are widely used in user intent classification tasks. In the medical field, however, recognizing user intents is difficult due to the complexity of medical query expressions and domain-specific terminology. In this paper, an alignment strategy based on early prediction and label smoothing, named EP-LSA, is proposed to classify user intents of medical text queries. The EP-LSA strategy uses the Chinese pre-trained model RoBERTa to encode sentence features with rich semantic information, makes early predictions from the Bi-LSTM features in an RCNN, and aligns them with the output features. The early knowledge from early prediction is processed with a cross-entropy loss incorporating label smoothing, which injects random information into the early knowledge and helps the strategy extract more fine-grained features related to intent labels. Evaluation was performed on two publicly available datasets, KUAKE and CMID. The results show that the proposed EP-LSA strategy outperforms baseline methods, demonstrating its effectiveness.
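The label-smoothing component of the loss described above can be sketched as follows. This is a minimal NumPy illustration of label-smoothed cross-entropy, not the authors' implementation; the function names and the smoothing factor `eps` are assumptions for illustration only:

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """One-hot targets with label smoothing: the true class receives
    1 - eps, and the probability mass eps is spread uniformly over
    all classes."""
    onehot = np.eye(num_classes)[labels]
    return onehot * (1.0 - eps) + eps / num_classes

def cross_entropy(logits, targets):
    """Mean cross-entropy between softmax(logits) and the (smoothed)
    target distribution, computed with a numerically stable log-softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(targets * log_probs).sum(axis=1).mean()
```

Compared with hard one-hot targets, the smoothed targets penalize over-confident predictions, which is the regularizing effect the strategy exploits on the early-prediction branch.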



Acknowledgements

This work was supported by Natural Science Foundation of Guangdong Province (2021A1515011339).


Corresponding author

Correspondence to Tianyong Hao.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Luo, Y., Huang, Z., Wong, LP., Zhan, C., Wang, F.L., Hao, T. (2022). An Early Prediction and Label Smoothing Alignment Strategy for User Intent Classification of Medical Queries. In: Zhang, H., et al. Neural Computing for Advanced Applications. NCAA 2022. Communications in Computer and Information Science, vol 1637. Springer, Singapore. https://doi.org/10.1007/978-981-19-6142-7_9


  • DOI: https://doi.org/10.1007/978-981-19-6142-7_9

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-6141-0

  • Online ISBN: 978-981-19-6142-7

  • eBook Packages: Computer Science, Computer Science (R0)
