
Real-Time Information Extraction for Phone Review in Car Loan Audit

  • Conference paper

Database Systems for Advanced Applications (DASFAA 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13946)

Abstract

Phone review is an important step in car loan audits: auditors contact applicants and assess risk based on how the applicants respond to a sequence of questions. Because the dialogues are long, auditors tend to miss important details, which calls for an aiding system that records the dialogues in a compact form. Existing methods that use slot-value pairs to track the latest dialogue state fail to record the intermediate process, which is critical for risk assessment. In this paper, we propose quadruples, each consisting of a dialogue act and a triple in a concept graph, to represent the dialogue process, and we model the dialogue recording task as a quadruple extraction problem for each utterance. To construct quadruples concisely, we convert slot-value pairs into a concept graph by disentangling domains from slots. To extract quadruples in real time, we design a model that incorporates a multi-head cross-attention mechanism and embedding sharing while keeping the parameter count and inference latency low. Experiments on our real-world dialogue dataset show that our model achieves an accuracy of ~82.7%, comparable to the best baseline, with only ~30 M parameters, while performing real-time inference ~3.6 times faster at ~90 ms per utterance on an 8-core CPU.
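The quadruple representation and the domain/slot disentangling described in the abstract can be illustrated with a minimal sketch. All names and example values below (e.g. `Quadruple`, `slot_to_triple`, the "applicant-monthly_income" slot) are hypothetical illustrations, not the paper's actual schema or code:

```python
# Hypothetical sketch: each utterance is recorded as a quadruple
# (dialogue_act, head, relation, tail), where (head, relation, tail)
# is a triple in a concept graph.
from typing import NamedTuple, Tuple

class Quadruple(NamedTuple):
    dialogue_act: str   # e.g. "inform", "confirm", "deny"
    head: str           # concept-graph head node (the domain), e.g. "applicant"
    relation: str       # edge label (the slot), e.g. "monthly_income"
    tail: str           # value node, e.g. "8000 CNY"

def slot_to_triple(domain_slot: str, value: str) -> Tuple[str, str, str]:
    """Disentangle a flat 'domain-slot' name into a concept-graph triple.

    A slot-value pair such as ("applicant-monthly_income", "8000 CNY")
    becomes the triple ("applicant", "monthly_income", "8000 CNY"),
    separating the domain (head node) from the slot (relation).
    """
    domain, slot = domain_slot.split("-", 1)
    return (domain, slot, value)

# A short dialogue fragment recorded as one quadruple per utterance,
# so the intermediate process (not just the latest state) is preserved.
record = [
    Quadruple("inform", *slot_to_triple("applicant-monthly_income", "8000 CNY")),
    Quadruple("confirm", *slot_to_triple("loan-term", "36 months")),
]

for q in record:
    print(q.dialogue_act, q.head, q.relation, q.tail)
```

Unlike tracking only the latest slot values, appending one quadruple per utterance keeps the full sequence of acts (inform, confirm, deny, ...), which is what the paper argues matters for risk assessment.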



Acknowledgements

This research was supported by Chery HuiYin Motor Finance Service Co., Ltd., and in part by National Natural Science Foundation of China grants U19B2026, 62021001, 61836011, and 61836006, and by Fundamental Research Funds for the Central Universities grant WK3490000004.

Author information

Correspondence to Jie Wang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, H., Wang, J., Wang, Y., Yang, S., Chen, H., Fang, B. (2023). Real-Time Information Extraction for Phone Review in Car Loan Audit. In: Wang, X., et al. Database Systems for Advanced Applications. DASFAA 2023. Lecture Notes in Computer Science, vol 13946. Springer, Cham. https://doi.org/10.1007/978-3-031-30678-5_47


  • DOI: https://doi.org/10.1007/978-3-031-30678-5_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30677-8

  • Online ISBN: 978-3-031-30678-5

  • eBook Packages: Computer Science (R0)
