ISCA Archive Interspeech 2018

User Information Augmented Semantic Frame Parsing Using Progressive Neural Networks

Yilin Shen, Xiangyu Zeng, Yu Wang, Hongxia Jin

Semantic frame parsing is a crucial component in spoken language understanding (SLU) for building spoken dialog systems. It has two main tasks: intent detection and slot filling. Although state-of-the-art approaches have shown good results, they require large amounts of annotated training data and long training time. In this paper, we aim to alleviate these drawbacks for semantic frame parsing by utilizing ubiquitous user information. We design a novel progressive deep neural network model that incorporates prior knowledge from user information as intermediate representations to train a semantic frame parser better and faster. Due to the lack of a benchmark dataset with real user information, we synthesize the simplest type of user information (location and time) on the ATIS benchmark data. The results show that our approach leverages such simple user information to outperform state-of-the-art approaches by 0.25% for intent detection and 0.31% for slot filling using the standard training data. With smaller training data, the performance improvement on intent detection and slot filling reaches up to 1.35% and 1.20%, respectively. We also show that our approach achieves performance similar to state-of-the-art approaches using less than 80% of the annotated training data. Moreover, the training time needed to reach this performance is reduced by over 60%.
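
As a rough illustration only (not the authors' exact architecture), the sketch below shows a progressive-network-style parser in PyTorch: a frozen column encodes synthesized user information (e.g., location and time features) and feeds a lateral connection into a new utterance column that jointly predicts an utterance-level intent and per-token slot labels. All layer sizes, feature dimensions, and module names here are illustrative assumptions.

import torch
import torch.nn as nn

class UserInfoColumn(nn.Module):
    # Hypothetical pre-trained column encoding user information; its parameters stay frozen.
    def __init__(self, user_feat_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(user_feat_dim, hidden), nn.ReLU())

    def forward(self, user_feats):            # (batch, user_feat_dim)
        return self.net(user_feats)           # (batch, hidden)

class UtteranceColumn(nn.Module):
    # New column for the utterance, with a lateral connection from the frozen user column.
    def __init__(self, vocab_size=1000, emb=100, hidden=64, n_intents=18, n_slots=127):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.lateral = nn.Linear(hidden, 2 * hidden)       # adapter from the user column
        self.intent_out = nn.Linear(2 * hidden, n_intents)
        self.slot_out = nn.Linear(2 * hidden, n_slots)

    def forward(self, tokens, user_hidden):
        h, _ = self.rnn(self.emb(tokens))                  # (batch, seq, 2*hidden)
        h = h + self.lateral(user_hidden).unsqueeze(1)     # inject user info laterally
        intent_logits = self.intent_out(h.mean(dim=1))     # utterance-level intent
        slot_logits = self.slot_out(h)                     # per-token slot labels
        return intent_logits, slot_logits

user_col = UserInfoColumn()
for p in user_col.parameters():                            # freeze the prior-knowledge column
    p.requires_grad = False
utt_col = UtteranceColumn()

tokens = torch.randint(0, 1000, (2, 12))                   # toy batch: 2 utterances, 12 tokens
user_feats = torch.randn(2, 8)                             # toy location/time features
intent_logits, slot_logits = utt_col(tokens, user_col(user_feats))
print(intent_logits.shape, slot_logits.shape)              # (2, 18) and (2, 12, 127)

In this sketch, only the utterance column is updated during training, so the user-information column acts as fixed prior knowledge, mirroring the progressive-network idea of reusing earlier columns through lateral connections.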


doi: 10.21437/Interspeech.2018-1149

Cite as: Shen, Y., Zeng, X., Wang, Y., Jin, H. (2018) User Information Augmented Semantic Frame Parsing Using Progressive Neural Networks. Proc. Interspeech 2018, 3464-3468, doi: 10.21437/Interspeech.2018-1149

@inproceedings{shen18_interspeech,
  author={Yilin Shen and Xiangyu Zeng and Yu Wang and Hongxia Jin},
  title={{User Information Augmented Semantic Frame Parsing Using Progressive Neural Networks}},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={3464--3468},
  doi={10.21437/Interspeech.2018-1149}
}