
Semi-supervised slot tagging in spoken language understanding using recurrent transductive support vector machines


Abstract:

In this paper, we propose a recurrent transductive support vector machine (RT-SVM) for semi-supervised slot tagging. Taking advantage of the superior sequence representation capability of recurrent neural networks (RNNs) and the semi-supervised learning capability of transductive support vector machines (TSVMs), the RT-SVM stacks a TSVM on top of an RNN. The performance of the traditional TSVM is sensitive to the regularization weight on unlabeled data in semi-supervised learning, and in practice a suitable weight is difficult to determine. To make RT-SVM semi-supervised learning robust, we propose a confident subset regularization method that enforces that the new decision boundaries learned from unlabeled data do not separate the confident clusters learned from labeled data. Experiments on two datasets show that, without using unlabeled data, the supervised version of the RT-SVM achieves significant F1 score improvement over previous best methods. By taking unlabeled data into account, the semi-supervised RT-SVM obtains a significant further improvement over its supervised counterpart.
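To make the sensitivity discussed above concrete, here is a minimal sketch of a linear TSVM-style objective in which unlabeled examples contribute a hinge-loss term weighted by a coefficient `C_u` (the unlabeled-data regularization weight the abstract describes as hard to tune). The function name and the use of current predictions as pseudo-labels are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def tsvm_objective(w, X_l, y_l, X_u, C_l=1.0, C_u=0.1):
    """Simplified transductive SVM objective (linear model, no bias).

    Labeled term: standard hinge loss on (X_l, y_l).
    Unlabeled term: hinge loss on X_u using the model's own predicted
    labels sign(w . x) as pseudo-labels. C_u is the unlabeled-data
    regularization weight; small changes to it can move the decision
    boundary, which motivates the paper's confident subset regularization.
    """
    # Hinge loss on labeled data: max(0, 1 - y * (w . x))
    margin_l = y_l * (X_l @ w)
    labeled_loss = np.maximum(0.0, 1.0 - margin_l).sum()

    # Pseudo-labels for unlabeled data from the current decision function.
    pseudo = np.sign(X_u @ w)
    pseudo[pseudo == 0] = 1.0  # break ties arbitrarily
    margin_u = pseudo * (X_u @ w)
    unlabeled_loss = np.maximum(0.0, 1.0 - margin_u).sum()

    # Margin regularizer plus weighted labeled and unlabeled hinge terms.
    return 0.5 * (w @ w) + C_l * labeled_loss + C_u * unlabeled_loss

# Tiny illustrative call: two well-separated labeled points and one
# low-confidence unlabeled point near the boundary.
w = np.array([1.0, 0.0])
X_l = np.array([[2.0, 0.0], [-2.0, 0.0]])
y_l = np.array([1.0, -1.0])
X_u = np.array([[0.5, 0.0]])
obj = tsvm_objective(w, X_l, y_l, X_u, C_l=1.0, C_u=0.1)
```

In this toy setting the labeled points incur no hinge loss, so the objective is driven entirely by the regularizer and the unlabeled term scaled by `C_u`, which is exactly the knob the confident subset regularization method aims to make less brittle.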
Date of Conference: 13-17 December 2015
Date Added to IEEE Xplore: 11 February 2016
Conference Location: Scottsdale, AZ, USA
