Training Augmented Models Using SVMs

Mark J.F. GALES
Martin I. LAYTON

Publication
IEICE TRANSACTIONS on Information and Systems, Vol. E89-D, No. 3, pp. 892-899
Publication Date: 2006/03/01
Online ISSN: 1745-1361
DOI: 10.1093/ietisy/e89-d.3.892
Print ISSN: 0916-8532
Type of Manuscript: Special Section INVITED PAPER (Special Section on Statistical Modeling for Speech Processing)
Keyword: speech recognition, hidden Markov models, support vector machines, augmented statistical models

Summary: 
There has been significant interest in developing new forms of acoustic model, in particular models that allow additional dependencies to be represented beyond those contained within a standard hidden Markov model (HMM). This paper discusses one such class of models, augmented statistical models. Here, a local exponential approximation is made about some point on a base model. This allows dependencies within the data beyond those represented in the base distribution to be modelled. Augmented models based on Gaussian mixture models (GMMs) and HMMs are briefly described. These augmented models are then related to generative kernels, one approach used for allowing support vector machines (SVMs) to be applied to variable-length data. The training of augmented statistical models within an SVM, generative-kernel, framework is then discussed. This may be viewed as using maximum margin training to estimate statistical models. Augmented Gaussian mixture models are then evaluated using rescoring on a large vocabulary speech recognition task.
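
The generative-kernel idea summarised above can be illustrated with a small sketch. The code below is not the paper's implementation: it is a minimal example, assuming scikit-learn's GaussianMixture and SVC, that maps each variable-length sequence to a fixed-length score-space vector built from a GMM base model (the sequence log-likelihood plus derivatives with respect to the component means only, a common simplification), and then trains a linear SVM in that space, which corresponds to maximum-margin estimation of first-order augmented parameters. All function and variable names here are illustrative assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    def score_space_features(gmm, seq):
        # Fixed-length score-space feature for a variable-length sequence:
        # [ log p(seq) ; d/d(mu_k) log p(seq) for each component k ].
        # Derivatives are taken w.r.t. the component means only, assuming
        # a diagonal-covariance GMM (covariance_type='diag').
        gamma = gmm.predict_proba(seq)              # (T, K) component posteriors
        log_lik = gmm.score_samples(seq).sum()      # sequence log-likelihood
        # d log p(o_t) / d mu_k = gamma_tk * Sigma_k^{-1} (o_t - mu_k)
        diffs = seq[:, None, :] - gmm.means_[None, :, :]                   # (T, K, D)
        grads = (gamma[:, :, None] * diffs / gmm.covariances_[None, :, :]).sum(axis=0)
        return np.concatenate([[log_lik], grads.ravel()])

    # Illustrative usage: train_seqs is a list of (T_i, D) arrays, labels in {0, 1}.
    # gmm = GaussianMixture(n_components=4, covariance_type='diag', random_state=0)
    # gmm.fit(np.vstack(train_seqs))
    # X = np.array([score_space_features(gmm, s) for s in train_seqs])
    # svm = SVC(kernel='linear').fit(X, labels)    # maximum-margin training in score space

In this sketch the linear SVM weights on the derivative features play the role of the augmented parameters; the paper's full framework (including HMM base models and normalisation of the augmented distribution) is more general than what is shown here.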

