What HMMs Can Do

Jeff A. BILMES

Publication
IEICE TRANSACTIONS on Information and Systems, Vol. E89-D, No. 3, pp. 869-891
Publication Date: 2006/03/01
Online ISSN: 1745-1361
DOI: 10.1093/ietisy/e89-d.3.869
Print ISSN: 0916-8532
Type of Manuscript: Special Section INVITED PAPER (Special Section on Statistical Modeling for Speech Processing)
Keywords: automatic speech recognition, hidden Markov models, HMMs, time-series processes, handwriting recognition, graphical models, dynamic Bayesian networks, dynamic graphical models, stochastic processes, time-series densities, bioinformatics




Summary: 
Since their inception almost fifty years ago, hidden Markov models (HMMs) have become the predominant methodology for automatic speech recognition (ASR) systems; today, most state-of-the-art speech systems are HMM-based. HMMs can be explained, and their capabilities enumerated, in a number of ways, each with its own advantages and disadvantages. In an effort to better understand what HMMs can do, this tutorial article analyzes HMMs by exploring a definition of HMMs in terms of random variables and conditional independence assumptions. We prefer this definition because it allows us to reason more thoroughly about the capabilities of HMMs. In particular, it is possible to deduce that there are, in theory at least, no limitations on the class of probability distributions representable by HMMs. The paper concludes that, in the search for a model to supersede the HMM (say, for ASR), rather than trying to correct for HMM limitations in the general case, new models should be sought on the basis of their potential for greater parsimony, lower computational requirements, and insensitivity to noise.
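For concreteness, here is a minimal sketch, in our own notation rather than necessarily the paper's, of the kind of definition the summary refers to: with a hidden state sequence $q_{1:T}$ and an observation sequence $x_{1:T}$, the two standard conditional independence assumptions (each state depends only on the previous state, and each observation depends only on the current state) give the joint factorization

\[
p(x_{1:T}, q_{1:T}) = p(q_1)\, p(x_1 \mid q_1) \prod_{t=2}^{T} p(q_t \mid q_{t-1})\, p(x_t \mid q_t).
\]

Marginalizing over all state sequences yields the observation distribution $p(x_{1:T}) = \sum_{q_{1:T}} p(x_{1:T}, q_{1:T})$; the representability claim above is that, in theory and given sufficiently many hidden states, this family of distributions is unrestricted.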

