ABSTRACT
Hidden Markov Models (HMMs), well known from speech recognition, are also widely used in other areas, for example to predict equipment life cycles and optimize maintenance. Problems of this type offer only a very limited and fragmented set of observable data, as well as limited information on the possible states of the system. This article proposes a strategy for organizing parallel HMM learning, implemented efficiently with OpenCL on GPU devices. The originality of the approach lies in the parallel implementation of a learning algorithm for a model with an indefinite number of states and heterogeneous observed data: sometimes only the observed signal is available, and sometimes the state of the system is known. The code presented in this article is parallelized across several GPU devices.
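To make the notion of heterogeneous observations concrete, the following minimal sketch (not the article's OpenCL code; all names and the toy two-state model are illustrative) shows a forward pass for an HMM where each time step carries either an observed symbol or a directly known hidden state; when the state is known, the probability mass is clamped to that state instead of being weighted by an emission probability.

```python
def forward_heterogeneous(A, B, pi, seq):
    """Likelihood of a heterogeneous sequence under a discrete HMM.

    A: transition probabilities A[i][j]
    B: emission probabilities B[i][o]
    pi: initial state probabilities
    seq: list of ('obs', symbol) or ('state', known_state) entries
    """
    n = len(pi)
    # Initialization at t = 0
    kind, val = seq[0]
    if kind == 'obs':
        alpha = [pi[i] * B[i][val] for i in range(n)]
    else:  # state is known: all mass sits on that state
        alpha = [pi[i] if i == val else 0.0 for i in range(n)]
    # Induction over the remaining time steps
    for kind, val in seq[1:]:
        prev = alpha
        alpha = []
        for j in range(n):
            trans = sum(prev[i] * A[i][j] for i in range(n))
            if kind == 'obs':
                alpha.append(trans * B[j][val])
            else:  # clamp to the known state
                alpha.append(trans if j == val else 0.0)
    return sum(alpha)

# Toy two-state, two-symbol model
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.6, 0.4]
print(forward_heterogeneous(A, B, pi, [('obs', 0), ('state', 1), ('obs', 1)]))
```

The per-state sums in the induction step are independent, which is what makes the recursion amenable to the kind of GPU parallelization the article describes.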
Index Terms
- Parallel Algorithm for a Hidden Markov Model with an Indefinite Number of States and Heterogeneous Observation Data