
Hidden Markov model

A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobservable (i.e. hidden) states. The hidden Markov model can be represented as the simplest dynamic Bayesian network. The mathematics behind the HMM were developed by L. E. Baum and coworkers. The HMM is closely related to earlier work on the optimal nonlinear filtering problem by Ruslan L. Stratonovich, who was the first to describe the forward-backward procedure.

In simpler Markov models (like a Markov chain), the state is directly visible to the observer, so the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but the output (in the form of data or "tokens" in what follows), which depends on the state, is visible. Each state has a probability distribution over the possible output tokens. Therefore, the sequence of tokens generated by an HMM gives some information about the sequence of states; this is also known as pattern theory, a topic of grammar induction. The adjective "hidden" refers to the state sequence through which the model passes, not to the parameters of the model; the model is still referred to as a hidden Markov model even if these parameters are known exactly.

Hidden Markov models are especially known for their applications in reinforcement learning and temporal pattern recognition such as speech, handwriting, and gesture recognition, part-of-speech tagging, musical score following, partial discharges, and bioinformatics. A hidden Markov model can be considered a generalization of a mixture model in which the hidden (or latent) variables, which control the mixture component selected for each observation, are related through a Markov process rather than independent of each other.
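The generative view described above can be sketched in a few lines: hidden states evolve by a Markov chain, and each state emits a visible token from its own output distribution. The state names, tokens, and probabilities below are illustrative toy values, not taken from the text.

```python
import random

# Toy HMM (hypothetical parameters): hidden weather states emit visible
# activity tokens; an observer sees only the tokens, never the states.
STATES = ["Rainy", "Sunny"]
TOKENS = ["walk", "shop", "clean"]

START = {"Rainy": 0.6, "Sunny": 0.4}                        # initial state distribution
TRANS = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},             # state transition probabilities
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
EMIT  = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5}, # per-state distribution
         "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}} # over output tokens

def draw(dist):
    """Sample a key from a {key: probability} dict."""
    r, acc = random.random(), 0.0
    for k, p in dist.items():
        acc += p
        if r < acc:
            return k
    return k  # guard against floating-point rounding

def generate(n):
    """Generate n (hidden state, visible token) pairs from the HMM."""
    state = draw(START)
    pairs = []
    for _ in range(n):
        pairs.append((state, draw(EMIT[state])))  # emit token from current state
        state = draw(TRANS[state])                # then take a Markov step
    return pairs
```

Dropping the first element of each pair gives the token sequence an observer would actually see; recovering the most likely hidden sequence from it is the decoding problem solved by the Viterbi algorithm.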
More recently, hidden Markov models have been generalized to pairwise Markov models and triplet Markov models, which allow consideration of more complex data structures and the modeling of nonstationary data.

Formally, let X_n and Y_n be discrete-time stochastic processes with n ≥ 1. The pair (X_n, Y_n) is a hidden Markov model if:

- X_n is a Markov process whose behavior is not directly observable ("hidden"), and
- P(Y_n ∈ A | X_1 = x_1, …, X_n = x_n) = P(Y_n ∈ A | X_n = x_n)

for every n ≥ 1, every x_1, …, x_n, and every measurable set A.
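The forward pass of the forward-backward procedure mentioned above exploits exactly this conditional structure: the probability of an observation sequence can be computed by marginalizing over hidden states one step at a time. The two-state parameters below are hypothetical, chosen only to make the sketch self-contained.

```python
# Forward-pass sketch on a hypothetical two-state, two-token HMM.
START = {"A": 0.5, "B": 0.5}
TRANS = {"A": {"A": 0.9, "B": 0.1}, "B": {"A": 0.2, "B": 0.8}}
EMIT  = {"A": {"x": 0.7, "y": 0.3}, "B": {"x": 0.1, "y": 0.9}}

def likelihood(obs):
    """Return P(Y_1..Y_n = obs), summing over all hidden state paths.

    alpha[s] holds P(Y_1..Y_i = obs[:i], X_i = s); the Markov and
    emission assumptions let us update it with one matrix-style step
    per observation instead of enumerating all state sequences.
    """
    alpha = {s: START[s] * EMIT[s][obs[0]] for s in START}
    for y in obs[1:]:
        alpha = {s: EMIT[s][y] * sum(alpha[r] * TRANS[r][s] for r in START)
                 for s in START}
    return sum(alpha.values())
```

As a sanity check, the likelihoods of all token sequences of a fixed length sum to 1, since the recursion just re-expresses the joint distribution defined by the model.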
