HIDDEN MARKOV MODELS

OVERVIEW: Markov models; hidden Markov models (HMM); issues regarding HMMs; algorithmic approaches to the issues of HMMs.

MARKOV MODELS A Markov model is a finite state machine with N distinct states. It begins (at time t = 1) in an initial state and moves from the current state to the next state according to the transition probabilities associated with the current state. This kind of system is called a finite, or discrete, Markov model. A simulation sketch follows.
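
To make the definition concrete, here is a minimal sketch of a discrete Markov model in Python: a set of states, a transition matrix, and a random walk driven by it. The two-state chain and its probabilities are invented for illustration; they are not part of the slides.

```python
import random

# Hypothetical two-state Markov model (illustrative values only).
states = ["Sunny", "Rainy"]
# transition[s] gives P(next state | current state s).
transition = {
    "Sunny": {"Sunny": 0.8, "Rainy": 0.2},
    "Rainy": {"Sunny": 0.4, "Rainy": 0.6},
}

def simulate(start, n_steps):
    """Begin at time t = 1 in `start`; move by the current state's transition probabilities."""
    path = [start]
    for _ in range(n_steps - 1):
        probs = transition[path[-1]]
        path.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return path

print(simulate("Sunny", 10))
```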

HIDDEN MARKOV MODELS A hidden Markov model is a statistical model in which the system being modelled is assumed to be a Markov process with unobserved (hidden) states. In a regular Markov model the state is directly visible to the observer, so the state transition probabilities are the only parameters, whereas in an HMM the state is not visible but the output is.

HIDDEN MARKOV MODELS - HMM [Diagram: a chain of hidden variables H1, H2, …, Hi, …, HL-1, HL, each emitting one item of the observed data X1, X2, …, Xi, …, XL-1, XL]

DESCRIPTION Formally, an HMM is defined as a model M = (∑, Q, A, E). ∑ is an alphabet of symbols. Q is a set of states; each state emits symbols from the alphabet ∑. A = (ak, l) is a |Q| × |Q| matrix describing the probability of moving to state l when the HMM is in state k. E = (ek(b)) is a |Q| × |∑| matrix describing the probability of emitting the symbol b during a step in which the HMM is in state k.
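
One possible way to encode M = (∑, Q, A, E) as plain Python data, sketched here using the Fair Bet Casino model of the following slides. The 0.9/0.1 transition values are an inference from the 9/10 factors that appear on a later slide; they are not stated on this one.

```python
SIGMA = ["H", "T"]                  # Sigma: the alphabet of emitted symbols
Q = ["F", "B"]                      # Q: the hidden states (fair, biased)
A = {"F": {"F": 0.9, "B": 0.1},     # A = (a_kl): Q x Q transition probabilities
     "B": {"F": 0.1, "B": 0.9}}
E = {"F": {"H": 0.5, "T": 0.5},     # E = (e_k(b)): Q x Sigma emission probabilities
     "B": {"H": 0.75, "T": 0.25}}

# Each row of A and E is a probability distribution and must sum to 1.
assert all(abs(sum(row.values()) - 1) < 1e-9 for row in list(A.values()) + list(E.values()))
```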

FAIR BET CASINO PROBLEM Given a sequence x1, x2, …, xn of coin tosses (each heads or tails) made with two possible coins (F = fair or B = biased) as input, we need to find a sequence π = π1, π2, …, πn, with each πi being either F or B, indicating whether xi is the result of tossing the fair or the biased coin.

EXPLANATION The above problem is ambiguous because any sequence of coins, such as FFFF… or BBBB…, could have generated the observations. We need a way to grade different coin sequences, so this ill-defined problem is converted into a decoding problem in the HMM paradigm.

FAIR BET CASINO PROBLEM This is a sample HMM designed for the Fair Bet Casino problem. There are two states, F (fair) and B (biased). Each state can emit either heads (H) or tails (T), with the probabilities given on the next slide.

For the above problem we define the parameters as follows: the probability of getting heads or tails with the fair coin is 0.5, while with the biased coin the probability of heads is 0.75 and of tails is 0.25. If the resulting sequence of tosses is X = x1, x2, …, xn, the probability that X was generated by the fair coin is P(X | fair coin) = ∏i=1..n p(xi) = 1/2^n, and P(X | biased coin) = 3^k / 4^n, where k is the number of heads in X.

If P(X | fair coin) > P(X | biased coin), then the dealer most likely used a fair coin. If P(X | fair coin) < P(X | biased coin), then the dealer most likely used a biased coin. The two probabilities are equal when k = n / log2(3); if k < n / log2(3) the dealer most likely used the fair coin, otherwise the biased coin. This follows from log2(P(X | fair coin) / P(X | biased coin)) = n - k log2(3).
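
The same decision rule in code; the threshold k = n / log2(3) follows directly from the log-odds identity above.

```python
import math

def likely_coin(x):
    # The fair coin is more likely exactly when k < n / log2(3).
    n, k = len(x), x.count("H")
    return "fair" if k < n / math.log2(3) else "biased"

print(likely_coin("HHTHHHTHHHT"))  # k = 8 > 11 / log2(3) ~ 6.94, so "biased"
```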

A path π = π1, π2, …, πn is the sequence of states. If the dealer uses the fair coin for the first three and the last three tosses and the biased coin in between, the corresponding π would be FFFBBBBBFFF; suppose the resulting sequence of tosses is 01011101001 (1 = heads, 0 = tails). The probability of xi being generated by πi is defined on the next slide.

We write P(xi | πi) for the probability that symbol xi was emitted from state πi, and P(πi → πi+1) for the transition probability from state πi to state πi+1. The transition probabilities for this model are the entries of the matrix A; here the probability of staying with the same coin is 9/10 and of switching is 1/10, as used in the product on the next slide.

The probability of generating X through the path π is the product of the initial-state, emission, and transition probabilities along the path: (1/2 · 1/2)(1/2 · 9/10) … (1/2 · 9/10) ≈ 2.66 × 10^-6 for the path above. This probability should be maximal; if it is not, π is not the most probable path, and we need to select another sequence π with greater probability. Choosing π = FFFBBBFFFFF, for example, gives probability ≈ 3.54 × 10^-6.
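
The product can be checked directly. A sketch, assuming an initial-state probability of 1/2 and stay/switch probabilities of 0.9/0.1 (inferred from the 1/2 and 9/10 factors above); it reproduces both values quoted on this slide.

```python
A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {"1": 0.5, "0": 0.5}, "B": {"1": 0.75, "0": 0.25}}  # 1 = heads, 0 = tails

def path_probability(x, pi, start_p=0.5):
    # P(X | pi): initial term, then (transition x emission) factors.
    p = start_p * E[pi[0]][x[0]]
    for i in range(1, len(x)):
        p *= A[pi[i - 1]][pi[i]] * E[pi[i]][x[i]]
    return p

x = "01011101001"
print(path_probability(x, "FFFBBBBBFFF"))  # ~2.66e-06
print(path_probability(x, "FFFBBBFFFFF"))  # ~3.54e-06
```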

The probability that sequence X was generated by the path π, given the model M, is the product of the emission and transition probabilities along the path, P(X | π) = ∏i P(xi | πi) · P(πi → πi+1). This is not yet a solution: only the dealer knows the real sequence of states π that emitted X, so we say that π is hidden and attempt to solve the following decoding problem.

MAIN ISSUES Evaluation problem: given the HMM M = (∑, Q, A, E) and an observation sequence X = x1, x2, …, xk, calculate the probability that model M generated the sequence X. Decoding problem: given the HMM M = (∑, Q, A, E) and an observation sequence X = x1, x2, …, xk, calculate the most likely sequence of hidden states π that generated the sequence X.
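
The evaluation problem is classically solved with the forward algorithm, which sums over all hidden paths instead of maximizing. A minimal sketch under the same assumed casino model (uniform initial probabilities are an assumption):

```python
A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {"1": 0.5, "0": 0.5}, "B": {"1": 0.75, "0": 0.25}}

def forward(x, start_p=0.5):
    # f[l] = P(x_1..x_i, pi_i = l), updated one observation at a time.
    f = {l: start_p * E[l][x[0]] for l in A}
    for sym in x[1:]:
        f = {l: E[l][sym] * sum(f[k] * A[k][l] for k in A) for l in A}
    return sum(f.values())  # P(X | M), summed over all hidden paths

print(forward("01011101001"))
```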

SOLUTION TO DECODING PROBLEM Decoding problem: the Viterbi algorithm. In this algorithm we go through the observations from start to end, inferring a state of the hidden machine for each observation. We also record the overall probability, the Viterbi path (the sequence of states), and the Viterbi probability (the probability of the observed sequence along the Viterbi path). The probability of a possible step, given its corresponding observation, is the transition probability times the emission probability.

VITERBI ALGORITHM FOR DYNAMIC PROGRAMMING Overall probability: multiply each new probability by the old one and then add the alternatives together. Viterbi probability: take the highest next-step probability and multiply it by the current Viterbi probability. Viterbi path: append the chosen next step to the Viterbi path. A sketch is given below.
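
A sketch of the full algorithm for the casino HMM, recording for each state the best probability and best path seen so far; the model values are the ones assumed on the earlier slides.

```python
A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {"1": 0.5, "0": 0.5}, "B": {"1": 0.75, "0": 0.25}}

def viterbi(x, start_p=0.5):
    # best[l] = (probability, path) of the most likely path ending in state l.
    best = {l: (start_p * E[l][x[0]], l) for l in A}
    for sym in x[1:]:
        new = {}
        for l in A:
            # Choose the predecessor k that maximizes probability * transition.
            p, path = max((best[k][0] * A[k][l], best[k][1]) for k in A)
            new[l] = (p * E[l][sym], path + l)
        best = new
    return max(best.values())

prob, path = viterbi("01011101001")
print(path, prob)
```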

VITERBI ALGORITHM Here we use an HMM-inspired analog of the Manhattan grid for the decoding problem.

To calculate P(X | π) we set the edge weights in this graph so that the product of the edge weights along a path equals the probability of generating the sequence through that path. There are |Q|^2 (n - 1) edges in the graph, and the weight of the edge from (k, i) to (l, i+1) is given by el(xi+1) · ak, l. The probability sl, i+1 of the most likely path ending at a particular vertex (l, i+1) is calculated as sl, i+1 = el(xi+1) · max over all k in Q of (sk, i · ak, l).

The decoding problem is now reduced to finding the longest path in a directed acyclic graph (DAG), where the length of a path is defined as the product of its edge weights rather than the sum of weights used in typical dynamic programming algorithms. Applying logarithms to the edge weights turns the product into a sum and reduces the problem to the usual case. The probability of the most likely path ending at state k at position i is obtained by extending the most likely paths ending at each state at position i - 1.

The computations in the Viterbi algorithm are usually done using logarithmic scores Sk, i = log sk, i to avoid numerical underflow. The Viterbi algorithm is essentially a search through the space of all possible paths in that graph for the one that maximizes the value of P(X | π). A log-space sketch follows.
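
The same recurrence on log scores, a sketch of the substitution Sk, i = log sk, i: products become sums, so long sequences no longer underflow.

```python
import math

A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {"1": 0.5, "0": 0.5}, "B": {"1": 0.75, "0": 0.25}}

def viterbi_log_score(x, start_p=0.5):
    # S[l] = log of the best-path probability ending in state l.
    S = {l: math.log(start_p * E[l][x[0]]) for l in A}
    for sym in x[1:]:
        S = {l: math.log(E[l][sym]) + max(S[k] + math.log(A[k][l]) for k in A)
             for l in A}
    return max(S.values())  # log P(X | best path)

print(viterbi_log_score("01011101001"))
```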

We can also calculate the probability that the HMM was in state k at time i: P(X, πi = k) = ∑ over all paths π with πi = k of P(X, π), and P(πi = k | X) = P(X, πi = k) / P(X). The probability that the dealer had the biased coin at moment i is then P(πi = B | X).
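
Summing over paths is done in practice with forward and backward tables, so that P(πi = k | X) = fk(i) · bk(i) / P(X). A sketch under the same assumed casino model:

```python
A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {"1": 0.5, "0": 0.5}, "B": {"1": 0.75, "0": 0.25}}

def posterior(x, state, i, start_p=0.5):
    # forward: f[k] = P(x_1..x_i, pi_i = k)
    f = {k: start_p * E[k][x[0]] for k in A}
    for sym in x[1:i + 1]:
        f = {l: E[l][sym] * sum(f[k] * A[k][l] for k in A) for l in A}
    # backward: b[k] = P(x_{i+1}..x_n | pi_i = k)
    b = {k: 1.0 for k in A}
    for sym in reversed(x[i + 1:]):
        b = {k: sum(A[k][l] * E[l][sym] * b[l] for l in A) for k in A}
    px = sum(f[k] * b[k] for k in A)  # P(X)
    return f[state] * b[state] / px

print(posterior("01011101001", "B", 5))  # P(dealer used biased coin at position i = 5)
```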

HMM PARAMETER ESTIMATION So far we have assumed the transition and emission probabilities of the HMM were known, which makes it easy to compute the hidden states from an observed sequence. It is much more difficult to find the most probable path when both sets of probabilities are unknown. Let Θ be the vector combining the unknown transition and emission probabilities of the HMM.

We define P(X | Θ) as the probability of X given the assignment of parameters Θ. Our goal is to find maxΘ P(X | Θ). Instead of a single string X, we can use a sample of training sequences X1, X2, …, Xm and maximize maxΘ ∏i=1..m P(Xi | Θ).

The common algorithms used for this problem are heuristics for parameter optimization. If Ak, l is the number of transitions from state k to state l and Ek(b) is the number of times b is emitted from state k, then reasonable estimators are ak, l = Ak, l / ∑q∈Q Ak, q and ek(b) = Ek(b) / ∑σ∈∑ Ek(σ). A counting sketch follows.
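
The estimators reduce to simple counting when a labelled training sequence (observations plus their true states) is available. A sketch, with the caveat that zero counts would need pseudocounts in practice:

```python
from collections import Counter

def estimate(x, pi):
    # A_count[k, l] = transitions k -> l; E_count[k, b] = emissions of b from state k.
    states, symbols = sorted(set(pi)), sorted(set(x))
    A_count = Counter(zip(pi, pi[1:]))
    E_count = Counter(zip(pi, x))
    a = {k: {l: A_count[k, l] / sum(A_count[k, q] for q in states)
             for l in states} for k in states}
    e = {k: {b: E_count[k, b] / sum(E_count[k, c] for c in symbols)
             for b in symbols} for k in states}
    return a, e

a, e = estimate("01011101001", "FFFBBBBBFFF")
print(a["F"])  # e.g. {'B': 0.2, 'F': 0.8} from the counted transitions
```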

PROFILE HMM ALIGNMENT When a set of functionally related biological sequences is given, pairwise alignments can be used to search a database for new family members. This approach may fail because distant sequences may have only weak similarities that do not pass the statistical significance test. The family of related proteins is instead represented by their multiple alignment and the corresponding profile.

A profile is represented in terms of the frequencies of nucleotides (or amino acids) at each position. An HMM can also be used for sequence comparison, in particular for aligning a sequence against a profile. Such a profile HMM contains n sequentially linked match states M1, M2, …, Mn.

PROFILE HMM [architecture diagram]


Thank you