HMM for multiple sequences

Pair HMM for pairwise sequence alignment, which incorporates affine gap scores.
“Hidden” states
• Match (M)
• Insertion in x (X)
• Insertion in y (Y)
Observation symbols
• Match (M): {(a, b) | a, b ∈ Σ}
• Insertion in x (X): {(a, -) | a ∈ Σ}
• Insertion in y (Y): {(-, a) | a ∈ Σ}

Pair HMMs
(Figure: state diagram with Begin, M, X, Y, and End states, annotated with their transition probabilities.)

Alignment: a path = a hidden state sequence

x: A T - G T T A
y: T C G T - A C
   M M Y M M X M
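
A minimal Python sketch of this correspondence, assuming the M/X/Y conventions above (the function name and example are illustrative): each alignment column maps to the hidden state that emits it.

```python
def alignment_to_states(x_row: str, y_row: str) -> str:
    """Map each column of a pairwise alignment to its pair-HMM state."""
    assert len(x_row) == len(y_row)
    states = []
    for a, b in zip(x_row, y_row):
        if a != '-' and b != '-':
            states.append('M')   # both sequences emit a residue: match
        elif b == '-':
            states.append('X')   # residue in x only: insertion in x
        else:
            states.append('Y')   # residue in y only: insertion in y
    return ' '.join(states)

print(alignment_to_states("AT-GTTA", "TCGT-AC"))  # -> M M Y M M X M
```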

Multiple sequence alignment (Globin family)

Profile model (PSSM)
• A natural probabilistic model for a conserved region is to specify independent probabilities e_i(a) of observing nucleotide (amino acid) a in position i
• The probability of a new sequence x according to this model is
  P(x | M) = ∏_{i=1..L} e_i(x_i)
  (see the sketch below)
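
A minimal sketch of this product of per-position emission probabilities, computed in log space; the toy two-position profile is illustrative, not from the slides.

```python
import math

profile = [
    {'A': 0.7, 'C': 0.1, 'G': 0.1, 'T': 0.1},  # e_1
    {'A': 0.1, 'C': 0.1, 'G': 0.7, 'T': 0.1},  # e_2
]

def log_prob(x: str) -> float:
    """log P(x | profile) = sum_i log e_i(x_i); assumes len(x) == L."""
    return sum(math.log(e[a]) for e, a in zip(profile, x))

print(log_prob("AG"))  # the highest-probability sequence under this toy profile
```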

Profile / PSSM
• DNA / protein segments of the same length L
• Often represented as a positional frequency matrix:

LTMTRGDIGNYLGLTVETISRLLGRFQKSGML
LTMTRGDIGNYLGLTIETISRLLGRFQKSGMI
LTMTRGDIGNYLGLTVETISRLLGRFQKSEIL
LTMTRGDIGNYLGLTVETISRLLGRLQKMGIL
LAMSRNEIGNYLGLAVETVSRVFSRFQQNELI
LAMSRNEIGNYLGLAVETVSRVFTRFQQNGLI
LPMSRNEIGNYLGLAVETVSRVFTRFQQNGLL
VRMSREEIGNYLGLTLETVSRLFSRFGREGLI
LRMSREEIGSYLGLKLETVSRTLSKFHQEGLI
LPMCRRDIGDYLGLTLETVSRALSQLHTQGIL
LPMSRRDIADYLGLTVETVSRAVSQLHTDGVL
LPMSRQDIADYLGLTIETVSRTFTKLERHGAI

Searching profiles: inference
• Given a sequence S of length L, compute the likelihood ratio of S being generated by this profile vs. by the background model:
  R(S | P) = ∏_{i=1..L} e_i(s_i) / q_{s_i}
• Searching for motifs in a longer sequence: sliding-window approach (see the sketch below)
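
A minimal sketch of the sliding-window scan under the log-odds version of the score above; the toy profile and uniform background are illustrative.

```python
import math

profile = [{'A': 0.7, 'C': 0.1, 'G': 0.1, 'T': 0.1},
           {'A': 0.1, 'C': 0.1, 'G': 0.7, 'T': 0.1}]
background = {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25}

def window_score(w: str) -> float:
    """log R(w | P) = sum_i log(e_i(w_i) / q_{w_i})."""
    return sum(math.log(e[a] / background[a]) for e, a in zip(profile, w))

def scan(S: str):
    """Score every length-L window of S."""
    L = len(profile)
    return [(i, window_score(S[i:i + L])) for i in range(len(S) - L + 1)]

print(max(scan("TTAGCAGT"), key=lambda t: t[1]))  # best window: (offset, score)
```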

Match states for profile HMMs
• Match states
  – Emission probabilities
(Figure: Begin → … → M_j → … → End)

Components of profile HMMs
• Insert states
  – Emission prob.
    • Usually the background distribution q_a
  – Transition prob.
    • M_j to I_j, I_j to itself, I_j to M_{j+1}
  – Log-odds score for a gap of length k (no log-odds contribution from the emissions, since insert emissions match the background; see the sketch below)
(Figure: Begin → M_j → End with insert state I_j)
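
A minimal sketch of that gap score: with insert emissions at background, only the transition terms remain, giving an affine-style penalty. The probabilities, and the choice of scoring relative to the direct M→M transition, are illustrative assumptions.

```python
import math

a_MI, a_II, a_IM, a_MM = 0.05, 0.4, 0.6, 0.9  # toy transition probabilities

def gap_log_odds(k: int) -> float:
    """Log-odds of opening, extending (k-1 times), and closing an insertion
    of length k >= 1, relative to a direct M->M transition."""
    open_close = math.log(a_MI) + math.log(a_IM) - math.log(a_MM)
    return open_close + (k - 1) * math.log(a_II)

for k in (1, 2, 5):
    print(k, round(gap_log_odds(k), 3))  # cost grows linearly in k: affine gaps
```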

Components of profile HMMs
• Delete states
  – No emission prob.
  – Cost of a deletion
    • M→D, D→M transitions
    • Each D→D might be different
(Figure: Begin → M_j → End with delete state D_j)

Full structure of profile HMMs
(Figure: Begin → M_j → End, with insert states I_j and delete states D_j)

Deriving HMMs from multiple alignments
• Key idea behind profile HMMs
  – A model representing the consensus of an alignment of sequences from the same family
  – Not the sequence of any particular member

HBA_HUMAN   ...VGA--HAGEY...
HBB_HUMAN   ...V----NVDEV...
MYG_PHYCA   ...VEA--DVAGH...
GLB3_CHITP  ...VKG------D...
GLB5_PETMA  ...VYS--TYETS...
LGB2_LUPLU  ...FNA--NIPKH...
GLB1_GLYDI  ...IAGADNGAGV...
            *****

Deriving HMMs from multiple alignments
• Basic profile HMM parameterization
  – Aim: assign higher probability to sequences from the family
• Parameters
  – The probability values: trivial to estimate if many independent aligned sequences are given
  – The length of the model: chosen heuristically or systematically

Sequence conservation: entropy profile of the emission probability distributions

Searching with profile HMMs
• Main usage of profile HMMs
  – Detecting potential member sequences of a family
  – Matching a sequence to the profile HMM
• Viterbi algorithm or forward algorithm
  – Comparing the resulting probability with a random model

Searching with profile HMMs
• Viterbi algorithm (optimal log-odds alignment)

Searching with profile HMMs
• Forward algorithm: summing over all possible alignments

Variants for non-global alignments
• Local alignments (flanking model)
  – Emission prob. in flanking states uses the background values q_a
  – Looping prob. close to 1, e.g. (1 − ε) for some small ε
(Figure: Begin → M_j → End with D_j, I_j, and flanking states Q)

Variants for non-global alignments
• Overlap alignments
  – Only transitions to the first model state are allowed
  – Used when the motif is expected to be either present as a whole or absent
  – A transition to the first delete state allows the first residue to be missing
(Figure: Begin → M_j → End with D_j, I_j, and flanking states Q)

Variants for non-global alignments
• Repeat alignments
  – Transition from the right flanking state back to the random model
  – Can find multiple matching segments in the query string
(Figure: Begin → M_j → End with D_j, I_j, and flanking state Q)

Estimation of probabilities
• Maximum likelihood (ML) estimation
  – Given observed frequencies c_ja of residue a in position j:
    e_j(a) = c_ja / Σ_{a'} c_ja'
• Simple pseudocounts
  – q_a: background distribution; A: weight factor
    e_j(a) = (c_ja + A·q_a) / (Σ_{a'} c_ja' + A)
  (see the sketch below)
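
A minimal sketch contrasting the two estimators above for a single match position; the counts, background, and weight A are illustrative.

```python
counts = {'A': 4, 'C': 0, 'G': 1, 'T': 0}          # c_ja for one position j
q = {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25}   # background distribution
A = 1.0                                            # pseudocount weight

total = sum(counts.values())
e_ml = {a: c / total for a, c in counts.items()}
e_pc = {a: (counts[a] + A * q[a]) / (total + A) for a in counts}

print(e_ml)  # ML: unseen residues C and T get probability 0
print(e_pc)  # pseudocounts: every residue keeps a small nonzero probability
```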

Optimal model construction: mark columns

(a) Multiple alignment (marked columns: x x . . . x):
    bat   A G - - - C
    rat   A - A G - C
    cat   A G - A A -
    gnat  - - A A A C
    goat  A G - - - C
          1 2 . . . 3

(b) Profile-HMM architecture: Begin, match states M1-M3 with insert states I and delete states D, End.

(c) Observed emission/transition counts:
    Match emissions:  position 1: A = 4;  position 2: G = 3;  position 3: C = 4
    Insert emissions: A = 6, G = 1
    State transitions (states 0-3):
      M-M: 4 3 2 4   M-D: 1 1 0 0   M-I: 0 0 1 0
      I-M: - 0 2 0   I-D: - 0 1 0   I-I: - 0 4 0
      D-M: - 0 0 1   D-D: - 1 0 0   D-I: - 0 2 0

Optimal model construction
• MAP (match-insert assignment)
  – Recursive calculation of a number S_j
    • S_j: log prob. of the optimal model for the alignment up to and including column j, assuming column j is marked
    • S_j is calculated from S_i and the summed log prob. between columns i and j
    • T_ij: summed log prob. of all the state transitions between marked columns i and j
      – The transition counts c_xy are obtained from the partial state paths implied by marking i and j

Optimal model construction
• Algorithm: MAP model construction
  – Initialization: S_0 = 0, M_{L+1} = 0
  – Recurrence: for j = 1, ..., L+1:
      S_j = max_{0 ≤ i < j} [S_i + T_ij + M_j + I_{i+1,j-1} + λ]
      σ_j = argmax_{0 ≤ i < j} [S_i + T_ij + M_j + I_{i+1,j-1} + λ]
  – Traceback: from j = σ_{L+1}, while j > 0:
      • Mark column j as a match column
      • j = σ_j

Weighting training sequences
• Are the input sequences random samples?
• The assumption that all examples are independent samples might be incorrect
• Solution
  – Weight sequences based on their similarity

Weighting training sequences
• Simple weighting schemes derived from a tree
  – A phylogenetic tree is given
  – [Thompson, Higgins & Gibson 1994b]
  – [Gerstein, Sonnhammer & Chothia 1994]

Weighting training sequences
(Figure: example tree with leaves 1-4, internal nodes 5-7, and branch lengths t1 = 2, t2 = 2, t3 = 5, t4 = 8, t5 = 3, t6 = 3. In the electrical-network analogy, voltages V5-V7 sit at the internal nodes and the leaf currents combine up the tree: I1+I2 flows above node 5 and I1+I2+I3 above node 6. Resulting weights w1 : w2 : w3 : w4 = 35 : 35 : 50 : 64 and currents I1 : I2 : I3 : I4 = 20 : 20 : 32 : 47.)

Multiple alignment by training profile HMM
• Sequence profiles can be represented as probabilistic models like profile HMMs
  – Profile HMMs can simply be used in place of standard profiles in progressive or iterative alignment methods
  – The ML methods for building (training) profile HMMs described previously are based on a multiple sequence alignment
  – Profile HMMs can also be trained from initially unaligned sequences using the Baum-Welch (EM) algorithm

Multiple alignment by profile HMM training
• Multiple alignment with a known profile HMM
  – Before estimating a model and a multiple alignment simultaneously, we consider a simpler problem: deriving a multiple alignment from a known profile HMM.
  – This can be applied to align a large number of sequences from the same family, based on an HMM built from the (seed) multiple alignment of a small representative set of sequences in the family.

Multiple alignment with a known profile HMM
• Align a sequence to a profile HMM: Viterbi algorithm
• Constructing a multiple alignment just requires calculating a Viterbi alignment for each individual sequence
  – Residues aligned to the same match state in the profile HMM are placed in the same column (see the sketch below)
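
A minimal sketch of that column-building step: given each sequence's Viterbi state path (the path encoding and toy paths are illustrative), residues assigned to the same match state share a column, while insert-state residues appear in lowercase and are not aligned.

```python
def msa_from_paths(paths, n_match):
    """paths: per-sequence lists of (state, residue), state in {'Mk','Dk','Ik'}."""
    parsed = []
    for path in paths:
        cols, inserts = {}, {}
        for state, res in path:
            k = int(state[1:])
            if state[0] == 'M':
                cols[k] = res                 # residue emitted by match state k
            elif state[0] == 'D':
                cols[k] = '-'                 # delete state: gap in this column
            else:                             # insert state I_k, after column k
                inserts.setdefault(k, []).append(res.lower())
        parsed.append((cols, inserts))
    # each insert region is as wide as the longest insertion made there
    width = {k: max(len(ins.get(k, [])) for _, ins in parsed)
             for k in range(n_match + 1)}
    rows = []
    for cols, ins in parsed:
        row = ''.join(ins.get(0, [])).ljust(width[0], '.')
        for k in range(1, n_match + 1):
            row += cols.get(k, '-')
            row += ''.join(ins.get(k, [])).ljust(width[k], '.')
        rows.append(row)
    return rows

paths = [
    [('M1', 'A'), ('M2', 'G'), ('M3', 'C')],
    [('M1', 'A'), ('D2', '-'), ('I2', 'A'), ('M3', 'C')],
]
print('\n'.join(msa_from_paths(paths, 3)))  # -> AG.C / A-aC
```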

Multiple alignment with a known profile HMM
• Given a preliminary alignment, the HMM can align additional sequences.

Multiple alignment with a known profile HMM

Multiple alignment with a known profile HMM
• Important difference from other MSA programs
  – The Viterbi path through the HMM identifies inserts
  – The profile HMM does not align inserts
  – Other multiple alignment algorithms align the whole of each sequence

Profile HMM training from unaligned sequences
• Harder problem: estimating both a model and a multiple alignment from initially unaligned sequences
  – Initialization: choose the length of the profile HMM and initialize the parameters
  – Training: estimate the model using the Baum-Welch algorithm (iteratively)
  – Multiple alignment: align all sequences to the final model using the Viterbi algorithm and build a multiple alignment as described in the previous section

Profile HMM training from unaligned sequences
• Initial model
  – The only decision that must be made in choosing an initial structure for Baum-Welch estimation is the length of the model, M
  – A commonly used rule is to set M to the average length of the training sequences
  – We need some randomness in the initial parameters to avoid local maxima (see the sketch below)
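
A minimal sketch of such an initialization, assuming a simplified parameterization (only match emissions and M→M/M→I/M→D transition rows); Dirichlet draws give each random restart a different starting point. Names and shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
alphabet = "ACGT"

def init_profile_hmm(train_seqs, concentration=5.0):
    # model length M = average training-sequence length (the common rule)
    M = int(round(sum(map(len, train_seqs)) / len(train_seqs)))
    # randomized emission distributions, one per match state
    emit = rng.dirichlet([concentration] * len(alphabet), size=M)
    # randomized transition rows: P(M->M), P(M->I), P(M->D) per state
    trans = rng.dirichlet([concentration, 1.0, 1.0], size=M)
    return M, emit, trans

M, emit, trans = init_profile_hmm(["ACGTA", "ACGT", "AGGTAC"])
print(M, emit.shape, trans.shape)  # 5 (5, 4) (5, 3)
```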

Multiple alignment by profile HMM training
• Avoiding local maxima
  – The Baum-Welch algorithm is only guaranteed to find a LOCAL maximum
    • Models are usually quite long, so there are many opportunities to get stuck in a wrong solution
  – Solutions
    • Start many times from different initial models
    • Use some form of stochastic search algorithm, e.g. simulated annealing

Multiple alignment by profile HMM: similarity to Gibbs sampling
• The ‘Gibbs sampler’ algorithm described by Lawrence et al. [1993] has substantial similarities.
  – The problem was to simultaneously find the motif positions and estimate the parameters of a consensus statistical model of them
  – The statistical model used is essentially a profile HMM with no insert or delete states

Multiple alignment by profile HMM training: model surgery
• We can modify the model after (or during) training by manually checking the alignment produced from the model
  – Some of the match states are redundant
  – Some insert states absorb too many sequences
• Model surgery
  – If a match state is used by fewer than half of the training sequences, delete its module (match-insert-delete states)
  – If more than half of the training sequences use a certain insert state, expand it into n new modules, where n is the average length of the insertions
  – Ad hoc, but works well

Phylo-HMMs: modeling multiple alignments of syntenic sequences
• A phylo-HMM is a probabilistic machine that generates a multiple alignment, column by column, such that each column is defined by a phylogenetic model
• Unlike single-sequence HMMs, the emission probabilities of phylo-HMMs are complex distributions defined by phylogenetic models

Applications of Phylo-HMMs
• Improving phylogenetic models by allowing variation among sites in the rate of substitution (Felsenstein & Churchill, 1996; Yang, 1995)
• Protein secondary structure prediction (Goldman et al., 1996; Thorne et al., 1996)
• Detection of recombination from DNA multiple alignments (Husmeier & Wright, 2001)
• Recently, comparative genomics (Siepel et al., 2005)

Phylo-HMMs: combining phylogeny and HMMs
• Molecular evolution can be viewed as a combination of two Markov processes
  – One that operates in the dimension of space (along a genome)
  – One that operates in the dimension of time (along the branches of a phylogenetic tree)
• Phylo-HMMs model this combination

Single-sequence HMM vs. phylo-HMM (comparison figure)

Phylogenetic models
• A stochastic process of substitution operates independently at each site in a genome
• A character is first drawn at random from the background distribution and assigned to the root of the tree; character substitutions then occur randomly along the tree branches, from root to leaves
• The characters at the leaves define an alignment column

Phylogenetic Models
• The different phylogenetic models associated with the states of a phylo-HMM may reflect different overall rates of substitution (e.g. in conserved and non-conserved regions), different patterns of substitution or background distributions, or even different tree topologies (as with recombination)

Phylo-HMMs: Formal Definition
• A phylo-HMM is a 4-tuple θ = (S, ψ, A, b)
  – S = {s_1, ..., s_M}: set of hidden states
  – ψ = {ψ_1, ..., ψ_M}: set of associated phylogenetic models
  – A = {a_jk}: transition probabilities
  – b = (b_1, ..., b_M): initial probabilities

The Phylogenetic Model
• ψ_j = (Q_j, π_j, τ_j, β_j)
  – Q_j: substitution rate matrix
  – π_j: background frequencies
  – τ_j: binary tree
  – β_j: branch lengths

The Phylogenetic Model
• The model is defined with respect to an alphabet whose size is denoted d
• The substitution rate matrix Q_j has dimension d × d
• The background frequency vector π_j has dimension d
• The tree τ_j has n leaves, corresponding to n extant taxa
• The branch lengths β_j are associated with the tree

Probability of the Data
• Let X be an alignment consisting of L columns and n rows, with the ith column denoted X_i
• The probability that column X_i is emitted by state s_j is simply the probability of X_i under the corresponding phylogenetic model, P(X_i | ψ_j)
• This is the likelihood of the column given the tree, which can be computed efficiently using Felsenstein’s “pruning” algorithm (which we will describe in later lectures; a sketch follows below)
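
A minimal sketch of Felsenstein's pruning for one alignment column, assuming a nested-tuple tree encoding and precomputed substitution matrices P(t); the Jukes-Cantor P(t), tree, and column are illustrative, not from the slides.

```python
import numpy as np

ALPHA = "ACGT"

def pruning(node, column, P):
    """node: leaf name, or (left, right, t_left, t_right).
    Returns the vector L with L[a] = P(leaf data below node | base a at node)."""
    if isinstance(node, str):                 # leaf: indicator of observed base
        v = np.zeros(4)
        v[ALPHA.index(column[node])] = 1.0
        return v
    left, right, tl, tr = node                # internal node: combine children
    return (P[tl] @ pruning(left, column, P)) * (P[tr] @ pruning(right, column, P))

def column_likelihood(root, column, P, pi):
    """P(X_i | psi) = sum_a pi[a] * L_root[a]."""
    return float(pi @ pruning(root, column, P))

def jc(t):
    """Jukes-Cantor substitution matrix P(t): a simple concrete choice."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    M = np.full((4, 4), (1.0 - same) / 3.0)
    np.fill_diagonal(M, same)
    return M

tree = (("human", "mouse", 0.2, 0.2), "chicken", 0.1, 0.5)
P = {t: jc(t) for t in (0.1, 0.2, 0.5)}
pi = np.full(4, 0.25)
print(column_likelihood(tree, {"human": "A", "mouse": "A", "chicken": "G"}, P, pi))
```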

Substitution Probabilities
• Felsenstein’s algorithm requires the conditional probabilities of substitution for all bases a, b and all branch lengths t ∈ β_j
• The probability of substitution of base b for base a along a branch of length t, denoted P(b | a, t), is based on a continuous-time Markov model of substitution, defined by the rate matrix Q_j

Substitution Probabilities
• In particular, for any given non-negative value of t, the conditional probabilities P(b | a, t) for all a, b are given by the d × d matrix P_j(t) = exp(Q_j t), where
  exp(Q_j t) = Σ_{k=0..∞} (Q_j t)^k / k!
  (see the sketch below)
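
A minimal sketch of this matrix exponential using scipy.linalg.expm; the uniform-rate (Jukes-Cantor-like) Q is an illustrative choice. Rows of Q sum to zero, so rows of P(t) sum to one, and P(t) flattens toward the uniform distribution as t grows.

```python
import numpy as np
from scipy.linalg import expm

Q = np.full((4, 4), 1.0 / 3.0)   # uniform off-diagonal rates
np.fill_diagonal(Q, -1.0)        # diagonal makes each row sum to zero

for t in (0.0, 0.1, 1.0, 10.0):
    P = expm(Q * t)              # P(b | a, t) in row a, column b
    print(t, np.round(P[0], 4))  # row for base 'A'
```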

Example: HKY model
• κ represents the transition/transversion rate ratio
• The ‘-’s indicate the quantities required to normalize each row (so that it sums to zero):

        A       C       G       T
  A     -      π_C    κ·π_G    π_T
  C    π_A      -      π_G    κ·π_T
  G   κ·π_A    π_C      -      π_T
  T    π_A    κ·π_C    π_G      -

(see the sketch below)
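
A minimal sketch that builds this HKY rate matrix; the π and κ values are illustrative.

```python
import numpy as np

def hky(pi, kappa, bases="ACGT"):
    """HKY rate matrix: rate a->b is pi_b, times kappa for transitions."""
    transitions = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}
    Q = np.zeros((4, 4))
    for i, a in enumerate(bases):
        for j, b in enumerate(bases):
            if i != j:
                Q[i, j] = pi[j] * (kappa if (a, b) in transitions else 1.0)
        Q[i, i] = -Q[i].sum()   # the '-' entries: each row sums to zero
    return Q

print(hky(np.array([0.3, 0.2, 0.2, 0.3]), kappa=4.0))
```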

State sequences in Phylo-HMMs
• A state sequence through the phylo-HMM is a sequence φ = (φ_1, ..., φ_L) such that φ_i ∈ S for 1 ≤ i ≤ L
• The joint probability of a path and an alignment is
  P(φ, X | θ) = b_{φ_1} · P(X_1 | ψ_{φ_1}) · ∏_{i=2..L} a_{φ_{i-1} φ_i} · P(X_i | ψ_{φ_i})

Phylo-HMMs
• The likelihood is given by the sum over all paths (forward algorithm):
  P(X | θ) = Σ_φ P(φ, X | θ)
• The maximum-likelihood path is (Viterbi):
  φ̂ = argmax_φ P(φ, X | θ)

Computing the Probabilities
• The likelihood P(X | θ) can be computed efficiently using the forward algorithm (see the sketch below)
• The maximum-likelihood path can be computed efficiently using the Viterbi algorithm
• The forward and backward algorithms can be combined to compute the posterior probability P(φ_i = s_j | X, θ)
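
A minimal sketch of the forward algorithm for a phylo-HMM: it is the standard HMM forward recursion, with the emission term for state j at column i replaced by the column likelihood P(X_i | ψ_j) (e.g. from Felsenstein pruning). The precomputed likelihood matrix E and the two-state example are illustrative.

```python
import numpy as np

def forward_loglik(E, A, b):
    """E: L x M matrix of column likelihoods P(X_i | psi_j);
    A: M x M transition matrix; b: length-M initial probabilities.
    Returns log P(X | theta), with rescaling to avoid underflow."""
    L, _ = E.shape
    scale = np.empty(L)
    f = b * E[0]
    scale[0] = f.sum(); f /= scale[0]
    for i in range(1, L):
        f = (f @ A) * E[i]      # f_i(j) = E[i,j] * sum_k f_{i-1}(k) * A[k,j]
        scale[i] = f.sum(); f /= scale[i]
    return float(np.log(scale).sum())

# toy example: two states (conserved vs. non-conserved), five columns
E = np.array([[.9, .2], [.8, .3], [.1, .7], [.2, .6], [.9, .2]])
A = np.array([[.95, .05], [.05, .95]])
b = np.array([.5, .5])
print(forward_loglik(E, A, b))
```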

Higher-order Markov Models for Emissions
• It is common with gene-finding HMMs to condition the emission probability of each observation on the observations that immediately precede it in the sequence
• For example, in a third-codon-position state, the emission of a base x_i = “A” might have a fairly high probability if the previous two bases are x_{i-2} = “G” and x_{i-1} = “A” (GAA = Glu), but should have zero probability if the previous two bases are x_{i-2} = “T” and x_{i-1} = “A” (TAA = stop)

Higher-order Markov Models for Emissions
• Considering the N observations preceding each x_i corresponds to using an Nth-order Markov model for emissions
• An Nth-order model for emissions is typically parameterized in terms of (N+1)-tuples of observations, and conditional probabilities are computed as
  P(x_i | x_{i-N}, ..., x_{i-1}) = P(x_{i-N}, ..., x_i) / P(x_{i-N}, ..., x_{i-1})
  (see the sketch below)
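
A minimal sketch of this parameterization for N = 2, estimating tuple frequencies from a toy sequence (the data and names are illustrative):

```python
from collections import Counter

N = 2
seq = "ATGGAAGAATAAATGGAA"  # toy training data

# count (N+1)-tuples and the N-tuple contexts that precede a next symbol
tuples = Counter(seq[i:i + N + 1] for i in range(len(seq) - N))
contexts = Counter(seq[i:i + N] for i in range(len(seq) - N))

def cond_prob(context: str, x: str) -> float:
    """P(x | context) = P(context + x) / P(context)."""
    return tuples[context + x] / contexts[context]

print(cond_prob("GA", "A"))  # P(A | previous two bases GA)
```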

Nth Order Phylo-HMMs
• The conditional emission probability of column X_i is the probability of the (N+1)-tuple of columns divided by a sum over all possible alignment columns Y:
  P(X_i | X_{i-N}, ..., X_{i-1}, ψ_j) = P(X_{i-N}, ..., X_i | ψ_j) / Σ_Y P(X_{i-N}, ..., X_{i-1}, Y | ψ_j)
• The sum over all possible alignment columns Y can be calculated efficiently by a slight modification of Felsenstein’s “pruning” algorithm