Introduction to Bayesian statistics
Yves Moreau
Overview
- The Cox-Jaynes axioms
- Bayes’ rule
- Probabilistic models
  - Maximum likelihood
  - Maximum a posteriori
  - Bayesian inference
- Multinomial and Dirichlet distributions
- Estimation of frequency matrices
  - Pseudocounts
  - Dirichlet mixture
The Cox-Jaynes axioms and Bayes’ rule
Probability vs. belief
- What is a probability?
- Frequentist point of view
  - Probabilities are defined by frequency counts (coin, die) and histograms (height of people)
  - Such definitions are somewhat circular because of their dependency on the Central Limit Theorem
- Measure-theoretic point of view
  - Probabilities satisfy Kolmogorov’s σ-algebra axioms
  - This rigorous definition fits well within measure and integration theory
  - But the definition is ad hoc, made to fit within this framework
- Bayesian point of view
  - Probabilities are models of the uncertainty regarding propositions within a given domain
  - Probabilities satisfy Bayes’ rule
- Induction vs. deduction
  - Deduction: IF (A ⇒ B AND A = TRUE) THEN B = TRUE
  - Induction: IF (A ⇒ B AND B = TRUE) THEN A becomes more plausible
The Cox-Jaynes axioms
- The Cox-Jaynes axioms allow the buildup of a large probabilistic framework with minimal assumptions
- First, some concepts
  - A is a proposition: A is TRUE or FALSE
  - D is a domain: the information available about the current situation
  - BELIEF P(A=TRUE|D): the belief that we have regarding the proposition A, given the domain knowledge D
- Second, some assumptions
  1. Suppose we can compare beliefs: P(A|D) > P(B|D) means A is more plausible than B given D, and suppose the comparison is transitive
     - We then have an ordering relation, so beliefs can be represented by real numbers
  2. Suppose there exists a fixed relation between the belief in a proposition and the belief in the negation of this proposition
  3. Suppose there exists a fixed relation between, on the one hand, the belief in the conjunction of two propositions and, on the other hand, the belief in the first proposition and the belief in the second proposition given the first one
Bayes’ rule
- THEN it can be shown (after rescaling of the beliefs) that beliefs satisfy Bayes’ rule:
  P(A|B, D) = P(B|A, D) P(A|D) / P(B|D)
- If we accept the Cox-Jaynes axioms, we can always apply Bayes’ rule, independently of the specific definition of the probabilities
Bayes’ rule
- Bayes’ rule will be our main tool for building probabilistic models and for estimating them
- Bayes’ rule holds not only for statements (TRUE/FALSE) but for any random variables (discrete or continuous)
- Bayes’ rule holds for specific realizations of the random variables as well as for whole distributions
Importance of the domain D
- The domain D is a flexible concept that encapsulates the background information that is relevant for the problem
- It is important to set up the problem within the right domain
- Example: diagnosis of Tay-Sachs disease
  - A rare disease that appears more frequently among Ashkenazi Jews
  - With the same symptoms, the probability of the disease will be smaller if we are in a hospital in Brussels than if we are in Mount Sinai Hospital in New York
  - If we try to build a single model covering all patients in the world, this model will not perform better, because it ignores this relevant background information
Probabilistic models and inference
Probabilistic models
- We have a domain D
- We have observations D (the data)
- We have a model M with parameters θ
- Example 1
  - Domain D: the genome of a given organism
  - Data D: a DNA sequence S = ’ACCTGATCACCCT’
  - Model M: the sequences are generated by a discrete distribution over the alphabet {A, C, G, T}
  - Parameters θ: the probabilities θ_A, θ_C, θ_G, θ_T of the four nucleotides
- Example 2
  - Domain D: all European people
  - Data D: the height of people from a given group
  - Model M: the height is normally distributed N(μ, σ)
  - Parameters θ: the mean μ and the standard deviation σ
Generative models
- It is often possible to set up a model of the likelihood of the data (for example, for the DNA sequence above, P(S|θ) = ∏_i θ_{S_i})
- More sophisticated models are possible
  - HMMs
  - Gibbs sampling for motif finding
  - Bayesian networks
- We want to find the model that best describes our observations
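As a minimal sketch of such a likelihood computation (assuming, purely for illustration, uniform nucleotide probabilities), the likelihood of the example sequence under the discrete model is the product of per-symbol probabilities:

```python
import math

# Discrete (i.i.d.) sequence model: P(S|theta) is the product of the
# per-symbol probabilities theta[s] over all positions of S.
theta = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}  # assumed uniform parameters
S = "ACCTGATCACCCT"

# Work in log space to avoid numerical underflow on long sequences
log_likelihood = sum(math.log(theta[s]) for s in S)
```

With uniform θ this reduces to |S| · log(1/4); a non-uniform θ would simply weight each position differently.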
Maximum likelihood
- Maximum likelihood (ML): θ_ML = argmax_θ P(D|θ, M)
- Consistent: if the observations were generated by the model M with parameters θ*, then θ_ML will converge to θ* when the number of observations goes to infinity
- Note that the data might not be generated by any instance of the model
- If the data set is small, there might be a large difference between θ_ML and θ*
Maximum a posteriori probability
- Maximum a posteriori probability (MAP): θ_MAP = argmax_θ P(θ|D, M)
- Bayes’ rule: P(θ|D, M) = P(D|θ, M) P(θ|M) / P(D|M)
  (posterior ∝ likelihood of the data × prior knowledge)
- Thus P(D|M) plays no role in the optimization over θ
Posterior mean estimate
- Posterior mean estimate: θ_PM = ∫ θ P(θ|D, M) dθ
Distributions over parameters
- Let us look carefully at P(θ|M) (or at P(θ|D, M))
- P(θ|M) is a probability distribution over the PARAMETERS
- We have to handle distributions over observations and distributions over parameters at the same time
- Example: the distribution of the height of people P(D|θ, M) and the prior P(θ|M)
[Figure: the likelihood P(D|θ, M) as a distribution over height, and priors over the mean height and the standard deviation of height]
Bayesian inference
- If we want to update the probability of the parameters with new observations D:
  1. Choose a reasonable prior P(θ|M)
  2. Add the information from the data P(D|θ, M)
  3. Get the updated distribution of the parameters P(θ|D, M) = P(D|θ, M) P(θ|M) / P(D|M)
- (We often work with logarithms)
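These update steps can be sketched for the height example with a conjugate update of a normal mean; all the numbers here are hypothetical, and the population standard deviation is assumed known for simplicity:

```python
# 1. Prior over the mean height (cm): mean ~ N(prior_mean, prior_sd^2)
prior_mean, prior_sd = 175.0, 10.0
sigma = 7.0                           # assumed known observation noise

# 2. New observations
data = [180.0, 182.0, 178.0, 181.0]

# 3. Updated (posterior) distribution of the mean: precisions add, and the
#    posterior mean is a precision-weighted average of prior and sample mean
n = len(data)
xbar = sum(data) / n
prior_prec = 1.0 / prior_sd**2
data_prec = n / sigma**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * xbar) / post_prec
post_sd = post_prec ** -0.5
```

The posterior mean lies between the prior mean and the sample mean, and the posterior is narrower than the prior, exactly the behavior sketched in the figures.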
Bayesian inference
- Example
[Figure: posterior distributions of the mean height after updating the prior with data from Belgian men and from Dutch men]
Marginalization
- A major technique for working with probabilistic models is to introduce or remove a variable through marginalization wherever appropriate
- If a variable Y can take only k mutually exclusive outcomes y_1, …, y_k, we have
  P(X) = Σ_{i=1..k} P(X, Y=y_i) = Σ_{i=1..k} P(X|Y=y_i) P(Y=y_i)
- If the variables are continuous,
  P(X) = ∫ P(X, Y=y) dy = ∫ P(X|Y=y) P(Y=y) dy
Multinomial and Dirichlet distributions
Multinomial distribution
- Discrete distribution: K independent outcomes with probabilities θ_i
- Examples
  - Die: K = 6
  - DNA sequence: K = 4
  - Amino acid sequence: K = 20
- For K = 2 we have a Bernoulli variable (giving rise to a binomial distribution)
- The multinomial distribution gives the probability of the numbers of times n_i that the different outcomes were observed:
  P(n_1, …, n_K | θ, N) = (N! / (n_1! ⋯ n_K!)) ∏_i θ_i^{n_i}
- The multinomial distribution is the natural distribution for the modeling of biological sequences
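The multinomial probability N!/(n_1!⋯n_K!) ∏_i θ_i^{n_i} can be implemented directly; a small sketch using only the standard library:

```python
import math

def multinomial_pmf(counts, theta):
    """P(n_1..n_K | theta) = N!/(n_1!...n_K!) * prod_i theta_i^n_i."""
    coef = math.factorial(sum(counts))          # N!
    for n in counts:
        coef //= math.factorial(n)              # divide by each n_i!
    prob = 1.0
    for n, t in zip(counts, theta):
        prob *= t ** n                          # prod theta_i^n_i
    return coef * prob
```

For example, two fair coin flips with one head and one tail: multinomial_pmf([1, 1], [0.5, 0.5]) gives 0.5.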
Dirichlet distribution
- Distribution over the region of the parameter space where θ_i ≥ 0 and Σ_i θ_i = 1 (the probability simplex)
- The distribution has parameters α = (α_1, …, α_K)
- The Dirichlet distribution gives the probability of θ:
  P(θ|α) = (1/Z(α)) ∏_i θ_i^{α_i − 1}
- The distribution is like a ‘dice factory’: each draw produces a new set of outcome probabilities
Dirichlet distribution
- Z(α) is a normalization factor such that the distribution integrates to 1:
  Z(α) = ∏_i Γ(α_i) / Γ(Σ_i α_i)
- Γ is the gamma function, the generalization of the factorial function to real numbers
- The Dirichlet distribution is the natural prior for sequence analysis because it is conjugate to the multinomial distribution: if we have a Dirichlet prior and we update this prior with multinomial observations, the posterior will also have the form of a Dirichlet distribution
  - Computationally very attractive
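The conjugacy can be checked numerically: the log-prior plus the multinomial log-likelihood differs from the log-density of the updated Dirichlet only by a constant in θ. A sketch with illustrative numbers:

```python
import math

def dirichlet_log_pdf(theta, alpha):
    # log P(theta|alpha) = sum_i (alpha_i - 1) log theta_i - log Z(alpha)
    log_Z = sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
    return sum((a - 1.0) * math.log(t) for a, t in zip(alpha, theta)) - log_Z

alpha = [1.0, 1.0, 1.0, 1.0]   # prior over {A, C, G, T}
counts = [3, 6, 1, 3]          # multinomial observations
posterior_alpha = [a + n for a, n in zip(alpha, counts)]

def log_unnorm_posterior(theta):
    # log prior + log likelihood (up to the theta-independent multinomial coefficient)
    return (dirichlet_log_pdf(theta, alpha)
            + sum(n * math.log(t) for n, t in zip(counts, theta)))
```

For any two values of θ, the difference log_unnorm_posterior(θ) − dirichlet_log_pdf(θ, posterior_alpha) is the same constant, confirming that the posterior is Dirichlet(α + n).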
Estimation of frequency matrices
- Estimation on the basis of counts (e.g., the Position-Specific Scoring Matrix in PSI-BLAST)
- Example: matrix model of a local motif
  GACGTG
  CTCGAG
  CGCGTG
  AACGTG
- Count the number of instances in each column
- If there are many aligned sites (N >> 1), we can estimate the frequencies as
  θ_i = n_i / N
- This is the maximum likelihood estimate for θ
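For the four aligned sites above, the per-column counts and ML frequencies can be computed as:

```python
motif = ["GACGTG", "CTCGAG", "CGCGTG", "AACGTG"]
alphabet = "ACGT"
N = len(motif)  # number of aligned sites

# counts[j][x] = number of times symbol x occurs in column j
counts = [{x: column.count(x) for x in alphabet} for column in zip(*motif)]

# ML frequency estimate per column: theta_x = n_x / N
freqs = [{x: c[x] / N for x in alphabet} for c in counts]
```

Columns where a symbol never occurs get frequency 0, which is exactly what motivates the pseudocounts discussed next.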
Proof
- We want to show that θ_i = n_i / N maximizes the likelihood P(D|θ) = ∏_i θ_i^{n_i} under the constraint Σ_i θ_i = 1
- This is equivalent to maximizing the log-likelihood Σ_i n_i log θ_i under the same constraint
- Further, the constrained optimum can be found with a Lagrange multiplier
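One standard way to fill in the steps whose equations were omitted is a Lagrange multiplier argument (reconstructed here, not taken verbatim from the slides):

```latex
\max_{\theta}\ \sum_i n_i \log \theta_i
\quad\text{subject to}\quad \sum_i \theta_i = 1
\\[4pt]
L(\theta,\lambda) = \sum_i n_i \log \theta_i + \lambda\Big(1 - \sum_i \theta_i\Big)
\\[4pt]
\frac{\partial L}{\partial \theta_i} = \frac{n_i}{\theta_i} - \lambda = 0
\ \Rightarrow\ \theta_i = \frac{n_i}{\lambda}
\\[4pt]
\sum_i \theta_i = 1
\ \Rightarrow\ \lambda = \sum_i n_i = N
\ \Rightarrow\ \theta_i = \frac{n_i}{N}
```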
Pseudocounts
- If we have a limited number of counts, the maximum likelihood estimate will not be reliable (e.g., for symbols not observed in the data)
- In such a situation, we can combine the observations with prior knowledge
- Suppose we use a Dirichlet prior P(θ|α) for θ; let us compute the Bayesian update
Bayesian update
- P(θ|D) = P(D|θ) P(θ|α) / P(D): multiplying the multinomial likelihood by the Dirichlet prior gives again a Dirichlet, now with parameters n_i + α_i and normalization integral Z(n + α)
- The remaining factor equals 1 because both distributions are normalized
- Computation of the posterior mean estimate then gives
  θ_i = (n_i + α_i) / (N + A), with A = Σ_i α_i
Pseudocounts
- The prior contributes to the estimation through the pseudo-observations (pseudocounts) α_i
- If few observations are available, then the prior plays an important role
- If many observations are available, then the pseudocounts play a negligible role
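A sketch of the posterior mean (pseudocount) estimate θ_i = (n_i + α_i)/(N + A), applied to a sparsely observed motif column; the counts are illustrative:

```python
def pseudocount_estimate(counts, alpha):
    # Posterior mean under a Dirichlet(alpha) prior:
    # theta_i = (n_i + alpha_i) / (N + A), with N = sum(n), A = sum(alpha)
    total = sum(counts) + sum(alpha)
    return [(n + a) / total for n, a in zip(counts, alpha)]

# Column with only 4 observations, all of them C (symbol order: A, C, G, T)
ml = pseudocount_estimate([0, 4, 0, 0], [0, 0, 0, 0])       # no prior: ML estimate
bayes = pseudocount_estimate([0, 4, 0, 0], [1, 1, 1, 1])    # one pseudocount per symbol
many = pseudocount_estimate([0, 4000, 0, 0], [1, 1, 1, 1])  # prior becomes negligible
```

With few counts the pseudocounts keep the unseen symbols away from probability 0; with many counts the estimate approaches the ML frequencies.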
Dirichlet mixture
- Sometimes the observations are generated by a heterogeneous process (e.g., hydrophobic vs. hydrophilic domains in proteins)
- In such situations, we should use different priors depending on the context
- But we do not necessarily know the context beforehand
- A possibility is the use of a Dirichlet mixture
- The frequency parameter θ can be generated from m different sources S with different Dirichlet parameters α^k
Dirichlet mixture
- Posterior: P(θ|D) = Σ_k P(θ|D, S=k) P(S=k|D)
- Via Bayes’ rule: P(S=k|D) ∝ P(D|S=k) P(S=k), with P(D|S=k) = ∫ P(D|θ) P(θ|α^k) dθ
Dirichlet mixture
- Posterior mean estimate:
  θ_i = Σ_k P(S=k|D) (n_i + α_i^k) / (N + A_k), with A_k = Σ_i α_i^k
- The different components of the Dirichlet mixture are first treated as separate pseudocounts
- These components are then combined with a weight depending on the likelihood of each Dirichlet component
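A sketch of this mixture estimate, using the fact that for a Dirichlet component the evidence P(D|S=k) is proportional to Z(α^k + n)/Z(α^k) (the multinomial coefficient cancels when the weights are normalized); the component parameters below are illustrative:

```python
import math

def log_Z(alpha):
    # log of the Dirichlet normalization constant Z(alpha)
    return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))

def dirichlet_mixture_estimate(counts, components, mixture_weights):
    # Component posterior weights: P(S=k|D) proportional to q_k * Z(alpha_k + n) / Z(alpha_k)
    logs = [math.log(q) + log_Z([a + n for a, n in zip(alpha, counts)]) - log_Z(alpha)
            for q, alpha in zip(mixture_weights, components)]
    m = max(logs)                      # subtract max for numerical stability
    w = [math.exp(l - m) for l in logs]
    total = sum(w)
    w = [x / total for x in w]
    # Posterior mean: weighted combination of the per-component pseudocount estimates
    N = sum(counts)
    theta = [sum(wk * (counts[i] + alpha[i]) / (N + sum(alpha))
                 for wk, alpha in zip(w, components))
             for i in range(len(counts))]
    return theta, w
```

With a single component this reduces to the plain pseudocount estimate; with several components, the component whose pseudocounts best match the observed counts receives the largest weight.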
Summary
- The Cox-Jaynes axioms
- Bayes’ rule
- Probabilistic models
  - Maximum likelihood
  - Maximum a posteriori
  - Bayesian inference
- Multinomial and Dirichlet distributions
- Estimation of frequency matrices
  - Pseudocounts
  - Dirichlet mixture