Environmental Data Analysis with MatLab
Lecture 4: Multivariate Distributions
SYLLABUS
Lecture 01: Using MatLab
Lecture 02: Looking At Data
Lecture 03: Probability and Measurement Error
Lecture 04: Multivariate Distributions
Lecture 05: Linear Models
Lecture 06: The Principle of Least Squares
Lecture 07: Prior Information
Lecture 08: Solving Generalized Least Squares Problems
Lecture 09: Fourier Series
Lecture 10: Complex Fourier Series
Lecture 11: Lessons Learned from the Fourier Transform
Lecture 12: Power Spectra
Lecture 13: Filter Theory
Lecture 14: Applications of Filters
Lecture 15: Factor Analysis
Lecture 16: Orthogonal Functions
Lecture 17: Covariance and Autocorrelation
Lecture 18: Cross-correlation
Lecture 19: Smoothing, Correlation and Spectra
Lecture 20: Coherence; Tapering and Spectral Analysis
Lecture 21: Interpolation
Lecture 22: Hypothesis Testing
Lecture 23: Hypothesis Testing continued; F-Tests
Lecture 24: Confidence Limits of Spectra, Bootstraps
purpose of the lecture: understanding the propagation of error from many data to several inferences
probability with several variables
example: 100 birds live on an island: 30 tan pigeons, 20 white pigeons, 10 tan gulls, and 40 white gulls. Treat the species and the color of the birds as random variables.
Joint probability, P(s, c): the probability that a bird has species, s, and color, c.

                 color, c
species, s       tan, t    white, w
pigeon, p        30%       20%
gull, g          10%       40%
probabilities must add up to 100%
Univariate probabilities can be calculated by summing the rows and columns of P(s, c).

Summing the rows gives P(s), the probability of species, irrespective of color:

P(s): pigeon, p: 50%; gull, g: 50%

Summing the columns gives P(c), the probability of color, irrespective of species:

P(c): tan, t: 40%; white, w: 60%
Conditional probabilities: the probability of one thing, given that you know another.

P(c|s), the probability of color, given species: divide each row of P(s, c) by its row sum, P(s):

                 color, c
species, s       tan, t    white, w
pigeon, p        60%       40%
gull, g          20%       80%

P(s|c), the probability of species, given color: divide each column of P(s, c) by its column sum, P(c):

                 color, c
species, s       tan, t    white, w
pigeon, p        75%       33%
gull, g          25%       67%
calculation of conditional probabilities:

P(s|c) = P(s, c) / P(c): divide the joint probability by the fraction of birds of that color

P(c|s) = P(s, c) / P(s): divide the joint probability by the fraction of birds of that species
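A minimal MatLab sketch of these table operations (variable names such as Psc are illustrative, not from the lecture):

```matlab
% joint probability table P(s,c) from the bird example:
% rows = species (pigeon, gull), columns = color (tan, white)
Psc = [0.30, 0.20; 0.10, 0.40];

Ps = sum(Psc, 2);   % P(s): sum each row over colors   -> [0.50; 0.50]
Pc = sum(Psc, 1);   % P(c): sum each column over species -> [0.40, 0.60]

% conditionals via implicit expansion (MatLab R2016b or later)
Pc_given_s = Psc ./ Ps;   % divide each row by P(s)    -> [0.60 0.40; 0.20 0.80]
Ps_given_c = Psc ./ Pc;   % divide each column by P(c) -> [0.75 0.33; 0.25 0.67]
```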
Bayes Theorem. The joint probability can be written in two ways:

P(s, c) = P(s|c) P(c) and P(s, c) = P(c|s) P(s)

These are the same, so

P(s|c) P(c) = P(c|s) P(s)

Rearranging gives

P(s|c) = P(c|s) P(s) / P(c)
3 ways to write P(c) and P(s):

P(c) = Σs P(s, c) = Σs P(c|s) P(s), and similarly P(s) = Σc P(s, c) = Σc P(s|c) P(c)
so there are 3 ways to write Bayes Theorem:

P(s|c) = P(c|s) P(s) / P(c)
       = P(c|s) P(s) / Σs′ P(s′, c)
       = P(c|s) P(s) / Σs′ P(c|s′) P(s′)

the last way seems the most complicated, but it is also the most useful
Beware! confusing P(a|b) with P(b|a) is a major cause of error, both among scientists and the general public.
example: the probability that a dead person succumbed to pancreatic cancer (as contrasted to some other cause of death) is P(cancer|death) = 1.4%. The probability that a person diagnosed with pancreatic cancer will die of it in the next five years is P(death|cancer) = 90%. Vastly different numbers!
Bayesian inference: "updating information". An observer on the island has sighted a bird. We want to know whether it's a pigeon. When the observer says "bird sighted", the probability that it's a pigeon is P(s=p) = 50%, since pigeons comprise half of the birds on the island.
Now the observer says, "the bird is tan". The probability that it's a pigeon changes. We now want to know P(s=p|c=t), the conditional probability that it's a pigeon, given that we have observed its color to be tan.
we use the formula

P(s=p|c=t) = P(c=t|s=p) P(s=p) / P(c=t) = (60% × 50%) / 40% = 75%

that is, the percentage of tan pigeons (30%) divided by the percentage of tan birds (40%).
observation of the bird's color changed the probability that it was a pigeon from 50% to 75%. Thus Bayes Theorem offers a way to assess the value of an observation.
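A one-line check of this update in MatLab (variable names are illustrative):

```matlab
% Bayesian update for the bird-sighting example
Pp    = 0.50;             % prior: P(s=p), half the birds are pigeons
Pt_gp = 0.60;             % likelihood: P(c=t|s=p), 60% of pigeons are tan
Pt    = 0.40;             % evidence: P(c=t), 40% of all birds are tan
Pp_gt = Pt_gp * Pp / Pt   % posterior: P(s=p|c=t) = 0.75
```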
continuous variables: the joint probability density function, p(d1, d2)
if the probability density function p(d1, d2) is thought of as a cloud made of water vapor, the probability that (d1, d2) is in the box d1L ≤ d1 ≤ d1R, d2L ≤ d2 ≤ d2R is given by the total mass of water vapor in the box:

P = ∫ from d1L to d1R ∫ from d2L to d2R p(d1, d2) dd2 dd1
the p.d.f. is normalized to unit total probability:

∫∫ p(d1, d2) dd1 dd2 = 1
univariate p.d.f.'s "integrate away" one of the variables:

p(d1) = ∫ p(d1, d2) dd2, the p.d.f. of d1 irrespective of d2
p(d2) = ∫ p(d1, d2) dd1, the p.d.f. of d2 irrespective of d1
[figure: the joint p.d.f. p(d1, d2), with p(d1) obtained by integrating over d2 and p(d2) obtained by integrating over d1]
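A sketch of "integrating away" a variable numerically; the unit-variance Normal p.d.f. and the grid here are assumptions chosen for illustration:

```matlab
% tabulate a joint p.d.f. on a grid and integrate away one variable
d1 = linspace(-5, 5, 101);  Dd = d1(2) - d1(1);
d2 = linspace(-5, 5, 101);
[D1, D2] = meshgrid(d1, d2);             % D1 varies along columns, D2 along rows
p = exp(-0.5*(D1.^2 + D2.^2)) / (2*pi);  % uncorrelated unit-variance Normal

p1 = sum(p, 1) * Dd;       % integrate over d2 (rows):    p(d1)
p2 = sum(p, 2) * Dd;       % integrate over d1 (columns): p(d2)
total = sum(p(:)) * Dd^2   % check normalization: approximately 1
```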
mean and variance are calculated in the usual way:

d̄1 = ∫∫ d1 p(d1, d2) dd1 dd2 and σ1² = ∫∫ (d1 − d̄1)² p(d1, d2) dd1 dd2

and similarly for d2.
correlation: the tendency of random variable d1 to be large/small when random variable d2 is large/small
positive correlation: tall people tend to weigh more than short people … negative correlation: long-time smokers tend to die young …
[figure: the shape of the p.d.f. p(d1, d2) for positive correlation, negative correlation, and no correlation]
quantifying correlation: multiply the p.d.f. p(d1, d2) by a four-quadrant function s(d1, d2) that is positive in the quadrants where (d1 − d̄1) and (d2 − d̄2) have the same sign and negative where they differ, then integrate. [figure: the p.d.f. p(d1, d2), the four-quadrant function s(d1, d2), and their product]
covariance quantifies correlation:

σ1,2 = ∫∫ (d1 − d̄1)(d2 − d̄2) p(d1, d2) dd1 dd2
combine variance and covariance into a matrix, C:

C = [ σ1²    σ1,2
      σ1,2   σ2²  ]

note that C is symmetric
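As a sketch of the sample counterpart, MatLab's cov estimates this matrix from realizations of (d1, d2); the data matrix here is synthetic and illustrative:

```matlab
% estimate the covariance matrix from Nr realizations of (d1, d2)
Nr = 10000;
D  = randn(Nr, 2) * [1, 0.5; 0, 1];  % correlated samples (illustrative)
dbar = mean(D, 1);                   % estimated means [d1bar, d2bar]
C    = cov(D);                       % 2x2: variances on the diagonal,
                                     % covariance off the diagonal
```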
many random variables, d1, d2, d3, …, dN: write the d's as a vector, d = [d1, d2, d3, …, dN]ᵀ
the mean is then a vector, too: d̄ = [d̄1, d̄2, d̄3, …, d̄N]ᵀ
and the covariance is an N×N matrix, C:

C = [ σ1²    σ1,2   σ1,3   …
      σ1,2   σ2²    σ2,3   …
      σ1,3   σ2,3   σ3²    …
      …      …      …        ]

with the variances on the main diagonal
multivariate Normal p.d.f.:

p(d) = (2π)^(−N/2) |C|^(−1/2) exp( −½ (d − d̄)ᵀ C⁻¹ (d − d̄) )

with (d − d̄) the data minus its mean, |C|^(1/2) the square root of the determinant of the covariance matrix, and C⁻¹ the inverse of the covariance matrix
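A direct transcription of this formula into MatLab (the point d, mean dbar, and covariance C are illustrative values):

```matlab
% evaluate the multivariate Normal p.d.f. at a point d
d    = [1; 2];
dbar = [0; 0];
C    = [2, 1; 1, 2];   % must be symmetric and positive definite

N = length(d);
r = d - dbar;                                     % data minus its mean
p = exp(-0.5 * (r' * (C \ r))) / sqrt((2*pi)^N * det(C))
```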
compare with the univariate Normal p.d.f.:

p(d) = (2πσ²)^(−1/2) exp( −(d − d̄)² / (2σ²) )

corresponding terms: the variance σ² corresponds to the covariance matrix C, and (d − d̄)²/σ² corresponds to (d − d̄)ᵀ C⁻¹ (d − d̄)
error propagation: p(d) is Normal with mean d̄ and covariance Cd. Given model parameters m, where m is a linear function of d, m = Md:

Q1. What is p(m)?
Q2. What is its mean m̄ and covariance Cm?
Q1: What is p(m)?  A1: p(m) is Normal.
Q2: What is its mean m̄ and covariance Cm?  A2: m̄ = M d̄ and Cm = M Cd Mᵀ.
where the answer comes from: transform p(d) to p(m), starting with the Normal p.d.f. for p(d) given above and the multivariate transformation rule

p(m) = p(d(m)) J(m), where J(m) = |det(∂d/∂m)| is the Jacobian determinant
this is not as hard as it looks, because the Jacobian determinant J(m) is constant: J(m) = |det(M⁻¹)|. So, starting with p(d), replace every occurrence of d with M⁻¹m and multiply the result by |det(M⁻¹)|. This yields:
the Normal p.d.f. for the model parameters:

p(m) = (2π)^(−N/2) |Cm|^(−1/2) exp( −½ (m − m̄)ᵀ Cm⁻¹ (m − m̄) )

where m̄ = M d̄ and Cm = M Cd Mᵀ: the rule for error propagation
example: two objects, A and B, on a scale
measurement 1: d1 = the weight of A
measurement 2: d2 = the combined weight of A and B
suppose the measurements are uncorrelated and that both have the same variance, σd²
model parameters:
m1 = weight of B = d2 − d1
m2 = weight of B minus weight of A = (d2 − d1) − d1 = d2 − 2d1
linear rule relating model parameters to data: m = Md, with

M = [ −1   1
      −2   1 ]
so the means of the model parameters are

m̄ = M d̄, that is, m̄1 = d̄2 − d̄1 and m̄2 = d̄2 − 2 d̄1
and the covariance matrix is

Cm = M Cd Mᵀ = σd² M Mᵀ = σd² [ 2   3
                                3   5 ]
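This product is easy to check in MatLab; a sketch using σd² = 5, the value from the numerical example below:

```matlab
% error propagation Cm = M*Cd*M' for the two-weight example
M  = [-1, 1; -2, 1];
sigmad2 = 5;            % variance of both measurements (from the example below)
Cd = sigmad2 * eye(2);  % uncorrelated data, equal variance
Cm = M * Cd * M'        % = [10 15; 15 25]: variances 10 and 25, covariance 15
```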
the model parameters are correlated, even though the data are uncorrelated (bad). The variance of the model parameters differs from the variance of the data (bad if bigger, good if smaller).
[figure: the p.d.f.'s p(d1, d2) and p(m1, m2) for specific values of d̄ and σd²]

The data, (d1, d2), have mean (15, 25), variance (5, 5), and zero covariance. The model parameters, (m1, m2), have mean (10, −5), variance (10, 25), and covariance 15.
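As a closing sketch, the same numbers can be verified by simulation (the sample size Nr is arbitrary):

```matlab
% Monte Carlo check of the error-propagation rule
Nr   = 100000;
dbar = [15; 25];
Cd   = 5 * eye(2);                 % variance (5, 5), zero covariance
M    = [-1, 1; -2, 1];

d = repmat(dbar, 1, Nr) + sqrtm(Cd) * randn(2, Nr);  % Normal realizations of d
m = M * d;                                           % propagate each realization

mean(m, 2)   % approximately [10; -5]
cov(m')      % approximately [10 15; 15 25]
```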