Maximum Likelihood Estimation (Methods of Economic Investigation, Lecture 17)

Last Time
• IV estimation issues
  - Heterogeneous treatment effects
    - The assumptions
    - LATE interpretation
  - Weak instruments
    - Bias in finite samples
    - F-statistic test

Today’s Class
• Maximum Likelihood Estimators
  - You’ve seen this in the context of OLS
  - Can make other assumptions on the form of the likelihood function
  - This is how we estimate discrete choice models like probit and logit
• This is a very useful form of estimation
  - Has nice properties
  - Can be very robust to mis-specification

Our Standard OLS
• Standard OLS: Yi = Xi’β + εi
• Focus on minimizing the mean squared error, with the assumption that εi | Xi ~ N(0, σ²)

Another way to motivate linear models
• “Extremum estimators”: maximize or minimize some objective function
  - OLS minimizes the mean squared error
  - Could also imagine minimizing some other types of functions
• We often use a “likelihood function”
  - This approach is more general, allowing us to deal with more complex nonlinear models
  - Useful properties in terms of consistency and asymptotic convergence

What is a likelihood function?
• Suppose we have independent and identically distributed random variables {Z1, . . . , ZN} drawn from a density function f(z; θ). Then the likelihood function given a sample z = {z1, . . . , zN} is
  L(θ; z) = f(z1; θ) × f(z2; θ) × . . . × f(zN; θ)
• Because it is sometimes convenient, we often use this in logarithmic form:
  log L(θ; z) = Σi log f(zi; θ)
  (a worked example is given below)
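For example, with N i.i.d. draws from a normal density with θ = (μ, σ²), the likelihood and log likelihood are:

```latex
% Worked example: N i.i.d. draws from N(mu, sigma^2)
L(\theta; z) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}}
               \exp\!\left(-\frac{(z_i-\mu)^2}{2\sigma^2}\right),
\qquad
\log L(\theta; z) = -\frac{N}{2}\log(2\pi\sigma^2)
                    - \frac{1}{2\sigma^2}\sum_{i=1}^{N}(z_i-\mu)^2
```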

Consistency - 1
• Consider the population likelihood function L0(θ) and the “true” parameter value θ0
• Think of L0 as the population average and log L as the sample estimate, so that in the usual way (by a law of large numbers) the scaled sample log likelihood (1/N) log L(θ; z) converges in probability to L0(θ) as N grows

Consistency - 2
• The population likelihood function L0(θ) is maximized at the true value, θ0. Why?
  - Think of the sample likelihood function as telling us how likely it is that one would observe the sample if the parameter value θ were really the true parameter value.
  - Similarly, the population likelihood function L0(θ) will be largest at the value of θ that makes it most likely to “observe the population.”
  - That value is the true parameter value, i.e. θ0 = argmax L0(θ).

Consistency - 3
• We now know that the population likelihood L0(θ) is maximized at θ0
  - Can use Jensen’s inequality to apply this to the log function (the step is written out below)
• The sample log likelihood log L(θ; z), scaled by 1/N, gets closer to L0(θ) as N increases
  - i.e. log L will start having the same shape as L0
  - For large N, the sample likelihood will be maximized at (approximately) θ0, which is why the MLE is consistent
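The Jensen’s inequality step can be written out as follows (a standard argument; the expectation is taken under the true density f(z; θ0)):

```latex
% Jensen's inequality: the population expected log likelihood is maximized at theta_0
E_{\theta_0}\!\left[\log\frac{f(z;\theta)}{f(z;\theta_0)}\right]
  \le \log E_{\theta_0}\!\left[\frac{f(z;\theta)}{f(z;\theta_0)}\right]
  = \log\!\int f(z;\theta)\,dz
  = \log 1 = 0
\;\;\Rightarrow\;\;
E_{\theta_0}\!\left[\log f(z;\theta)\right] \le E_{\theta_0}\!\left[\log f(z;\theta_0)\right]
```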

Information Matrix Equality
• An additional useful property of the MLE comes from the following definitions (written out below):
  - Define the score function as the vector of first derivatives of the log likelihood function
  - Define the Hessian as the matrix of second derivatives of the log likelihood function
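In symbols (under the usual regularity conditions), the score, the Hessian, and the information matrix equality are:

```latex
% Score and Hessian of the log likelihood
s(\theta; z) = \frac{\partial \log L(\theta; z)}{\partial \theta},
\qquad
H(\theta; z) = \frac{\partial^2 \log L(\theta; z)}{\partial \theta\,\partial \theta'}

% The score has mean zero at theta_0, and its variance equals minus the expected Hessian
E_{\theta_0}\!\left[s(\theta_0; z)\right] = 0,
\qquad
I(\theta_0) \equiv E_{\theta_0}\!\left[s(\theta_0; z)\,s(\theta_0; z)'\right]
            = -\,E_{\theta_0}\!\left[H(\theta_0; z)\right]
```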

Asymptotic Distribution
• Define the score, Hessian, and information matrix as above
• Then the MLE converges in distribution to a normal distribution centered at the true θ0, with variance given by the inverse of the information matrix (written out below)
• The information matrix I(θ) has the property that its inverse is the efficiency bound, i.e. there does not exist a consistent estimate of θ with a smaller asymptotic variance
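One common way to state the result (the exact normalization varies across textbooks) is:

```latex
% Asymptotic distribution of the MLE, with I(theta_0) the full-sample information matrix
\hat{\theta}_{MLE} \;\overset{a}{\sim}\; N\!\left(\theta_0,\; I(\theta_0)^{-1}\right),
\qquad
I(\theta_0) = -\,E_{\theta_0}\!\left[H(\theta_0; z)\right]

% Cramer--Rao bound: any unbiased estimator has variance at least I(theta_0)^{-1}
% (in the positive semidefinite sense); the MLE attains this bound asymptotically
\text{Var}(\tilde{\theta}) \;\ge\; I(\theta_0)^{-1}
```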

Computation
• Can be quite complex because we need to maximize the likelihood numerically
• General procedure (a sketch is given below):
  - Re-scale variables so they have roughly similar variances
  - Choose some starting value, estimate the maximum in that area, and do this over and over across different grids
  - Get an approximation of the underlying objective function
  - If this converges to a single maximum, you’re done
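A minimal sketch of this procedure, assuming a simple normal log likelihood, simulated data, and scipy’s general-purpose BFGS optimizer (the variable names here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Simulated i.i.d. data from N(mu, sigma^2); in practice z would be the observed sample
rng = np.random.default_rng(0)
z = rng.normal(loc=2.0, scale=1.5, size=500)

def neg_log_likelihood(params, data):
    """Negative normal log likelihood; minimizing this maximizes the likelihood."""
    mu, log_sigma = params           # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    ll = -0.5 * np.log(2 * np.pi * sigma**2) - (data - mu)**2 / (2 * sigma**2)
    return -np.sum(ll)

# Try several starting values ("grids") and keep the best converged result
best = None
for start in [np.array([0.0, 0.0]), np.array([5.0, 1.0]), np.array([-3.0, -1.0])]:
    res = minimize(neg_log_likelihood, start, args=(z,), method="BFGS")
    if res.success and (best is None or res.fun < best.fun):
        best = res

mu_hat, sigma_hat = best.x[0], np.exp(best.x[1])
print(mu_hat, sigma_hat)  # should be close to 2.0 and 1.5
```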

Test Statistics
• Define our likelihood function L(z; θ0, θ1)
• Suppose we want to test H0: θ0 = 0 against the alternative HA: θ0 ≠ 0
• We could estimate both a restricted and an unrestricted likelihood function

Test Statistics - 1
• We can test how “close” our restricted and unrestricted models are (the likelihood ratio test)
• We could also test whether the restricted log likelihood function is maximized at θ0 = 0: if the null is true, the derivative of the log likelihood function with respect to θ0 at that point should be close to zero (the score, or LM, test)

Test Statistics - 2
• The restricted and unrestricted estimates of θ should be close together if the null hypothesis is correct
• Partition the information matrix conformably with (θ0, θ1)
• Define the Wald test as a quadratic form in the difference between the estimate and its hypothesized value, weighted by the inverse of its variance (see the formulas below)
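In standard notation, with θ̂ the unrestricted and θ̃ the restricted estimate, the three classical test statistics take the form:

```latex
% Likelihood ratio, score (LM), and Wald statistics; all are asymptotically chi-squared
% with degrees of freedom equal to the number of restrictions
LR = 2\left[\log L(\hat{\theta}) - \log L(\tilde{\theta})\right]

LM = s(\tilde{\theta})'\, I(\tilde{\theta})^{-1}\, s(\tilde{\theta})

W  = \hat{\theta}_0'\,\left[\widehat{\text{Var}}(\hat{\theta}_0)\right]^{-1}\hat{\theta}_0
\qquad\text{(for the null } H_0:\ \theta_0 = 0\text{)}
```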

Comparing test statistics
• In large samples, these test statistics converge in probability to one another (they are asymptotically equivalent)
  - In finite samples, the three will tend to generate somewhat different test statistics
  - But they will generally come to the same conclusion
• The difference between the tests is how they go about answering the question:
  - The LR test requires estimates of both models
  - The W and LM tests approximate the LR test but require that only one model be estimated
  - When the model is linear, the three test statistics have the following relationship: W ≥ LR ≥ LM

OLS in the MLE context
• Linear model log likelihood function (written out below)
• Choose the parameter values (β, σ²) which maximize this
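Under the normality assumption εi | Xi ~ N(0, σ²) from earlier, the log likelihood and its maximizers are:

```latex
% Normal linear model log likelihood
\log L(\beta, \sigma^2; y, X)
  = -\frac{N}{2}\log(2\pi\sigma^2)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(y_i - X_i'\beta\right)^2

% Maximizing over beta is the same as minimizing the sum of squared residuals,
% so the MLE of beta is exactly the OLS estimator; the MLE of sigma^2 divides by N
\hat{\beta}_{MLE} = (X'X)^{-1}X'y = \hat{\beta}_{OLS},
\qquad
\hat{\sigma}^2_{MLE} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - X_i'\hat{\beta}\right)^2
```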

Example 1: Discrete choice
• Latent variable model:
  - True variable of interest is Y* = X’β + ε
  - We don’t observe Y*, but we can observe Y = 1[Y* > 0]
  - Pr[Y = 1] = Pr[Y* > 0] = Pr[ε < X’β] (for a symmetric error distribution)
• What to assume about ε?
  - Linear Probability Model: Pr[Y = 1] = X’β
  - Probit Model: Pr[Y = 1] = Φ(X’β)
  - Logit Model: Pr[Y = 1] = exp(X’β) / [1 + exp(X’β)]

Likelihood Functions
• Probit: L(β) = Πi Φ(Xi’β)^Yi [1 − Φ(Xi’β)]^(1−Yi)
• Logit: L(β) = Πi Λ(Xi’β)^Yi [1 − Λ(Xi’β)]^(1−Yi), where Λ(u) = exp(u) / [1 + exp(u)]
(a maximum likelihood estimation sketch is given below)
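A minimal sketch of estimating these by maximum likelihood, assuming simulated data and scipy’s optimizer (the helper names below are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated data for illustration: a constant plus one regressor
rng = np.random.default_rng(1)
N = 1000
X = np.column_stack([np.ones(N), rng.normal(size=N)])
beta_true = np.array([0.5, 1.0])
y = (X @ beta_true + rng.normal(size=N) > 0).astype(float)  # probit data-generating process

def neg_loglik_probit(beta, y, X):
    """Negative probit log likelihood: -sum of y*log(Phi(Xb)) + (1-y)*log(1-Phi(Xb))."""
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)   # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def neg_loglik_logit(beta, y, X):
    """Negative logit log likelihood, with Lambda(u) = exp(u)/(1+exp(u))."""
    p = np.clip(1.0 / (1.0 + np.exp(-(X @ beta))), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

probit_fit = minimize(neg_loglik_probit, x0=np.zeros(2), args=(y, X), method="BFGS")
logit_fit = minimize(neg_loglik_logit, x0=np.zeros(2), args=(y, X), method="BFGS")
print("probit beta-hat:", probit_fit.x)
print("logit  beta-hat:", logit_fit.x)  # logit coefficients are roughly 1.6-1.8 times larger
```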

Marginal Effects
• In the linear probability model we can interpret our coefficients as the change in Pr[Y = 1] with respect to the relevant variable, i.e. ∂Pr[Y = 1]/∂xk = βk
• In non-linear models, things are a bit trickier:
  - We get the parameter estimate of β
  - But we want ∂Pr[Y = 1]/∂xk, which also depends on where X is evaluated
  - These are the “marginal effects” and are typically evaluated at the mean values of X (formulas below)
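For the probit and logit models above, the marginal effects evaluated at the sample mean of X take the standard forms (φ is the standard normal density, Λ the logistic CDF):

```latex
\text{Probit:}\quad
\frac{\partial \Pr[Y=1\mid X]}{\partial x_k}\bigg|_{X=\bar{X}}
  = \phi(\bar{X}'\beta)\,\beta_k
\qquad
\text{Logit:}\quad
\frac{\partial \Pr[Y=1\mid X]}{\partial x_k}\bigg|_{X=\bar{X}}
  = \Lambda(\bar{X}'\beta)\!\left[1-\Lambda(\bar{X}'\beta)\right]\beta_k
```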

Next Time
• Time series processes
  - AR
  - MA
  - ARMA
• Model selection
  - Return to MLE
  - Various criteria for model choice