
Lecture Note: Econ 703 (week 2)

🕒 Published at: 2 years ago

Delta method (Univariate form)

Extending the CLT: from the CLT we know that $\sqrt{n}\,(\bar{X}_n - \mu) \xrightarrow{d} N\big(0, \mathrm{var}(X_i)\big)$.

By the continuous mapping theorem we have $\sqrt{n}\,\big(g(\bar{X}_n) - g(\mu)\big) \xrightarrow{d} N\big(0, \mathrm{var}(X_i)\, g'(\mu)^2\big)$

for any function $g$ that has a first derivative, where $g'$ is continuous and $\mathrm{var}(X_i)\, g'(\mu)^2$ is non-zero.
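As a quick sanity check of the delta method, here is a small Monte Carlo sketch (my own illustrative setup, not from the lecture): take $X_i \sim \mathrm{Exponential}(1)$, so $\mu = 1$ and $\mathrm{var}(X_i) = 1$, and $g(x) = x^2$, so the limiting variance should be $\mathrm{var}(X_i)\, g'(\mu)^2 = 1 \cdot 2^2 = 4$.

```python
import numpy as np

# Monte Carlo check of the delta method (illustrative setup chosen by me).
# X_i ~ Exponential(1): mu = 1, var(X_i) = 1.  With g(x) = x^2, g'(mu) = 2,
# so the delta method predicts
#   sqrt(n) * (g(Xbar_n) - g(mu))  -->d  N(0, var(X_i) * g'(mu)^2) = N(0, 4).
rng = np.random.default_rng(0)
n, reps = 2_000, 4_000
xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
stat = np.sqrt(n) * (xbar**2 - 1.0**2)  # sqrt(n) * (g(Xbar_n) - g(mu))

print(np.var(stat))  # should be close to 4
```

The empirical variance of the simulated statistic lands near the predicted value of 4, up to Monte Carlo error.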

The previous notes covered the core math tools for this course.

The notes from here on introduce the theory of statistics.

How to pick a good estimator

Let's say that for $\theta$ we want to propose an estimator $\hat{\theta}$.

Two factors to consider:

  • Closeness: $\hat{\theta}$ should be close to $\theta$
  • Precision: $\hat{\theta}$ should be precise

Closeness/Bias

The bias is defined as:

$\mathrm{Bias}(\hat{\theta}, P) = E_P(\hat{\theta}) - \theta(P) = E_P\big[\hat{\theta}(X_1, X_2, \ldots, X_n)\big] - \theta(P)$

Precision

The variance is:

$\mathrm{var}(\hat{\theta}, P) = \mathrm{var}_P\big[\hat{\theta}(X_1, X_2, \ldots, X_n)\big]$

Balance Bias and Precision

Use the squared-loss form (MSE, mean squared error):

$\mathrm{MSE}(\hat{\theta}, P) = E_P\big[(\hat{\theta}(X) - \theta(P))^2\big] = \mathrm{Bias}(\hat{\theta}, P)^2 + \mathrm{var}(\hat{\theta}, P)$
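The bias–variance decomposition can be checked numerically. The sketch below is my own illustrative setup, not from the lecture: it estimates the mean of $N(2, 1)$ with a deliberately biased estimator $\bar{X}_n + 0.1$. With the population-variance convention (`ddof=0`), the identity holds exactly for the simulated draws, not just in expectation.

```python
import numpy as np

# Numeric check of MSE = Bias^2 + Var (illustrative setup chosen by me).
# Estimate the mean of N(2, 1) with the deliberately biased estimator Xbar + 0.1.
rng = np.random.default_rng(1)
mu, n, reps = 2.0, 50, 100_000
theta_hat = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1) + 0.1

mse = np.mean((theta_hat - mu) ** 2)
bias = theta_hat.mean() - mu
var = theta_hat.var()  # ddof=0: the population-variance convention

print(mse, bias**2 + var)  # identical up to floating-point rounding
```

Here the simulated bias is close to the injected 0.1, and `mse` matches `bias**2 + var` to machine precision.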

Example

Suppose we know $X_i \in [0,1]$, and define the loss function as $L(P, a) = (a - \theta(P))^2$.

A good estimator can be:

$\hat{\theta} = \lambda_n \bar{X}_n + (1 - \lambda_n) \cdot 0.5, \qquad \lambda_n = \frac{\sqrt{n}}{1 + \sqrt{n}}$

This estimator can do better than $\bar{X}_n$ alone because it shrinks toward 0.5, the midpoint of $[0,1]$, which incorporates our information about the distribution's support.
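To see this concretely, here is a hedged simulation sketch. Bernoulli data with true mean $p = 0.5$ and the weight $\lambda_n = \sqrt{n}/(1+\sqrt{n})$ are my illustrative choices; the shrinkage estimator beats $\bar{X}_n$ when the true mean is near 0.5, though it can lose when the truth is near 0 or 1.

```python
import numpy as np

# Compare the shrinkage estimator with the plain sample mean by simulation.
# Illustrative choices (mine): Bernoulli data with true mean p = 0.5.
rng = np.random.default_rng(2)
p, n, reps = 0.5, 25, 200_000
xbar = rng.binomial(n, p, size=reps) / n  # sample means of n Bernoulli draws

lam = np.sqrt(n) / (1 + np.sqrt(n))
shrunk = lam * xbar + (1 - lam) * 0.5

mse_xbar = np.mean((xbar - p) ** 2)
mse_shrunk = np.mean((shrunk - p) ** 2)
print(mse_xbar, mse_shrunk)  # shrinkage wins when p is near 0.5
```

Since the shrunk estimator is unbiased at $p = 0.5$ and has variance $\lambda_n^2$ times that of $\bar{X}_n$, its MSE comes out strictly smaller in this setup.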

Closeness

There are different notions of an estimator $\hat{\theta}$ being close to $\theta$.

Generally speaking, the following three notions are related.

  • Consistent

    • $\hat{\theta} \xrightarrow{P} \theta(P)$ (for all $P$)
    • we say $\hat{\theta}$ is consistent for $\theta$
  • Unbiased in the limit

    • $E_P(\hat{\theta}) \to \theta(P)$
    • we say $\hat{\theta}$ is unbiased in the limit
  • Asymptotically unbiased

    • $r_n(\hat{\theta} - \theta) \xrightarrow{d} Z$ with $E(Z) = 0$
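Consistency can be illustrated by simulation (my own sketch, not from the notes): for the sample mean of $\mathrm{Exponential}(1)$ data, $P(|\bar{X}_n - \mu| > \varepsilon)$ shrinks toward zero as $n$ grows.

```python
import numpy as np

# Consistency in action (illustrative simulation, values chosen by me):
# for Xbar_n with Exponential(1) data, P(|Xbar_n - mu| > eps) shrinks as n grows.
rng = np.random.default_rng(3)
mu, eps, reps = 1.0, 0.1, 10_000
probs = []
for n in (10, 100, 1000):
    xbar = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)
    probs.append(np.mean(np.abs(xbar - mu) > eps))

print(probs)  # a decreasing sequence of exceedance probabilities
```

Each tenfold increase in $n$ cuts the standard deviation of $\bar{X}_n$ by $\sqrt{10}$, so the exceedance probability drops sharply.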

Precision
