Author Archive

A simulation-based explanation of consistency and asymptotic normality

Overview

In the frequentist approach to statistics, estimators are random variables because they are functions of random data. The finite-sample distributions of most of the estimators used in applied work are not known, because the estimators are complicated nonlinear functions of random data. These estimators have large-sample convergence properties that we use to approximate their behavior in finite samples.

Two key convergence properties are consistency and asymptotic normality. A consistent estimator gets arbitrarily close in probability to the true value. The distribution of an asymptotically normal estimator gets arbitrarily close to a normal distribution as the sample size increases. We use a recentered and rescaled version of this normal distribution to approximate the finite-sample distribution of our estimators.

I illustrate the meaning of consistency and asymptotic normality by Monte Carlo simulation (MCS). I use some of the Stata mechanics I discussed in Monte Carlo simulations using Stata.

Consistent estimator

A consistent estimator gets arbitrarily close in probability to the true value as you increase the sample size. In other words, the probability that a consistent estimator is outside a neighborhood of the true value goes to zero as the sample size increases. Figure 1 illustrates this convergence for an estimator \(\widehat{\theta}\) at sample sizes 100, 1,000, and 5,000, when the true value is 0. As the sample size increases, the density becomes more tightly concentrated around the true value. As the sample size becomes infinite, the density collapses to a spike at the true value.

Figure 1: Densities of an estimator for sample sizes 100, 1,000, 5,000, and \(\infty\)
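
More formally, an estimator \(\widehat{\theta}_N\) computed from a sample of size \(N\) is consistent for the true value \(\theta_0\) if, for every \(\epsilon>0\),

\[
\lim_{N\rightarrow\infty} \Pr\left( |\widehat{\theta}_N - \theta_0| > \epsilon \right) = 0
\]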

I now illustrate that the sample average is a consistent estimator for the mean of an independently and identically distributed (i.i.d.) random variable with a finite mean and a finite variance. In this example, the data are i.i.d. draws from a \(\chi^2\) distribution with 1 degree of freedom. The true value is 1, because the mean of a \(\chi^2(1)\) is 1.

Code block 1 implements an MCS of the sample average as an estimator of the mean, using samples of size 1,000 of i.i.d. \(\chi^2(1)\) variates.

Code block 1: mean1000.do

clear all
set seed 12345
postfile sim m1000 using sim1000, replace

forvalues i = 1/1000 {
        quietly capture drop y
        quietly set obs 1000
        quietly generate y = rchi2(1)
        quietly summarize y
        quietly post sim  (r(mean))
}
postclose sim

Line 1 clears Stata, and line 2 sets the seed of the random-number generator. Line 3 uses postfile to create a place in memory named sim, in which I store observations on the variable m1000; these stored observations will become the new dataset sim1000.dta. Note that the keyword using separates the name of the new variable from the name of the new dataset. The replace option specifies that sim1000.dta be replaced if it already exists.

Lines 5 and 11 use forvalues to repeat the code in lines 6–10 a total of 1,000 times. Each time through the forvalues loop, line 6 drops y, line 7 sets the number of observations to 1,000, line 8 generates a sample of size 1,000 of i.i.d. \(\chi^2(1)\) variates, line 9 estimates the mean of y in this sample, and line 10 uses post to store the estimated mean in what will be the new variable m1000. Line 12 writes everything stored in sim to the new dataset sim1000.dta. See Monte Carlo simulations using Stata for more details about using post to implement an MCS in Stata.
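
As an aside, the same MCS can be written more compactly with the simulate command. The sketch below is one way to do so; the wrapper program onerep is a name of my own, not part of the original do-file, and the results need not match the numbers above exactly.

// one replication: draw 1,000 chi-squared(1) variates and return their sample average
program define onerep, rclass
        drop _all
        set obs 1000
        generate y = rchi2(1)
        summarize y
        return scalar mean = r(mean)
end

// run 1,000 replications, collecting the estimates in the variable m1000
simulate m1000=r(mean), reps(1000) seed(12345): onerep
summarize m1000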

In example 1, I run mean1000.do and then summarize the results.

Example 1: Estimating the mean from a sample of size 1,000

. do mean1000

. clear all

. set seed 12345

. postfile sim m1000 using sim1000, replace

.
. forvalues i = 1/1000 {
  2.         quietly capture drop y
  3.         quietly set obs 1000
  4.         quietly generate y = rchi2(1)
  5.         quietly summarize y
  6.         quietly post sim  (r(mean))
  7. }

. postclose sim

.
.
end of do-file

. use sim1000, clear

. summarize m1000

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
       m1000 |      1,000     1.00017    .0442332   .8480308   1.127382

The mean of the 1,000 estimates is close to 1. The standard deviation of the 1,000 estimates is 0.0442, which measures how tightly the estimator is distributed around the true value of 1.
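
To connect these results back to the definition of consistency, the sketch below computes the share of the 1,000 estimates that fall within a neighborhood of the true value; the half-width of 0.05 is an arbitrary choice of mine, not part of the original simulation.

use sim1000, clear
// count the replications whose estimate lies within 0.05 of the true mean of 1
count if abs(m1000 - 1) < .05
display "share within 0.05 of the true value: " r(N)/1000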

Code block 2 contains mean100000.do, which implements the analogous MCS with
a sample size of 100,000.

Code block 2: mean100000.do

clear all
// no seed, just keep drawing
postfile sim m100000 using sim100000, replace

forvalues i = 1/1000 {
        quietly capture drop y
        quietly set obs 100000
        quietly generate y = rchi2(1)
        quietly summarize y
        quietly post sim  (r(mean))
}
postclose sim

Example 2 runs mean100000.do and summarizes the results.

Example 2: Estimating the mean from a sample of size 100,000

. do mean100000

. clear all

. // no seed, just keep drawing
. postfile sim m100000 using sim100000, replace

.
. forvalues i = 1/1000 {
  2.         quietly capture drop y
  3.         quietly set obs 100000
  4.         quietly generate y = rchi2(1)
  5.         quietly summarize y
  6.         quietly post sim  (r(mean))
  7. }

. postclose sim

.
.
end of do-file

. use sim100000, clear

. summarize m100000

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
     m100000 |      1,000    1.000008    .0043458   .9837129   1.012335

The standard deviation of 0.0043 indicates that, with a sample size of 100,000, the estimator is much more tightly distributed around the true value of 1 than it is with a sample size of 1,000.
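
Because the standard deviation of the sample average shrinks at the \(1/\sqrt{N}\) rate, multiplying the sample size by 100 should divide the standard deviation by about \(\sqrt{100}=10\). The sketch below checks this against the simulated values reported in examples 1 and 2.

// ratio of the simulated standard deviations (N=1,000 versus N=100,000)
display "simulated ratio: " .0442332/.0043458
// ratio predicted by the 1/sqrt(N) rate
display "predicted ratio: " sqrt(100000/1000)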

Example 3 merges the two datasets of estimates and plots the densities of the estimator for the two sample sizes in figure 2. The distribution of the estimator for the sample size of 100,000 is much tighter around 1 than that of the estimator for the sample size of 1,000.

Example 3: Densities of the sample-average estimator for sample sizes 1,000 and 100,000

. merge 1:1 _n using sim1000

    Result                           # of obs.
    -----------------------------------------
    not matched                             0
    matched                             1,000  (_merge==3)
    -----------------------------------------

. kdensity m1000, n(500) generate(x_1000 f_1000) kernel(gaussian) nograph

. label variable f_1000 "N=1000"

. kdensity m100000, n(500) generate(x_100000 f_100000) kernel(gaussian) nograph

. label variable f_100000 "N=100000"

. graph twoway (line f_1000 x_1000) (line f_100000 x_100000)

Figure 2: Densities of the sample-average estimator for sample sizes 1,000 and 100,000

The sample average is a consistent estimator for the mean of an i.i.d. \(\chi^2(1)\) random variable because a weak law of large numbers applies. This theorem specifies that the sample average converges in probability to the true mean if the data are i.i.d., the mean is finite, and the variance is finite. Other versions of this theorem weaken the i.i.d. assumption or the moment assumptions; see Cameron and Trivedi (2005, sec. A.3), Wasserman (2003, sec. 5.3), and Wooldridge (2010, 41–42) for details.
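
In symbols, if \(y_1,\ldots,y_N\) are i.i.d. with mean \(\mu\) and finite variance, the weak law of large numbers states that

\[
\bar{y}_N = \frac{1}{N}\sum_{i=1}^N y_i \overset{p}{\longrightarrow} \mu
\qquad \text{as } N\rightarrow\infty
\]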

Asymptotic normality

The good news is that the distribution of a consistent estimator becomes arbitrarily tight around the true value. The bad news is that the distribution of the estimator changes with the sample size, as illustrated in figures 1 and 2.

If I knew the distribution of my estimator for every sample size, I could use this finite-sample distribution, also known as the exact distribution, to perform inference. But the finite-sample distribution of most of the estimators used in applied research is unknown. Fortunately, the distribution of a recentered and rescaled version of these estimators gets arbitrarily close to a normal distribution as the sample size increases. Estimators for which a recentered and rescaled version converges to a normal distribution are said to be asymptotically normal. We use this large-sample distribution to approximate the finite-sample distribution of the estimator.
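
In symbols, \(\widehat{\theta}_N\) is an asymptotically normal estimator of \(\theta_0\) if there is a finite variance \(V\) such that

\[
\sqrt{N}\left(\widehat{\theta}_N - \theta_0\right) \overset{d}{\longrightarrow} N(0, V)
\qquad \text{as } N\rightarrow\infty
\]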

Figure 2 shows that the distribution of the sample average becomes increasingly tight around the true value as the sample size increases. Instead of looking at the distribution of the estimator \(\widehat{\theta}_N\) for sample size \(N\), let's look at the distribution of \(\sqrt{N}(\widehat{\theta}_N - \theta_0)\), where \(\theta_0\) is the true value for which \(\widehat{\theta}_N\) is consistent.

Example 4 estimates the densities of the recentered and rescaled estimators, which are shown in figure 3.

Example 4: Densities of the recentered and rescaled estimator

. generate double m1000n   =   sqrt(1000)*(m1000   - 1)

. generate double m100000n = sqrt(100000)*(m100000 - 1)

. kdensity m1000n, n(500) generate(x_1000n f_1000n) kernel(gaussian) nograph

. label variable f_1000n "N=1000"

. kdensity m100000n, n(500) generate(x_100000n f_100000n) kernel(gaussian) ///
>       nograph

. label variable f_100000n "N=100000"

. graph twoway (line f_1000n x_1000n) (line f_100000n x_100000n)

Figure 3: Densities of the recentered and rescaled estimator for sample sizes 1,000 and 100,000

The densities of the recentered and rescaled estimators in figure 3 are indistinguishable from each other and look close to a normal density. The Lindeberg–Lévy central limit theorem guarantees that the distribution of the recentered and rescaled sample average of i.i.d. random variables with finite mean \(\mu\) and finite variance \(\sigma^2\) gets arbitrarily close to a normal distribution with mean 0 and variance \(\sigma^2\) as the sample size increases. In other words, the distribution of \(\sqrt{N}(\widehat{\theta}_N-\mu)\) gets arbitrarily close to a \(N(0,\sigma^2)\) distribution as \(N\rightarrow\infty\), where \(\widehat{\theta}_N=(1/N)\sum_{i=1}^N y_i\) and the \(y_i\) are realizations of the i.i.d. random variable. This convergence in distribution justifies our use of the approximate distribution \(\widehat{\theta}_N\sim N(\mu,\frac{\sigma^2}{N})\) in practice.

Given that \(\sigma^2=2\) for the \(\chi^2(1)\) distribution, in example 5, we add a plot of a normal density with mean 0 and variance 2 for comparison.

Example 5: Densities of the recentered and rescaled estimator

. twoway (line f_1000n x_1000n)                        ///
>        (line f_100000n x_100000n)                    ///
>        (function normalden(x, sqrt(2)), range(-4 4)) ///
>        ,legend( label(3 "Normal(0, 2)") cols(3))

We see that the densities of the recentered and rescaled estimators are indistinguishable from the density of a normal distribution with mean 0 and variance 2, as predicted by the theory.

Figure 4: Densities of the recentered and rescaled estimates and a Normal(0,2)
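
As a further check on the approximation \(\widehat{\theta}_N\sim N(1,\frac{2}{N})\), the implied standard deviation \(\sqrt{2/N}\) should be close to the simulated standard deviations reported in examples 1 and 2. The sketch below makes the comparison.

// standard deviations implied by the large-sample approximation
display "predicted, N=1,000:   " sqrt(2/1000)
display "simulated, N=1,000:   " .0442332
display "predicted, N=100,000: " sqrt(2/100000)
display "simulated, N=100,000: " .0043458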

Other versions of the central limit theorem weaken the i.i.d. assumption or the moment assumptions; see Cameron and Trivedi (2005, sec. A.3), Wasserman (2003, sec. 5.3), and Wooldridge (2010, 41–42) for details.

Done and undone

I used MCS to illustrate that the sample average is a consistent and asymptotically normal estimator of the mean when the data are drawn from an i.i.d. process with finite mean and variance.

Many method-of-moments estimators, maximum likelihood estimators, and M-estimators are consistent and asymptotically normal under assumptions about the true data-generating process and the estimators themselves. See Cameron and Trivedi (2005, sec. 5.3), Newey and McFadden (1994), Wasserman (2003, chap. 9), and Wooldridge (2010, chap. 12) for discussions.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and Applications. Cambridge: Cambridge University Press.

Newey, W. K., and D. McFadden. 1994. Large sample estimation and hypothesis testing. In Handbook of Econometrics, ed. R. F. Engle and D. McFadden, vol. 4, 2111–2245. Amsterdam: Elsevier.

Wasserman, L. A. 2003. All of Statistics: A Concise Course in Statistical Inference. New York: Springer.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Programming an estimation command in Stata: Certifying your command

\(\newcommand{\xb}{{\bf x}}
\newcommand{\betab}{\boldsymbol{\beta}}\)Before you use or distribute your estimation command, you should verify that it produces correct results and write a do-file that certifies that it does so. I discuss the processes of verifying and certifying an estimation command, and I present some techniques for writing a do-file that certifies mypoisson5, which I discussed in previous posts.

This is the twenty-fifth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Verification versus certification

Verification is the process of establishing Read more…

Programming an estimation command in Stata: Making predict work

I make predict work after mypoisson5 by writing an ado-command that computes the predictions and by having mypoisson5 store the name of this new ado-command in e(predict). The ado-command that computes predictions using the parameter estimates computed by ado-command mytest should be named mytest_p, by convention. In the next section, I discuss mypoisson5_p, which computes predictions after mypoisson5. In section Storing the name of the prediction command in e(predict), I show that storing the name mypoisson5_p in e(predict) requires only a one-line change to mypoisson4.ado, which I discussed in Programming an estimation command in Stata: Adding analytical derivatives to a poisson command using Mata.

This is the twenty-fourth post in the Read more…

Programming an estimation command in Stata: Adding analytical derivatives to a poisson command using Mata

\(\newcommand{\xb}{{\bf x}}
\newcommand{\betab}{\boldsymbol{\beta}}\)Using analytically computed derivatives can greatly reduce the time required to solve a nonlinear estimation problem. I show how to use analytically computed derivatives with optimize(), and I discuss mypoisson4.ado, which uses these analytically computed derivatives. Only a few lines of mypoisson4.ado differ from the code for mypoisson3.ado, which I discussed in Programming an estimation command in Stata: Allowing for robust or cluster–robust standard errors in a poisson command using Mata.

This is the twenty-third post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Analytically computed derivatives for Poisson

The contribution of the i(th) observation to the log-likelihood function for the Poisson maximum-likelihood estimator is Read more…

Programming an estimation command in Stata: Allowing for robust or cluster–robust standard errors in a poisson command using Mata

mypoisson3.ado adds options for a robust or a cluster–robust estimator of the variance–covariance of the estimator (VCE) to mypoisson2.ado, which I discussed in Programming an estimation command in Stata: Handling factor variables in a poisson command using Mata. mypoisson3.ado parses the vce() option using the techniques I discussed in Programming an estimation command in Stata: Adding robust and cluster–robust VCEs to our Mata based OLS command. Below, I show how to use optimize() to compute the robust or cluster–robust VCE.

I only discuss what is new in the code for mypoisson3.ado, assuming that you are familiar with mypoisson2.ado.

This is the twenty-second post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

A poisson command with options for a robust or a cluster–robust VCE

mypoisson3 computes Poisson-regression results in Mata. The syntax of the mypoisson3 command is

mypoisson3 depvar indepvars [if] [in] [, vce(robust | cluster clustervar) noconstant]

where indepvars can contain factor variables or time-series variables.

In the remainder of this post, I discuss Read more…

Programming an estimation command in Stata: Handling factor variables in a poisson command using Mata

mypoisson2.ado handles factor variables and computes its Poisson regression results in Mata. I discuss the code for mypoisson2.ado, which I obtained by adding the method for handling factor variables discussed in Programming an estimation command in Stata: Handling factor variables in optimize() to mypoisson1.ado, discussed in Programming an estimation command in Stata: A poisson command using Mata.

This is the twenty-first post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

A Poisson command with Mata computations

mypoisson2 computes Poisson regression results in Mata. The syntax of the mypoisson2 command is

mypoisson2 depvar indepvars [if] [in] [, noconstant]

where indepvars can contain factor variables or time-series variables.

In the remainder of this post, I discuss Read more…

Programming an estimation command in Stata: Handling factor variables in optimize()

\(
\newcommand{\xb}{{\bf x}}
\newcommand{\betab}{\boldsymbol{\beta}}\)I discuss a method for handling factor variables when performing nonlinear optimization using optimize(). After illustrating the issue caused by factor variables, I present a method and apply it to an example using optimize().

This is the twentieth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

How poisson handles factor variables

Consider the Poisson regression in which I include a full set of indicator variables created from Read more…

Programming an estimation command in Stata: A poisson command using Mata

\(
\newcommand{\xb}{{\bf x}}
\newcommand{\betab}{\boldsymbol{\beta}}\)I discuss mypoisson1, which computes Poisson-regression results in Mata. The code in mypoisson1.ado is remarkably similar to the code in myregress11.ado, which computes ordinary least-squares (OLS) results in Mata, as I discussed in Programming an estimation command in Stata: An OLS command using Mata.

I build on previous posts. I use the structure of Stata programs that use Mata work functions that I discussed previously in Programming an estimation command in Stata: A first ado-command using Mata and Programming an estimation command in Stata: An OLS command using Mata. You should be familiar with Read more…

Programming an estimation command in Stata: Using optimize() to estimate Poisson parameters

\(
\newcommand{\xb}{{\bf x}}
\newcommand{\betab}{\boldsymbol{\beta}}\)I show how to use optimize() in Mata to maximize a Poisson log-likelihood function and to obtain estimators of the variance–covariance of the estimator (VCE) based on independent and identically distributed (IID) observations or on robust methods.

This is the eighteenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Using optimize()

There are many optional choices that one may make when solving a nonlinear optimization problem, but there are very few that one must make. The optimize*() functions in Mata handle this problem by making a set of default choices for you, requiring that you specify a few things, and allowing you to change any of the default choices.
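
As a minimal illustration of those defaults, the sketch below maximizes a toy one-parameter objective of my own (not the Poisson problem this post solves), specifying only the evaluator and the starting value and leaving every other choice at its default.

mata:
// evaluator for the default type d0: return the objective value in v
void myeval(todo, p, v, g, H)
{
        v = -(p[1] - 3)^2
}

S = optimize_init()
optimize_init_evaluator(S, &myeval())
optimize_init_params(S, 0)        // starting value for the single parameter
p = optimize(S)                   // maximum is at p[1] = 3
p
end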

When I use optimize() to solve a Read more…

Programming an estimation command in Stata: A review of nonlinear optimization using Mata

\(\newcommand{\betab}{\boldsymbol{\beta}}
\newcommand{\xb}{{\bf x}}
\newcommand{\yb}{{\bf y}}
\newcommand{\gb}{{\bf g}}
\newcommand{\Hb}{{\bf H}}
\newcommand{\thetab}{\boldsymbol{\theta}}
\newcommand{\Xb}{{\bf X}}
\)I review the theory behind nonlinear optimization and get more practice in Mata programming by implementing an optimizer in Mata. In real problems, I recommend using the optimize() function or moptimize() function instead of the one I describe here. In subsequent posts, I will discuss optimize() and moptimize(). This post will help you develop your Mata programming skills and will improve your understanding of how optimize() and moptimize() work.

This is the seventeenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

A quick review of nonlinear optimization

We want to maximize a real-valued function \(Q(\thetab)\), where \(\thetab\) is a \(p\times 1\) vector of parameters. Minimization is done by maximizing \(-Q(\thetab)\). We require that \(Q(\thetab)\) be twice continuously differentiable, so that we can use a second-order Taylor series to approximate \(Q(\thetab)\) in a neighborhood of the point \(\thetab_s\),

\[
Q(\thetab) \approx Q(\thetab_s) + \gb_s'(\thetab -\thetab_s)
+ \frac{1}{2} (\thetab -\thetab_s)'\Hb_s (\thetab -\thetab_s)
\tag{1}
\]

where \(\gb_s\) is the \(p\times 1\) vector of first derivatives of \(Q(\thetab)\) evaluated at \(\thetab_s\) and \(\Hb_s\) is the \(p\times p\) matrix of second derivatives of \(Q(\thetab)\) evaluated at \(\thetab_s\), known as the Hessian matrix.
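
A standard consequence of (1), which the full post builds on, follows from setting the gradient of its right-hand side with respect to \(\thetab\) to zero: the maximizer of the quadratic approximation is the Newton–Raphson update

\[
\thetab_{s+1} = \thetab_s - \Hb_s^{-1}\gb_s
\]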

Nonlinear maximization algorithms start with Read more…