### Archive

Posts Tagged ‘simulation’

## regress, probit, or logit?

In a previous post I illustrated that the probit model and the logit model produce statistically equivalent estimates of marginal effects. In this post, I compare the marginal effect estimates from a linear probability model (linear regression) with marginal effect estimates from probit and logit models.

My simulations show that when the true model is a probit or a logit, using a linear probability model can produce inconsistent estimates of the marginal effects of interest to researchers. The conclusions hinge on the probit or logit model being the true model.

Simulation results

For all simulations below, I use a sample size of 10,000 and 5,000 replications. The true data-generating processes (DGPs) are constructed using one discrete covariate and one continuous covariate. I study the average effect of a change in the continuous variable on the conditional probability (AME) and the average effect of a change in the discrete covariate on the conditional probability (ATE). I also look at the effect of a change in the continuous variable on the conditional probability, evaluated at the mean value of the covariates (MEM), and the effect of a change in the discrete covariate on the conditional probability, evaluated at the mean value of the covariates (TEM).
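To fix the notation used throughout (my shorthand, not the original post's; for the probit case, replace the logistic CDF $$\Lambda$$ and its density $$\lambda$$ with the normal CDF $$\Phi$$ and density $$\phi$$), with a continuous covariate x1 and a binary covariate x2, these four quantities are

$\begin{eqnarray*} \mbox{AME} &=& E\left[\lambda(\beta_0+\beta_1 x_{1i}+\beta_2 x_{2i})\,\beta_1\right] \\ \mbox{ATE} &=& E\left[\Lambda(\beta_0+\beta_1 x_{1i}+\beta_2) - \Lambda(\beta_0+\beta_1 x_{1i})\right] \\ \mbox{MEM} &=& \lambda(\beta_0+\beta_1 \bar{x}_1+\beta_2 \bar{x}_2)\,\beta_1 \\ \mbox{TEM} &=& \Lambda(\beta_0+\beta_1 \bar{x}_1+\beta_2) - \Lambda(\beta_0+\beta_1 \bar{x}_1) \end{eqnarray*}$

where $$\bar{x}_1$$ and $$\bar{x}_2$$ denote the sample means of the covariates.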

In Table 1, I present the results of a simulation when the true DGP satisfies the assumptions of a logit model. I show the average of the AME and the ATE estimates and the 5% rejection rate of the true null hypotheses. I also provide an approximate true value of the AME and ATE. I obtain the approximate true values by computing the ATE and AME, at the true values of the coefficients, using a sample of 20 million observations. I will provide more details on the simulation in a later section.

Table 1: Average Marginal and Treatment Effects: True DGP Logit

Simulation Results for N=10,000 and 5,000 Replications
| Statistic         | Approximate True Value | Logit | Regress (LPM) |
|-------------------|------------------------|-------|---------------|
| AME of x1         | -.084                  | -.084 | -.094         |
| 5% Rejection Rate |                        | .050  | .99           |
| ATE of x2         | .092                   | .091  | .091          |
| 5% Rejection Rate |                        | .058  | .058          |

From Table 1, we see that the logit model estimates are close to the true value and that the rejection rate of the true null hypothesis is close to 5%. For the linear probability model, the rejection rate is 99% for the AME. For the ATE, the rejection rate and point estimates are close to what is estimated using a logit.

For the MEM and TEM, we have the following:

Table 2: Marginal and Treatment Effects at Mean Values: True DGP Logit

Simulation Results for N=10,000 and 5,000 Replications
| Statistic         | Approximate True Value | Logit | Regress (LPM) |
|-------------------|------------------------|-------|---------------|
| MEM of x1         | -.099                  | -.099 | -.094         |
| 5% Rejection Rate |                        | .054  | .618          |
| TEM of x2         | .109                   | .109  | .092          |
| 5% Rejection Rate |                        | .062  | .073          |

Again, logit estimates behave as expected. For the linear probability model, the rejection rate of the true null hypothesis is 62% for the MEM. For the TEM the rejection rate is 7.3%, and the estimated effect is smaller than the true effect.

For the AME and ATE, when the true DGP is a probit, we have the following:

Table 3: Average Marginal and Treatment Effects: True DGP Probit

Simulation Results for N=10,000 and 5,000 Replications
| Statistic         | Approximate True Value | Probit | Regress (LPM) |
|-------------------|------------------------|--------|---------------|
| AME of x1         | -.094                  | -.094  | -.121         |
| 5% Rejection Rate |                        | .047   | 1             |
| ATE of x2         | .111                   | .111   | .111          |
| 5% Rejection Rate |                        | .065   | .061          |

The probit model estimates are close to the true value, and the rejection rate of the true null hypothesis is close to 5%. For the linear probability model, the rejection rate is 100% for the AME. For the ATE, the rejection rate and point estimates are close to what is estimated using a probit.

For the MEM and TEM, we have the following:

Table 4: Marginal and Treatment Effects at Mean Values: True DGP Probit

Simulation Results for N=10,000 and 5,000 Replications
| Statistic         | Approximate True Value | Probit | Regress (LPM) |
|-------------------|------------------------|--------|---------------|
| MEM of x1         | -.121                  | -.122  | -.121         |
| 5% Rejection Rate |                        | .063   | .054          |
| TEM of x2         | .150                   | .150   | .110          |
| 5% Rejection Rate |                        | .059   | .158          |

For the MEM, the probit and linear probability model produce reliable inference. For the TEM, the probit marginal effects behave as expected, but the linear probability model has a rejection rate of 16%, and the point estimates are not close to the true value.

Simulation design

Below is the code I used to generate the data for my simulations. In the first part, lines 6 to 13, I generate outcome variables that satisfy the assumptions of the logit model, y, and the probit model, yp. In the second part, lines 15 to 19, I compute the marginal effects for the logit and probit models. I have a continuous and a discrete covariate. For the discrete covariate, the marginal effect is a treatment effect. In the third part, lines 21 to 29, I compute the marginal effects evaluated at the means. I will use these estimates later to compute approximations to the true values of the effects.

program define mkdata
syntax, [n(integer 1000)]
clear
quietly set obs `n'
// 1. Generating data from probit, logit, and misspecified
generate x1    = rchi2(2)-2
generate x2    = rbeta(4,2)>.2
generate u     = runiform()
generate e     = ln(u) -ln(1-u)
generate ep    = rnormal()
generate xb    = .5*(1 - x1 + x2)
generate y     =  xb + e > 0
generate yp    = xb + ep > 0
// 2. Computing probit & logit marginal and treatment effects
generate m1   = exp(xb)*(-.5)/(1+exp(xb))^2
generate m2   = exp(1 -.5*x1)/(1+ exp(1 -.5*x1 )) - ///
exp(.5 -.5*x1)/(1+ exp(.5 -.5*x1 ))
generate m1p  = normalden(xb)*(-.5)
generate m2p  = normal(1 -.5*x1 ) - normal(.5 -.5*x1)
// 3. Computing marginal and treatment effects at means
quietly mean x1 x2
matrix A        = r(table)
scalar a        = .5 -.5*A[1,1] + .5*A[1,2]
scalar b1       =  1 -.5*A[1,1]
scalar b0       = .5 -.5*A[1,1]
generate mean1  = exp(a)*(-.5)/(1+exp(a))^2
generate mean2  = exp(b1)/(1+ exp(b1)) - exp(b0)/(1+ exp(b0))
generate mean1p = normalden(a)*(-.5)
generate mean2p = normal(b1) - normal(b0)
end


I approximate the true marginal effects using a sample of 20 million observations. This is a reasonable strategy in this case. For example, take the average marginal effect for a continuous covariate, $$x_{k}$$, in the case of the probit model:

$\begin{equation*} \frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k} \end{equation*}$

The expression above is an approximation of $$E\left(\phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k}\right)$$. To obtain this expected value, we would need to integrate over the distribution of all the covariates. This is not practical and would limit my choice of covariates. Instead, I draw a sample of 20 million observations, compute $$\frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k}$$, and take it to be the true value. I follow the same logic for the other marginal effects.

Below is the code I use to compute the approximate true marginal effects. I draw the 20 million observations, compute the averages that I will use in my simulation, and create locals for each approximate true value.

. mkdata, n(`L')
(2 missing values generated)

. local values "m1 m2 mean1 mean2 m1p m2p mean1p mean2p"

. local means  "mx1 mx2 meanx1 meanx2 mx1p mx2p meanx1p meanx2p"

. local n : word count `values'

.
. forvalues i= 1/`n' {
2.         local a: word `i' of `values'
3.         local b: word `i' of `means'
4.         sum `a', meanonly
5.         local `b' = r(mean)
6. }


Now, I am ready to run all the simulations that I used to produce the results in the previous section. The code that I used for the simulations for the TEM and the MEM when the true DGP is a logit is given by:

. postfile lpm y1l y1l_r y1lp y1lp_r y2l y2l_r y2lp y2lp_r ///
>                 using simslpm, replace

. forvalues i=1/`R' {
2.         quietly {
3.                 mkdata, n(`N')
4.                 logit  y x1 i.x2, vce(robust)
5.                 margins, dydx(*) atmeans post  vce(unconditional)
6.                 local y1l = _b[x1]
7.                 test _b[x1] = `meanx1'
8.                 local y1l_r   = (r(p)<.05)
9.                 local y2l = _b[1.x2]
10.                 test _b[1.x2] = `meanx2'
11.                 local y2l_r   = (r(p)<.05)
12.                 regress  y x1 i.x2, vce(robust)
13.                 margins, dydx(*) atmeans post  vce(unconditional)
14.                 local y1lp = _b[x1]
15.                 test _b[x1] = `meanx1'
16.                 local y1lp_r   = (r(p)<.05)
17.                 local y2lp = _b[1.x2]
18.                 test _b[1.x2] = `meanx2'
19.                 local y2lp_r   = (r(p)<.05)
20.                 post lpm (`y1l') (`y1l_r') (`y1lp') (`y1lp_r') ///
>                          (`y2l') (`y2l_r') (`y2lp') (`y2lp_r')
21.         }
22. }

. postclose lpm

. use simslpm, clear

. sum

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
y1l |      5,000   -.0985646      .00288  -.1083639  -.0889075
y1l_r |      5,000       .0544     .226828          0          1
y1lp |      5,000   -.0939211    .0020038  -.1008612  -.0868043
y1lp_r |      5,000       .6182    .4858765          0          1
y2l |      5,000    .1084959     .065586  -.1065291   .3743112
-------------+---------------------------------------------------------
y2l_r |      5,000       .0618     .240816          0          1
y2lp |      5,000    .0915894     .055462  -.0975456   .3184061
y2lp_r |      5,000       .0732    .2604906          0          1


For the results for the AME and the ATE when the true DGP is a logit, I use margins without the atmeans option. The other cases are similar. I use robust standard errors for all computations because my likelihood model is an approximation to the true likelihood, and I use the option vce(unconditional) to account for the fact that I am using two-step M-estimation. See Wooldridge (2010) for more details on two-step M-estimation.
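For concreteness, here is a minimal sketch of the logit leg of that variant (only the margins call and the test targets change; the locals mx1 and mx2 created above hold the approximate true AME and ATE):

forvalues i=1/`R' {
    quietly {
        mkdata, n(`N')
        logit y x1 i.x2, vce(robust)
        margins, dydx(*) post vce(unconditional)   // no atmeans: average effects
        test _b[x1] = `mx1'                        // AME of x1 against its approximate true value
        local y1l_r = (r(p)<.05)
        test _b[1.x2] = `mx2'                      // ATE of x2 against its approximate true value
        local y2l_r = (r(p)<.05)
        // the regress leg and the post statement follow the same pattern
    }
}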

You can obtain the code used to produce these results here.

Conclusion

Using a probit or a logit model yields equivalent marginal effects. I provide evidence that the same cannot be said of the marginal effect estimates of the linear probability model when compared with those of the logit and probit models.

Acknowledgment

This post was inspired by a question posed by Stephen Jenkins after my previous post.

Reference

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Categories: Statistics Tags:

We often use probit and logit models to analyze binary outcomes. A case can be made that the logit model is easier to interpret than the probit model, but Stata’s margins command makes any estimator easy to interpret. Ultimately, estimates from both models produce similar results, and using one or the other is a matter of habit or preference.

I show that the estimates from a probit and logit model are similar for the computation of a set of effects that are of interest to researchers. I focus on the effects of changes in the covariates on the probability of a positive outcome for continuous and discrete covariates. I evaluate these effects on average and at the mean value of the covariates. In other words, I study the average marginal effects (AME), the average treatment effects (ATE), the marginal effects at the mean values of the covariates (MEM), and the treatment effects at the mean values of the covariates (TEM).

First, I present the results. Second, I discuss the code used for the simulations.

Results

In Table 1, I present the results of a simulation with 4,000 replications when the true data generating process (DGP) satisfies the assumptions of a probit model. I show the average of the AME and the ATE estimates and the 5% rejection rate of the true null hypothesis that arise after probit and logit estimation. I also provide an approximate true value of the AME and ATE. I obtain the approximate true values by computing the ATE and AME, at the true values of the coefficients, using a sample of 20 million observations. I will provide more details on the simulation in a later section.

Table 1: Average Marginal and Treatment Effects: True DGP Probit

Simulation Results for N=10,000 and 4,000 Replications
| Statistic         | Approximate True Value | Probit | Logit  |
|-------------------|------------------------|--------|--------|
| AME of x1         | -.1536                 | -.1537 | -.1537 |
| 5% Rejection Rate |                        | .050   | .052   |
| ATE of x2         | .1418                  | .1417  | .1417  |
| 5% Rejection Rate |                        | .050   | .049   |

For the MEM and TEM, we have the following:

Table 2: Marginal and Treatment Effects at Mean Values: True DGP Probit

Simulation Results for N=10,000 and 4,000 Replications
| Statistic         | Approximate True Value | Probit | Logit  |
|-------------------|------------------------|--------|--------|
| MEM of x1         | -.1672                 | -.1673 | -.1665 |
| 5% Rejection Rate |                        | .056   | .06    |
| TEM of x2         | .1499                  | .1498  | .1471  |
| 5% Rejection Rate |                        | .053   | .058   |

The logit estimates are close to the true value and have a rejection rate that is close to 5%. Fitting the parameters of our model using logit when the true DGP satisfies the assumptions of a probit model does not lead us astray.

If the true DGP satisfies the assumptions of the logit model, the conclusions are the same. I present the results in the next two tables.

Table 3: Average Marginal and Treatment Effects: True DGP Logit

Simulation Results for N=10,000 and 4,000 Replications
| Statistic         | Approximate True Value | Probit | Logit  |
|-------------------|------------------------|--------|--------|
| AME of x1         | -.1090                 | -.1088 | -.1089 |
| 5% Rejection Rate |                        | .052   | .052   |
| ATE of x2         | .1046                  | .1044  | .1045  |
| 5% Rejection Rate |                        | .053   | .051   |

Table 4: Marginal and Treatment Effects at Mean Values: True DGP Logit

Simulation Results for N=10,000 and 4,000 Replications
| Statistic         | Approximate True Value | Probit | Logit  |
|-------------------|------------------------|--------|--------|
| MEM of x1         | -.1146                 | -.1138 | -.1146 |
| 5% Rejection Rate |                        | .050   | .051   |
| TEM of x2         | .1086                  | .1081  | .1085  |
| 5% Rejection Rate |                        | .058   | .058   |

Why?

Maximum likelihood estimators find the parameter values that maximize the likelihood of observing our data under the distributional assumptions that we make. The likelihood chosen is an approximation to the true likelihood, and it is a helpful approximation if the true likelihood and our approximation are close to each other. Viewing likelihood-based models as useful approximations, instead of as models of a true likelihood, is the basis of quasilikelihood theory. For more details, see White (1996) and Wooldridge (2010).

The probit model assumes that the unobservable random variable follows a standard normal distribution, and the logit model assumes that it follows a standard logistic distribution. The cumulative distribution functions (CDFs) in these two cases are close to each other, especially around the mean. Therefore, estimators under these two sets of assumptions produce similar results. To illustrate these arguments, we can plot the two CDFs and their differences as follows:
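A minimal sketch of one way to draw such a graph in Stata (the plotting range and legend labels are my choices, not necessarily those behind Graph 1):

twoway (function y = normal(x), range(-4 4))                 ///
       (function y = invlogit(x), range(-4 4))               ///
       (function y = normal(x) - invlogit(x), range(-4 4)),  ///
       legend(order(1 "Normal CDF" 2 "Logistic CDF" 3 "Difference"))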

Graph 1: Normal and Logistic CDF’s and their Difference

The difference between the CDFs approaches zero as you get closer to the mean, from the right or from the left, and it is always smaller than .15.

Simulation design

Below is the code I used to generate the data for my simulations. In the first part, lines 4 to 12, I generate outcome variables that satisfy the assumptions of the probit model, y1, and the logit model, y2. In the second part, lines 13 to 16, I compute the marginal effects for the logit and probit models. I have a continuous and a discrete covariate. For the discrete covariate, the marginal effect is a treatment effect. In the third part, lines 17 to 25, I compute the marginal effects evaluated at the means. I will use these estimates later to compute approximations to the true values of the effects.

program define mkdata
syntax, [n(integer 1000)]
clear
quietly set obs `n'
// 1. Generating data from probit, logit, and misspecified
generate x1    = rnormal()
generate x2    = rbeta(2,4)>.5
generate e1    = rnormal()
generate u     = runiform()
generate e2    = ln(u) -ln(1-u)
generate xb    = .5*(1 -x1 + x2)
generate y1    =  xb + e1 > 0
generate y2    =  xb + e2 > 0
// 2. Computing probit & logit marginal and treatment effects
generate m1    = normalden(xb)*(-.5)
generate m2    = normal(1 -.5*x1 ) - normal(.5 -.5*x1)
generate m1l   = exp(xb)*(-.5)/(1+exp(xb))^2
generate m2l   = exp(1 -.5*x1)/(1+ exp(1 -.5*x1 )) - ///
exp(.5 -.5*x1)/(1+ exp(.5 -.5*x1 ))
// 3. Computing probit & logit marginal and treatment effects at means
quietly mean x1 x2
matrix A         = r(table)
scalar a         = .5 -.5*A[1,1] + .5*A[1,2]
scalar b1        =  1 -.5*A[1,1]
scalar b0        = .5 -.5*A[1,1]
generate mean1   = normalden(a)*(-.5)
generate mean2   = normal(b1) - normal(b0)
generate mean1l  = exp(a)*(-.5)/(1+exp(a))^2
generate mean2l  = exp(b1)/(1+ exp(b1)) - exp(b0)/(1+ exp(b0))
end


I approximate the true marginal effects using a sample of 20 million observations. This is a reasonable strategy in this case. For example, take the average marginal effect for a continuous covariate, $$x_{k}$$, in the case of the probit model:

$\begin{equation*} \frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k} \end{equation*}$

The expression above is an approximation to $$E\left(\phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k}\right)$$. To obtain this expected value, we would need to integrate over the distribution of all the covariates. This is not practical and would limit my choice of covariates. Instead, I draw a sample of 20 million observations, compute $$\frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k}$$, and take it to be the true value. I follow the same logic for the other marginal effects.

Below is the code I use to compute the approximate true marginal effects. I draw the 20 million observations, then I compute the averages that I am going to use in my simulation, and I create locals for each approximate true value.

. mkdata, n(20000000)

. local values "m1 m2 m1l m2l mean1 mean2 mean1l mean2l"

. local means  "mx1 mx2 mx1l mx2l meanx1 meanx2 meanx1l meanx2l"

. local n : word count `values'

. forvalues i= 1/`n' {
2.         local a: word `i' of `values'
3.         local b: word `i' of `means'
4.         sum `a', meanonly
5.         local `b' = r(mean)
6. }


Now I am ready to run all the simulations that I used to produce the results in the previous section. The code that I used for the simulations for the ATE and the AME when the true DGP is a probit is given by

. postfile mprobit y1p y1p_r y1l y1l_r y2p y2p_r y2l y2l_r ///
>                 using simsmprobit, replace

. forvalues i=1/4000 {
2.         quietly {
3.                 mkdata, n(10000)
4.                 probit y1 x1 i.x2, vce(robust)
5.                 margins, dydx(*) post vce(unconditional)
6.                 local y1p = _b[x1]
7.                 test _b[x1] = `mx1'
8.                 local y1p_r   = (r(p)<.05)
9.                 local y2p = _b[1.x2]
10.                 test _b[1.x2] = `mx2'
11.                 local y2p_r   = (r(p)<.05)
12.                 logit  y1 x1 i.x2, vce(robust)
13.                 margins, dydx(*) post vce(unconditional)
14.                 local y1l = _b[x1]
15.                 test _b[x1] = `mx1'
16.                 local y1l_r   = (r(p)<.05)
17.                 local y2l = _b[1.x2]
18.                 test _b[1.x2] = `mx2'
19.                 local y2l_r   = (r(p)<.05)
20.                 post mprobit (`y1p') (`y1p_r') (`y1l') (`y1l_r') ///
>                            (`y2p') (`y2p_r') (`y2l') (`y2l_r')
21.         }
22. }

. postclose mprobit

. use simsmprobit, clear
. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
y1p |      4,000   -.1536812    .0038952  -.1697037  -.1396532
y1p_r |      4,000         .05    .2179722          0          1
y1l |      4,000   -.1536778    .0039179  -.1692524  -.1396366
y1l_r |      4,000      .05175    .2215496          0          1
y2p |      4,000     .141708    .0097155   .1111133   .1800973
-------------+---------------------------------------------------------
y2p_r |      4,000       .0495    .2169367          0          1
y2l |      4,000    .1416983    .0097459   .1102069   .1789895
y2l_r |      4,000        .049     .215895          0          1


For the results in the case of the MEM and the TEM when the true DGP is a probit, I use margins with the option atmeans. The other cases are similar. I use robust standard errors for all computations to account for the fact that my likelihood model is an approximation to the true likelihood, and I use the option vce(unconditional) to account for the fact that I am using two-step M-estimation. See Wooldridge (2010) for more details on two-step M-estimation.
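A minimal sketch of the MEM/TEM variant of the loop body (only the margins options and the test targets differ from the code shown above; the locals meanx1 and meanx2 hold the approximate true MEM and TEM):

probit y1 x1 i.x2, vce(robust)
margins, dydx(*) atmeans post vce(unconditional)   // effects evaluated at the covariate means
test _b[x1] = `meanx1'                             // MEM of x1 against its approximate true value
local y1p_r = (r(p)<.05)
test _b[1.x2] = `meanx2'                           // TEM of x2 against its approximate true value
local y2p_r = (r(p)<.05)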

Concluding Remarks

I provided simulation evidence illustrating that the difference between estimates of effects computed after probit and after logit is negligible. The reason lies in the theory of quasilikelihood and, specifically, in that the cumulative distribution functions of the probit and logit models are similar, especially around the mean.

References

White, H. 1996. Estimation, Inference, and Specification Analysis. Cambridge: Cambridge University Press.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Categories: Statistics Tags:

## Understanding the generalized method of moments (GMM): A simple example

$$\newcommand{\Eb}{{\bf E}}$$This post was written jointly with Enrique Pinzon, Senior Econometrician, StataCorp.

The generalized method of moments (GMM) is a method for constructing estimators, analogous to maximum likelihood (ML). GMM uses assumptions about specific moments of the random variables instead of assumptions about the entire distribution, which makes GMM more robust than ML, at the cost of some efficiency. The assumptions are called moment conditions.

GMM generalizes the method of moments (MM) by allowing the number of moment conditions to be greater than the number of parameters. Using these extra moment conditions makes GMM more efficient than MM. When there are more moment conditions than parameters, the estimator is said to be overidentified. GMM can efficiently combine the moment conditions when the estimator is overidentified.

We illustrate these points by estimating the mean of a $$\chi^2(1)$$ by MM, ML, a simple GMM estimator, and an efficient GMM estimator. This example builds on Efficiency comparisons by Monte Carlo simulation and is similar in spirit to the example in Wooldridge (2001).

GMM weights and efficiency

GMM builds on the ideas of expected values and sample averages. Moment conditions are expected values that specify the model parameters in terms of the true moments. The sample moment conditions are the sample equivalents to the moment conditions. GMM finds the parameter values that are closest to satisfying the sample moment conditions.

The mean of a $$\chi^2$$ random variable with $$d$$ degrees of freedom is $$d$$, and its variance is $$2d$$. Two moment conditions for the mean are thus

$\begin{eqnarray*} \Eb\left[Y - d \right]&=& 0 \\ \Eb\left[(Y - d )^2 - 2d \right]&=& 0 \end{eqnarray*}$

The sample moment equivalents are

$\begin{eqnarray} 1/N\sum_{i=1}^N (y_i - \widehat{d} )&=& 0 \tag{1} \\ 1/N\sum_{i=1}^N\left[(y_i - \widehat{d} )^2 - 2\widehat{d}\right] &=& 0 \tag{2} \end{eqnarray}$

We could use either sample moment condition (1) or sample moment condition (2) to estimate $$d$$. In fact, below we use each one and show that (1) provides a much more efficient estimator.
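To see why sample moment condition (1) is just the sample average, solve it for $$\widehat{d}$$:

$\begin{equation*} \frac{1}{N}\sum_{i=1}^N (y_i - \widehat{d}) = 0 \quad\Longrightarrow\quad \widehat{d} = \frac{1}{N}\sum_{i=1}^N y_i \end{equation*}$

Solving sample moment condition (2) instead requires finding the root of a quadratic equation in $$\widehat{d}$$, which gmm does numerically below.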

When we use both (1) and (2), there are two sample moment conditions and only one parameter, so we cannot solve this system of equations. GMM finds the parameters that get as close as possible to solving weighted sample moment conditions.

Uniform weights and optimal weights are two ways of weighting the sample moment conditions. The uniform weights use an identity matrix to weight the moment conditions. The optimal weights use the inverse of the covariance matrix of the moment conditions.
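In symbols (a sketch in my notation, not the original post's): stacking the two sample moment conditions into a vector $$\bar{g}(d)$$, the GMM estimator solves

$\begin{equation*} \widehat{d} = \arg\min_{d}\; \bar{g}(d)'\, W\, \bar{g}(d) \end{equation*}$

where $$W$$ is the identity matrix under uniform weights and an estimate of the inverse of the covariance matrix of the moment conditions under optimal weights.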

We begin by drawing a sample of size 500 and using gmm to estimate the parameter with sample moment condition (1), which we illustrate is the same as the sample average.

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate double y = rchi2(1)

. gmm (y - {d})  , instruments( ) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .82949186
Iteration 1:   GMM criterion Q(b) =  1.262e-32
Iteration 2:   GMM criterion Q(b) =  9.545e-35

note: model is exactly identified

GMM estimation

Number of parameters =   1
Number of moments    =   1
Initial weight matrix: Unadjusted                 Number of obs   =        500

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .9107644   .0548098    16.62   0.000     .8033392     1.01819
------------------------------------------------------------------------------
Instruments for equation 1: _cons

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------


The sample moment condition is the product of an observation-level error function that is specified inside the parentheses and an instrument, which is a vector of ones in this case. The parameter $$d$$ is enclosed in curly braces {}. We specify the onestep option because the number of parameters is the same as the number of moment conditions, which is to say that the estimator is exactly identified. When it is, each sample moment condition can be solved exactly, and there are no efficiency gains in optimally weighting the moment conditions.

We now illustrate that we could use the sample moment condition obtained from the variance to estimate $$d$$.

. gmm ((y-{d})^2 - 2*{d})  , instruments( ) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  5.4361161
Iteration 1:   GMM criterion Q(b) =  .02909692
Iteration 2:   GMM criterion Q(b) =  .00004009
Iteration 3:   GMM criterion Q(b) =  5.714e-11
Iteration 4:   GMM criterion Q(b) =  1.172e-22

note: model is exactly identified

GMM estimation

Number of parameters =   1
Number of moments    =   1
Initial weight matrix: Unadjusted                 Number of obs   =        500

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .7620814   .1156756     6.59   0.000     .5353613    .9888015
------------------------------------------------------------------------------
Instruments for equation 1: _cons


While we cannot say anything definitive from only one draw, we note that this estimate is further from the truth and that the standard error is much larger than those based on the sample average.

Now, we use gmm to estimate the parameters using uniform weights.

. matrix I = I(2)

. gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =   6.265608
Iteration 1:   GMM criterion Q(b) =  .05343812
Iteration 2:   GMM criterion Q(b) =  .01852592
Iteration 3:   GMM criterion Q(b) =   .0185221
Iteration 4:   GMM criterion Q(b) =   .0185221

GMM estimation

Number of parameters =   1
Number of moments    =   2
Initial weight matrix: user                       Number of obs   =        500

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .7864099   .1050692     7.48   0.000     .5804781    .9923418
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons


The first set of parentheses specifies the first sample moment condition, and the second set of parentheses specifies the second sample moment condition. The options winitial(I) and onestep specify uniform weights.

Finally, we use gmm to estimate the parameters using two-step optimal weights. The weights are calculated using first-step consistent estimates.
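A sketch of how those weights are formed: let $$g_i(d)$$ stack the two observation-level moments $$(y_i-d)$$ and $$(y_i-d)^2-2d$$, and let $$\widehat{d}_1$$ be the first-step estimate obtained with uniform weights. The second step uses

$\begin{equation*} \widehat{W} = \left\{\frac{1}{N}\sum_{i=1}^N g_i(\widehat{d}_1)\, g_i(\widehat{d}_1)'\right\}^{-1} \end{equation*}$

and minimizes the weighted quadratic form again with this $$\widehat{W}$$.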

. gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I)

Step 1
Iteration 0:   GMM criterion Q(b) =   6.265608
Iteration 1:   GMM criterion Q(b) =  .05343812
Iteration 2:   GMM criterion Q(b) =  .01852592
Iteration 3:   GMM criterion Q(b) =   .0185221
Iteration 4:   GMM criterion Q(b) =   .0185221

Step 2
Iteration 0:   GMM criterion Q(b) =  .02888076
Iteration 1:   GMM criterion Q(b) =  .00547223
Iteration 2:   GMM criterion Q(b) =  .00546176
Iteration 3:   GMM criterion Q(b) =  .00546175

GMM estimation

Number of parameters =   1
Number of moments    =   2
Initial weight matrix: user                       Number of obs   =        500
GMM weight matrix:     Robust

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .9566219   .0493218    19.40   0.000     .8599529    1.053291
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons


All four estimators are consistent. Below we run a Monte Carlo simulation to see their relative efficiencies. We are most interested in the efficiency gains afforded by optimal GMM. We include the sample average, the sample variance, and the ML estimator discussed in Efficiency comparisons by Monte Carlo simulation. Theory tells us that the optimally weighted GMM estimator should be more efficient than the sample average but less efficient than the ML estimator.

The code below for the Monte Carlo builds on Efficiency comparisons by Monte Carlo simulation, Maximum likelihood estimation by mlexp: A chi-squared example, and Monte Carlo simulations using Stata. Click gmmchi2sim.do to download this code.

. clear all
. set seed 12345
. matrix I = I(2)
. postfile sim  d_a d_v d_ml d_gmm d_gmme using efcomp, replace
. forvalues i = 1/2000 {
2.     quietly drop _all
3.     quietly set obs 500
4.     quietly generate double y = rchi2(1)
5.
.     quietly mean y
6.     local d_a         =  _b[y]
7.
.     quietly gmm ( (y-{d=`d_a'})^2 - 2*{d}) , instruments( )  ///
>         onestep conv_maxiter(200)
8.     if e(converged)==1 {
9.             local d_v = _b[d:_cons]
10.     }
11.     else {
12.             local d_v = .
13.     }
14.
.     quietly mlexp (ln(chi2den({d=`d_a'},y)))
15.     if e(converged)==1 {
16.             local d_ml  =  _b[d:_cons]
17.     }
18.     else {
19.             local d_ml  = .
20.     }
21.
.     quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( )  ///
>         winitial(I) onestep conv_maxiter(200)
22.     if e(converged)==1 {
23.             local d_gmm = _b[d:_cons]
24.     }
25.     else {
26.             local d_gmm = .
27.     }
28.
.     quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( )  ///
>         winitial(I) conv_maxiter(200)
29.     if e(converged)==1 {
30.             local d_gmme = _b[d:_cons]
31.     }
32.     else {
33.             local d_gmme = .
34.     }
35.
.     post sim (`d_a') (`d_v') (`d_ml') (`d_gmm') (`d_gmme')
36.
. }
. postclose sim
. use efcomp, clear
. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
d_a |      2,000     1.00017    .0625367   .7792076    1.22256
d_v |      1,996    1.003621    .1732559   .5623049   2.281469
d_ml |      2,000    1.002876    .0395273   .8701175   1.120148
d_gmm |      2,000    .9984172    .1415176   .5947328   1.589704
d_gmme |      2,000    1.006765    .0540633   .8224731   1.188156


The simulation results indicate that the ML estimator is the most efficient (d_ml, std. dev. 0.0395), followed by the efficient GMM estimator (d_gmme, std. dev. 0.0541), followed by the sample average (d_a, std. dev. 0.0625), followed by the uniformly-weighted GMM estimator (d_gmm, std. dev. 0.1415), and finally followed by the sample-variance moment condition (d_v, std. dev. 0.1732).

The estimator based on the sample-variance moment condition does not converge for 4 of 2,000 draws; this is why there are only 1,996 observations on d_v when there are 2,000 observations for the other estimators. These convergence failures occurred even though we used the sample average as the starting value of the nonlinear solver.

For a better idea about the distributions of these estimators, we graph the densities of their estimates.

Figure 1: Densities of the estimators

The density plots illustrate the efficiency ranking that we found from the standard deviations of the estimates.

The uniformly weighted GMM estimator is less efficient than the sample average because it places the same weight on the sample average as on the much less efficient estimator based on the sample variance.

In each of the overidentified cases, the GMM estimator uses a weighted average of two sample moment conditions to estimate the mean. The first sample moment condition is the sample average. The second moment condition is the sample variance. As the Monte Carlo results showed, the sample variance provides a much less efficient estimator for the mean than the sample average.

The GMM estimator that places equal weights on the efficient and the inefficient estimator is much less efficient than a GMM estimator that places much less weight on the less efficient estimator.

We display the weight matrix from our optimal GMM estimator to see how the sample moments were weighted.

. quietly gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I)

. matlist e(W), border(rows)

-------------------------------------
| 1         | 2
|     _cons |     _cons
-------------+-----------+-----------
1            |           |
_cons |  1.621476 |
-------------+-----------+-----------
2            |           |
_cons | -.2610053 |  .0707775
-------------------------------------


The diagonal elements show that the sample-mean moment condition receives more weight than the less efficient sample-variance moment condition.

Done and undone

We used a simple example to illustrate how GMM exploits having more equations than parameters to obtain a more efficient estimator. We also illustrated that optimally weighting the different moments provides important efficiency gains over an estimator that uniformly weights the moment conditions.

Our cursory introduction to GMM is best supplemented with a more formal treatment like the one in Cameron and Trivedi (2005) or Wooldridge (2010).

Graph code appendix

use efcomp
local N = _N
kdensity d_a,     n(`N') generate(x_a    den_a)    nograph
kdensity d_v,     n(`N') generate(x_v    den_v)    nograph
kdensity d_ml,    n(`N') generate(x_ml   den_ml)   nograph
kdensity d_gmm,   n(`N') generate(x_gmm  den_gmm)  nograph
kdensity d_gmme,  n(`N') generate(x_gmme den_gmme) nograph
twoway (line den_a x_a,       lpattern(solid))        ///
(line den_v x_v,       lpattern(dash))         ///
(line den_ml x_ml,     lpattern(dot))          ///
(line den_gmm x_gmm,   lpattern(dash_dot))     ///
(line den_gmme x_gmme, lpattern(shortdash))


References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Wooldridge, J. M. 2001. Applications of generalized method of moments estimation. Journal of Economic Perspectives 15(4): 87-100.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Categories: Statistics Tags:

## Efficiency comparisons by Monte Carlo simulation

Overview

In this post, I show how to use Monte Carlo simulations to compare the efficiency of different estimators. I also illustrate what we mean by efficiency when discussing statistical estimators.

I wrote this post to continue a dialog with my friend who doubted the usefulness of the sample average as an estimator for the mean when the data-generating process (DGP) is a $$\chi^2$$ distribution with $$1$$ degree of freedom, denoted by a $$\chi^2(1)$$ distribution. The sample average is a fine estimator, even though it is not the most efficient estimator for the mean. (Some researchers prefer to estimate the median instead of the mean for DGPs that generate outliers. I will address the trade-offs between these parameters in a future post. For now, I want to stick to estimating the mean.)

In this post, I also want to illustrate that Monte Carlo simulations can help explain abstract statistical concepts. (If you are new to Monte Carlo simulations in Stata, you might want to see Monte Carlo simulations using Stata.)

Consistent estimator A is said to be more asymptotically efficient than consistent estimator B if A has a smaller asymptotic variance than B; see Wooldridge (2010, sec. 14.4.2) for an especially useful discussion. Theoretical comparisons can sometimes ascertain that A is more efficient than B, but the magnitude of the difference is rarely identified. Comparisons of Monte Carlo simulation estimates of the variances of estimators A and B give both sign and magnitude for specific DGPs and sample sizes.
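In symbols (my notation): for two consistent estimators $$\widehat{\theta}_A$$ and $$\widehat{\theta}_B$$ of a scalar parameter $$\theta$$, A is more asymptotically efficient than B if

$\begin{equation*} \mbox{Avar}\left[\sqrt{N}\left(\widehat{\theta}_A-\theta\right)\right] < \mbox{Avar}\left[\sqrt{N}\left(\widehat{\theta}_B-\theta\right)\right] \end{equation*}$

A Monte Carlo simulation estimates these variances by the sample variances of the estimates across replications.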

The sample average versus maximum likelihood

Many books discuss the conditions under which the maximum likelihood (ML) estimator is the efficient estimator relative to other estimators; see Wooldridge (2010, sec. 14.4.2) for an accessible introduction to the modern approach. Here I compare the ML estimator with the sample average for the mean when the DGP is a $$\chi^2(1)$$ distribution.

Example 1 below contains the commands I used. For an introduction to Monte Carlo simulations see Monte Carlo simulations using Stata, and for an introduction to using mlexp to estimate the parameter of a $$\chi^2$$ distribution see Maximum likelihood estimation by mlexp: A chi-squared example. In short, the commands do the following $$5,000$$ times:

1. Draw a sample of 500 observations from a $$\chi^2(1)$$ distribution.
2. Estimate the mean of each sample by the sample average, and store this estimate in m_a in the dataset efcomp.dta.
3. Estimate the mean of each sample by ML, and store this estimate in m_ml in the dataset efcomp.dta.

Example 1: The distributions of the sample average and the ML estimators

. clear all
. set seed 12345
. postfile sim  mu_a mu_ml using efcomp, replace
. forvalues i = 1/5000 {
2.     quietly drop _all
3.     quietly set obs 500
4.     quietly generate double y = rchi2(1)
5.     quietly mean y
6.     local mu_a         =  _b[y]
7.     quietly mlexp (ln(chi2den({d=1},y)))
8.     local mu_ml   =  _b[d:_cons]
9.     post sim (`mu_a') (`mu_ml')
10. }
. postclose sim
. use efcomp, clear
. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mu_a |      5,000    .9989277    .0620524   .7792076   1.232033
mu_ml |      5,000    1.000988    .0401992   .8660786   1.161492


The mean of the $$5,000$$ sample average estimates and the mean of the $$5,000$$ ML estimates are each close to the true value of $$1.0$$. The standard deviation of the $$5,000$$ sample average estimates is $$0.062$$, and it approximates the standard deviation of the sampling distribution of the sample average for this DGP and sample size. Similarly, the standard deviation of the $$5,000$$ ML estimates is $$0.040$$, and it approximates the standard deviation of the sampling distribution of the ML estimator for this DGP and sample size.

We conclude that the ML estimator has a lower variance than the sample average for this DGP and this sample size, because $$0.040$$ is smaller than $$0.062$$.

To get a picture of this difference, we plot the density of the sample average and the density of the ML estimator. (Each of these densities is estimated from $$5,000$$ observations, but estimation error can be ignored because more data would not change the key results.)

Example 2: Plotting the densities of the estimators

. kdensity mu_a,   n(5000) generate(x_a   den_a)   nograph

. kdensity mu_ml,  n(5000) generate(x_ml  den_ml)  nograph

. twoway (line den_a x_a) (line den_ml x_ml)


Densities of the sample average and ML estimators

The plots show that the ML estimator is more tightly distributed around the true value than the sample average.

That the ML estimator is more tightly distributed around the true value than the sample average is what it means for one consistent estimator to be more efficient than another.

Done and undone

I used Monte Carlo simulation to illustrate what it means for one estimator to be more efficient than another. In particular, we saw that the ML estimator is more efficient than the sample average for the mean of a $$\chi^2(1)$$ distribution.

Many other estimators fall between these two estimators in an efficiency ranking. Generalized method of moments estimators and some quasi-maximum likelihood estimators come to mind and might be worth adding to these simulations.

Reference

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Categories: Statistics Tags:

## Monte Carlo simulations using Stata

Overview

A Monte Carlo simulation (MCS) of an estimator approximates the sampling distribution of an estimator by simulation methods for a particular data-generating process (DGP) and sample size. I use an MCS to learn how well estimation techniques perform for specific DGPs. In this post, I show how to perform an MCS study of an estimator in Stata and how to interpret the results.

Large-sample theory tells us that the sample average is a good estimator for the mean when the true DGP is a random sample from a $$\chi^2$$ distribution with 1 degree of freedom, denoted by $$\chi^2(1)$$. But a friend of mine claims this estimator will not work well for this DGP because the $$\chi^2(1)$$ distribution will produce outliers. In this post, I use an MCS to see if the large-sample theory works well for this DGP in a sample of 500 observations.

A first pass at an MCS

I begin by showing how to draw a random sample of size 500 from a $$\chi^2(1)$$ distribution and how to estimate the mean and a standard error for the mean.

Example 1: The mean of simulated data

. drop _all
. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345
. generate y = rchi2(1)
. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------


I specified set seed 12345 to set the seed of the random-number generator so that the results will be reproducible. The sample average estimate of the mean from this random sample is $$0.91$$, and the estimated standard error is $$0.055$$.

If I had many estimates, each from an independently drawn random sample, I could estimate the mean and the standard deviation of the sampling distribution of the estimator. To obtain many estimates, I need to repeat the following process many times:

1. Draw from the DGP
2. Compute the estimate
3. Store the estimate.

I need to know how to store the many estimates to proceed with this process. I also need to know how to repeat the process many times and how to access Stata estimates, but I put these details into appendices I and II, respectively, because many readers are already familiar with these topics and I want to focus on how to store the results from many draws.

I want to put the many estimates someplace where they will become part of a dataset that I can subsequently analyze. I use the commands postfile, post, and postclose to store the estimates in memory and write all the stored estimates out to a dataset when I am done. Example 2 illustrates the process, when there are three draws.

Example 2: Estimated means of three draws

. set seed 12345

. postfile buffer mhat using mcs, replace

. forvalues i=1/3 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         post buffer (_b[y])
7. }

. postclose buffer

. use mcs, clear

. list

+----------+
|     mhat |
|----------|
1. | .9107645 |
2. |  1.03821 |
3. | 1.039254 |
+----------+


The command

postfile buffer mhat using mcs, replace


creates a place in memory called buffer in which I can store the results that will eventually be written out to a dataset. mhat is the name of the variable that will hold the estimates in the new dataset called mcs.dta. The keyword using separates the new variable name from the name of the new dataset. I specified the option replace to replace any previous versions of mcs.dta with the one created here.

I used

forvalues i=1/3 {


to repeat the process three times. (See appendix I if you want a refresher on this syntax.) The commands

quietly drop _all
quietly set obs 500
quietly generate y = rchi2(1)
quietly mean y


drop the previous data, draw a sample of size 500 from a $$\chi^2(1)$$ distribution, and estimate the mean. (The quietly before each command suppresses the output.) The command

post buffer (_b[y])


stores the estimated mean for the current draw in buffer for what will be the next observation on mhat. The command

postclose buffer


writes the stuff stored in buffer to the file mcs.dta. The commands

use mcs, clear
list


drop the last $$\chi^2(1)$$ sample from memory, read in the mcs dataset, and list the dataset.

Example 3 below is a modified version of example 2; I increased the number of draws and summarized the results.

Example 3: The mean of 2,000 estimated means

. set seed 12345

. postfile buffer mhat using mcs, replace

. forvalues i=1/2000 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         post buffer (_b[y])
7. }

. postclose buffer

. use mcs, clear

. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mhat |      2,000     1.00017    .0625367   .7792076    1.22256


The average of the $$2,000$$ estimates is an estimator for the mean of the sampling distribution of the estimator, and it is close to the true value of $$1.0$$. The sample standard deviation of the $$2,000$$ estimates is an estimator for the standard deviation of the sampling distribution of the estimator, and it is close to the true value of $$\sqrt{\sigma^2/N}=\sqrt{2/500}\approx 0.0632$$, where $$\sigma^2$$ is the variance of the $$\chi^2(1)$$ random variable.
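As a quick check of that arithmetic, the theoretical value can be computed directly in Stata:

. display sqrt(2/500)
.06324555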

Including standard errors

The standard error of the estimator reported by mean is an estimate of the standard deviation of the sampling distribution of the estimator. If the large-sample distribution is doing a good job of approximating the sampling distribution of the estimator, the mean of the estimated standard
errors should be close to the sample standard deviation of the many mean estimates.

To compare the standard deviation of the estimates with the mean of the estimated standard errors, I modify example 3 to also store the standard errors.

Example 4: The mean of 2,000 standard errors

. set seed 12345

. postfile buffer mhat sehat using mcs, replace

. forvalues i=1/2000 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         post buffer (_b[y]) (_se[y])
7. }

. postclose buffer

. use mcs, clear

. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mhat |      2,000     1.00017    .0625367   .7792076    1.22256
sehat |      2,000    .0629644    .0051703   .0464698   .0819693


Mechanically, the command

postfile buffer mhat sehat using mcs, replace


makes room in buffer for the new variables mhat and sehat, and

post buffer (_b[y]) (_se[y])


stores each estimated mean in the memory for mhat and each estimated standard error in the memory for sehat. (As in example 3, the command postclose buffer writes what is stored in memory to the new dataset.)

The sample standard deviation of the $$2,000$$ estimates is $$0.0625$$, and it is close to the mean of the $$2,000$$ estimated standard errors, which is $$0.0630$$.

You may be thinking I should have written “very close”, but how close is $$0.0625$$ to $$0.0630$$? Honestly, I cannot tell if these two numbers are sufficiently close to each other because the distance between them does not automatically tell me how reliable the resulting inference will be.

Estimating a rejection rate

In frequentist statistics, we reject a null hypothesis if the p-value is below a specified size. If the large-sample distribution approximates the finite-sample distribution well, the rejection rate of the test against the true null hypothesis should be close to the specified size.

To compare the rejection rate with the size of 5%, I modify example 4 to compute and store an indicator for whether I reject a Wald test against the true null hypothesis. (See appendix III for a discussion of the mechanics.)

Example 5: Estimating the rejection rate

. set seed 12345

. postfile buffer mhat sehat reject using mcs, replace

. forvalues i=1/2000 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         quietly test _b[y]=1
7.         local r = (r(p)<.05)
8.         post buffer (_b[y]) (_se[y]) (`r')
9. }

. postclose buffer

. use mcs, clear

. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mhat |      2,000     1.00017    .0625367   .7792076    1.22256
sehat |      2,000    .0629644    .0051703   .0464698   .0819693
reject |      2,000       .0475     .212759          0          1


The rejection rate of $$0.048$$ is very close to the size of $$0.05$$.

Done and undone

In this post, I have shown how to perform an MCS of an estimator in Stata. I discussed the mechanics of using the post commands to store the many estimates and how to interpret the mean of the many estimates and the mean of the many estimated standard errors. I also recommended using an estimated rejection rate to evaluate the usefulness of the large-sample approximation to the sampling distribution of an estimator for a given DGP and sample size.

The example illustrates that the sample average performs as predicted by large-sample theory as an estimator for the mean. This conclusion does not mean that my friend's concerns about outliers were entirely misplaced. Other estimators that are more robust to outliers may have better properties. I plan to illustrate some of the trade-offs in future posts.

Appendix I: Repeating a process many times

This appendix provides a quick introduction to local macros and how to use them to repeat some commands many times; see [P] macro and [P] forvalues for more details.

I can store and access string information in local macros. Below, I store the string "hello" in the local macro named value.

local value "hello"


To access the stored information, I adorn the name of the local macro. Specifically, I precede it with the single left quote (`) and follow it with the single right quote ('). Below, I access and display the value stored in the local macro value.

. display "value'"
hello


I can also store numbers as strings, as follows

. local value "2.134"
. display "value'"
2.134


To repeat some commands many times, I put them in a forvalues loop. For example, the code below repeats the display command three times.

. forvalues i=1/3 {
2.    display "i is now i'"
3. }
i is now 1
i is now 2
i is now 3


The above example illustrates that forvalues defines a local macro that takes on each value in the specified list of values. In the above example, the name of the local macro is i, and the specified values are 1/3=$$\{1, 2, 3\}$$.

Appendix II: Accessing estimates

After a Stata estimation command, you can access the point estimate of a parameter named y by typing _b[y], and you can access the estimated standard error by typing _se[y]. The example below illustrates this process.

Example 6: Accessing estimated values

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate y = rchi2(1)

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------

. display  _b[y]
.91076444

. display _se[y]
.05486467


Appendix III: Getting a p-value computed by test

This appendix explains the mechanics of creating an indicator for whether a Wald test rejects the null hypothesis at a specific size.

I begin by generating some data and performing a Wald test against the true null hypothesis.

Example 7: Wald test results

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate y = rchi2(1)

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------

. test _b[y]=1

( 1)  y = 1

F(  1,   499) =    2.65
Prob > F =    0.1045


The results reported by test are stored in r(). Below, I use return list to see them; type help return list for details.

Example 8: Results stored by test

. return list

scalars:
r(drop) =  0
r(df_r) =  499
r(F) =  2.645393485924886
r(df) =  1
r(p) =  .1044817353734439


The p-value reported by test is stored in r(p). Below, I store a 0/1 indicator for whether the p-value is less than $$0.05$$ in the local macro r. (See appendix II for an introduction to local macros.) I complete the illustration by displaying that the local macro contains the value $$0$$.

. local r = (r(p)<.05)
. display "r'"
0

Categories: Programming Tags:

## How to simulate multilevel/longitudinal data

I was recently talking with my friend Rebecca about simulating multilevel data, and she asked me if I would show her some examples. It occurred to me that many of you might also like to see some examples, so I decided to post them to the Stata Blog.

### Introduction

We simulate data all the time at StataCorp and for a variety of reasons.

One reason is that real datasets that include the features we would like are often difficult to find. We prefer to use real datasets in the manual examples, but sometimes that isn’t feasible and so we create simulated datasets.

We also simulate data to check the coverage probabilities of new estimators in Stata. Sometimes the formulae published in books and papers contain typographical errors. Sometimes the asymptotic properties of estimators don’t hold under certain conditions. And every once in a while, we make coding mistakes. We run simulations during development to verify that a 95% confidence interval really is a 95% confidence interval.

Simulated data can also come in handy for presentations, teaching purposes, and calculating statistical power using simulations for complex study designs.

And, simulating data is just plain fun once you get the hang of it.

Some of you will recall Vince Wiggins’s blog entry from 2011 entitled “Multilevel random effects in xtmixed and sem — the long and wide of it” in which he simulated a three-level dataset. I’m going to elaborate on how Vince simulated multilevel data, and then I’ll show you some useful variations. Specifically, I’m going to talk about:

1. How to simulate single-level data
2. How to simulate two- and three-level data
3. How to simulate three-level data with covariates
4. How to simulate longitudinal data with random slopes
5. How to simulate longitudinal data with structured errors

### How to simulate single-level data

Let’s begin by simulating a trivially simple, single-level dataset that has the form

$y_i = 70 + e_i$

We will assume that e is normally distributed with mean zero and variance $$\sigma^2$$.

We want to simulate 500 observations, so let’s begin by clearing Stata’s memory and setting the number of observations to 500.

. clear
. set obs 500


Next, let’s create a variable named e that contains pseudorandom normally distributed data with mean zero and standard deviation 5:

. generate e = rnormal(0,5)


The variable e is our error term, so we can create an outcome variable y by typing

. generate y = 70 + e

. list y e in 1/5

+----------------------+
|        y           e |
|----------------------|
1. | 78.83927     8.83927 |
2. | 69.97774   -.0222647 |
3. | 69.80065   -.1993514 |
4. | 68.11398    -1.88602 |
5. | 63.08952   -6.910483 |
+----------------------+


We can fit a linear regression for the variable y to determine whether our parameter estimates are reasonably close to the parameters we specified when we simulated our dataset:

. regress y

Source |       SS       df       MS              Number of obs =     500
-------------+------------------------------           F(  0,   499) =    0.00
Model |           0     0           .           Prob > F      =       .
Residual |  12188.8118   499  24.4264766           R-squared     =  0.0000
Total |  12188.8118   499  24.4264766           Root MSE      =  4.9423

------------------------------------------------------------------------------
y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_cons |   69.89768    .221027   316.24   0.000     69.46342    70.33194
------------------------------------------------------------------------------


The estimate of _cons is 69.9, which is very close to 70, and the Root MSE of 4.9 is equally close to the error’s standard deviation of 5. The parameter estimates will not be exactly equal to the underlying parameters we specified when we created the data because we introduced randomness with the rnormal() function.

This simple example is just to get us started before we work with multilevel data. For familiarity, let’s fit the same model with the mixed command that we will be using later:

. mixed y, stddev

Mixed-effects ML regression                     Number of obs      =       500

Wald chi2(0)       =         .
Log likelihood = -1507.8857                     Prob > chi2        =         .

------------------------------------------------------------------------------
y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_cons |   69.89768   .2208059   316.56   0.000     69.46491    70.33045
------------------------------------------------------------------------------

------------------------------------------------------------------------------
Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
sd(Residual) |    4.93737   .1561334      4.640645    5.253068
------------------------------------------------------------------------------

The output is organized with the parameter estimates for the fixed part in the top table and the estimated standard deviations for the random effects in the bottom table. Just as previously, the estimate of _cons is 69.9, and the estimate of the standard deviation of the residuals is 4.9.

Okay. That really was trivial, wasn’t it? Simulating two- and three-level data is almost as easy.

### How to simulate two- and three-level data

I posted a blog entry last year titled “Multilevel linear models in Stata, part 1: Components of variance“. In that posting, I showed a diagram for a residual of a three-level model.

The equation for the variance-components model I fit had the form

$y_{ijk} = \mu + u_{i..} + u_{ij.} + e_{ijk}$

This model had three residuals, whereas the one-level model we just fit above had only one.
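To give a flavor of where this is headed, here is a minimal sketch of simulating and fitting a two-level (random-intercept) dataset; the number of groups, the group size, the variable names, and the standard deviations are illustrative choices of mine rather than values from the original posting:

clear
set seed 12345
set obs 50                          // 50 level-two groups (e.g., schools)
generate school = _n
generate u_i = rnormal(0,3)         // school-level random intercept, sd 3
expand 20                           // 20 level-one observations per school
generate e_ij = rnormal(0,5)        // observation-level error, sd 5
generate y = 70 + u_i + e_ij
mixed y || school: , stddev         // estimates should be near 70, 3, and 5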