## Understanding the generalized method of moments (GMM): A simple example

$$\newcommand{\Eb}{{\bf E}}$$This post was written jointly with Enrique Pinzon, Senior Econometrician, StataCorp.

The generalized method of moments (GMM) is a method for constructing estimators, analogous to maximum likelihood (ML). GMM uses assumptions about specific moments of the random variables instead of assumptions about the entire distribution, which makes GMM more robust than ML, at the cost of some efficiency. The assumptions are called moment conditions.

GMM generalizes the method of moments (MM) by allowing the number of moment conditions to be greater than the number of parameters. Using these extra moment conditions makes GMM more efficient than MM. When there are more moment conditions than parameters, the estimator is said to be overidentified. GMM can efficiently combine the moment conditions when the estimator is overidentified.

We illustrate these points by estimating the mean of a $$\chi^2(1)$$ distribution by MM, ML, a simple GMM estimator, and an efficient GMM estimator. This example builds on Efficiency comparisons by Monte Carlo simulation and is similar in spirit to the example in Wooldridge (2001).

GMM weights and efficiency

GMM builds on the ideas of expected values and sample averages. Moment conditions are expected values that specify the model parameters in terms of the true moments. The sample moment conditions are the sample equivalents to the moment conditions. GMM finds the parameter values that are closest to satisfying the sample moment conditions.

The mean of a $$\chi^2$$ random variable with $$d$$ degrees of freedom is $$d$$, and its variance is $$2d$$. Two moment conditions for the mean are thus

$\begin{eqnarray*} \Eb\left[Y - d \right] &=& 0 \\ \Eb\left[(Y - d)^2 - 2d \right] &=& 0 \end{eqnarray*}$

The sample moment equivalents are

$\begin{eqnarray} 1/N\sum_{i=1}^N (y_i - \widehat{d}) &=& 0 \tag{1} \\ 1/N\sum_{i=1}^N\left[(y_i - \widehat{d})^2 - 2\widehat{d}\right] &=& 0 \tag{2} \end{eqnarray}$

We could use either sample moment condition (1) or sample moment condition (2) to estimate $$d$$. In fact, below we use each one and show that (1) provides a much more efficient estimator.
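Each condition alone delivers a closed-form MM estimate: condition (1) solves to the sample average, while condition (2) expands to a quadratic in $$\widehat{d}$$. The sketch below (written in Python rather than Stata, purely for illustration; the variable names are ours) computes both estimates from one simulated sample:

```python
import numpy as np

rng = np.random.default_rng(12345)
y = rng.chisquare(df=1, size=500)

# Condition (1): 1/N sum (y_i - d) = 0  =>  d_hat is the sample average.
d1 = y.mean()

# Condition (2): 1/N sum [(y_i - d)^2 - 2d] = 0.  Expanding gives the
# quadratic d^2 - 2*(ybar + 1)*d + mean(y^2) = 0 in d.
ybar, m2 = y.mean(), np.mean(y**2)
disc = (ybar + 1.0)**2 - m2          # positive for this sample
d2 = (ybar + 1.0) - np.sqrt(disc)    # the root near the true value d = 1

print(d1, d2)
```

Both estimates are consistent for the true value of 1; the Monte Carlo simulation later in the post quantifies how much noisier the second one is.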

When we use both (1) and (2), there are two sample moment conditions and only one parameter, so we cannot solve this system of equations. GMM finds the parameters that get as close as possible to solving weighted sample moment conditions.

Uniform weights and optimal weights are two ways of weighting the sample moment conditions. The uniform weights use an identity matrix to weight the moment conditions. The optimal weights use the inverse of the covariance matrix of the moment conditions.
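To make the weighting concrete, here is a minimal numerical sketch of the one-step and two-step estimators (in Python rather than Stata, purely for illustration, and not a substitute for the gmm command): minimize $$Q(d)=\bar{g}(d)' W \bar{g}(d)$$ with $$W=I$$, then re-minimize with $$W$$ set to the inverse of the estimated covariance of the moments.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(12345)
y = rng.chisquare(df=1, size=500)

def moments(d):
    """Observation-level moment functions, one column per condition."""
    return np.column_stack([y - d, (y - d)**2 - 2*d])

def Q(d, W):
    gbar = moments(d).mean(axis=0)   # the sample moment conditions
    return gbar @ W @ gbar           # the GMM criterion

# Step 1: uniform weights (identity matrix).
d1 = minimize_scalar(Q, bounds=(0.01, 5), args=(np.eye(2),),
                     method="bounded").x

# Step 2: optimal weights, the inverse of the moment covariance
# evaluated at the first-step estimate.
S = np.cov(moments(d1), rowvar=False, bias=True)
d2 = minimize_scalar(Q, bounds=(0.01, 5), args=(np.linalg.inv(S),),
                     method="bounded").x

print(d1, d2)
```

The second step typically pulls the estimate toward the more informative moment condition, because the inverse-covariance weights down-weight the noisier one.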

We begin by drawing a sample of size 500 and use gmm to estimate the parameter using sample moment condition (1), which we illustrate is the same as the sample average.

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate double y = rchi2(1)

. gmm (y - {d})  , instruments( ) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .82949186
Iteration 1:   GMM criterion Q(b) =  1.262e-32
Iteration 2:   GMM criterion Q(b) =  9.545e-35

note: model is exactly identified

GMM estimation

Number of parameters =   1
Number of moments    =   1
Initial weight matrix: Unadjusted                 Number of obs   =        500

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .9107644   .0548098    16.62   0.000     .8033392     1.01819
------------------------------------------------------------------------------
Instruments for equation 1: _cons

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------


The sample moment condition is the product of an observation-level error function that is specified inside the parentheses and an instrument, which is a vector of ones in this case. The parameter $$d$$ is enclosed in curly braces {}. We specify the onestep option because the number of parameters is the same as the number of moment conditions, which is to say that the estimator is exactly identified. When it is, each sample moment condition can be solved exactly, and there are no efficiency gains in optimally weighting the moment conditions.

We now illustrate that we could use the sample moment condition obtained from the variance to estimate $$d$$.

. gmm ((y-{d})^2 - 2*{d})  , instruments( ) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  5.4361161
Iteration 1:   GMM criterion Q(b) =  .02909692
Iteration 2:   GMM criterion Q(b) =  .00004009
Iteration 3:   GMM criterion Q(b) =  5.714e-11
Iteration 4:   GMM criterion Q(b) =  1.172e-22

note: model is exactly identified

GMM estimation

Number of parameters =   1
Number of moments    =   1
Initial weight matrix: Unadjusted                 Number of obs   =        500

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .7620814   .1156756     6.59   0.000     .5353613    .9888015
------------------------------------------------------------------------------
Instruments for equation 1: _cons


While we cannot say anything definitive from only one draw, we note that this estimate is further from the truth and that the standard error is much larger than those based on the sample average.

Now, we use gmm to estimate the parameters using uniform weights.

. matrix I = I(2)

. gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =   6.265608
Iteration 1:   GMM criterion Q(b) =  .05343812
Iteration 2:   GMM criterion Q(b) =  .01852592
Iteration 3:   GMM criterion Q(b) =   .0185221
Iteration 4:   GMM criterion Q(b) =   .0185221

GMM estimation

Number of parameters =   1
Number of moments    =   2
Initial weight matrix: user                       Number of obs   =        500

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .7864099   .1050692     7.48   0.000     .5804781    .9923418
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons


The first set of parentheses specifies the first sample moment condition, and the second set of parentheses specifies the second sample moment condition. The options winitial(I) and onestep specify uniform weights.

Finally, we use gmm to estimate the parameters using two-step optimal weights. The weights are calculated using first-step consistent estimates.

. gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I)

Step 1
Iteration 0:   GMM criterion Q(b) =   6.265608
Iteration 1:   GMM criterion Q(b) =  .05343812
Iteration 2:   GMM criterion Q(b) =  .01852592
Iteration 3:   GMM criterion Q(b) =   .0185221
Iteration 4:   GMM criterion Q(b) =   .0185221

Step 2
Iteration 0:   GMM criterion Q(b) =  .02888076
Iteration 1:   GMM criterion Q(b) =  .00547223
Iteration 2:   GMM criterion Q(b) =  .00546176
Iteration 3:   GMM criterion Q(b) =  .00546175

GMM estimation

Number of parameters =   1
Number of moments    =   2
Initial weight matrix: user                       Number of obs   =        500
GMM weight matrix:     Robust

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .9566219   .0493218    19.40   0.000     .8599529    1.053291
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons


All four estimators are consistent. Below we run a Monte Carlo simulation to see their relative efficiencies. We are most interested in the efficiency gains afforded by optimal GMM. We include the sample average, the sample variance, and the ML estimator discussed in Efficiency comparisons by Monte Carlo simulation. Theory tells us that the optimally weighted GMM estimator should be more efficient than the sample average but less efficient than the ML estimator.

The code below for the Monte Carlo builds on Efficiency comparisons by Monte Carlo simulation, Maximum likelihood estimation by mlexp: A chi-squared example, and Monte Carlo simulations using Stata. Click gmmchi2sim.do to download this code.

. clear all
. set seed 12345
. matrix I = I(2)
. postfile sim  d_a d_v d_ml d_gmm d_gmme using efcomp, replace
. forvalues i = 1/2000 {
  2.     quietly drop _all
  3.     quietly set obs 500
  4.     quietly generate double y = rchi2(1)
  5.
.     quietly mean y
  6.     local d_a         =  _b[y]
  7.
.     quietly gmm ( (y-{d=`d_a'})^2 - 2*{d}) , instruments( )  ///
>         onestep conv_maxiter(200)
  8.     if e(converged)==1 {
  9.             local d_v = _b[d:_cons]
 10.     }
 11.     else {
 12.             local d_v = .
 13.     }
 14.
.     quietly mlexp (ln(chi2den({d=`d_a'},y)))
 15.     if e(converged)==1 {
 16.             local d_ml  =  _b[d:_cons]
 17.     }
 18.     else {
 19.             local d_ml  = .
 20.     }
 21.
.     quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( )  ///
>         winitial(I) onestep conv_maxiter(200)
 22.     if e(converged)==1 {
 23.             local d_gmm = _b[d:_cons]
 24.     }
 25.     else {
 26.             local d_gmm = .
 27.     }
 28.
.     quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( )  ///
>         winitial(I) conv_maxiter(200)
 29.     if e(converged)==1 {
 30.             local d_gmme = _b[d:_cons]
 31.     }
 32.     else {
 33.             local d_gmme = .
 34.     }
 35.
.     post sim (`d_a') (`d_v') (`d_ml') (`d_gmm') (`d_gmme')
 36.
. }
. postclose sim
. use efcomp, clear
. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
d_a |      2,000     1.00017    .0625367   .7792076    1.22256
d_v |      1,996    1.003621    .1732559   .5623049   2.281469
d_ml |      2,000    1.002876    .0395273   .8701175   1.120148
d_gmm |      2,000    .9984172    .1415176   .5947328   1.589704
d_gmme |      2,000    1.006765    .0540633   .8224731   1.188156


The simulation results indicate that the ML estimator is the most efficient (d_ml, std. dev. 0.0395), followed by the efficient GMM estimator (d_gmme, std. dev. 0.0541), followed by the sample average (d_a, std. dev. 0.0625), followed by the uniformly weighted GMM estimator (d_gmm, std. dev. 0.1415), and finally followed by the sample-variance moment condition (d_v, std. dev. 0.1732).

The estimator based on the sample-variance moment condition does not converge for 4 of 2,000 draws; this is why there are only 1,996 observations on d_v when there are 2,000 observations for the other estimators. These convergence failures occurred even though we used the sample average as the starting value of the nonlinear solver.

For a better idea about the distributions of these estimators, we graph the densities of their estimates.

Figure 1: Densities of the estimators

The density plots illustrate the efficiency ranking that we found from the standard deviations of the estimates.

The uniformly weighted GMM estimator is less efficient than the sample average because it places the same weight on the sample average as on the much less efficient estimator based on the sample variance.

In each of the overidentified cases, the GMM estimator uses a weighted average of two sample moment conditions to estimate the mean. The first sample moment condition is the sample average. The second moment condition is the sample variance. As the Monte Carlo results showed, the sample variance provides a much less efficient estimator for the mean than the sample average.

The GMM estimator that places equal weights on the efficient and the inefficient estimator is much less efficient than a GMM estimator that places much less weight on the less efficient estimator.
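A back-of-the-envelope calculation shows the mechanics. If two unbiased estimators were independent with variances $$v_1$$ and $$v_2$$, the combination $$w\,\widehat{d}_1 + (1-w)\,\widehat{d}_2$$ would have variance $$w^2 v_1 + (1-w)^2 v_2$$, which is minimized at $$w = v_2/(v_1+v_2)$$. Plugging in standard deviations like those from the simulation (this ignores the correlation between the two moment conditions, so it is only a heuristic):

```python
# Heuristic only: treats the two moment-based estimators as independent,
# which ignores their correlation in the actual GMM problem.
v1 = 0.0625**2   # variance of the sample-average estimator (simulation)
v2 = 0.1732**2   # variance of the sample-variance estimator (simulation)

def combined_sd(w):
    # standard deviation of w*est1 + (1-w)*est2 for independent estimators
    return (w**2 * v1 + (1 - w)**2 * v2) ** 0.5

w_opt = v2 / (v1 + v2)   # inverse-variance (optimal) weight
print(combined_sd(0.5), combined_sd(w_opt), combined_sd(1.0))
```

Equal weights do worse than the better estimator alone, while inverse-variance weights do slightly better, which matches the efficiency ranking in the simulation.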

We display the weight matrix from our optimal GMM estimator to see how the sample moments were weighted.

. quietly gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I)

. matlist e(W), border(rows)

-------------------------------------
| 1         | 2
|     _cons |     _cons
-------------+-----------+-----------
1            |           |
_cons |  1.621476 |
-------------+-----------+-----------
2            |           |
_cons | -.2610053 |  .0707775
-------------------------------------


The diagonal elements show that the sample-mean moment condition receives more weight than the less efficient sample-variance moment condition.

Done and undone

We used a simple example to illustrate how GMM exploits having more equations than parameters to obtain a more efficient estimator. We also illustrated that optimally weighting the different moments provides important efficiency gains over an estimator that uniformly weights the moment conditions.

Our cursory introduction to GMM is best supplemented with a more formal treatment like the one in Cameron and Trivedi (2005) or Wooldridge (2010).

Graph code appendix

use efcomp
local N = _N
kdensity d_a,     n(`N') generate(x_a    den_a)    nograph
kdensity d_v,     n(`N') generate(x_v    den_v)    nograph
kdensity d_ml,    n(`N') generate(x_ml   den_ml)   nograph
kdensity d_gmm,   n(`N') generate(x_gmm  den_gmm)  nograph
kdensity d_gmme,  n(`N') generate(x_gmme den_gmme) nograph
twoway (line den_a x_a,       lpattern(solid))        ///
       (line den_v x_v,       lpattern(dash))         ///
       (line den_ml x_ml,     lpattern(dot))          ///
       (line den_gmm x_gmm,   lpattern(dash_dot))     ///
       (line den_gmme x_gmme, lpattern(shortdash))


References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Wooldridge, J. M. 2001. Applications of generalized method of moments estimation. Journal of Economic Perspectives 15(4): 87-100.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.


## Programming an estimation command in Stata: Allowing for options

I make three improvements to the command that implements the ordinary least-squares (OLS) estimator that I discussed in Programming an estimation command in Stata: Allowing for sample restrictions and factor variables. First, I allow the user to request a robust estimator of the variance-covariance of the estimator (VCE). Second, I allow the user to suppress the constant term. Third, I store the residual degrees of freedom in e(df_r) so that test will use the $$t$$ or $$F$$ distribution instead of the normal or $$\chi^2$$ distribution to compute the $$p$$-value of Wald tests.

Allowing for robust standard errors

The syntax of myregress6, which I discussed in Programming an estimation command in Stata: Allowing for sample restrictions and factor variables, is

myregress6 depvar [indepvars] [if] [in]

where the independent variables can be time-series or factor variables. myregress7 has the syntax

myregress7 depvar [indepvars] [if] [in] [, robust ]

By default, myregress7 estimates the VCE assuming that the errors are independently and identically distributed (IID). If the option robust is specified, myregress7 uses the robust estimator of the VCE. See Cameron and Trivedi (2005), Stock and Watson (2010), and Wooldridge (2015) for introductions to OLS; see Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects for the formulas and Stata matrix implementations. Click on the file name to download any code block. To avoid scrolling, view the code in the do-file editor, or your favorite text editor, to see the line numbers.

*! version 7.0.0  30Nov2015
program define myregress7, eclass
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, Robust]
    marksample touse

    gettoken depvar indeps : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist' if `touse'
    local N                    = r(N)
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    local k                    = `p' - diag0cnt(`xpxi') - 1
    matrix `b'                 = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'
    if "`robust'" == "" {
        quietly summarize `res2' if `touse' , meanonly
        local sum           = r(sum)
        local s2            = `sum'/(`N'-(`k'))
        matrix `V'          = `s2'*`xpxi'
    }
    else {
        tempname M
        quietly matrix accum `M' = `indeps' [iweight=`res2'] if `touse'
        matrix `V'               = (`N'/(`N'-(`k')))*`xpxi'*`M'*`xpxi'
        local vce                   "robust"
        local vcetype               "Robust"
    }
    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `k'
    ereturn local  vce     "`vce'"
    ereturn local  vcetype "`vcetype'"
    ereturn local  cmd     "myregress7"
    ereturn display
end


A user may specify the robust option by typing robust, robus, robu, rob, ro, or r. In other words, r is the minimal abbreviation of the option robust. Line 5 of myregress7 implements this syntax. Specifying robust is optional because Robust is enclosed in the square brackets. r is the minimal abbreviation because the R is in uppercase and the remaining letters are in lowercase.

If the user specifies robust, or a valid abbreviation thereof, the local macro robust contains the word “robust”; otherwise, the local macro robust is empty. Line 25 uses this fact to determine which VCE should be computed; it specifies that lines 26–31 should be executed if the local macro robust is empty and that lines 32–36 should otherwise be executed. Lines 26–31 compute the IID estimator of the VCE. Lines 32–34 compute the robust estimator of the VCE. Lines 35 and 36 respectively put “robust” and “Robust” into the local macros vce and vcetype.
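The two VCE formulas can also be written out directly. The sketch below is a Python translation of the algebra (not the ado-file itself; the function name and data are hypothetical): $$s^2 (X'X)^{-1}$$ under IID errors, and the sandwich $$\frac{N}{N-k}(X'X)^{-1} X' \mathrm{diag}(\widehat{e}^2) X (X'X)^{-1}$$ for the robust case.

```python
import numpy as np

def ols_vce(X, y, robust=False):
    """OLS point estimates plus an IID or robust (sandwich) VCE.

    Mirrors the two formulas in myregress7: s^2*(X'X)^-1 under IID, and
    N/(N-k) * (X'X)^-1 X' diag(e^2) X (X'X)^-1 when robust.
    """
    N, k = X.shape
    XpXi = np.linalg.inv(X.T @ X)
    b = XpXi @ X.T @ y
    e = y - X @ b
    if robust:
        M = (X * e[:, None]**2).T @ X        # X' diag(e^2) X
        V = (N / (N - k)) * XpXi @ M @ XpXi  # sandwich estimator
    else:
        s2 = (e @ e) / (N - k)               # residual variance
        V = s2 * XpXi
    return b, V

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)
b, V_iid = ols_vce(X, y)
_, V_rob = ols_vce(X, y, robust=True)
print(b, np.sqrt(np.diag(V_iid)), np.sqrt(np.diag(V_rob)))
```

With homoskedastic simulated errors, as here, the two sets of standard errors are close; they diverge when the error variance depends on the covariates.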

Line 41 puts the contents of the local macro vce into the local macro e(vce), which informs users and postestimation commands which VCE estimator was used. By convention, e(vce) is empty for the IID case. Line 42 puts the contents of the local macro vcetype into the local macro e(vcetype), which is used by ereturn display to correctly label the standard errors as robust.

I now run a regression with robust standard errors.

Example 1: myregress7 with robust standard errors

. sysuse auto
(1978 Automobile Data)

. myregress7 price mpg trunk i.rep78, robust
------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -262.7053   74.75538    -3.51   0.000    -409.2232   -116.1875
trunk |   41.75706   73.71523     0.57   0.571    -102.7221    186.2362
|
rep78 |
2  |   654.7905   1132.425     0.58   0.563    -1564.721    2874.302
3  |   1170.606   823.9454     1.42   0.155    -444.2979    2785.509
4  |   1473.352   650.4118     2.27   0.023     198.5679    2748.135
5  |   2896.888   937.6981     3.09   0.002     1059.034    4734.743
|
_cons |   9726.377   2040.335     4.77   0.000     5727.393    13725.36
------------------------------------------------------------------------------


Suppressing the constant term

myregress8 has the syntax

myregress8 depvar [indepvars] [if] [in] [, robust noconstant ]

*! version 8.0.0  30Nov2015
program define myregress8, eclass
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, Robust noCONStant ]
    marksample touse

    gettoken depvar indeps : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist' if `touse' , `constant'
    local N                    = r(N)
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    local k                    = `p' - diag0cnt(`xpxi') - 1
    matrix `b'                 = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'
    if "`robust'" == "" {
        quietly summarize `res2' if `touse' , meanonly
        local sum           = r(sum)
        local s2            = `sum'/(`N'-(`k'))
        matrix `V'          = `s2'*`xpxi'
    }
    else {
        tempname M
        quietly matrix accum `M' = `indeps' [iweight=`res2']     ///
            if `touse' , `constant'
        matrix `V'               = (`N'/(`N'-(`k')))*`xpxi'*`M'*`xpxi'
        local vce                   "robust"
        local vcetype               "Robust"
    }
    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `k'
    ereturn local  vce     "`vce'"
    ereturn local  vcetype "`vcetype'"
    ereturn local  cmd     "myregress8"
    ereturn display
end


The syntax command on line 5 puts “noconstant” into the local macro constant if the user types nocons, noconst, noconsta, noconstan, or noconstant; otherwise, the local macro constant is empty. The minimal abbreviation of the option noconstant is nocons because the lowercase no is followed by the uppercase CONS. Note that the local macro is named constant, without the no prefix; for options that begin with no, syntax stores the option name, including its no prefix, in a local macro named after the part that follows no.

To implement the option, I specified what is contained in the local macro constant as an option on the matrix accum command on line 14 and on the matrix accum command spread over lines 33 and 34. The matrix accum command that begins on line 33 is too long for one line. I used /// to comment out the end-of-line character and continue the command on line 34.

I now illustrate the noconstant option.

Example 2: myregress8 with option noconstant

. myregress8 price mpg trunk ibn.rep78, noconstant
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -262.7053   73.49434    -3.57   0.000    -406.7516   -118.6591
trunk |   41.75706    93.9671     0.44   0.657    -142.4151    225.9292
|
rep78 |
1  |   9726.377   2790.009     3.49   0.000      4258.06    15194.69
2  |   10381.17   2607.816     3.98   0.000     5269.943    15492.39
3  |   10896.98   2555.364     4.26   0.000     5888.561     15905.4
4  |   11199.73    2588.19     4.33   0.000      6126.97    16272.49
5  |   12623.27   2855.763     4.42   0.000     7026.073    18220.46
------------------------------------------------------------------------------


Using t or F distributions

The output tables reported in examples 1 and 2 use the normal distribution to compute $$p$$-values and confidence intervals, because Wald-based postestimation commands like test and ereturn display use the normal or the $$\chi^2$$ distribution unless the residual degrees of freedom are stored in e(df_r).
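The difference is easy to see by computing the trunk $$p$$-value from the examples both ways (a Python sketch using scipy, purely for illustration):

```python
from scipy import stats

stat = 41.75706 / 93.9671   # trunk coefficient / std. err. from the output
df_r = 62                   # residual degrees of freedom: 69 obs - rank 7

# Two-sided p-values under each reference distribution.
p_normal = 2 * (1 - stats.norm.cdf(abs(stat)))     # normal-based, as in example 2
p_t      = 2 * (1 - stats.t.cdf(abs(stat), df_r))  # t-based, as in example 3
print(round(p_normal, 3), round(p_t, 3))
```

The $$t$$ distribution has fatter tails than the normal, so the $$t$$-based $$p$$-value is slightly larger; the gap shrinks as the residual degrees of freedom grow.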

*! version 9.0.0  30Nov2015
program define myregress9, eclass
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, Robust noCONStant ]
    marksample touse

    gettoken depvar indeps : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist' if `touse' , `constant'
    local N                    = r(N)
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    local k                    = `p' - diag0cnt(`xpxi') - 1
    matrix `b'                 = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'
    if "`robust'" == "" {
        quietly summarize `res2' if `touse' , meanonly
        local sum           = r(sum)
        local s2            = `sum'/(`N'-(`k'))
        matrix `V'          = `s2'*`xpxi'
    }
    else {
        tempname M
        quietly matrix accum `M' = `indeps' [iweight=`res2']     ///
            if `touse' , `constant'
        matrix `V'               = (`N'/(`N'-(`k')))*`xpxi'*`M'*`xpxi'
        local vce                   "robust"
        local vcetype               "Robust"
    }
    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `k'
    ereturn scalar df_r    = `N'-`k'
    ereturn local  vce     "`vce'"
    ereturn local  vcetype "`vcetype'"
    ereturn local  cmd     "myregress9"
    ereturn display
end


Line 42 of myregress9.ado stores the residual degrees of freedom in e(df_r). Example 3 illustrates that ereturn display and test now use the $$t$$ and $$F$$ distributions.

Example 3: t or F distributions after myregress9

. myregress9 price mpg trunk ibn.rep78, noconstant
------------------------------------------------------------------------------
|      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -262.7053   73.49434    -3.57   0.001    -409.6184   -115.7923
trunk |   41.75706    93.9671     0.44   0.658    -146.0805    229.5946
|
rep78 |
1  |   9726.377   2790.009     3.49   0.001     4149.229    15303.53
2  |   10381.17   2607.816     3.98   0.000     5168.219    15594.12
3  |   10896.98   2555.364     4.26   0.000     5788.882    16005.08
4  |   11199.73    2588.19     4.33   0.000     6026.011    16373.45
5  |   12623.27   2855.763     4.42   0.000     6914.677    18331.85
------------------------------------------------------------------------------

. test trunk

( 1)  trunk = 0

F(  1,    62) =    0.20
Prob > F =    0.6583


Done and undone

I added an option for the robust estimator of the VCE, I added an option to suppress the constant term, and I stored the residual degrees of freedom in e(df_r) so that Wald-based postestimation commands will use $$t$$ or $$F$$ distributions. I illustrated option parsing by example, but I skipped the general theory and many details. Type . help syntax for more details about parsing options using the syntax command.

In the next post, I implement the modern syntax for robust and cluster-robust standard errors.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Stock, J. H., and M. W. Watson. 2010. Introduction to Econometrics. 3rd ed. Boston, MA: Addison-Wesley.

Wooldridge, J. M. 2015. Introductory Econometrics: A Modern Approach. 6th ed. Cincinnati, Ohio: South-Western.


## Programming an estimation command in Stata: Allowing for sample restrictions and factor variables

I modify the ordinary least-squares (OLS) command discussed in Programming an estimation command in Stata: A better OLS command to allow for sample restrictions, to handle missing values, to allow for factor variables, and to deal with perfectly collinear variables.

Sample restrictions

The myregress4 command described in Programming an estimation command in Stata: A better OLS command has the syntax

myregress4 depvar [indepvars]

where the indepvars may be time-series operated variables. myregress5 allows for sample restrictions and missing values. It has the syntax

myregress5 depvar [indepvars] [if] [in]

A user may optionally specify an if expression or an in range to restrict the sample. I also make myregress5 handle missing values in the user-specified variables.

*! version 5.0.0  22Nov2015
program define myregress5, eclass
    version 14

    syntax varlist(numeric ts) [if] [in]
    marksample touse

    gettoken depvar : varlist

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist' if `touse'
    local p : word count `varlist'
    local p = `p' + 1
    matrix `xpx'                = `zpz'[2..`p', 2..`p']
    matrix `xpy'                = `zpz'[2..`p', 1]
    matrix `xpxi'               = syminv(`xpx')
    matrix `b'                  = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'
    quietly summarize `res2' if `touse' , meanonly
    local N                     = r(N)
    local sum                   = r(sum)
    local s2                    = `sum'/(`N'-(`p'-1))
    matrix `V'                  = `s2'*`xpxi'
    ereturn post `b' `V', esample(`touse')
    ereturn scalar           N  = `N'
    ereturn local         cmd   "myregress5"
    ereturn display
end


The syntax command in line 5 specifies that a user may optionally restrict the sample by specifying an if expression or in range. When the user specifies an if expression, syntax puts it into the local macro if; otherwise, the local macro if is empty. When the user specifies an in range, syntax puts it into the local macro in; otherwise, the local macro in is empty.

We could use the local macros if and in to handle user-specified sample restrictions, but these do not account for missing values in the user-specified variables. The marksample command in line 6 creates a local macro named touse, which contains the name of a temporary variable that is a sample-identification variable. Each observation in the sample-identification variable is either one or zero. It is one if the observation is included in the sample. It is zero if the observation is excluded from the sample. An observation can be excluded by a user-specified if expression, by a user-specified in range, or because there is a missing value in one of the user-specified variables.
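The rule marksample applies can be sketched in a few lines (in Python rather than Stata, with made-up data and variable names; the real command also handles weights and in ranges):

```python
import numpy as np

# Hypothetical data: a missing value in rep78 is coded as NaN.
price = np.array([4099., 4749., 3799., 4816.])
mpg   = np.array([22., 17., 22., 20.])
rep78 = np.array([3., np.nan, 3., 3.])

# touse is 1 where the observation passes the `if` restriction AND
# no model variable is missing -- the same rule marksample applies.
if_restriction = mpg < 21
no_missing = ~(np.isnan(price) | np.isnan(mpg) | np.isnan(rep78))
touse = if_restriction & no_missing
print(touse.astype(int))   # -> [0 0 0 1]
```

Only the last observation satisfies the restriction and has complete data, so only it would enter the estimation sample.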

Lines 20–23 use the sample-identification variable contained in the local macro touse to enforce these sample restrictions on the OLS calculations.

Line 28 posts the sample-identification variable into e(sample), which is one if the observation was included in the estimation sample and it is zero if the observation was excluded from the estimation sample.

Line 29 stores the number of observations in the sample in e(N).

Example 1 illustrates that myregress5 runs the requested regression on the sample that respects the missing values in rep78 and accounts for an if expression.

Example 1: myregress5 with missing values and an if expression

. sysuse auto
(1978 Automobile Data)

. count if !missing(rep78)
69

. count if !missing(rep78) & mpg < 30
62

. myregress5 price mpg trunk rep78 if mpg < 30
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -376.8591   107.4289    -3.51   0.000    -587.4159   -166.3024
trunk |  -36.39376   102.2139    -0.36   0.722    -236.7294    163.9418
rep78 |   556.3029   378.1101     1.47   0.141    -184.7793    1297.385
_cons |    12569.5   3455.556     3.64   0.000     5796.735    19342.27
------------------------------------------------------------------------------

. ereturn list

scalars:
e(N) =  62

macros:
e(cmd) : "myregress5"
e(properties) : "b V"

matrices:
e(b) :  1 x 4
e(V) :  4 x 4

functions:
e(sample)


Allowing for factor variables

Example 1 includes the number of repairs as a continuous variable, but it might be better treated as a discrete factor. myregress6 accepts factor variables. Factor-variable lists usually imply variable lists that contain perfectly collinear variables, so myregress6 also handles perfectly collinear variables.

*! version 6.0.0  22Nov2015
program define myregress6, eclass
version 14

syntax varlist(numeric ts fv) [if] [in]
marksample touse

gettoken depvar : varlist
_fv_check_depvar `depvar'

tempname zpz xpx xpy xpxi b V
tempvar  xbhat res res2

quietly matrix accum `zpz' = `varlist' if `touse'
local p                    = colsof(`zpz')
matrix `xpx'               = `zpz'[2..`p', 2..`p']
matrix `xpy'               = `zpz'[2..`p', 1]
matrix `xpxi'              = syminv(`xpx')
local k                    = `p' - diag0cnt(`xpxi') - 1
matrix `b'                 = (`xpxi'*`xpy')'
quietly matrix score double `xbhat' = `b' if `touse'
quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
quietly generate double `res2'      = (`res')^2 if `touse'
quietly summarize `res2' if `touse' , meanonly
local N                     = r(N)
local sum                   = r(sum)
local s2                    = `sum'/(`N'-(`k'))
matrix `V'                  = `s2'*`xpxi'
ereturn post `b' `V', esample(`touse') buildfvinfo
ereturn scalar N            = `N'
ereturn scalar rank         = `k'
ereturn local  cmd             "myregress6"
ereturn display
end


The fv in the parentheses after varlist in the syntax command in line 5 modifies the varlist to accept factor variables. Any specified factor variables are stored in the local macro varlist in a canonical form.

Estimation commands do not allow the dependent variable to be a factor variable. The _fv_check_depvar command in line 9 will exit with an error if the local macro depvar contains a factor variable.

Line 15 stores the number of columns in the matrix formed by matrix accum in the local macro p. Line 19 stores the number of linearly independent columns in the local macro k. This calculation uses diag0cnt() to account for the perfectly collinear variables that were dropped. (Each dropped variable puts a zero on the diagonal of the generalized inverse calculated by syminv(), and diag0cnt() returns the number of zeros on the diagonal.)
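As an illustrative sketch (commands typed interactively, not output from this session), a deliberately collinear matrix shows how syminv() and diag0cnt() work together:

. matrix A  = (1, 1 \ 1, 1)    // second column collinear with first
. matrix Ai = syminv(A)        // generalized inverse
. display diag0cnt(Ai)         // number of zeros on the diagonal of Ai

syminv() sets the row and column corresponding to each dropped collinear term to zero, and diag0cnt() reports how many diagonal elements are zero.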

On line 29, I specified option buildfvinfo on ereturn post to store hidden information that ereturn display, contrast, margins, and pwcompare use to label tables and to decide which functions of the parameters are estimable.

Line 31 stores the number of linearly independent variables in e(rank) for postestimation commands.

Now, I use myregress6 to include rep78 as a factor-operated variable. The base category is dropped because we included a constant term.

Example 2: myregress6 with a factor-operated variable

. myregress6 price mpg trunk i.rep78
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -262.7053   73.49434    -3.57   0.000    -406.7516   -118.6591
trunk |   41.75706    93.9671     0.44   0.657    -142.4151    225.9292
|
rep78 |
2  |   654.7905   2136.246     0.31   0.759    -3532.175    4841.756
3  |   1170.606   2001.739     0.58   0.559     -2752.73    5093.941
4  |   1473.352   2017.138     0.73   0.465    -2480.167     5426.87
5  |   2896.888   2121.206     1.37   0.172    -1260.599    7054.375
|
_cons |   9726.377   2790.009     3.49   0.000      4258.06    15194.69
------------------------------------------------------------------------------


Done and undone

I modified the OLS command discussed in Programming an estimation command in Stata: A better OLS command to allow for sample restrictions, to handle missing values, to allow for factor variables, and to deal with perfectly collinear variables. In the next post, I show how to allow options for robust standard errors and to suppress the constant term.


## Programming an estimation command in Stata: A better OLS command

I use the syntax command to improve the command that implements the ordinary least-squares (OLS) estimator that I discussed in Programming an estimation command in Stata: A first command for OLS. I show how to require that all variables be numeric variables and how to make the command accept time-series operated variables.

Stata syntax and the syntax command

The myregress2 command described in Programming an estimation command in Stata: A first command for OLS has the syntax

myregress2 depvar [indepvars]

This syntax requires that the dependent variable be specified because depvar is not enclosed in square brackets. The independent variables are optional because indepvars is enclosed in square brackets. Type

for an introduction to reading Stata syntax diagrams.

This syntax is implemented by the syntax command in line 5 of myregress2.ado, which I discussed at length in Programming an estimation command in Stata: A first command for OLS. The user must specify a list of variable names because varlist is not enclosed in square brackets. The syntax of the syntax command follows the rules of a syntax diagram.

*! version 2.0.0  26Oct2015
program define myregress2, eclass
version 14

syntax varlist
gettoken depvar : varlist

tempname zpz xpx xpy xpxi b V
tempvar  xbhat res res2

quietly matrix accum `zpz' = `varlist'
local p : word count `varlist'
local p = `p' + 1
matrix `xpx'                = `zpz'[2..`p', 2..`p']
matrix `xpy'                = `zpz'[2..`p', 1]
matrix `xpxi'               = syminv(`xpx')
matrix `b'                  = (`xpxi'*`xpy')'
quietly matrix score double `xbhat' = `b'
quietly generate double `res'       = (`depvar' - `xbhat')
quietly generate double `res2'      = (`res')^2
quietly summarize `res2'
local N                     = r(N)
local sum                   = r(sum)
local s2                    = `sum'/(`N'-(`p'-1))
matrix `V'                  = `s2'*`xpxi'
ereturn post `b' `V'
ereturn local         cmd   "myregress2"
ereturn display
end


Example 1 illustrates that myregress2 runs the requested regression when I specify a varlist.

Example 1: myregress2 with specified variables

. sysuse auto
(1978 Automobile Data)

. myregress2 price mpg trunk
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   65.59262    -3.36   0.001    -348.7241    -91.6057
trunk |   43.55851   88.71884     0.49   0.623    -130.3272    217.4442
_cons |   10254.95   2349.084     4.37   0.000      5650.83    14859.07
------------------------------------------------------------------------------


Example 2 illustrates that the syntax command displays an error message and stops execution when I do not specify a varlist. I use set trace on to see each line of code and the output it produces.

Example 2: myregress2 without a varlist

. set trace on

. myregress2
--------------------------------------------------------- begin myregress2 --
- version 14
- syntax varlist
varlist required
----------------------------------------------------------- end myregress2 --
r(100);


Example 3 illustrates that the syntax command checks that the specified variables exist in the current dataset. syntax exits with an error because DoesNotExist is not a variable in the current dataset.

Example 3: myregress2 with a variable not in this dataset

. set trace on

. myregress2 price mpg trunk DoesNotExist
--------------------------------------------------------- begin myregress2 --
- version 14
- syntax varlist
----------------------------------------------------------- end myregress2 --
r(111);

end of do-file

r(111);


Because the syntax command on line 5 does not restrict the specified variables to be numeric, I get the uninformative no observations error in example 4 instead of an error indicating the actual problem, which is the string variable make.

Example 4: myregress2 with a string variable

. describe make

storage   display    value
variable name   type    format     label      variable label
-------------------------------------------------------------------------------
make            str18   %-18s                 Make and Model

. myregress2 price mpg trunk make
no observations
r(2000);

end of do-file

r(2000);


On line 5 of myregress3, I modify varlist to accept only numeric variables. This change produces a more informative error message when I try to include a string variable in the regression.

*! version 3.0.0  30Oct2015
program define myregress3, eclass
version 14

syntax varlist(numeric)
gettoken depvar : varlist

tempname zpz xpx xpy xpxi b V
tempvar  xbhat res res2

quietly matrix accum `zpz' = `varlist'
local p : word count `varlist'
local p = `p' + 1
matrix `xpx'                = `zpz'[2..`p', 2..`p']
matrix `xpy'                = `zpz'[2..`p', 1]
matrix `xpxi'               = syminv(`xpx')
matrix `b'                  = (`xpxi'*`xpy')'
quietly matrix score double `xbhat' = `b'
quietly generate double `res'       = (`depvar' - `xbhat')
quietly generate double `res2'      = (`res')^2
quietly summarize `res2'
local N                     = r(N)
local sum                   = r(sum)
local s2                    = `sum'/(`N'-(`p'-1))
matrix `V'                  = `s2'*`xpxi'
ereturn post `b' `V'
ereturn local         cmd   "myregress3"
ereturn display
end


Example 5: myregress3 with a string variable

. set trace on

. myregress3 price mpg trunk make
--------------------------------------------------------- begin myregress3 --
- version 14
- syntax varlist(numeric)
string variables not allowed in varlist;
make is a string variable
----------------------------------------------------------- end myregress3 --
r(109);

end of do-file

r(109);


On line 5 of myregress4, I modify the varlist to accept time-series (ts) variables. The syntax command puts time-series variables in a canonical form that is stored in the local macro varlist, as illustrated in the display on line 6, whose output appears in example 6.

*! version 4.0.0  31Oct2015
program define myregress4, eclass
version 14

syntax varlist(numeric ts)
display "varlist is `varlist'"
gettoken depvar : varlist

tempname zpz xpx xpy xpxi b V
tempvar  xbhat res res2

quietly matrix accum `zpz' = `varlist'
local p : word count `varlist'
local p = `p' + 1
matrix `xpx'                = `zpz'[2..`p', 2..`p']
matrix `xpy'                = `zpz'[2..`p', 1]
matrix `xpxi'               = syminv(`xpx')
matrix `b'                  = (`xpxi'*`xpy')'
quietly matrix score double `xbhat' = `b'
quietly generate double `res'       = (`depvar' - `xbhat')
quietly generate double `res2'      = (`res')^2
quietly summarize `res2'
local N                     = r(N)
local sum                   = r(sum)
local s2                    = `sum'/(`N'-(`p'-1))
matrix `V'                  = `s2'*`xpxi'
ereturn post `b' `V'
ereturn local         cmd   "myregress4"
ereturn display
end


Example 6: myregress4 with time-series variables

. sysuse gnp96

. myregress4  L(0/3).gnp
varlist is gnp96 L.gnp96 L2.gnp96 L3.gnp96
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
gnp96 |
L1. |   1.277086   .0860652    14.84   0.000     1.108402    1.445771
L2. |   -.135549   .1407719    -0.96   0.336    -.4114568    .1403588
L3. |  -.1368326   .0871645    -1.57   0.116    -.3076719    .0340067
|
_cons |   -2.94825   14.36785    -0.21   0.837    -31.10871    25.21221
------------------------------------------------------------------------------


Done and undone

I used the syntax command to improve how myregress2 handles the variables specified by the user. I showed how to require that all variables be numeric variables and how to make the command accept time-series operated variables. In the next post, I show how to make the command allow for sample restrictions, how to handle missing values, how to allow for factor-operated variables, and how to deal with perfectly collinear variables.


## Programming an estimation command in Stata: A first command for OLS

This post builds on Programming an estimation command in Stata: A first ado command, in which I introduced some ado-programming concepts. Although I introduce some local-macro tricks that I use all the time, I also build on Programming an estimation command in Stata: Where to store your stuff.

Local macro tricks

I use lots of local macro tricks in my ado-files. In this section, I illustrate ones that I use in the commands that I develop in this post. In every ado-file that I write, I ask questions about lists of variable names stored in local macros. I frequently use the extended macro functions and the gettoken command to ask these questions and store the results in a local macro.

The syntax for storing the result of an extended macro function in a local macro is

local localname : extended_fcn

Below, I use the extended macro function word count to count the number of elements in the list and store the result in the local macro count.

Example 1: Storing and extracting the result of an extended macro function

. local count : word count a b c

. display "count contains `count'"
count contains 3


There are many extended macro functions, but I illustrate just the one I use in this post; type help extended fcn for a complete list.

A token is an element in a list. I frequently use the gettoken command to split lists apart. The gettoken command has the syntax

gettoken localname1 [localname2] : localname3

gettoken stores the first token in the list stored in the local macro localname3 into the local macro localname1. If the optional localname2 is specified, the remaining tokens are stored in the local macro localname2.

I use gettoken to store the first token stored in mylist into the local macro first, whose contents I subsequently extract and display.

Example 2: Using gettoken to store first token only

. local mylist y x1 x2

. display "mylist contains `mylist'"
mylist contains y x1 x2

. gettoken first : mylist

. display "first contains `first'"
first contains y


Now, I use gettoken to store the first token stored in mylist into the local macro first and the remaining tokens into the local macro left. I subsequently extract and display the contents of first and left.

Example 3: Using gettoken to store first and remaining tokens

. gettoken first left: mylist

. display "first contains `first'"
first contains y

. display "left  contains `left'"
left  contains  x1 x2


I frequently want to increase the value of a local macro by some fixed amount, say, $$3$$. I now illustrate a solution that I use.

Example 4: Local macro update

. local p = 1

. local p = `p' + 3

. display "p is now `p'"
p is now 4


When the update value, also known as the increment value, is $$1$$, we can use the increment operator, as below:

Example 5: Local macro increment

. local p = 1

. local ++p

. display "p is now `p'"
p is now 2


A first version of myregress

The code in myregress1 implements a version of the OLS formulas. I use myregress1 in example 6. Below example 6, I discuss the code and the output.

*! version 1.0.0  23Oct2015
program define myregress1, eclass
version 14

syntax varlist
display "The syntax command puts the variables specified by the "
display "    user into the local macro varlist"
display "    varlist contains `varlist'"

gettoken depvar : varlist
display "The dependent variable is `depvar'"

matrix accum zpz = `varlist'
display "matrix accum forms Z'Z"
matrix list zpz

local p : word count `varlist'
local p = `p' + 1

matrix xpx                = zpz[2..`p', 2..`p']
matrix xpy                = zpz[2..`p', 1]
matrix xpxi               = syminv(xpx)
matrix b                  = (xpxi*xpy)'

matrix score double xbhat = b
generate double res       = (`depvar' - xbhat)
generate double res2      = res^2
summarize res2
local N                   = r(N)
local sum                 = r(sum)
local s2                  = `sum'/(`N'-(`p'-1))
matrix V                  = `s2'*xpxi
ereturn post b V
ereturn local         cmd   "myregress1"
ereturn display
end


Example 6: myregress1 output

. sysuse auto
(1978 Automobile Data)

. myregress1 price mpg trunk
The syntax command puts the variables specified by the
user into the local macro varlist
varlist contains price mpg trunk
The dependent variable is price
(obs=74)
matrix accum forms Z'Z

symmetric zpz[4,4]
price        mpg      trunk      _cons
price  3.448e+09
mpg    9132716      36008
trunk    6565725      20630      15340
_cons     456229       1576       1018         74

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
res2 |         74     6674851    1.30e+07   11.24372   9.43e+07
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   65.59262    -3.36   0.001    -348.7241    -91.6057
trunk |   43.55851   88.71884     0.49   0.623    -130.3272    217.4442
_cons |   10254.95   2349.084     4.37   0.000      5650.83    14859.07
------------------------------------------------------------------------------


Here are my comments on the code and the output in example 6.

• Line 2 specifies that myregress1 is an e-class command that stores its results in e().
• Lines 5–8 illustrate that the syntax command stores the names of the variables specified by the user in the local macro varlist. This behavior is also illustrated in example 6.
• Line 10 uses the gettoken command to store the first variable name stored in the local macro varlist in the local macro depvar. Line 11 displays this name and the usage is illustrated in example 6.
• Line 13 uses matrix accum to put $$(\Xb'\Xb)$$ and $$(\Xb'\yb)$$ into a Stata matrix named zpz, as discussed in Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects and further illustrated in lines 14–15 and example 6.
• Line 17 stores the number of variables in the local macro varlist into the local macro p.
• Line 18 increments the local macro p by $$1$$ to account for the constant term included by matrix accum by default.
• Lines 20–23 extract $$(\Xb'\Xb)$$ and $$(\Xb'\yb)$$ from zpz and put the vector of point estimates $$\widehat{\betab}$$ into the Stata row vector b.
• Line 25 puts $$\Xb\widehat{\betab}$$ into the variable xbhat.
• Lines 26 and 27 calculate the residuals and the squared residuals, respectively.
• Lines 28–32 calculate the estimated variance-covariance matrix of the estimator (VCE) from the sum of squared residuals.
• Line 33 stores b and V into e(b) and e(V), respectively.
• Line 34 stores the name of the estimation command (myregress1) in e(cmd).
• Line 35 produces a standard Stata output table from the results in e(b) and e(V).

myregress1 contains code to help illustrate how it works, and it uses hard-coded names for global objects like Stata variables and Stata matrices. Users do not want to see the output from the illustration lines, so they must be removed. Users do not want their global Stata matrices overwritten by a command they use, which is what myregress1 would do to a matrix named zpz, xpx, xpxi, b, or V.

The code in myregress2 fixes these problems.

*! version 2.0.0  26Oct2015
program define myregress2, eclass
version 14

syntax varlist
gettoken depvar : varlist

tempname zpz xpx xpy xpxi b V
tempvar  xbhat res res2

quietly matrix accum `zpz' = `varlist'
local p : word count `varlist'
local p = `p' + 1
matrix `xpx'                = `zpz'[2..`p', 2..`p']
matrix `xpy'                = `zpz'[2..`p', 1]
matrix `xpxi'               = syminv(`xpx')
matrix `b'                  = (`xpxi'*`xpy')'
quietly matrix score double `xbhat' = `b'
quietly generate double `res'       = (`depvar' - `xbhat')
quietly generate double `res2'      = (`res')^2
quietly summarize `res2'
local N                     = r(N)
local sum                   = r(sum)
local s2                    = `sum'/(`N'-(`p'-1))
matrix `V'                  = `s2'*`xpxi'
ereturn post `b' `V'
ereturn local         cmd   "myregress2"
ereturn display
end

• Line 8 uses tempname to put safe names into the local macros zpz, xpx, xpy, xpxi, b, and V.
• Line 9 uses tempvar to put safe names into the local macros xbhat, res, and res2.
• Lines 11, 14–18, and 25–26 use the safe names in the local macros created by tempname instead of the hard-coded names for the matrices.
• Lines 18–20 use the safe names in the local macros created by tempvar instead of the hard-coded names for the variables it creates.

Example 7 shows the output produced by myregress2.

Example 7: myregress2 output

. myregress2 price mpg trunk
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   65.59262    -3.36   0.001    -348.7241    -91.6057
trunk |   43.55851   88.71884     0.49   0.623    -130.3272    217.4442
_cons |   10254.95   2349.084     4.37   0.000      5650.83    14859.07
------------------------------------------------------------------------------


Done and undone

After reviewing some tricks with local macros that I use in most of the ado-files that I write, I discussed two versions of an ado-command that implements the ordinary least-squares (OLS) estimator. In the next post, I extend this command so that the user may request a robust VCE, or that the constant term be suppressed, or both.


## Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects


OLS formulas

Recall that the OLS point estimates are given by

$\widehat{\betab} = \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1} \left( \sum_{i=1}^N \xb_i'y_i \right)$

where $${\bf x}_i$$ is the $$1\times k$$ vector of independent variables, $$y_i$$ is the dependent variable for each of the $$N$$ sample observations, and the model for $$y_i$$ is

$y_i = \xb_i\betab' + \epsilon_i$

If the $$\epsilon_i$$ are independently and identically distributed, we estimate the variance-covariance matrix of the estimator (VCE) by

$\widehat{\Vb} = \widehat{s} \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}$

where $$\widehat{s} = 1/(N-k)\sum_{i=1}^N e_i^2$$ and $$e_i=y_i-{\bf x}_i\widehat{{\boldsymbol \beta}}$$.

See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2015) for introductions to OLS.

Stata matrix implementation

I use the matrix accum command to compute the sum of the products over the observations. Typing

.  matrix accum zpz = z1 z2 z3

puts $$\left( \sum_{i=1}^N {\bf z}_i'{\bf z}_i \right)$$ into the Stata matrix zpz, where $${\bf z}_i=( {\tt z1}_i, {\tt z2}_i, {\tt z3}_i, 1)$$. The $$1$$ appears because matrix accum, like almost all estimation commands, includes the constant term by default.
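If you do not want the constant term, matrix accum accepts the noconstant option. Typing

. matrix accum zpz = z1 z2 z3, noconstant

puts only the cross products of z1, z2, and z3 into zpz, with no _cons row or column.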

Below, I use matrix accum to compute $$\left( \sum_{i=1}^N {\bf z}_i'{\bf z}_i \right)$$, which contains $$\left( \sum_{i=1}^N {\bf x}_i'{\bf x}_i \right)$$ and $$\left( \sum_{i=1}^N {\bf x}_i'y_i \right)$$.

Example 1: Using matrix accum

. sysuse auto
(1978 Automobile Data)

. matrix accum zpz = price mpg trunk
(obs=74)

. matrix list zpz

symmetric zpz[4,4]
price        mpg      trunk      _cons
price  3.448e+09
mpg    9132716      36008
trunk    6565725      20630      15340
_cons     456229       1576       1018         74


Now, I extract $$\left( \sum_{i=1}^N {\bf x}_i'{\bf x}_i \right)$$ from rows 2–4 and columns 2–4 of zpz and $$\left( \sum_{i=1}^N {\bf x}_i'y_i \right)$$ from rows 2–4 and column 1 of zpz.

Example 2: Extracting submatrices

. matrix xpx       = zpz[2..4, 2..4]

. matrix xpy       = zpz[2..4, 1]

. matrix list xpx

symmetric xpx[3,3]
mpg  trunk  _cons
mpg  36008
trunk  20630  15340
_cons   1576   1018     74

. matrix list xpy

xpy[3,1]
price
mpg  9132716
trunk  6565725
_cons   456229


I now compute $$\widehat{{\boldsymbol \beta}}$$ from the matrices formed in example 2.

Example 3: Computing $$\widehat{\betab}$$

. matrix xpxi      = syminv(xpx)

. matrix b         = xpxi*xpy

. matrix list b

b[3,1]
price
mpg  -220.16488
trunk    43.55851
_cons    10254.95

. matrix b         = b'

. matrix list b

b[1,3]
mpg       trunk       _cons
price  -220.16488    43.55851    10254.95


I transposed b to make it a row vector because point estimates in Stata are stored as row vectors.

Example 3 illustrates that the Stata matrix b contains the estimated coefficients and the names of the variables on which these values are estimated coefficients. To clarify, our model is
$\Eb[{\tt price}|{\tt mpg}, {\tt trunk} ] = {\tt mpg}*\beta_{\tt mpg} + {\tt trunk}*\beta_{\tt trunk} + {\tt \_cons}$

and b contains the information that $$-220.16$$ is the estimated coefficient on mpg, that $$43.56$$ is the estimated coefficient on trunk, and that $$10254.95$$ is the estimated constant. We can compute the linear combination $$\xb_i\widehat{\betab}$$ over the observations using the information in b, because b contains both the value and the name for each coefficient.

I use matrix score to compute this linear combination for each observation, and I use generate to reiterate what this linear combination is.

Example 4: Using matrix score to compute $$\xb_i\widehat{\betab}'$$

. matrix score double xbhat1 = b

. generate     double xbhat2 = mpg*(-220.16488) + trunk*(43.55851) + 10254.95

. list xbhat1 xbhat2 in 1/4

+-----------------------+
|    xbhat1      xbhat2 |
|-----------------------|
1. | 5890.4661   5890.4663 |
2. | 6991.2905   6991.2907 |
3. | 5934.0246   5934.0248 |
4. | 6548.5884   6548.5886 |
+-----------------------+


I use the predictions for $$\Eb[{\tt price}|{\tt mpg}, {\tt trunk} ]$$ in xbhat1 to compute the residuals and the estimated VCE.

Example 5: Computing the estimated VCE

. generate double res       = (price - xbhat1)

. generate double res2      = res^2

. summarize res2

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
res2 |         74     6674851    1.30e+07   11.24372   9.43e+07

. return list

scalars:
r(N) =  74
r(sum_w) =  74
r(mean) =  6674850.504745401
r(Var) =  168983977867533.1
r(sd) =  12999383.74952956
r(min) =  11.24371634723049
r(max) =  94250157.2111593
r(sum) =  493938937.3511598

. local N                   = r(N)

. local sum                 = r(sum)

. local s2                  = `sum'/(`N'-3)

. matrix V                  = (`s2')*xpxi


(See Programming an estimation command in Stata: Where to store your stuff for discussions of using results from r-class commands and using local macros.)

I verify that my computations for $$\widehat{\betab}$$ and the VCE match those of regress.

Example 6: Comparing against regress

. regress price mpg trunk

Source |       SS           df       MS      Number of obs   =        74
-------------+----------------------------------   F(2, 71)        =     10.14
Model |   141126459         2  70563229.4   Prob > F        =    0.0001
Residual |   493938937        71  6956886.44   R-squared       =    0.2222
Total |   635065396        73  8699525.97   Root MSE        =    2637.6

------------------------------------------------------------------------------
price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   65.59262    -3.36   0.001    -350.9529    -89.3769
trunk |   43.55851   88.71884     0.49   0.625    -133.3418    220.4589
_cons |   10254.95   2349.084     4.37   0.000      5571.01    14938.89
------------------------------------------------------------------------------

. matrix list e(b)

e(b)[1,3]
mpg       trunk       _cons
y1  -220.16488    43.55851    10254.95

. matrix list b

b[1,3]
mpg       trunk       _cons
price  -220.16488    43.55851    10254.95

. matrix list e(V)

symmetric e(V)[3,3]
mpg       trunk       _cons
mpg   4302.3924
trunk   3384.4186   7871.0326
_cons  -138187.95  -180358.85   5518194.7

. matrix list V

symmetric V[3,3]
mpg       trunk       _cons
mpg   4302.3924
trunk   3384.4186   7871.0326
_cons  -138187.95  -180358.85   5518194.7


Robust standard errors

The frequently used robust estimator of the VCE is given by

$\widehat{V}_{robust}=\frac{N}{N-k} \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1} \Mb \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}$

where

$\Mb=\sum_{i=1}^N \widehat{e}_i^2\xb_i'\xb_i$

See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2015) for derivations and discussions.

matrix accum with weights $$\widehat{e}_i^2$$ computes the formula for $$\Mb$$. Below, I use matrix accum to compute $$\Mb$$ and $$\widehat{V}_{robust}$$.

Example 7: A robust VCE

. matrix accum M    = mpg trunk [iweight=res2]
(obs=493938937.4)

. matrix V2         = (`N'/(`N'-3))*xpxi*M*xpxi


I now verify that my computations match those reported by regress.

Example 8: Comparing computations of robust VCE

. regress price mpg trunk, robust

Linear regression                               Number of obs     =         74
F(2, 71)          =      11.59
Prob > F          =     0.0000
R-squared         =     0.2222
Root MSE          =     2637.6

------------------------------------------------------------------------------
|               Robust
price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   72.45388    -3.04   0.003    -364.6338   -75.69595
trunk |   43.55851    71.4537     0.61   0.544    -98.91613    186.0331
_cons |   10254.95   2430.641     4.22   0.000      5408.39    15101.51
------------------------------------------------------------------------------

. matrix list e(V)

symmetric e(V)[3,3]
mpg       trunk       _cons
mpg   5249.5646
trunk   3569.5316   5105.6316
_cons  -169049.76  -147284.49   5908013.8

. matrix list V2

symmetric V2[3,3]
mpg       trunk       _cons
mpg   5249.5646
trunk   3569.5316   5105.6316
_cons  -169049.76  -147284.49   5908013.8


Cluster-robust standard errors

The cluster-robust estimator of the VCE is frequently used when the data have a panel structure, also known as a longitudinal structure. This VCE accounts for the within-group correlation of the errors, and it is given by

$\widehat{V}_{cluster}=\frac{N-1}{N-k}\frac{g}{g-1} \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1} \Mb_c \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}$

where

$\Mb_c=\sum_{j=1}^g \Xb_j' (\widehat{\eb}_j \widehat{\eb}_j') \Xb_j$

$$\Xb_j$$ is the $$n_j\times k$$ matrix of observations on $$\xb_i$$ in group $$j$$, $$\widehat{\eb}_j$$ is the $$n_j\times 1$$ vector of residuals in group $$j$$, and $$g$$ is the number of groups. See Cameron and Trivedi (2005), Wooldridge (2010), and [R] regress for derivations and discussions.

matrix opaccum computes the formula for $$\Mb_c$$. Below, I create the group variable cvar from rep78 and use matrix opaccum to compute $$\Mb_c$$ and $$\widehat{V}_{cluster}$$.

Example 9: A cluster-robust VCE

. generate cvar = cond( missing(rep78), 6, rep78)

. tab cvar

cvar |      Freq.     Percent        Cum.
------------+-----------------------------------
1 |          2        2.70        2.70
2 |          8       10.81       13.51
3 |         30       40.54       54.05
4 |         18       24.32       78.38
5 |         11       14.86       93.24
6 |          5        6.76      100.00
------------+-----------------------------------
Total |         74      100.00

. local Nc = r(r)

. sort cvar

. matrix opaccum M2     = mpg trunk , group(cvar) opvar(res)

. matrix V2          = ((`N'-1)/(`N'-3))*(`Nc'/(`Nc'-1))*xpxi*M2*xpxi


I now verify that my computations match those reported by regress.

Example 10: Comparing computations of cluster-robust VCE

. regress price mpg trunk, vce(cluster cvar)

Linear regression                               Number of obs     =         74
F(2, 5)           =       9.54
Prob > F          =     0.0196
R-squared         =     0.2222
Root MSE          =     2637.6

(Std. Err. adjusted for 6 clusters in cvar)
------------------------------------------------------------------------------
|               Robust
price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   93.28127    -2.36   0.065     -459.952    19.62226
trunk |   43.55851   58.89644     0.74   0.493    -107.8396    194.9566
_cons |   10254.95   2448.547     4.19   0.009     3960.758    16549.14
------------------------------------------------------------------------------

. matrix list e(V)

symmetric e(V)[3,3]
mpg       trunk       _cons
mpg   8701.3957
trunk   4053.5381   3468.7911
_cons     -223021  -124190.97   5995384.3

. matrix list V2

symmetric V2[3,3]
mpg       trunk       _cons
mpg   8701.3957
trunk   4053.5381   3468.7911
_cons     -223021  -124190.97   5995384.3


Done and undone

I reviewed the formulas that underlie the OLS estimator and showed how to compute them using Stata matrix commands and functions. In the next two posts, I write an ado-command that implements these formulas.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Stock, J. H., and M. W. Watson. 2010. Introduction to Econometrics. 3rd ed. Boston, MA: Addison-Wesley.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Wooldridge, J. M. 2015. Introductory Econometrics: A Modern Approach. 6th ed. Cincinnati, Ohio: South-Western.


## xtabond Cheat Sheet

Random-effects and fixed-effects panel-data models do not allow me to use observable information from previous periods in my model. They are static. Dynamic panel-data models use current and past information. For instance, I may model current health outcomes as a function of health outcomes in the past—a sensible modeling assumption—and of past observable and unobservable characteristics.

Today I will provide information that will help you interpret the estimation and postestimation results from Stata’s Arellano–Bond estimator xtabond, the most common linear dynamic panel-data estimator.

The instruments and the regressors

We have fictional data for 1,000 people from 1991 to 2000. The outcome of interest is income (income), and the explanatory variables are years of schooling (educ) and an indicator for marital status (married). Below, we fit an Arellano–Bond model using xtabond.

. xtabond income married educ, vce(robust)

Arellano-Bond dynamic panel-data estimation     Number of obs     =      8,000
Group variable: id                              Number of groups  =      1,000
Time variable: year
Obs per group:
min =          8
avg =          8
max =          8

Number of instruments =     39                  Wald chi2(3)      =    3113.63
Prob > chi2       =     0.0000
One-step results
(Std. Err. adjusted for clustering on id)
------------------------------------------------------------------------------
|               Robust
income |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
income |
L1. |   .2008311   .0036375    55.21   0.000     .1937018    .2079604
|
married |   1.057667   .1006091    10.51   0.000     .8604764    1.254857
educ |    .057551   .0045863    12.55   0.000     .0485619      .06654
_cons |   .2645702   .0805474     3.28   0.001     .1067002    .4224403
------------------------------------------------------------------------------
Instruments for differenced equation
GMM-type: L(2/.).income
Standard: D.married D.educ
Instruments for level equation
Standard: _cons


A couple of elements in the output table are different from what one would expect. The output includes a coefficient for the lagged value of the dependent variable that we did not specify in the command. Why?

In the Arellano–Bond framework, the value of the dependent variable in the previous period is a predictor for the current value of the dependent variable. Stata includes the value of the dependent variable in the previous period for us. Another noteworthy aspect that appears in the table is the mention of 39 instruments in the header. This is followed by a footnote that refers to GMM and standard-type instruments. Here a bit of math will help us understand what is going on.

The relationship of interest is given by

$\begin{equation*} y_{it} = x_{it}'\beta_1 + y_{i(t-1)}\beta_2 + \alpha_i + \varepsilon_{it} \end{equation*}$

In the equation above, $$y_{it}$$ is the outcome of interest for individual $$i$$ at time $$t$$, $$x_{it}$$ are a set of regressors that may include past values, $$y_{i(t-1)}$$ is the value of the outcome in the previous period, $$\alpha_i$$ is a time-invariant unobservable, and $$\varepsilon_{it}$$ is a time-varying unobservable.

As in the fixed-effects framework, we assume the time-invariant unobserved component is related to the regressors. When unobservables and observables are correlated, we have an endogeneity problem that yields inconsistent parameter estimates if we use a conventional linear panel-data estimator. One solution is to take first-differences of the relationship of interest. However, first-differencing alone does not solve the problem. Why?

$\begin{eqnarray*} \Delta y_{it} &=& \Delta x_{it}'\beta_1 + \Delta y_{i(t-1)}\beta_2 + \Delta \varepsilon_{it} \\ E\left( \Delta y_{i(t-1)} \Delta \varepsilon_{it} \right) &\neq & 0 \end{eqnarray*}$

In the first equation above, we got rid of $$\alpha_i$$, which is correlated with our regressors, but we created a new endogeneity problem: the second equation shows that one of our regressors, $$\Delta y_{i(t-1)}$$, is correlated with our unobservables. The solution is instrumental variables. Which instruments? Arellano and Bond suggest the second lag of the dependent variable and all the feasible lags thereafter. This generates the set of moment conditions defined by

$\begin{eqnarray*} E\left( \Delta y_{i(t-2)} \Delta \varepsilon_{it} \right) &=& 0 \\ E\left( \Delta y_{i(t-3)} \Delta \varepsilon_{it} \right) &=& 0 \\ \ldots & & \\ E\left( \Delta y_{i(t-j)} \Delta \varepsilon_{it} \right) &=& 0 \end{eqnarray*}$

In our example, we have 10 time periods, which yield the following set of instruments:

$\begin{eqnarray*} t&=10& \quad y_{t-9}, y_{t-8}, y_{t-7}, y_{t-6}, y_{t-5}, y_{t-4}, y_{t-3}, y_{t-2} \\ t&=9& \quad y_{t-8}, y_{t-7}, y_{t-6}, y_{t-5}, y_{t-4}, y_{t-3}, y_{t-2} \\ t&=8& \quad y_{t-7}, y_{t-6}, y_{t-5}, y_{t-4}, y_{t-3}, y_{t-2} \\ t&=7& \quad y_{t-6}, y_{t-5}, y_{t-4}, y_{t-3}, y_{t-2} \\ t&=6& \quad y_{t-5}, y_{t-4}, y_{t-3}, y_{t-2} \\ t&=5& \quad y_{t-4}, y_{t-3}, y_{t-2} \\ t&=4& \quad y_{t-3}, y_{t-2} \\ t&=3& \quad y_{t-2} \end{eqnarray*}$

This gives us 36 instruments which are what the table calls GMM-type instruments. GMM has been explored in the blog post Estimating parameters by maximum likelihood and method of moments using mlexp and gmm and we will talk about it in a later post. The other three instruments are given by the first difference of the regressors educ and married and the constant. This is no different from two-stage least squares, where we include the exogenous variables as part of our instrument list.
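The instrument count in the header can be verified by hand. Here is a short Python check, under the assumption that the differenced equation at period $$t$$ contributes the $$t-2$$ level instruments $$y_{i1},\ldots,y_{i(t-2)}$$:

```python
T = 10  # periods 1991-2000

# GMM-type instruments: the first usable differenced equation is at t = 3,
# and the equation at period t contributes t - 2 lagged levels.
gmm_type = sum(t - 2 for t in range(3, T + 1))

# Standard instruments: D.married, D.educ, and the constant in the level equation
standard = 2 + 1

print(gmm_type, gmm_type + standard)  # 36 39
```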

Testing for serial correlation

The key for the instrument set in Arellano–Bond to work is that

$E\left( \Delta y_{i(t-j)} \Delta \varepsilon_{it} \right) = 0 \quad j \geq 2$

We can test these conditions in Stata using estat abond. In essence, the differenced time-varying unobserved component should be unrelated to the second lag of the dependent variable and the lags thereafter. If this is not the case, we are back to the initial problem: endogeneity. Again, a bit of math will help us understand what is going on.

All is well if

$\Delta \varepsilon_{it} = \Delta \nu_{it}$

where $$\nu_{it}$$ is an i.i.d. disturbance. The differenced unobservable is then serially correlated of order 1 but not of order 2 or beyond.

But we are in trouble if

$\Delta \varepsilon_{it} = \Delta \nu_{it} + \Delta \nu_{i(t-1)}$

The second lag of the dependent variable will be related to the differenced time-varying component $$\Delta \varepsilon_{it}$$. Another way of saying this is that the differenced time-varying unobserved component is serially correlated with an order greater than 1.
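A quick Monte Carlo check (Python, with hypothetical i.i.d. $$\nu_{it}$$, treating one long series as a stand-in for the panel) illustrates both cases: in the well-behaved case the differenced error has correlation of about $$-0.5$$ at order 1 and essentially none at order 2, while in the trouble case correlation survives at order 2:

```python
import random

random.seed(12345)

def corr(a, b):
    # sample correlation of two equal-length lists
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

n = 200_000
nu = [random.gauss(0, 1) for _ in range(n)]

# Well-behaved case: eps_t = nu_t, so D.eps_t = nu_t - nu_{t-1}, an MA(1)
d_eps = [nu[t] - nu[t - 1] for t in range(1, n)]
r1 = corr(d_eps[1:], d_eps[:-1])  # order 1: about -0.5
r2 = corr(d_eps[2:], d_eps[:-2])  # order 2: about 0

# Trouble case: eps_t = nu_t + nu_{t-1}, so D.eps_t = nu_t - nu_{t-2};
# now correlation survives at order 2
d_eps2 = [nu[t] - nu[t - 2] for t in range(2, n)]
r2b = corr(d_eps2[2:], d_eps2[:-2])  # order 2: about -0.5

print(r1, r2, r2b)
```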

estat abond provides a test for the serial correlation structure. For the example above,

. estat abond

Arellano-Bond test for zero autocorrelation in first-differenced errors
+-----------------------+
|Order |  z     Prob > z|
|------+----------------|
|   1  |-22.975  0.0000 |
|   2  |-.36763  0.7132 |
+-----------------------+
H0: no autocorrelation


We reject no autocorrelation of order 1 and cannot reject no autocorrelation of order 2. There is evidence that the Arellano–Bond model assumptions are satisfied. If this were not the case, we would have to look for different instruments. Essentially, we would have to fit a different dynamic model. This is what the xtdpd command allows us to do, but it is beyond the scope of this post.

Parting words

Dynamic panel-data models provide a useful research framework. In this post, I touched on the interpretation of a couple of results from estimation and postestimation from xtabond that will help you understand your output.


## Programming an estimation command in Stata: A first ado-command

I discuss the code for a simple estimation command to focus on the details of how to implement an estimation command. The command that I discuss estimates the mean by the sample average. I begin by reviewing the formulas and a do-file that implements them. I subsequently introduce ado-file programming and discuss two versions of the command. Along the way, I illustrate some of the postestimation features that work after the command.

The formulas for our estimator

The formulas for the sample average and its estimated sampling variance, assuming an independently and identically distributed process, are

$\widehat{\mu} = 1/N \sum_{i=1}^N y_i$

$\widehat{Var}(\widehat{\mu}) = 1/[N(N-1)] \sum_{i=1}^N (y_i-\widehat{\mu})^2$
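These two formulas are simple enough to check on a small hypothetical sample in Python before implementing them in Stata:

```python
# muhat = (1/N) * sum(y);  Vhat = (1/(N*(N-1))) * sum((y_i - muhat)^2)
y = [4.0, 6.0, 5.0, 9.0, 6.0]  # hypothetical sample
N = len(y)

muhat = sum(y) / N
Vhat = sum((yi - muhat) ** 2 for yi in y) / (N * (N - 1))

print(muhat, Vhat)
```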

The code mean1.do performs these computations on price from the auto dataset.

Code block 1: mean1.do

// version 1.0.0 20Oct2015
version 14
sysuse auto
quietly summarize price
local sum          = r(sum)
local N            = r(N)
local mu           = (1/`N')*`sum'
generate double e2 = (price - `mu')^2
quietly summarize e2
local V            = (1/((`N')*(`N'-1)))*r(sum)
display "muhat = " `mu'
display "   V  = " `V'


mean1.do uses summarize to compute the summations. Lines 5, 6, and 10 store results left in r() by summarize into local macros that are subsequently used to compute the formulas. I recommend that you use double, instead of the default float, for all variables used in your formulas, because it is almost always worth taking up the extra memory to gain the extra precision offered by double over float. (Essentially, each variable takes up twice as much space, but you get calculations that are correct to about $$10^{-16}$$ instead of $$10^{-8}$$.)
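To see why double matters, the sketch below uses Python's struct module to emulate a 4-byte IEEE float, which corresponds to Stata's default float storage type, and measures the rounding error it introduces:

```python
import struct

def as_float(x):
    # Round a Python double (8 bytes) to IEEE single precision (4 bytes),
    # which is what storing a value in Stata's float type does.
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1 / 3
print(abs(as_float(x) - x))  # about 1e-8: single precision keeps ~7 digits
# A double carries ~16 digits, so quantities such as (price - mu)^2
# accumulate far less rounding error when e2 is generated as double.
```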

These calculations yield

Example 1: Computing the average and its sampling variance

. do mean1

. // version 1.0.0 20Oct2015
. version 14

. sysuse auto
(1978 Automobile Data)

. quietly summarize price

. local sum          = r(sum)

. local N            = r(N)

. local mu           = (1/`N')*`sum'

. generate double e2 = (price - `mu')^2

. quietly summarize e2

. local V            = (1/((`N')*(`N'-1)))*r(sum)

. display "muhat = " `mu'
muhat = 6165.2568

. display "   V  = " `V'
   V  = 117561.16

.
end of do-file


Now I verify that mean produces the same results.

Example 2: Results from mean

. mean price

Mean estimation                   Number of obs   =         74

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
price |   6165.257   342.8719      5481.914      6848.6
--------------------------------------------------------------

. matrix list e(b)

symmetric e(b)[1,1]
price
y1  6165.2568

. matrix list e(V)

symmetric e(V)[1,1]
price
price  117561.16


The code in mymean1.ado performs the same calculations as mean1.do. (The file mymean1.ado is in my current working directory.)

Code block 2: mymean1.ado

*! version 1.0.0 20Oct2015
program define mymean1
version 14

quietly summarize price
local sum          = r(sum)
local N            = r(N)
local mu           = (1/`N')*`sum'
capture drop e2				// Drop e2 if it exists
generate double e2 = (price - `mu')^2
quietly summarize e2
local V            = (1/((`N')*(`N'-1)))*r(sum)
display "muhat = " `mu'
display "   V  = " `V'
end


Line 2 of mymean1.ado uses program define to specify that the file defines the command mymean1. The command name must be the same as the name of the file that precedes the .ado suffix. The mymean1 command performs the same computations as the do-file mean1.do.

Example 3: Results from mymean1

. mymean1
muhat = 6165.2568
V  = 117561.16


A slightly better command

We want our command to be reusable; we want it to estimate the mean for any variable in memory, instead of only for price as performed by mymean1.ado. On line 5 of mymean2.ado, we use the syntax command to store the name of the variable specified by the user into the local macro varlist, which we use in the remainder of the computations.

Code block 3: mymean2.ado

*! version 2.0.0 20Oct2015
program define mymean2
version 14

syntax varlist
display "The local macro varlist contains `varlist'"

quietly summarize `varlist'
local sum          = r(sum)
local N            = r(N)
local mu           = (1/`N')*`sum'
capture drop e2				// Drop e2 if it exists
generate double e2 = (`varlist' - `mu')^2
quietly summarize e2
local V            = (1/((`N')*(`N'-1)))*r(sum)
display  "The average of `varlist' is " `mu'
display  "The estimated variance of the average is " `V'
end


The extremely powerful syntax command puts the elements of Stata syntax specified by the user into local macros and throws errors when the user makes a mistake. I will discuss syntax in greater detail in subsequent posts.

I begin by illustrating how to replicate the previous results.

Example 4: Results from mymean2 price

. mymean2 price
The local macro varlist contains price
The average of price is 6165.2568
The estimated variance of the average is 117561.16


I now illustrate that it works for another variable.

Example 5: Results from mymean2 trunk

. mymean2 trunk
The local macro varlist contains trunk
The average of trunk is 13.756757
The estimated variance of the average is .24724576

. mean trunk

Mean estimation                   Number of obs   =         74

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
trunk |   13.75676   .4972381      12.76576    14.74775
--------------------------------------------------------------

. display "The variance of the estimator is " (_se[trunk])^2
The variance of the estimator is .24724576


Storing results in e()

mymean2.ado does not save the results that it displays. We fix this problem in mymean3.ado. Line 2 specifies the eclass option on program define to make mymean3 an e-class command. Line 18 uses ereturn post to move the matrix of point estimates (b) and the estimated variance-covariance of the estimator (VCE) into e(b) and e(V). The estimation-postestimation framework uses parameter names for display, hypothesis tests, and other features. In lines 15 and 16, we put those names into the column stripes of the vector of estimates and the estimated VCE. In line 17, we put those names into the row stripe of the estimated VCE.

Code block 4: mymean3.ado

*! version 3.0.0 20Oct2015
program define mymean3, eclass
version 14

syntax varlist

quietly summarize `varlist'
local sum          = r(sum)
local N            = r(N)
matrix b           = (1/`N')*`sum'
capture drop e2				// Drop e2 if it exists
generate double e2 = (`varlist' - b[1,1])^2
quietly summarize e2
matrix V           = (1/((`N')*(`N'-1)))*r(sum)
matrix colnames b  = `varlist'
matrix colnames V  = `varlist'
matrix rownames V  = `varlist'
ereturn post b V
ereturn display
end


The ereturn display command on line 19 of mymean3.ado easily creates a standard output table using the results now stored in e(b) and e(V).

Example 6: Results from mymean3 trunk

. mymean3 trunk
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
trunk |   13.75676   .4972381    27.67   0.000     12.78219    14.73133
------------------------------------------------------------------------------


Estimation-postestimation

test, lincom, testnl, nlcom, and other Wald-based estimation-postestimation features work after mymean3 because all the required information is stored in e(b) and e(V).

To illustrate, I perform a Wald test of the null hypothesis that the mean of trunk is $$11$$.

Example 7: test works after mymean3

. test _b[trunk]==11

( 1)  trunk = 11

chi2(  1) =   30.74
Prob > chi2 =    0.0000
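The statistic is easy to reproduce by hand from the reported point estimate and standard error, because the Wald statistic for a single linear restriction is the squared $$z$$ statistic:

```python
b = 13.75676    # point estimate for trunk, from e(b)
se = 0.4972381  # standard error, the square root of e(V)

# Wald statistic for H0: mean of trunk = 11, distributed chi2(1) under H0
chi2 = ((b - 11) / se) ** 2
print(chi2)  # approximately 30.74, matching the test output
```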


The results stored in e() are the glue that holds the estimation-postestimation framework together. We have only stored e(b) and e(V) so far, so not all the standard features are working yet. (But we will get there in the #StataProgramming series.)

Using temporary names for global objects

Stata variables and matrices are global, as discussed in my previous blog post. We need some safe names for global objects. These safe names should not be in use elsewhere, and they should be temporary in that we want Stata to drop the corresponding objects when the command finishes. The tempvar and tempname commands put safe names into local macros and then drop the corresponding objects when the ado-file or do-file finishes. We explicitly dropped e2, if it existed, in line 9 of code block 2, in line 12 of code block 3, and in line 11 of code block 4. We do not need such a line in code block 5, because we are using temporary variable names.

In line 7 of mymean4.ado, the tempvar command puts a safe name into the local macro e2. In line 8 of mymean4.ado, the tempname command puts safe names into the local macros b and V. I illustrate the format followed by these safe names by displaying them on lines 9–11. The output reveals that a leading pair of underscores is followed by numbers and capital letters. Line 15 illustrates the use of these safe names. Instead of creating the matrix b, we create the matrix whose name is stored in the local macro b. In line 8, the tempname command created the local macro b to hold a safe name.

Code block 5: mymean4.ado

*! version 4.0.0 20Oct2015
program define mymean4, eclass
version 14

syntax varlist

tempvar e2
tempname b V
display "The safe name in e2 is `e2'"
display "The safe name in b is `b'"
display "The safe name in V is `V'"
quietly summarize `varlist'
local sum            = r(sum)
local N              = r(N)
matrix `b'           = (1/`N')*`sum'
generate double `e2' = (`varlist' - `b'[1,1])^2
quietly summarize `e2'
matrix `V'           = (1/((`N')*(`N'-1)))*r(sum)
matrix colnames `b'  = `varlist'
matrix colnames `V'  = `varlist'
matrix rownames `V'  = `varlist'
ereturn post `b' `V'
ereturn display
end


This code produces the output

Example 8: Results from mymean4 trunk

. mymean4 trunk
The safe name in e2 is __000000
The safe name in b is __000001
The safe name in V is __000002
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
trunk |   13.75676   .4972381    27.67   0.000     12.78219    14.73133
------------------------------------------------------------------------------


Removing the lines that display the safe names contained in the local macros yields mymean5.ado.

Code block 6: mymean5.ado

*! version 5.0.0 20Oct2015
program define mymean5, eclass
version 14

syntax varlist

tempvar e2
tempname b V
quietly summarize `varlist'
local sum            = r(sum)
local N              = r(N)
matrix `b'           = (1/`N')*`sum'
generate double `e2' = (`varlist' - `b'[1,1])^2
quietly summarize `e2'
matrix `V'           = (1/((`N')*(`N'-1)))*r(sum)
matrix colnames `b'  = `varlist'
matrix colnames `V'  = `varlist'
matrix rownames `V'  = `varlist'
ereturn post `b' `V'
ereturn display
end


This code produces the output

Example 9: Results from mymean5 trunk

. mymean5 trunk
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
trunk |   13.75676   .4972381    27.67   0.000     12.78219    14.73133
------------------------------------------------------------------------------


Done and undone

I illustrated some basic ado-file programming techniques by implementing a command that estimates the mean of a variable. Even though we have a command that produces correct, easy-to-read output and supports some estimation-postestimation features, we have only scratched the surface of what we usually want an estimation command to do. I dig a little deeper in the next few posts by developing a command that performs ordinary least-squares estimation.


## Using mlexp to estimate endogenous treatment effects in a probit model

I use features new to Stata 14.1 to estimate an average treatment effect (ATE) for a probit model with an endogenous treatment. In 14.1, we added new prediction statistics after mlexp that margins can use to estimate an ATE.

I am building on a previous post in which I demonstrated how to use mlexp to estimate the parameters of a probit model with sample selection. Our results match those obtained with biprobit; see [R] biprobit for more details. In a future post, I use these techniques to estimate treatment-effect parameters not yet available from another Stata command.

Probit model with treatment

In this section, I describe the potential-outcome framework used to define an ATE. For each treatment level, there is an outcome that we would observe if a person were to select that treatment level. When the outcome is binary and there are two treatment levels, we can specify how the potential outcomes $$y_{0i}$$ and $$y_{1i}$$ are generated from the regressors $${\bf x}_i$$ and the error terms $$\epsilon_{0i}$$ and $$\epsilon_{1i}$$:

$\begin{eqnarray*} y_{0i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_0 + \epsilon_{0i} > 0) \cr y_{1i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_1 + \epsilon_{1i} > 0) \end{eqnarray*}$

(Assuming that each error is standard normal, this gives us a bivariate probit model.) The indicator function $${\bf 1}(\cdot)$$ outputs 1 when its input is true and 0 otherwise.

The probit model for potential outcomes $$y_{0i}$$ and $$y_{1i}$$ with treatment $$t_i$$ assumes that we observe the outcome

$y_i = (1-t_i) y_{0i} + t_i y_{1i} \nonumber$

So we observe $$y_{1i}$$ under the treatment ($$t_{i}=1$$) and $$y_{0i}$$ when the treatment is withheld ($$t_{i}=0$$).

The treatment $$t_i$$ is determined by regressors $${\bf z}_i$$ and standard normal error $$u_i$$:

$t_i = {\bf 1}({\bf z}_i{\boldsymbol \psi} + u_i > 0) \nonumber$

Probit model with endogenous treatment

We could estimate the parameters $${\boldsymbol \beta}_0$$ and $${\boldsymbol \beta}_1$$ using a probit regression on $$y_i$$ if $$t_i$$ were not related to the unobserved errors $$\epsilon_{0i}$$ and $$\epsilon_{1i}$$. This is not always the case. Suppose we modeled whether parents send their children to private school and used private tutoring for the child as a treatment. Unobserved factors that influence private school enrollment may be correlated with the unobserved factors that influence whether private tutoring is given. The treatment would then be correlated with the unobserved errors of the outcome.

We can treat $$t_i$$ as endogenous by allowing $$\epsilon_{0i}$$ and $$\epsilon_{1i}$$ to be correlated with $$u_i$$. In this post, we will assume that these correlations are the same. Formally, $$\epsilon_{0i}$$, $$\epsilon_{1i}$$, and $$u_i$$ are trivariate normal with covariance:

$\left[\begin{matrix} 1 & \rho_{01} & \rho_{t} \cr \rho_{01} & 1 & \rho_{t} \cr \rho_{t} & \rho_{t} & 1 \end{matrix}\right] \nonumber$

The correlation $$\rho_{01}$$ cannot be identified because we never observe both $$y_{0i}$$ and $$y_{1i}$$. However, identification of $$\rho_{01}$$ is not necessary to estimate the other parameters, because we will observe the covariates and outcome in observations from each treatment group.

The log-likelihood for observation $$i$$ is

$\begin{eqnarray*} \ln L_i = & & {\bf 1}(y_i =1 \mbox{ and } t_i = 1) \ln \Phi_2({\bf x}_i{\boldsymbol \beta}_1, {\bf z}_i{\boldsymbol \psi},\rho_t) + \cr & & {\bf 1}(y_i=0 \mbox{ and } t_i=1)\ln \Phi_2(-{\bf x}_i{\boldsymbol \beta}_1, {\bf z}_i{\boldsymbol \psi},-\rho_t) + \cr & & {\bf 1}(y_i=1 \mbox{ and } t_i=0) \ln \Phi_2({\bf x}_i{\boldsymbol \beta}_0, -{\bf z}_i{\boldsymbol \psi},-\rho_t) + \cr & & {\bf 1}(y_i=0 \mbox{ and } t_i = 0)\ln \Phi_2(-{\bf x}_i{\boldsymbol \beta}_0, -{\bf z}_i{\boldsymbol \psi},\rho_t) \end{eqnarray*}$

where $$\Phi_2$$ is the bivariate normal cumulative distribution function.

This model is a variation of the bivariate probit model. For a good introduction to the bivariate probit model, see Pindyck and Rubinfeld (1998).

The data

We will simulate data from a probit model with an endogenous treatment and then estimate the parameters of the model using mlexp. Then, we will use margins to estimate the ATE. We simulate a random sample of 10,000 observations.

. set seed 3211

. set obs 10000
number of observations (_N) was 0, now 10,000

. gen x = rnormal() + 4

. gen b = rpoisson(1)

. gen z = rnormal()


First, we generate the regressors. The variable $$x$$ has a normal distribution with a mean of 4 and variance of 1. It is used as a regressor for the outcome and treatment. The variable $$b$$ has a Poisson distribution with a mean of 1 and will be used as a treatment regressor. A standard normal variable $$z$$ is also used as a treatment regressor.

. matrix cm = (1, .3,.7 \ .3, 1, .7 \ .7, .7, 1)

. drawnorm ey0 ey1 et, corr(cm)

. gen t = .5*x - .1*b + .4*z - 2.4 + et > 0

. gen y0 = .6*x - .8 + ey0 > 0

. gen y1 = .3*x - 1.2 + ey1 > 0

. gen y = (1-t)*y0 + t*y1


Next, we draw the unobserved errors. The potential outcome and treatment errors will have correlation $$.7$$. We generate the errors using the drawnorm command. Finally, the outcome and treatment indicators are created.

Estimating the Model Parameters

Now, we will use mlexp to estimate the parameters of the probit model with an endogenous treatment. As in the previous post, we use the cond() function to calculate different values of the likelihood based on the different values of $$y$$ and $$t$$. We use the factor variable operator ibn on $$t$$ in equation y to allow for a different intercept at each level of $$t$$. An interaction between $$t$$ and $$x$$ is also specified in equation y. This allows for a different coefficient on $$x$$ at each level of $$t$$. We also specify vce(robust) so that we can use vce(unconditional) when we use margins later.

. mlexp (ln(cond(t,cond(y,binormal({y: i.t#c.x ibn.t},            ///
>                                  {t: x b z _cons}, {rho}),      ///
>                         binormal(-{y:},{t:}, -{rho})),          ///
>                  cond(y,binormal({y:},-{t:},-{rho}),            ///
>                         binormal(-{y:},-{t:},{rho})))))         ///
>         , vce(robust)

initial:       log pseudolikelihood = -13862.944
alternative:   log pseudolikelihood = -15511.071
rescale:       log pseudolikelihood = -13818.369
rescale eq:    log pseudolikelihood = -10510.488
Iteration 0:   log pseudolikelihood = -10510.488  (not concave)
Iteration 1:   log pseudolikelihood = -10004.946
Iteration 2:   log pseudolikelihood = -9487.4032
Iteration 3:   log pseudolikelihood = -9286.0118
Iteration 4:   log pseudolikelihood =  -9183.901
Iteration 5:   log pseudolikelihood = -9181.9207
Iteration 6:   log pseudolikelihood = -9172.0256
Iteration 7:   log pseudolikelihood = -9170.8198
Iteration 8:   log pseudolikelihood = -9170.7994
Iteration 9:   log pseudolikelihood = -9170.7994

Maximum likelihood estimation

Log pseudolikelihood = -9170.7994               Number of obs     =     10,000

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
y            |
t#c.x |
0  |   .5829362   .0223326    26.10   0.000     .5391651    .6267073
1  |   .2745585   .0259477    10.58   0.000     .2237021     .325415
|
t |
0  |  -.7423227   .0788659    -9.41   0.000     -.896897   -.5877483
1  |  -1.088765   .1488922    -7.31   0.000    -1.380589   -.7969419
-------------+----------------------------------------------------------------
t            |
x |   .4900691   .0148391    33.03   0.000     .4609851    .5191532
b |  -.1086717   .0132481    -8.20   0.000    -.1346375   -.0827059
z |   .4135792   .0150112    27.55   0.000     .3841579    .4430006
_cons |  -2.354418   .0640056   -36.78   0.000    -2.479867   -2.228969
-------------+----------------------------------------------------------------
/rho |   .7146737   .0377255    18.94   0.000     .6407331    .7886143
------------------------------------------------------------------------------


Our parameter estimates are close to their true values.

Estimating the ATE

The ATE of $$t$$ is the expected value of the difference between $$y_{1i}$$ and $$y_{0i}$$, the average difference between the potential outcomes. Using the law of iterated expectations, we have

$\begin{eqnarray*} E(y_{1i}-y_{0i}) &=& E\{E(y_{1i}-y_{0i}|{\bf x}_i)\} \cr &=& E\{\Phi({\bf x}_i{\boldsymbol \beta}_1)- \Phi({\bf x}_i{\boldsymbol \beta}_0)\} \end{eqnarray*}$

This can be estimated as a predictive margin.
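As a sanity check on this formula, the short Python simulation below (standard library only; $$\Phi$$ computed from the error function) redraws $$x$$ from its $$N(4,1)$$ distribution and averages $$\Phi({\bf x}_i{\boldsymbol \beta}_1)-\Phi({\bf x}_i{\boldsymbol \beta}_0)$$ at the true coefficient values used to simulate the data:

```python
import math
import random

random.seed(3211)

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# true coefficients from the data-generating process above
b1, a1 = 0.3, -1.2  # y1 equation: 0.3*x - 1.2
b0, a0 = 0.6, -0.8  # y0 equation: 0.6*x - 0.8

n = 100_000
xs = [random.gauss(4, 1) for _ in range(n)]
ate = sum(Phi(b1 * x + a1) - Phi(b0 * x + a0) for x in xs) / n
print(ate)  # about -.41, close to the estimate from margins
```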

Now, we estimate the ATE using margins. We specify the normal probability expression in the expression() option. The xb() term refers to the linear prediction of the first equation, which we can now predict in Stata 14.1. We specify r.t so that margins will take the difference of the expression under $$t=1$$ and $$t=0$$. We specify vce(unconditional) to obtain standard errors for the population ATE rather than the sample ATE. The contrast(nowald) option is specified to omit the Wald test for the difference.

. margins r.t, expression(normal(xb())) vce(unconditional) contrast(nowald)

Contrasts of predictive margins

Expression   : normal(xb())

--------------------------------------------------------------
|            Unconditional
|   Contrast   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
t |
(1 vs 0)  |  -.4112345   .0248909     -.4600197   -.3624493
--------------------------------------------------------------


We estimate that the ATE of $$t$$ on $$y$$ is $$-.41$$. So taking the treatment decreases the probability of a positive outcome by $$.41$$ on average over the population.

We will compare this estimate to the sample difference of $$y_{1}$$ and $$y_{0}$$.

. gen diff = y1 - y0

. sum diff

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
diff |     10,000      -.4132    .5303715         -1          1


In our sample, the average difference of $$y_{1}$$ and $$y_{0}$$ is also $$-.41$$.

Conclusion

I have demonstrated how to estimate the parameters of a model with a complex likelihood function: the probit model with an endogenous treatment using mlexp. See [R] mlexp for more details about mlexp. I have also demonstrated how to use margins to estimate the ATE for the probit model with an endogenous treatment. See [R] margins for more details about margins.



## Programming an estimation command in Stata: Global macros versus local macros

I discuss a pair of examples that illustrate the differences between global macros and local macros. You can view this post as a technical appendix to the previous post in the #StataProgramming series, which introduced global macros and local macros.

In every command I write, I use local macros to store stuff in a workspace that will not alter a user’s data and to make my code easier to read. A good understanding of the differences between global macros and local macros helps me to write better code. The essential differences between global macros and local macros can be summarized in two points.

1. There is only one global macro with a specific name in Stata, and its contents can be accessed or changed by a Stata command executed at any Stata level.
2. In contrast, each Stata level can have a local macro of a specific name, and each one’s contents cannot be accessed or changed by commands executed at other Stata levels.
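For readers who think in general-purpose languages, points 1 and 2 are loosely analogous to module-level versus function-local variables in Python. This is only an analogy, not Stata code: each function plays the role of a Stata level, and its local variable is invisible to the others.

```python
# Loose analogy: a Stata global macro is like a module-level variable
# visible at every level; a local macro is like a function-local
# variable that other levels cannot see or change.
vlist = "var1 var2 var3"        # "global": accessible anywhere

def locala():
    mylist = "a b c"            # local to locala, like a local macro
    localb()                    # localb's own mylist cannot touch this one
    return mylist               # still "a b c" after localb runs

def localb():
    mylist = "x y z"            # a different, independent local
    return mylist

print(vlist)     # accessible from any level
print(locala())  # a b c -- unchanged by localb
```

The parallel with the do-files below is deliberate: locala and localb each have their own mylist, just as locala.do and localb.do do.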

If you are already comfortable with 1 and 2, skip the remainder of this post.

Global macros are global

The do-files globala.do and globalb.do in code blocks globala and globalb illustrate what it means to be global.

Code block 1: globala.do

*-------------------------------Begin globala.do ---------------
*! globala.do
*  In this do-file we define the global macro vlist, but we
*  do not use it
global vlist var1 var2 var3

do globalb
*-------------------------------End globala.do ---------------


Code block 2: globalb.do

*-------------------------------Begin globalb.do ---------------
*! globalb.do
*  In this do-file, we use the global macro vlist, defined in globala.do

display "The global macro vlist contains $vlist"
*-------------------------------End globalb.do ---------------


The easiest way to see what this code does is to execute it; the output is in example 1.

Example 1: Output from do globala

. do globala

. *-------------------------------Begin globala.do ---------------
. *! globala.do
. *  In this do-file we define the global macro vlist, but we
. *  do not use it
. global vlist var1 var2 var3
.
. do globalb

. *-------------------------------Begin globalb.do ---------------
. *! globalb.do
. *  In this do-file, we use the global macro vlist, defined in globala.do
.
. display "The global macro vlist contains $vlist"
The global macro vlist contains var1 var2 var3

. *-------------------------------End globalb.do ---------------
.
end of do-file

. *-------------------------------End globala.do ---------------
.
end of do-file


Line 5 of globalb.do can access the contents of vlist created on line 5 of globala.do because vlist is a global macro.

Figure 1 makes this same point graphically: the global macro vlist is in global memory, and a command executed anywhere can access or change the contents of vlist.

Figure 1: A global macro in global memory

Local macros are local

The do-files locala.do and localb.do in code blocks 3 and 4 illustrate what it means to be local.

Code block 3: locala.do

*-------------------------------Begin locala.do ---------------
*! locala.do
local mylist "a b c"
display "mylist contains `mylist'"

do localb

display "mylist contains `mylist'"
*-------------------------------End locala.do ---------------


Code block 4: localb.do

*-------------------------------Begin localb.do ---------------
*! localb.do
local mylist "x y z"
display "mylist contains `mylist'"
*-------------------------------End localb.do ---------------


The easiest way to see what this code does is to execute it; the output is in example 2.

Example 2: Output from do locala

. do locala

. *-------------------------------Begin locala.do ---------------
. *! locala.do
. local mylist "a b c"

. display "mylist contains `mylist'"
mylist contains a b c

.
. do localb

. *-------------------------------Begin localb.do ---------------
. *! localb.do
. local mylist "x y z"

. display "mylist contains `mylist'"
mylist contains x y z

. *-------------------------------End localb.do ---------------
.
end of do-file

.
. display "mylist contains `mylist'"
mylist contains a b c

. *-------------------------------End locala.do ---------------
.
end of do-file


The code in blocks 3 and 4 and the output in example 2 illustrate that a command executed at the level of localb.do cannot change the local macro mylist that is local to locala.do. Line 8 of locala.do displays the contents of the mylist local to locala.do. The contents are still a b c after localb.do finishes because the local macro mylist created on line 3 of locala.do is local to locala.do and is unaffected by the mylist created on line 3 of localb.do.

Figure 2 makes this point graphically. The contents of the local macro mylist that is local to locala.do can be accessed and changed by commands run in locala.do, but not by commands run in localb.do. Analogously, the contents of the local macro mylist that is local to localb.do can be accessed and changed by commands run in localb.do, but not by commands run in locala.do.

Figure 2: Local macros are local to do-files

Done and Undone

I essentially provided you with a technical appendix to the previous #StataProgramming post. I illustrated that global macros are global and that local macros are local. In the next post, I will use the concepts developed thus far to present an ado-command.
