## What’s new from Stata Press

Reflecting on the year, Stata has a lot to be thankful for—we released Stata 14, celebrated 30 years of Stata, and had the pleasure of meeting and working with many great people, including our Stata Press authors.

Are you interested in writing a book about Stata or just a book on statistics? We’d love to work with you too. Stata Press offers books with clear, step-by-step examples that make learning and teaching easier. Read more about our submission guidelines, or contact us to get started.

If you’re searching for a good book to read during the holidays, check out our full list of books or our most recent ones below. If you’d like to be notified when new books are released, sign up for Stata Press email alerts.

I hope you all have a great New Year!

### Stata for the Behavioral Sciences

Michael N. Mitchell’s Stata for the Behavioral Sciences is an ideal reference for Read more…


## Programming an estimation command in Stata: Mata 101

I introduce Mata, the matrix programming language that is part of Stata.

This is the eleventh post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

### Meeting Mata

Mata is a matrix programming language that is part of Stata. Mata code is fast because it is compiled to object code that runs on a virtual machine; type help m1_how for details.

The easiest way to learn Mata is to use it. I begin with an interactive session. (You might find it useful to type along.)

Example 1: A first interactive Mata session

```stata
. mata:
------------------------------------------------- mata (type end to exit) ------
: X = J(3, 4, 5)

: X
       1   2   3   4
    +-----------------+
  1 |  5   5   5   5  |
  2 |  5   5   5   5  |
  3 |  5   5   5   5  |
    +-----------------+

: w = (1::4)

: w
       1
    +-----+
  1 |  1  |
  2 |  2  |
  3 |  3  |
  4 |  4  |
    +-----+

: v = X*w

: v
        1
    +------+
  1 |  50  |
  2 |  50  |
  3 |  50  |
    +------+

: v'
        1    2    3
    +----------------+
  1 |  50   50   50  |
    +----------------+

: end
--------------------------------------------------------------------------------
```


Typing mata: causes Stata to drop down to a Mata session. Typing end ends the Mata session, thereby popping back up to Stata. The dot prompt . is Stata asking for something to do. After you type mata:, the colon prompt : is the Mata compiler asking for something to do.

Typing X = J(3, 4, 5) at the colon prompt causes Mata to compile and execute this code. J(r, c, v) is the Mata function that creates an r$$\times$$c matrix, each of whose elements is v. The expression on the right-hand side of the assignment operator = is assigned to the symbol on the left-hand side.

Typing X by itself causes Mata to display what X contains, which is a 3$$\times$$4 matrix of 5s. Unassigned expressions display their results. Type help m2_exp for details about expressions.

Typing w = (1::4) causes Mata to use the column range operator to create the 4$$\times$$1 column vector that was assigned to w and displayed when I typed w by itself. Type help m2_op_range for details and a discussion of the row range operator.
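The row range operator works analogously, creating row vectors instead of column vectors. A minimal interactive sketch:

```stata
mata:
r = (1..4)      // row range operator: the 1 x 4 row vector (1, 2, 3, 4)
r               // unassigned expression, so Mata displays r
end
```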

Typing v = X*w causes Mata to assign the matrix product of X times w to v, which I subsequently display. I then illustrate that ' is the transpose operator. Type help m2_exp, marker(remarks7) for a list of operators.

Again, typing end ends the Mata session.

In almost all the work I do, I extract submatrices from a matrix.

Example 2: Extracting submatrices from a matrix

```stata
. mata:
------------------------------------------------- mata (type end to exit) ------
: rseed(1234)

: W = runiform(4,4)

: W
                 1             2             3             4
    +---------------------------------------------------------+
  1 |  .9472316166   .0522233748   .9743182755   .9457483679  |
  2 |  .1856478315   .9487333737   .8825376215   .9440776079  |
  3 |  .0894258515   .7505444902   .9484983174   .1121626508  |
  4 |  .4809064012   .9763447517   .1254975307   .7655025515  |
    +---------------------------------------------------------+

: v = (2, 4)

: u = (1\ 3)

: v
       1   2
    +---------+
  1 |  2   4  |
    +---------+

: u
       1
    +-----+
  1 |  1  |
  2 |  3  |
    +-----+

: W[u, v]
                 1             2
    +-----------------------------+
  1 |  .0522233748   .9457483679  |
  2 |  .7505444902   .1121626508  |
    +-----------------------------+

: W[| 1,1 \ 3,3 |]
                 1             2             3
    +-------------------------------------------+
  1 |  .9472316166   .0522233748   .9743182755  |
  2 |  .1856478315   .9487333737   .8825376215  |
  3 |  .0894258515   .7505444902   .9484983174  |
    +-------------------------------------------+

: end
--------------------------------------------------------------------------------
```


I use rseed() to set the seed for the random-number generator and then use runiform(r,c) to create a 4$$\times$$4 matrix of uniform deviates, which I subsequently display.

Next, I use the row-join operator , to create the row vector v and I use the column-join operator \ to create the column vector u. Type help m2_op_join for details.

Typing W[u,v] extracts from W the rows specified in the vector u and the columns specified in the vector v.

I frequently extract rectangular blocks defined by a top-left element and a bottom-right element. I illustrate this syntax by typing

```stata
W[| 1,1 \ 3,3 |]
```

In detail, [| opens a range-subscript extraction, 1,1 is the address of the top-left element, \ separates the top-left element from the bottom-right element, 3,3 is the address of the bottom-right element, and |] closes a range-subscript extraction. Type help m2_subscripts for details.

Ironically, when I am doing matrix programming, I frequently want the element-by-element operator instead of the matrix operator. Preface any matrix operator in Mata with a colon (:) to obtain the element-by-element equivalent.

Example 3: Element-wise operators

```stata
. mata:
------------------------------------------------- mata (type end to exit) ------
: W = W[| 2,1 \ 4,4 |]

: W
                 1             2             3             4
    +---------------------------------------------------------+
  1 |  .1856478315   .9487333737   .8825376215   .9440776079  |
  2 |  .0894258515   .7505444902   .9484983174   .1121626508  |
  3 |  .4809064012   .9763447517   .1254975307   .7655025515  |
    +---------------------------------------------------------+

: v = .1*(4::6)

: v
        1
    +------+
  1 |  .4  |
  2 |  .5  |
  3 |  .6  |
    +------+

: v:*W
                 1             2             3             4
    +---------------------------------------------------------+
  1 |  .0742591326   .3794933495   .3530150486   .3776310432  |
  2 |  .0447129257   .3752722451   .4742491587   .0560813254  |
  3 |  .2885438407    .585806851   .0752985184   .4593015309  |
    +---------------------------------------------------------+

: v'*W
                 1             2             3             4
    +---------------------------------------------------------+
  1 |  .4075158991   1.340572446   .9025627257   .8930138994  |
    +---------------------------------------------------------+

: end
--------------------------------------------------------------------------------
```


I extract the bottom three rows of W, store this matrix in W, and display this new W. I then create a conformable column vector v, perform element-wise multiplication of v across the columns of W, and display the result. I cannot type v*W because the 3$$\times$$1 v is not conformable with the 3$$\times$$4 W. But I can, and do, type v'*W because the 1$$\times$$3 v' is conformable with the 3$$\times$$4 W.

Example 4 uses an element-wise logical operator.

Example 4: Element-wise logical operator

```stata
. mata:
------------------------------------------------- mata (type end to exit) ------
: W :< v
       1   2   3   4
    +-----------------+
  1 |  1   0   0   0  |
  2 |  1   0   0   1  |
  3 |  1   0   1   0  |
    +-----------------+

: end
--------------------------------------------------------------------------------
```


I display the result of comparing the element-wise conformable v with W. Type help m2_op_colon for details.

### Stata data in Mata

The Mata function st_data() creates a Mata matrix containing a copy of the data from the Stata dataset in memory. The Mata function st_view() creates a Mata view of the data in the Stata dataset in memory. Views act like matrices, but there is a speed-space tradeoff. Copies are fast at the cost of using twice as much memory. Views are slower, but they use little extra memory.

Copying the data from Stata into Mata doubles the memory used, but the values are stored in Mata memory. Every time a Mata function asks for a value from a matrix, it finds it immediately. In contrast, a view of the data in Stata barely increases the memory used, but the values are in Stata memory. Every time a Mata function asks for a value from a view, it finds a sign telling it where in Stata to get the value.

Example 5: Data from Stata into Mata

```stata
. sysuse auto
(1978 Automobile Data)

. list mpg headroom trunk rep78 turn foreign in 1/3 , nolabel

     +-------------------------------------------------+
     | mpg   headroom   trunk   rep78   turn   foreign |
     |-------------------------------------------------|
  1. |  22        2.5      11       3     40         0 |
  2. |  17        3.0      11       3     40         0 |
  3. |  22        3.0      12       .     35         0 |
     +-------------------------------------------------+

. mata:
------------------------------------------------- mata (type end to exit) ------
: Y = st_data(., "mpg headroom trunk")

: st_view(X=., ., "rep78 turn foreign")

: V = Y,X

: V[| 1,1 \ 3,6 |]
        1     2     3     4     5     6
    +-------------------------------------+
  1 |   22   2.5    11     3    40     0  |
  2 |   17     3    11     3    40     0  |
  3 |   22     3    12     .    35     0  |
    +-------------------------------------+

: X[3,1] = 7

: X[| 1,1 \ 3,3 |]
        1    2    3
    +----------------+
  1 |   3   40    0  |
  2 |   3   40    0  |
  3 |   7   35    0  |
    +----------------+

: end
--------------------------------------------------------------------------------

. list rep78 turn foreign in 1/3 , nolabel

     +------------------------+
     | rep78   turn   foreign |
     |------------------------|
  1. |     3     40         0 |
  2. |     3     40         0 |
  3. |     7     35         0 |
     +------------------------+
```


After I list out the first three observations on six variables in the auto dataset, I drop down to Mata, use st_data() to put a copy of all the observations on mpg, headroom, and trunk into the Mata matrix Y, and use st_view() to create the Mata view X on to all the observations on rep78, turn, and foreign.

After row-joining Y and X to create V, I display the first 3 rows of V. Note that the third observation on rep78 is missing and that Mata matrices and views can contain missing values.

Changing the value of an element in a view changes the data in Stata. I illustrate this point by replacing the (3,1) element of the view X with 7, displaying the first three rows of the view, and listing out the first three observations on rep78, turn, and foreign.

### Copying matrices between Mata and Stata

The Mata function st_matrix() puts a copy of a Stata matrix into a Mata matrix, or it puts a copy of a Mata matrix into a Stata matrix. In example 6, V = st_matrix("B") puts a copy of the Stata matrix B into the Mata matrix V.

Example 6: Creating a copy of a Stata matrix in a Mata matrix

```stata
. matrix B = (1, 2\ 3, 4)

. matrix list B

B[2,2]
    c1  c2
r1   1   2
r2   3   4

. mata:
------------------------------------------------- mata (type end to exit) ------
: V = st_matrix("B")

: V
       1   2
    +---------+
  1 |  1   2  |
  2 |  3   4  |
    +---------+

: end
--------------------------------------------------------------------------------
```


In example 7, st_matrix("Z", W) puts a copy of the Mata matrix W into the Stata matrix Z.

Example 7: Creating a copy of a Mata matrix in a Stata matrix

```stata
. mata:
------------------------------------------------- mata (type end to exit) ------
: W = (4..6\7..9)

: W
       1   2   3
    +-------------+
  1 |  4   5   6  |
  2 |  7   8   9  |
    +-------------+

: st_matrix("Z", W)

: end
--------------------------------------------------------------------------------

. matrix list Z

Z[2,3]
    c1  c2  c3
r1   4   5   6
r2   7   8   9
```


### Strings

Mata matrices can be string matrices.

In my work, I frequently have a list of variables in a string scalar that is easier to work with as a string vector.

Example 8: Turning a string scalar list into a string vector

```stata
. mata:
------------------------------------------------- mata (type end to exit) ------
: s1 = "price mpg trunk"

: s1
  price mpg trunk

: s2 = tokens(s1)

: s2
         1       2       3
    +-------------------------+
  1 |  price     mpg   trunk  |
    +-------------------------+

: end
--------------------------------------------------------------------------------
```


I use tokens() to create the string vector s2 from the string scalar s1.
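To go the other direction, the Mata function invtokens() joins a string vector back into a single string scalar; a minimal sketch:

```stata
mata:
s2 = ("price", "mpg", "trunk")   // a 1 x 3 string vector
s1 = invtokens(s2)               // join the elements, separated by spaces
s1                               // displays: price mpg trunk
end
```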

### Flow of control

Mata has constructs for looping over a block of code enclosed between curly braces or only executing it if an expression is true.

I frequently use the for() construction to loop over a block of code.

Code block 1: for()

```stata
mata:
for(i=1; i<=3; i=i+1) {
        i
}
end
```


In this example, I set i to the initial value of 1. The loop will continue as long as i is less than or equal to 3. Each time through the loop, the block of code enclosed between the curly braces is executed, and 1 is added to the current value of i. The code block displays the value of i. Example 9 illustrates these points.

Example 9: A for loop

```stata
. mata:
------------------------------------------------- mata (type end to exit) ------
: for(i=1; i<=3; i=i+1) {
>         i
> }
  1
  2
  3

: end
--------------------------------------------------------------------------------
```


Sometimes, I want to execute a block of code as long as a condition is true, in which case I use a while loop, as in code block 2 and example 10.

Code block 2: while

```stata
i = 7
while (i>5) {
        i
        i = i - 1
}
```


I set i to 7 and repeat the block of code between the curly braces while i is greater than 5. The block of code displays the current value of i, then subtracts 1 from i.

Example 10: A while loop

```stata
. mata:
------------------------------------------------- mata (type end to exit) ------
: i = 7

: while (i>5) {
>     i
>     i = i - 1
> }
  7
  6

: end
--------------------------------------------------------------------------------
```


The if construct only executes a code block if an expression is true. I usually use the if-else construct that executes one code block if an expression is true and another code block if the expression is false.

Example 11: An if-else construct

```stata
. mata:
------------------------------------------------- mata (type end to exit) ------
: for(i=2; i<10; i=i+5) {
>         i
>         if (i<3) {
>                 "i is less than 3"
>         }
>         else {
>                 "i is not less than 3"
>         }
> }
  2
  i is less than 3
  7
  i is not less than 3

: end
--------------------------------------------------------------------------------
```


### One-line calls to Mata

I frequently make one-line calls to Mata from Stata. A one-line call to Mata causes Stata to drop to Mata, compile and execute the line of Mata code, and pop back up to Stata.

Example 12: One-line calls to Mata

```stata
. mata: st_matrix("Q", I(3))

. matrix list Q

symmetric Q[3,3]
    c1  c2  c3
r1   1
r2   0   1
r3   0   0   1
```


In example 12, I use the one-line call to Mata mata: st_matrix("Q", I(3)) to put a copy of the matrix returned by the Mata expression I(3) into the Stata matrix Q. After the one-line call to Mata, I am back in Stata, so I use matrix list Q to show that the Stata matrix Q is a copy of the 3$$\times$$3 identity matrix created by I(3).

### Done and undone

I used an interactive session to introduce Mata, the matrix programming language that is part of Stata.

In the next post, I show how to define Mata functions.

Categories: Programming

## Using mlexp to estimate endogenous treatment effects in a heteroskedastic probit model

I use features new to Stata 14.1 to estimate an average treatment effect (ATE) for a heteroskedastic probit model with an endogenous treatment. In 14.1, we added new prediction statistics after mlexp that margins can use to estimate an ATE.

I am building on a previous post, Using mlexp to estimate endogenous treatment effects in a probit model, in which I demonstrated how to use mlexp to estimate the parameters of a probit model with an endogenous treatment and used margins to estimate the ATE for that model. Currently, no official commands estimate the heteroskedastic probit model with an endogenous treatment, so in this post I show how mlexp can be used to extend the models estimated by Stata.

### Heteroskedastic probit model

For binary outcome $$y_i$$ and regressors $${\bf x}_i$$, the probit model assumes

$y_i = {\bf 1}({\bf x}_i{\boldsymbol \beta} + \epsilon_i > 0)$

The indicator function $${\bf 1}(\cdot)$$ outputs 1 when its input is true and outputs 0 otherwise. The error $$\epsilon_i$$ is standard normal.

Assuming that the error has constant variance may not always be wise. Suppose we are studying a certain business decision. Large firms, because they have the resources to take chances, may exhibit more variation in the factors that affect their decision than small firms.

In the heteroskedastic probit model, regressors $${\bf w}_i$$ determine the variance of $$\epsilon_i$$. Following Harvey (1976), we have

$\mbox{Var}\left(\epsilon_i\right) = \left\{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)\right\}^2 \nonumber$

### Heteroskedastic probit model with treatment

In this section, I review the potential-outcome framework used to define an ATE and extend it for the heteroskedastic probit model. For each treatment level, there is an outcome that we would observe if a person were to select that treatment level. When the outcome is binary and there are two treatment levels, we can specify how the potential outcomes $$y_{0i}$$ and $$y_{1i}$$ are generated from the regressors $${\bf x}_i$$ and the error terms $$\epsilon_{0i}$$ and $$\epsilon_{1i}$$:

$\begin{eqnarray*} y_{0i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_0 + \epsilon_{0i} > 0) \cr y_{1i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_1 + \epsilon_{1i} > 0) \end{eqnarray*}$

We assume a heteroskedastic probit model for the potential outcomes. The errors are normal with mean $$0$$ and conditional variance generated by regressors $${\bf w}_i$$. In this post, we assume equal variance of the potential outcome errors.

$\mbox{Var}\left(\epsilon_{0i}\right) = \mbox{Var}\left(\epsilon_{1i}\right) = \left\{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)\right\}^2 \nonumber$

The heteroskedastic probit model for potential outcomes $$y_{0i}$$ and $$y_{1i}$$ with treatment $$t_i$$ assumes that we observe the outcome

$y_i = (1-t_i) y_{0i} + t_i y_{1i} \nonumber$

So we observe $$y_{1i}$$ under the treatment ($$t_{i}=1$$) and $$y_{0i}$$ when the treatment is withheld ($$t_{i}=0$$).

The treatment $$t_i$$ is determined by regressors $${\bf z}_i$$ and error $$u_i$$:

$t_i = {\bf 1}({\bf z}_i{\boldsymbol \psi} + u_i > 0) \nonumber$

The treatment error $$u_i$$ is normal with mean zero, and we allow its variance to be determined by another set of regressors $${\bf v}_i$$:

$\mbox{Var}\left(u_i\right) = \left\{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)\right\}^2 \nonumber$

### Heteroskedastic probit model with endogenous treatment

In the previous post, I described how to model endogeneity for the treatment $$t_i$$ by correlating the outcome errors $$\epsilon_{0i}$$ and $$\epsilon_{1i}$$ with the treatment error $$u_i$$. We use the same framework for modeling endogeneity here. The variance of the errors may change depending on the heteroskedasticity regressors $${\bf w}_i$$ and $${\bf v}_i$$, but their correlation remains constant. The errors $$\epsilon_{0i}$$, $$\epsilon_{1i}$$, and $$u_i$$ are trivariate normal with correlation

$\left[\begin{matrix} 1 & \rho_{01} & \rho_{t} \cr \rho_{01} & 1 & \rho_{t} \cr \rho_{t} & \rho_{t} & 1 \end{matrix}\right] \nonumber$

Now we have all the pieces we need to write the log likelihood of the heteroskedastic probit model with an endogenous treatment. The form of the likelihood is similar to what was given in the previous post. Now the inputs to the bivariate normal cumulative distribution function, $$\Phi_2$$, are standardized by dividing by the conditional standard deviations of the errors.

The log likelihood for observation $$i$$ is

$\begin{eqnarray*} \ln L_i = & & {\bf 1}(y_i =1 \mbox{ and } t_i = 1) \ln \Phi_2\left\{\frac{{\bf x}_i{\boldsymbol \beta}_1}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},\rho_t\right\} + \cr & & {\bf 1}(y_i=0 \mbox{ and } t_i=1)\ln \Phi_2\left\{\frac{-{\bf x}_i{\boldsymbol \beta}_1}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},-\rho_t\right\} + \cr & & {\bf 1}(y_i=1 \mbox{ and } t_i=0) \ln \Phi_2\left\{\frac{{\bf x}_i{\boldsymbol \beta}_0}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{-{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},-\rho_t\right\} + \cr & & {\bf 1}(y_i=0 \mbox{ and } t_i = 0)\ln \Phi_2\left\{\frac{-{\bf x}_i{\boldsymbol \beta}_0}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{-{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},\rho_t\right\} \end{eqnarray*}$

### The data

We will simulate data from a heteroskedastic probit model with an endogenous treatment and then estimate the parameters of the model with mlexp. Then, we will use margins to estimate the ATE.

```stata
. set seed 323

. set obs 10000
number of observations (_N) was 0, now 10,000

. generate x = .8*rnormal() + 4

. generate b = rpoisson(1)

. generate z = rnormal()

. matrix cm = (1, .3,.7 \ .3, 1, .7 \ .7, .7, 1)

. drawnorm ey0 ey1 et, corr(cm)
```


We simulate a random sample of 10,000 observations. The treatment and outcome regressors are generated in a similar manner to their creation in the last post. As in the last post, we generate the errors with drawnorm to have correlation $$0.7$$.

```stata
. generate g = runiform()

. generate h = rnormal()

. quietly replace ey0 = ey0*exp(.5*g)

. quietly replace ey1 = ey1*exp(.5*g)

. quietly replace et = et*exp(.1*h)

. generate t = .5*x - .1*b + .5*z - 2.4 + et > 0

. generate y0 = .6*x - .8 + ey0 > 0

. generate y1 = .3*x - 1.3 + ey1 > 0

. generate y = (1-t)*y0 + t*y1
```


The uniform variable g is generated as a regressor for the outcome error variance, while h is a regressor for the treatment error variance. We scale the errors by using the variance regressors so that they are heteroskedastic, and then we generate the treatment and outcome indicators.

### Estimating the model parameters

Now, we will use mlexp to estimate the parameters of the heteroskedastic probit model with an endogenous treatment. As in the previous post, we use the cond() function to calculate different values of the likelihood based on the different values of $$y$$ and $$t$$. We use the factor-variable operator ibn on $$t$$ in equation y to allow for a different intercept at each level of $$t$$. An interaction between $$t$$ and $$x$$ is also specified in equation y. This allows for a different coefficient on $$x$$ at each level of $$t$$.

```stata
. mlexp (ln(cond(t, ///
>         cond(y,binormal({y: i.t#c.x ibn.t}/exp({g:g}), ///
>             {t: x b z _cons}/exp({h:h}),{rho}), ///
>                 binormal(-{y:}/exp({g:}),{t:}/exp({h:}),-{rho})), ///
>         cond(y,binormal({y:}/exp({g:}),-{t:}/exp({h:}),-{rho}), ///
>                 binormal(-{y:}/exp({g:}),-{t:}/exp({h:}),{rho}) ///
>         )))), vce(robust)

initial:       log pseudolikelihood = -13862.944
alternative:   log pseudolikelihood = -16501.619
rescale:       log pseudolikelihood = -13858.877
rescale eq:    log pseudolikelihood = -11224.877
Iteration 0:   log pseudolikelihood = -11224.877  (not concave)
Iteration 1:   log pseudolikelihood = -10644.625
Iteration 2:   log pseudolikelihood = -10074.998
Iteration 3:   log pseudolikelihood = -9976.6027
Iteration 4:   log pseudolikelihood = -9973.0988
Iteration 5:   log pseudolikelihood = -9973.0913
Iteration 6:   log pseudolikelihood = -9973.0913

Maximum likelihood estimation

Log pseudolikelihood = -9973.0913               Number of obs     =     10,000

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
y            |
       t#c.x |
          0  |   .6178115   .0334521    18.47   0.000     .5522467    .6833764
          1  |   .2732094   .0365742     7.47   0.000     .2015253    .3448936
             |
           t |
          0  |  -.8403294   .1130197    -7.44   0.000    -1.061844   -.6188149
          1  |  -1.215177   .1837483    -6.61   0.000    -1.575317   -.8550371
-------------+----------------------------------------------------------------
g            |
           g |   .4993187   .0513297     9.73   0.000     .3987143    .5999232
-------------+----------------------------------------------------------------
t            |
           x |   .4985802   .0183033    27.24   0.000     .4627065    .5344539
           b |  -.1140255   .0132988    -8.57   0.000    -.1400908   -.0879603
           z |   .4993995   .0150844    33.11   0.000     .4698347    .5289643
       _cons |  -2.402772   .0780275   -30.79   0.000    -2.555703   -2.249841
-------------+----------------------------------------------------------------
h            |
           h |   .1011185   .0199762     5.06   0.000     .0619658    .1402713
-------------+----------------------------------------------------------------
        /rho |   .7036964   .0326734    21.54   0.000     .6396577    .7677351
------------------------------------------------------------------------------
```


Our parameter estimates are close to their true values.

### Estimating the ATE

The ATE of $$t$$ is the expected value of the difference between $$y_{1i}$$ and $$y_{0i}$$, the average difference between the potential outcomes. Using the law of iterated expectations, we have

$\begin{eqnarray*} E(y_{1i}-y_{0i})&=& E\left\{ E\left(y_{1i}-y_{0i}|{\bf x}_i,{\bf w}_i\right)\right\} \cr &=& E\left\lbrack\Phi\left\{\frac{{\bf x}_i{\boldsymbol \beta}_1}{ \exp\left({\bf w}_i{\boldsymbol \gamma}\right)}\right\}- \Phi\left\{\frac{{\bf x}_i{\boldsymbol \beta}_0}{ \exp\left({\bf w}_i{\boldsymbol \gamma}\right)}\right\}\right\rbrack \cr \end{eqnarray*}$

This can be estimated as a mean of predictions.

Now, we estimate the ATE by using margins. We specify the normal probability expression in the expression() option. We use the expression function xb() to get the linear predictions for the outcome equation and the outcome error variance equation. We can now predict these linear forms after mlexp in Stata 14.1. We specify r.t so that margins will take the difference of the expression under t=1 and t=0. We specify vce(unconditional) to obtain standard errors for the population ATE rather than the sample ATE; we specified vce(robust) for mlexp so that we could specify vce(unconditional) for margins. The contrast(nowald) option is specified to omit the Wald test for the difference.

```stata
. margins r.t, expression(normal(xb(y)/exp(xb(g)))) ///
>     vce(unconditional) contrast(nowald)

Contrasts of predictive margins

Expression   : normal(xb(y)/exp(xb(g)))

--------------------------------------------------------------
             |            Unconditional
             |   Contrast   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           t |
   (1 vs 0)  |  -.4183043   .0202635     -.4580202   -.3785885
--------------------------------------------------------------
```


We estimate that the ATE of $$t$$ on $$y$$ is $$-0.42$$. So taking the treatment decreases the probability of a positive outcome by $$0.42$$ on average over the population.

We will compare this estimate to the average difference of $$y_{1}$$ and $$y_{0}$$ in the sample. We can do this because we simulated the data. In practice, only one potential outcome is observed for every observation, and this average difference cannot be computed.

```stata
. generate diff = y1 - y0

. sum diff

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
        diff |     10,000      -.4164    .5506736         -1          1
```


In our sample, the average difference of $$y_{1}$$ and $$y_{0}$$ is also $$-0.42$$.

### Conclusion

I have demonstrated how to use mlexp to estimate the parameters of a model that no official Stata command estimates: the heteroskedastic probit model with an endogenous treatment. See [R] mlexp for more details about mlexp. I have also demonstrated how to use margins to estimate the ATE for this model. See [R] margins for more details about margins.

### Reference

Harvey, A. C. 1976. Estimating regression models with multiplicative heteroscedasticity. Econometrica 44: 461-465.

Categories: Statistics

## Programming an estimation command in Stata: Using a subroutine to parse a complex option

I make two improvements to the command that implements the ordinary least-squares (OLS) estimator that I discussed in Programming an estimation command in Stata: Allowing for options. First, I add an option for a cluster-robust estimator of the variance-covariance of the estimator (VCE). Second, I make the command accept the modern syntax for either a robust or a cluster-robust estimator of the VCE. In the process, I use subroutines in my ado-program to facilitate the parsing, and I discuss some advanced parsing tricks.

This is the tenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

### Allowing for a robust or a cluster-robust VCE

The syntax of myregress9, which I discussed in Programming an estimation command in Stata: Allowing for options, is

```stata
myregress9 depvar [indepvars] [if] [in] [, robust noconstant]
```

The syntax of myregress10, which I discuss here, is

```stata
myregress10 depvar [indepvars] [if] [in] [, vce(robust | cluster clustervar) noconstant]
```

By default, myregress10 estimates the VCE assuming that the errors are independently and identically distributed (IID). If the option vce(robust) is specified, myregress10 uses the robust estimator of the VCE. If the option vce(cluster clustervar) is specified, myregress10 uses the cluster-robust estimator of the VCE. See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2010, 2015) for introductions to OLS; see Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects for the formulas and Stata matrix implementations.
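To fix ideas, here is how each VCE choice would be requested; a sketch assuming myregress10.ado is on the ado-path, using the auto dataset (output omitted):

```stata
sysuse auto, clear

* IID VCE (the default)
myregress10 price mpg trunk

* robust VCE
myregress10 price mpg trunk, vce(robust)

* cluster-robust VCE, clustering on rep78
myregress10 price mpg trunk, vce(cluster rep78)
```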

I recommend that you click on the file name to download the code for my myregress10.ado. To avoid scrolling, view the code in the do-file editor, or your favorite text editor, to see the line numbers.

```stata
*! version 10.0.0  02Dec2015
program define myregress10, eclass sortpreserve
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, vce(string) noCONStant ]
    marksample touse

    gettoken depvar indeps : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    if `"`vce'"' != "" {
        my_vce_parse , vce(`vce')
        local vcetype     "robust"
        local clustervar  "`r(clustervar)'"
        if "`clustervar'" != "" {
            markout `touse' `clustervar'
            sort `clustervar'
        }
    }

    quietly matrix accum `zpz' = `varlist' if `touse' , `constant'
    local N                    = r(N)
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    matrix `b'                 = (`xpxi'*`xpy')'
    local k                    = `p' - diag0cnt(`xpxi') - 1
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'

    if "`vcetype'" == "robust" {
        if "`clustervar'" == "" {
            tempname M
            quietly matrix accum `M' = `indeps'         ///
                [iweight=`res2'] if `touse' , `constant'
            local fac                = (`N'/(`N'-`k'))
            local df_r               = (`N'-`k')
        }
        else  {
            tempvar idvar
            tempname M
            quietly egen `idvar' = group(`clustervar') if `touse'
            quietly summarize `idvar' if `touse', meanonly
            local Nc   = r(max)
            local fac  = ((`N'-1)/(`N'-`k')*(`Nc'/(`Nc'-1)))
            local df_r = (`Nc'-1)
            matrix opaccum `M' = `indeps' if `touse'     ///
                , group(`clustervar') opvar(`res')
        }
        matrix `V' = (`fac')*`xpxi'*`M'*`xpxi'
        local vce                   "robust"
        local vcetype               "Robust"
    }
    else {                            // IID Case
        quietly summarize `res2' if `touse' , meanonly
        local sum           = r(sum)
        local s2            = `sum'/(`N'-`k')
        local df_r          = (`N'-`k')
        matrix `V'          = `s2'*`xpxi'
    }

    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `k'
    ereturn scalar df_r    = `df_r'
    ereturn local  vce     "`vce'"
    ereturn local  vcetype "`vcetype'"
    ereturn local clustvar "`clustervar'"
    ereturn local  cmd     "myregress10"
    ereturn display
end

program define my_vce_parse, rclass
    syntax  [, vce(string) ]

    local case : word count `vce'

    if `case' > 2 {
        my_vce_error , typed(`vce')
    }

    local 0 `", `vce'"'
    syntax  [, Robust CLuster * ]

    if `case' == 2 {
        if "`robust'" == "robust" | "`cluster'" == "" {
            my_vce_error , typed(`vce')
        }

        capture confirm numeric variable `options'
        if _rc {
            my_vce_error , typed(`vce')
        }

        local clustervar "`options'"
    }
    else {    // case = 1
        if "`robust'" == "" {
            my_vce_error , typed(`vce')
        }

    }

    return clear
    return local clustervar "`clustervar'"
end

program define my_vce_error
    syntax , typed(string)

    display `"{red}{bf:vce(`typed')} invalid"'
    error 498
end
```


The syntax command on line 5 puts whatever the user encloses in vce() into a local macro called vce. For example, if the user types

. myregress10 price mpg trunk , vce(hello there)


the local macro vce will contain “hello there”. If the user does not specify something in the vce() option, the local macro vce will be empty. Line 14 uses this condition to execute lines 15–21 only if the user has specified something in option vce().
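To see this mechanism in isolation, here is a minimal sketch; the program name vcedemo is ours, purely for illustration:

```stata
program define vcedemo
    // vce(string) captures everything the user types inside vce()
    // as a single string, without any validation
    syntax [, vce(string) ]
    display `"vce contains: `vce'"'
end
```

After defining this once, typing vcedemo, vce(hello there) displays vce contains: hello there, while typing vcedemo alone displays an empty string.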

When the user specifies something in the vce() option, line 15 calls the ado-subroutine my_vce_parse to parse what is in the local macro vce. my_vce_parse stores the name of the cluster variable in r(clustervar) and handles error conditions, as I discuss below. Line 16 stores “robust” in the local macro vcetype, and line 17 stores the contents of the local macro r(clustervar), created by my_vce_parse, in the local macro clustervar.

If the user does not specify something in vce(), the local macro vcetype will be empty and line 36 ensures that myregress10 will compute an IID estimator of the VCE.

Lines 19 and 20 are executed only if the local macro clustervar is not empty. Line 19 updates the touse variable, whose name is stored in the local macro touse, to account for missing values in the cluster variable, whose name is stored in clustervar. Line 20 sorts the dataset in ascending order of the cluster variable. Users do not want estimation commands resorting their datasets, so on line 2 I specified the sortpreserve option on program define to return the dataset to the order it was in when the user executed myregress10.

Lines 36–65 compute the requested estimator for the VCE. Recall that the local macro vcetype is empty or it contains “robust” and that the local macro clustervar is empty or it contains the name of the cluster variable. The if and else statements use the values stored in vcetype and clustervar to execute one of three blocks of code.

1. Lines 38–42 compute a robust estimator of the VCE when vcetype contains “robust” and clustervar is empty.
2. Lines 45–53 compute a cluster-robust estimator of the VCE when vcetype contains “robust” and clustervar contains the name of the cluster variable.
3. Lines 60–64 compute an IID estimator of the VCE when vcetype does not contain “robust”.

Line 73 stores the name of the cluster variable in e(clustervar), if the local macro clustervar is not empty.

Lines 78–111 define the rclass ado-subroutine my_vce_parse, which performs two tasks. First, it stores the name of the cluster variable in the local macro r(clustervar) when the user specifies vce(cluster clustervar). Second, it finds cases in which the user specified a syntax error in vce() and returns an error in such cases.

Putting these parsing details into a subroutine makes the main command much easier to follow. I recommend that you encapsulate details in subroutines.

The ado-subroutine my_vce_parse is local to the ado-command myregress10; the name my_vce_parse is in a namespace local to myregress10, and my_vce_parse can only be executed from within myregress10.

Line 79 uses syntax to store whatever the user specified in the option vce() in the local macro vce. Line 81 puts the number of words in vce into the local macro case. Line 83 causes the ado-subroutine my_vce_error to display an error message and return error code 498 when there are more than two words in vce. (Recall that vce should contain either robust or cluster clustervar.)

Having ruled out the cases with more than two words, line 87 stores what the local macro vce contains in the local macro 0. Line 88 uses syntax to parse what is in the local macro 0. If the user specified vce(robust), or a valid abbreviation thereof, syntax stores “robust” in the local macro robust; otherwise, the local macro robust is empty. If the user specified vce(cluster something), or a valid abbreviation of cluster, syntax stores “cluster” in the local macro cluster; otherwise, the local macro cluster is empty. The option * causes syntax to put any remaining options into the local macro options. In this case, syntax will store the something in the local macro options.

Remember the trick used in lines 87 and 88. Option parsing is frequently made much easier by storing what a local macro contains in the local macro 0 and using syntax to parse it.
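Here is a stripped-down sketch of the trick, using a hypothetical wrapper option myvce(); the names parse0demo and myvce are ours, not part of myregress10:

```stata
program define parse0demo
    syntax [, myvce(string) ]
    // Put the option's contents into local macro 0, preceded by a
    // comma, so that a second syntax call parses them as options
    local 0 `", `myvce'"'
    syntax [, Robust CLuster * ]
    display "robust=`robust' cluster=`cluster' leftover=`options'"
end
```

Typing parse0demo, myvce(cluster id) sets the local macro cluster to “cluster” and leaves id in the local macro options.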

When there are two words in the local macro vce, lines 91–100 ensure that the first word is “cluster” and that the second word, stored in the local macro options, is the name of a numeric variable. When all is well, line 100 stores the name of this numeric variable in the local macro clustervar. Lines 95–98 use a subtle construction to display a custom error message. Rather than let confirm display an error message, lines 95–98 use capture and an if condition to display our custom error message. In detail, line 95 uses confirm to confirm that the local macro options contains the name of a numeric variable. capture puts the return code produced by confirm in the scalar _rc. When options contains the name of a numeric variable, confirm produces the return code 0 and capture stores 0 in _rc; otherwise, confirm produces a positive return code, and capture stores this positive return code in _rc.
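In isolation, the capture-and-_rc pattern looks like this; check_numeric is a hypothetical subroutine name:

```stata
program define check_numeric
    syntax , var(string)
    // capture suppresses confirm's own error message and stores
    // its return code in _rc: 0 on success, positive on failure
    capture confirm numeric variable `var'
    if _rc {
        display as error "`var' is not a numeric variable"
        error 498
    }
end
```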

When all is well, line 109 clears whatever was in r(), and line 110 stores the name of the cluster variable in r(clustervar).

Lines 113–118 define the ado-subroutine my_vce_error, which displays a custom error message. Like my_vce_parse, my_vce_error is local to myregress10.ado.

Done and undone

I added an option for a cluster-robust estimator of the VCE, and I made myregress10 accept the modern syntax for either a robust or a cluster-robust estimator of the VCE. In the process, I used subroutines in myregress10.ado to facilitate the parsing, and I discussed some advanced parsing tricks.

Although it may seem that I have covered every possible nuance, I have only dealt with a few. Type help syntax for more details about parsing options using the syntax command.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Stock, J. H., and M. W. Watson. 2010. Introduction to Econometrics. 3rd ed. Boston, MA: Addison-Wesley.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Wooldridge, J. M. 2015. Introductory Econometrics: A Modern Approach. 6th ed. Cincinnati, Ohio: South-Western.

Categories: Programming Tags:

## Understanding the generalized method of moments (GMM): A simple example

$$\newcommand{\Eb}{{\bf E}}$$This post was written jointly with Enrique Pinzon, Senior Econometrician, StataCorp.

The generalized method of moments (GMM) is a method for constructing estimators, analogous to maximum likelihood (ML). GMM uses assumptions about specific moments of the random variables instead of assumptions about the entire distribution, which makes GMM more robust than ML, at the cost of some efficiency. The assumptions are called moment conditions.

GMM generalizes the method of moments (MM) by allowing the number of moment conditions to be greater than the number of parameters. Using these extra moment conditions makes GMM more efficient than MM. When there are more moment conditions than parameters, the estimator is said to be overidentified. GMM can efficiently combine the moment conditions when the estimator is overidentified.

We illustrate these points by estimating the mean of a $$\chi^2(1)$$ by MM, ML, a simple GMM estimator, and an efficient GMM estimator. This example builds on Efficiency comparisons by Monte Carlo simulation and is similar in spirit to the example in Wooldridge (2001).

GMM weights and efficiency

GMM builds on the ideas of expected values and sample averages. Moment conditions are expected values that specify the model parameters in terms of the true moments. The sample moment conditions are the sample equivalents to the moment conditions. GMM finds the parameter values that are closest to satisfying the sample moment conditions.

The mean of a $$\chi^2$$ random variable with $$d$$ degrees of freedom is $$d$$, and its variance is $$2d$$. Two moment conditions for the mean are thus

$\begin{eqnarray*} \Eb\left[Y - d \right] &=& 0 \\ \Eb\left[(Y - d)^2 - 2d \right] &=& 0 \end{eqnarray*}$

The sample moment equivalents are

$\begin{eqnarray} 1/N\sum_{i=1}^N (y_i - \widehat{d}) &=& 0 \tag{1} \\ 1/N\sum_{i=1}^N\left[(y_i - \widehat{d})^2 - 2\widehat{d}\right] &=& 0 \tag{2} \end{eqnarray}$

We could use either sample moment condition (1) or sample moment condition (2) to estimate $$d$$. In fact, below we use each one and show that (1) provides a much more efficient estimator.

When we use both (1) and (2), there are two sample moment conditions and only one parameter, so we cannot solve this system of equations. GMM finds the parameters that get as close as possible to solving weighted sample moment conditions.

Uniform weights and optimal weights are two ways of weighting the sample moment conditions. The uniform weights use an identity matrix to weight the moment conditions. The optimal weights use the inverse of the covariance matrix of the moment conditions.
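In our notation (introduced here for exposition, not taken from a formal derivation), GMM chooses $\widehat{d}$ to minimize the quadratic form

$Q(\widehat{d}) = \bar{m}(\widehat{d})^\prime \, W \, \bar{m}(\widehat{d})$

where $\bar{m}(\widehat{d})$ is the $2\times 1$ vector that stacks the left-hand sides of (1) and (2). Setting $W = I_2$ gives the uniform weights; setting $W$ to the inverse of the estimated covariance matrix of the moment conditions gives the optimal weights.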

We begin by drawing a sample of size 500 and using gmm to estimate the parameter from sample moment condition (1), which we illustrate is the same as the sample average.

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate double y = rchi2(1)

. gmm (y - {d})  , instruments( ) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .82949186
Iteration 1:   GMM criterion Q(b) =  1.262e-32
Iteration 2:   GMM criterion Q(b) =  9.545e-35

note: model is exactly identified

GMM estimation

Number of parameters =   1
Number of moments    =   1
Initial weight matrix: Unadjusted                 Number of obs   =        500

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .9107644   .0548098    16.62   0.000     .8033392     1.01819
------------------------------------------------------------------------------
Instruments for equation 1: _cons

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------


The sample moment condition is the product of an observation-level error function that is specified inside the parentheses and an instrument, which is a vector of ones in this case. The parameter $$d$$ is enclosed in curly braces {}. We specify the onestep option because the number of parameters is the same as the number of moment conditions, which is to say that the estimator is exactly identified. When it is, each sample moment condition can be solved exactly, and there are no efficiency gains in optimally weighting the moment conditions.

We now illustrate that we could use the sample moment condition obtained from the variance to estimate $$d$$.

. gmm ((y-{d})^2 - 2*{d})  , instruments( ) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  5.4361161
Iteration 1:   GMM criterion Q(b) =  .02909692
Iteration 2:   GMM criterion Q(b) =  .00004009
Iteration 3:   GMM criterion Q(b) =  5.714e-11
Iteration 4:   GMM criterion Q(b) =  1.172e-22

note: model is exactly identified

GMM estimation

Number of parameters =   1
Number of moments    =   1
Initial weight matrix: Unadjusted                 Number of obs   =        500

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .7620814   .1156756     6.59   0.000     .5353613    .9888015
------------------------------------------------------------------------------
Instruments for equation 1: _cons


While we cannot say anything definitive from only one draw, we note that this estimate is further from the truth and that the standard error is much larger than those based on the sample average.

Now, we use gmm to estimate the parameters using uniform weights.

. matrix I = I(2)

. gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =   6.265608
Iteration 1:   GMM criterion Q(b) =  .05343812
Iteration 2:   GMM criterion Q(b) =  .01852592
Iteration 3:   GMM criterion Q(b) =   .0185221
Iteration 4:   GMM criterion Q(b) =   .0185221

GMM estimation

Number of parameters =   1
Number of moments    =   2
Initial weight matrix: user                       Number of obs   =        500

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .7864099   .1050692     7.48   0.000     .5804781    .9923418
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons


The first set of parentheses specifies the first sample moment condition, and the second set of parentheses specifies the second sample moment condition. The options winitial(I) and onestep specify uniform weights.

Finally, we use gmm to estimate the parameters using two-step optimal weights. The weights are calculated using first-step consistent estimates.

. gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I)

Step 1
Iteration 0:   GMM criterion Q(b) =   6.265608
Iteration 1:   GMM criterion Q(b) =  .05343812
Iteration 2:   GMM criterion Q(b) =  .01852592
Iteration 3:   GMM criterion Q(b) =   .0185221
Iteration 4:   GMM criterion Q(b) =   .0185221

Step 2
Iteration 0:   GMM criterion Q(b) =  .02888076
Iteration 1:   GMM criterion Q(b) =  .00547223
Iteration 2:   GMM criterion Q(b) =  .00546176
Iteration 3:   GMM criterion Q(b) =  .00546175

GMM estimation

Number of parameters =   1
Number of moments    =   2
Initial weight matrix: user                       Number of obs   =        500
GMM weight matrix:     Robust

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   .9566219   .0493218    19.40   0.000     .8599529    1.053291
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons


All four estimators are consistent. Below we run a Monte Carlo simulation to see their relative efficiencies. We are most interested in the efficiency gains afforded by optimal GMM. We include the sample average, the sample variance, and the ML estimator discussed in Efficiency comparisons by Monte Carlo simulation. Theory tells us that the optimally weighted GMM estimator should be more efficient than the sample average but less efficient than the ML estimator.

The code below for the Monte Carlo builds on Efficiency comparisons by Monte Carlo simulation, Maximum likelihood estimation by mlexp: A chi-squared example, and Monte Carlo simulations using Stata. Click gmmchi2sim.do to download this code.

. clear all
. set seed 12345
. matrix I = I(2)
. postfile sim  d_a d_v d_ml d_gmm d_gmme using efcomp, replace
. forvalues i = 1/2000 {
  2.     quietly drop _all
  3.     quietly set obs 500
  4.     quietly generate double y = rchi2(1)
  5.
.     quietly mean y
  6.     local d_a         =  _b[y]
  7.
.     quietly gmm ( (y-{d=`d_a'})^2 - 2*{d}) , instruments( )  ///
>         onestep conv_maxiter(200)
  8.     if e(converged)==1 {
  9.             local d_v = _b[d:_cons]
 10.     }
 11.     else {
 12.             local d_v = .
 13.     }
 14.
.     quietly mlexp (ln(chi2den({d=`d_a'},y)))
 15.     if e(converged)==1 {
 16.             local d_ml  =  _b[d:_cons]
 17.     }
 18.     else {
 19.             local d_ml  = .
 20.     }
 21.
.     quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( )  ///
>         winitial(I) onestep conv_maxiter(200)
 22.     if e(converged)==1 {
 23.             local d_gmm = _b[d:_cons]
 24.     }
 25.     else {
 26.             local d_gmm = .
 27.     }
 28.
.     quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( )  ///
>         winitial(I) conv_maxiter(200)
 29.     if e(converged)==1 {
 30.             local d_gmme = _b[d:_cons]
 31.     }
 32.     else {
 33.             local d_gmme = .
 34.     }
 35.
.     post sim (`d_a') (`d_v') (`d_ml') (`d_gmm') (`d_gmme')
 36.
. }
. postclose sim
. use efcomp, clear
. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
d_a |      2,000     1.00017    .0625367   .7792076    1.22256
d_v |      1,996    1.003621    .1732559   .5623049   2.281469
d_ml |      2,000    1.002876    .0395273   .8701175   1.120148
d_gmm |      2,000    .9984172    .1415176   .5947328   1.589704
d_gmme |      2,000    1.006765    .0540633   .8224731   1.188156


The simulation results indicate that the ML estimator is the most efficient (d_ml, std. dev. 0.0395), followed by the efficient GMM estimator (d_gmme, std. dev. 0.0541), the sample average (d_a, std. dev. 0.0625), the uniformly weighted GMM estimator (d_gmm, std. dev. 0.1415), and finally the estimator based on the sample-variance moment condition (d_v, std. dev. 0.1732).

The estimator based on the sample-variance moment condition does not converge for 4 of 2,000 draws; this is why there are only 1,996 observations on d_v when there are 2,000 observations for the other estimators. These convergence failures occurred even though we used the sample average as the starting value of the nonlinear solver.

For a better idea about the distributions of these estimators, we graph the densities of their estimates.

Figure 1: Densities of the estimators

The density plots illustrate the efficiency ranking that we found from the standard deviations of the estimates.

The uniformly weighted GMM estimator is less efficient than the sample average because it places the same weight on the sample average as on the much less efficient estimator based on the sample variance.

In each of the overidentified cases, the GMM estimator uses a weighted average of two sample moment conditions to estimate the mean. The first sample moment condition is the sample average. The second moment condition is the sample variance. As the Monte Carlo results showed, the sample variance provides a much less efficient estimator for the mean than the sample average.

The GMM estimator that places equal weights on the efficient and the inefficient estimator is much less efficient than a GMM estimator that places much less weight on the less efficient estimator.

We display the weight matrix from our optimal GMM estimator to see how the sample moments were weighted.

. quietly gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I)

. matlist e(W), border(rows)

-------------------------------------
| 1         | 2
|     _cons |     _cons
-------------+-----------+-----------
1            |           |
_cons |  1.621476 |
-------------+-----------+-----------
2            |           |
_cons | -.2610053 |  .0707775
-------------------------------------


The diagonal elements show that the sample-mean moment condition receives more weight than the less efficient sample-variance moment condition.

Done and undone

We used a simple example to illustrate how GMM exploits having more equations than parameters to obtain a more efficient estimator. We also illustrated that optimally weighting the different moments provides important efficiency gains over an estimator that uniformly weights the moment conditions.

Our cursory introduction to GMM is best supplemented with a more formal treatment like the one in Cameron and Trivedi (2005) or Wooldridge (2010).

Graph code appendix

use efcomp
local N = _N
kdensity d_a,     n(`N') generate(x_a    den_a)    nograph
kdensity d_v,     n(`N') generate(x_v    den_v)    nograph
kdensity d_ml,    n(`N') generate(x_ml   den_ml)   nograph
kdensity d_gmm,   n(`N') generate(x_gmm  den_gmm)  nograph
kdensity d_gmme,  n(`N') generate(x_gmme den_gmme) nograph
twoway (line den_a x_a,       lpattern(solid))        ///
       (line den_v x_v,       lpattern(dash))         ///
       (line den_ml x_ml,     lpattern(dot))          ///
       (line den_gmm x_gmm,   lpattern(dash_dot))     ///
       (line den_gmme x_gmme, lpattern(shortdash))


References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Wooldridge, J. M. 2001. Applications of generalized method of moments estimation. Journal of Economic Perspectives 15(4): 87-100.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Categories: Statistics Tags:

## Programming an estimation command in Stata: Allowing for options

I make three improvements to the command that implements the ordinary least-squares (OLS) estimator that I discussed in Programming an estimation command in Stata: Allowing for sample restrictions and factor variables. First, I allow the user to request a robust estimator of the variance-covariance of the estimator (VCE). Second, I allow the user to suppress the constant term. Third, I store the residual degrees of freedom in e(df_r) so that test will use the $$t$$ or $$F$$ distribution instead of the normal or $$\chi^2$$ distribution to compute the $$p$$-value of Wald tests.

This is the ninth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Allowing for robust standard errors

The syntax of myregress6, which I discussed in Programming an estimation command in Stata: Allowing for sample restrictions and factor variables, is

myregress6 depvar [indepvars] [if] [in]

where the independent variables can be time-series or factor variables. myregress7 has the syntax

myregress7 depvar [indepvars] [if] [in] [, robust ]

By default, myregress7 estimates the VCE assuming that the errors are independently and identically distributed (IID). If the option robust is specified, myregress7 uses the robust estimator of the VCE. See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2015) for introductions to OLS; see Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects for the formulas and Stata matrix implementations. Click on the file name to download any code block. To avoid scrolling, view the code in the do-file editor, or your favorite text editor, to see the line numbers.

*! version 7.0.0  30Nov2015
program define myregress7, eclass
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, Robust]
    marksample touse

    gettoken depvar indeps : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist' if `touse'
    local N                    = r(N)
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    local k                    = `p' - diag0cnt(`xpxi') - 1
    matrix `b'                 = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'
    if "`robust'" == "" {
        quietly summarize `res2' if `touse' , meanonly
        local sum           = r(sum)
        local s2            = `sum'/(`N'-(`k'))
        matrix `V'          = `s2'*`xpxi'
    }
    else {
        tempname M
        quietly matrix accum `M' = `indeps' [iweight=`res2'] if `touse'
        matrix `V'               = (`N'/(`N'-(`k')))*`xpxi'*`M'*`xpxi'
        local vce                   "robust"
        local vcetype               "Robust"
    }
    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `k'
    ereturn local  vce     "`vce'"
    ereturn local  vcetype "`vcetype'"
    ereturn local  cmd     "myregress7"
    ereturn display
end


A user may specify the robust option by typing robust, robus, robu, rob, ro, or r. In other words, r is the minimal abbreviation of the option robust. Line 5 of myregress7 implements this syntax. Specifying robust is optional because Robust is enclosed in the square brackets. r is the minimal abbreviation because the R is in uppercase and the remaining letters are in lowercase.

If the user specifies robust, or a valid abbreviation thereof, the local macro robust contains the word “robust”; otherwise, the local macro robust is empty. Line 25 uses this fact to determine which VCE should be computed; it specifies that lines 26–31 should be executed if the local macro robust is empty and that lines 32–36 should otherwise be executed. Lines 26–31 compute the IID estimator of the VCE. Lines 32–34 compute the robust estimator of the VCE. Lines 35 and 36 put “robust” and “Robust” into the local macros vce and vcetype, respectively.
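The abbreviation rule can be seen in a throwaway sketch; robdemo is our name, not part of the series:

```stata
program define robdemo
    // The uppercase R marks the minimal abbreviation, so r, ro,
    // rob, robu, robus, and robust all fill the robust macro
    syntax [, Robust ]
    display "robust local contains: `robust'"
end
```

Typing robdemo, r displays robust local contains: robust, while robdemo alone displays an empty macro.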

Line 41 puts the contents of the local macro vce into the local macro e(vce), which informs users and postestimation commands which VCE estimator was used. By convention, e(vce) is empty for the IID case. Line 42 puts the contents of the local macro vcetype into the local macro e(vcetype), which is used by ereturn display to correctly label the standard errors as robust.

I now run a regression with robust standard errors.

Example 1: myregress7 with robust standard errors

. sysuse auto
(1978 Automobile Data)

. myregress7 price mpg trunk i.rep78, robust
------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -262.7053   74.75538    -3.51   0.000    -409.2232   -116.1875
trunk |   41.75706   73.71523     0.57   0.571    -102.7221    186.2362
|
rep78 |
2  |   654.7905   1132.425     0.58   0.563    -1564.721    2874.302
3  |   1170.606   823.9454     1.42   0.155    -444.2979    2785.509
4  |   1473.352   650.4118     2.27   0.023     198.5679    2748.135
5  |   2896.888   937.6981     3.09   0.002     1059.034    4734.743
|
_cons |   9726.377   2040.335     4.77   0.000     5727.393    13725.36
------------------------------------------------------------------------------


Suppressing the constant term

myregress8 has the syntax

myregress8 depvar [indepvars] [if] [in] [, robust noconstant ]

*! version 8.0.0  30Nov2015
program define myregress8, eclass
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, Robust noCONStant ]
    marksample touse

    gettoken depvar indeps : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist' if `touse' , `constant'
    local N                    = r(N)
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    local k                    = `p' - diag0cnt(`xpxi') - 1
    matrix `b'                 = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'
    if "`robust'" == "" {
        quietly summarize `res2' if `touse' , meanonly
        local sum           = r(sum)
        local s2            = `sum'/(`N'-(`k'))
        matrix `V'          = `s2'*`xpxi'
    }
    else {
        tempname M
        quietly matrix accum `M' = `indeps' [iweight=`res2']     ///
            if `touse' , `constant'
        matrix `V'               = (`N'/(`N'-(`k')))*`xpxi'*`M'*`xpxi'
        local vce                   "robust"
        local vcetype               "Robust"
    }
    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `k'
    ereturn local  vce     "`vce'"
    ereturn local  vcetype "`vcetype'"
    ereturn local  cmd     "myregress8"
    ereturn display
end


The syntax command on line 5 puts “noconstant” into the local macro constant if the user types nocons, noconst, noconsta, noconstan, or noconstant; otherwise, the local macro constant is empty. The minimal abbreviation of the option noconstant is nocons because the lowercase no is followed by the uppercase CONS. Note that the local macro is named constant, without the no prefix, even though it contains “noconstant” when the option is specified.
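A minimal illustration of the no-prefix convention; noconsdemo is a hypothetical name:

```stata
program define noconsdemo
    // The local macro is named constant; it contains "noconstant"
    // when the option is specified and is empty otherwise
    syntax [, noCONStant ]
    display `"constant local contains: `constant'"'
end
```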

To implement the option, I specified what is contained in the local macro constant as an option on the matrix accum command on line 14 and on the matrix accum command spread over lines 33 and 34. The matrix accum command that begins on line 33 is too long for one line. I used /// to comment out the end-of-line character and continue the command on line 34.

I now illustrate the noconstant option.

Example 2: myregress8 with option noconstant

. myregress8 price mpg trunk ibn.rep78, noconstant
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -262.7053   73.49434    -3.57   0.000    -406.7516   -118.6591
trunk |   41.75706    93.9671     0.44   0.657    -142.4151    225.9292
|
rep78 |
1  |   9726.377   2790.009     3.49   0.000      4258.06    15194.69
2  |   10381.17   2607.816     3.98   0.000     5269.943    15492.39
3  |   10896.98   2555.364     4.26   0.000     5888.561     15905.4
4  |   11199.73    2588.19     4.33   0.000      6126.97    16272.49
5  |   12623.27   2855.763     4.42   0.000     7026.073    18220.46
------------------------------------------------------------------------------


Using t or F distributions

The output tables reported in examples 1 and 2 use the normal distribution to compute $$p$$-values and confidence intervals, because Wald-based postestimation commands like test and ereturn display use the normal or the $$\chi^2$$ distribution unless the residual degrees of freedom are stored in e(df_r).

*! version 9.0.0  30Nov2015
program define myregress9, eclass
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, Robust noCONStant ]
    marksample touse

    gettoken depvar indeps : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist' if `touse' , `constant'
    local N                    = r(N)
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    local k                    = `p' - diag0cnt(`xpxi') - 1
    matrix `b'                 = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'
    if "`robust'" == "" {
        quietly summarize `res2' if `touse' , meanonly
        local sum           = r(sum)
        local s2            = `sum'/(`N'-(`k'))
        matrix `V'          = `s2'*`xpxi'
    }
    else {
        tempname M
        quietly matrix accum `M' = `indeps' [iweight=`res2']     ///
            if `touse' , `constant'
        matrix `V'               = (`N'/(`N'-(`k')))*`xpxi'*`M'*`xpxi'
        local vce                   "robust"
        local vcetype               "Robust"
    }
    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `k'
    ereturn scalar df_r    = `N'-`k'
    ereturn local  vce     "`vce'"
    ereturn local  vcetype "`vcetype'"
    ereturn local  cmd     "myregress9"
    ereturn display
end


Line 42 of myregress9.ado stores the residual degrees of freedom in e(df_r). Example 3 illustrates that ereturn display and test now use the $$t$$ and $$F$$ distributions.

Example 3: t or F distributions after myregress9

. myregress9 price mpg trunk ibn.rep78, noconstant
------------------------------------------------------------------------------
|      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -262.7053   73.49434    -3.57   0.001    -409.6184   -115.7923
trunk |   41.75706    93.9671     0.44   0.658    -146.0805    229.5946
|
rep78 |
1  |   9726.377   2790.009     3.49   0.001     4149.229    15303.53
2  |   10381.17   2607.816     3.98   0.000     5168.219    15594.12
3  |   10896.98   2555.364     4.26   0.000     5788.882    16005.08
4  |   11199.73    2588.19     4.33   0.000     6026.011    16373.45
5  |   12623.27   2855.763     4.42   0.000     6914.677    18331.85
------------------------------------------------------------------------------

. test trunk

( 1)  trunk = 0

F(  1,    62) =    0.20
Prob > F =    0.6583


Done and undone

I added an option for the robust estimator of the VCE, I added an option to suppress the constant term, and I stored the residual degrees of freedom in e(df_r) so that Wald-based postestimation commands will use $$t$$ or $$F$$ distributions. I illustrated option parsing by example, but I skipped the general theory and many details. Type . help syntax for more details about parsing options using the syntax command.

In the next post, I implement the modern syntax for robust and cluster-robust standard errors.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Stock, J. H., and M. W. Watson. 2010. Introduction to Econometrics. 3rd ed. Boston, MA: Addison-Wesley.

Wooldridge, J. M. 2015. Introductory Econometrics: A Modern Approach. 6th ed. Cincinnati, Ohio: South-Western.

Categories: Programming Tags:

## Programming an estimation command in Stata: Allowing for sample restrictions and factor variables

I modify the ordinary least-squares (OLS) command discussed in Programming an estimation command in Stata: A better OLS command to allow for sample restrictions, to handle missing values, to allow for factor variables, and to deal with perfectly collinear variables.

This is the eighth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Sample restrictions

The myregress4 command described in Programming an estimation command in Stata: A better OLS command has the syntax

myregress4 depvar [indepvars]

where the indepvars may be time-series operated variables. myregress5 allows for sample restrictions and missing values. It has the syntax

myregress5 depvar [indepvars] [if] [in]

A user may optionally specify an if expression or an in range to restrict the sample. I also make myregress5 handle missing values in the user-specified variables.

*! version 5.0.0  22Nov2015
program define myregress5, eclass
    version 14

    syntax varlist(numeric ts) [if] [in]
    marksample touse

    gettoken depvar : varlist

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist' if `touse'
    local p : word count `varlist'
    local p = `p' + 1
    matrix `xpx'                = `zpz'[2..`p', 2..`p']
    matrix `xpy'                = `zpz'[2..`p', 1]
    matrix `xpxi'               = syminv(`xpx')
    matrix `b'                  = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'
    quietly summarize `res2' if `touse' , meanonly
    local N                     = r(N)
    local sum                   = r(sum)
    local s2                    = `sum'/(`N'-(`p'-1))
    matrix `V'                  = `s2'*`xpxi'
    ereturn post `b' `V', esample(`touse')
    ereturn scalar           N  = `N'
    ereturn local         cmd   "myregress5"
    ereturn display
end


The syntax command in line 5 specifies that a user may optionally restrict the sample by specifying an if expression or in range. When the user specifies an if expression, syntax puts it into the local macro if; otherwise, the local macro if is empty. When the user specifies an in range, syntax puts it into the local macro in; otherwise, the local macro in is empty.

We could use the local macros if and in to handle user-specified sample restrictions, but these do not account for missing values in the user-specified variables. The marksample command in line 6 creates a local macro named touse, which contains the name of a temporary variable that is a sample-identification variable. Each observation in the sample-identification variable is either one or zero. It is one if the observation is included in the sample. It is zero if the observation is excluded from the sample. An observation can be excluded by a user-specified if expression, by a user-specified in range, or because there is a missing value in one of the user-specified variables.
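
A minimal sketch of marksample on its own; the program name countsample is mine, not part of the series:

```stata
*! hypothetical demo of marksample, not part of the myregress series
program define countsample
    version 14
    syntax varlist(numeric) [if] [in]
    marksample touse            // temporary 0/1 sample-identification variable
    quietly count if `touse'
    display "observations in the sample: " r(N)
end
```

Typing countsample price mpg trunk rep78 if mpg < 30 on the auto data should report 62, the count computed in example 1 below, because marksample combines the if restriction with the missing values in rep78.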

Lines 20–23 use the sample-identification variable contained in the local macro touse to enforce these sample restrictions on the OLS calculations.

Line 28 posts the sample-identification variable to e(sample), which is one if the observation was included in the estimation sample and zero if it was excluded.

Line 29 stores the number of observations in the sample in e(N).

Example 1 illustrates that myregress5 runs the requested regression on the sample that respects the missing values in rep78 and accounts for an if expression.

Example 1: myregress5 with missing values and an if expression

. sysuse auto
(1978 Automobile Data)

. count if !missing(rep78)
69

. count if !missing(rep78) & mpg < 30
62

. myregress5 price mpg trunk rep78 if mpg < 30
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -376.8591   107.4289    -3.51   0.000    -587.4159   -166.3024
trunk |  -36.39376   102.2139    -0.36   0.722    -236.7294    163.9418
rep78 |   556.3029   378.1101     1.47   0.141    -184.7793    1297.385
_cons |    12569.5   3455.556     3.64   0.000     5796.735    19342.27
------------------------------------------------------------------------------

. ereturn list

scalars:
e(N) =  62

macros:
e(cmd) : "myregress5"
e(properties) : "b V"

matrices:
e(b) :  1 x 4
e(V) :  4 x 4

functions:
e(sample)


Allowing for factor variables

Example 1 includes the number of repairs as a continuous variable, but it might be better treated as a discrete factor. myregress6 accepts factor variables. Factor-variable lists usually imply variable lists that contain perfectly collinear variables, so myregress6 also handles perfectly collinear variables.

*! version 6.0.0  22Nov2015
program define myregress6, eclass
    version 14

    syntax varlist(numeric ts fv) [if] [in]
    marksample touse

    gettoken depvar : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist' if `touse'
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    local k                    = `p' - diag0cnt(`xpxi') - 1
    matrix `b'                 = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'
    quietly summarize `res2' if `touse' , meanonly
    local N                     = r(N)
    local sum                   = r(sum)
    local s2                    = `sum'/(`N'-(`k'))
    matrix `V'                  = `s2'*`xpxi'
    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N            = `N'
    ereturn scalar rank         = `k'
    ereturn local  cmd             "myregress6"
    ereturn display
end


The fv in the parentheses after varlist in the syntax command in line 5 modifies the varlist to accept factor variables. Any specified factor variables are stored in the local macro varlist in a canonical form.

Estimation commands do not allow the dependent variable to be a factor variable. The _fv_check_depvar command in line 9 will exit with an error if the local macro depvar contains a factor variable.

Line 15 stores the number of columns in the matrix formed by matrix accum in the local macro p. Line 19 stores the number of linearly independent columns in the local macro k. This calculation uses diag0cnt() to account for the perfectly collinear variables that were dropped. (Each dropped variable puts a zero on the diagonal of the generalized inverse calculated by syminv() and diag0cnt() returns the number of zeros on the diagonal.)
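
A quick interactive sketch of this bookkeeping (the matrix values are mine, chosen to force collinearity):

```stata
. matrix A = (1, 1 \ 1, 1)     // rank 1: the two columns are perfectly collinear
. matrix Ai = syminv(A)        // generalized inverse zeroes out the dropped column
. display diag0cnt(Ai)         // counts zeros on the diagonal of Ai
```

With one collinear column dropped, diag0cnt() returns 1, which is the count that lines 15 and 19 subtract to recover the number of linearly independent columns.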

On line 29, I specified option buildfvinfo on ereturn post to store hidden information that ereturn display, contrast, margins, and pwcompare use to label tables and to decide which functions of the parameters are estimable.

Line 31 stores the number of linearly independent variables in e(rank) for postestimation commands.
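
Because e(b), e(V), and the factor-variable information are posted, standard Wald-based postestimation commands work after myregress6. For instance, one could test the equality of two repair-record coefficients (a hypothetical follow-up, not shown in the post):

```stata
. myregress6 price mpg trunk i.rep78
. test 3.rep78 = 4.rep78
```

test reads the coefficients and VCE from e() and, because myregress6 does not store e(df_r), reports a $$\chi^2$$ statistic.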

Now, I use myregress6 to include rep78 as a factor-operated variable. The base category is dropped because we included a constant term.

Example 2: myregress6 with a factor-operated variable

. myregress6 price mpg trunk i.rep78
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -262.7053   73.49434    -3.57   0.000    -406.7516   -118.6591
trunk |   41.75706    93.9671     0.44   0.657    -142.4151    225.9292
|
rep78 |
2  |   654.7905   2136.246     0.31   0.759    -3532.175    4841.756
3  |   1170.606   2001.739     0.58   0.559     -2752.73    5093.941
4  |   1473.352   2017.138     0.73   0.465    -2480.167     5426.87
5  |   2896.888   2121.206     1.37   0.172    -1260.599    7054.375
|
_cons |   9726.377   2790.009     3.49   0.000      4258.06    15194.69
------------------------------------------------------------------------------


Done and undone

I modified the OLS command discussed in Programming an estimation command in Stata: A better OLS command to allow for sample restrictions, to handle missing values, to allow for factor variables, and to deal with perfectly collinear variables. In the next post, I show how to allow options for robust standard errors and to suppress the constant term.

Categories: Programming Tags:

## Programming an estimation command in Stata: A better OLS command

I use the syntax command to improve the command that implements the ordinary least-squares (OLS) estimator that I discussed in Programming an estimation command in Stata: A first command for OLS. I show how to require that all variables be numeric variables and how to make the command accept time-series operated variables.

This is the seventh post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Stata syntax and the syntax command

The myregress2 command described in Programming an estimation command in Stata: A first command for OLS has the syntax

myregress2 depvar [indepvars]

This syntax requires that the dependent variable be specified because depvar is not enclosed in square brackets. The independent variables are optional because indepvars is enclosed in square brackets. Type

for an introduction to reading Stata syntax diagrams.

This syntax is implemented by the syntax command in line 5 of myregress2.ado, which I discussed at length in Programming an estimation command in Stata: A first command for OLS. The user must specify a list of variable names because varlist is not enclosed in square brackets. The syntax of the syntax command follows the rules of a syntax diagram.

*! version 2.0.0  26Oct2015
program define myregress2, eclass
    version 14

    syntax varlist
    gettoken depvar : varlist

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist'
    local p : word count `varlist'
    local p = `p' + 1
    matrix `xpx'                = `zpz'[2..`p', 2..`p']
    matrix `xpy'                = `zpz'[2..`p', 1]
    matrix `xpxi'               = syminv(`xpx')
    matrix `b'                  = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b'
    quietly generate double `res'       = (`depvar' - `xbhat')
    quietly generate double `res2'      = (`res')^2
    quietly summarize `res2'
    local N                     = r(N)
    local sum                   = r(sum)
    local s2                    = `sum'/(`N'-(`p'-1))
    matrix `V'                  = `s2'*`xpxi'
    ereturn post `b' `V'
    ereturn local         cmd   "myregress2"
    ereturn display
end


Example 1 illustrates that myregress2 runs the requested regression when I specify a varlist.

Example 1: myregress2 with specified variables

. sysuse auto
(1978 Automobile Data)

. myregress2 price mpg trunk
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   65.59262    -3.36   0.001    -348.7241    -91.6057
trunk |   43.55851   88.71884     0.49   0.623    -130.3272    217.4442
_cons |   10254.95   2349.084     4.37   0.000      5650.83    14859.07
------------------------------------------------------------------------------


Example 2 illustrates that the syntax command displays an error message and stops execution when I do not specify a varlist. I use set trace on to see each line of code and the output it produces.

Example 2: myregress2 without a varlist

. set trace on

. myregress2
--------------------------------------------------------- begin myregress2 --
- version 14
- syntax varlist
varlist required
----------------------------------------------------------- end myregress2 --
r(100);


Example 3 illustrates that the syntax command checks that the specified variables exist in the current dataset. syntax throws an error because DoesNotExist is not a variable in the current dataset.

Example 3: myregress2 with a variable not in this dataset

. set trace on

. myregress2 price mpg trunk DoesNotExist
--------------------------------------------------------- begin myregress2 --
- version 14
- syntax varlist
----------------------------------------------------------- end myregress2 --
r(111);

end of do-file

r(111);


Because the syntax command on line 5 does not restrict the specified variables to be numeric, I get the uninformative no observations error in example 4 instead of an error indicating the actual problem, which is the string variable make.

Example 4: myregress2 with a string variable

. describe make

storage   display    value
variable name   type    format     label      variable label
-------------------------------------------------------------------------------
make            str18   %-18s                 Make and Model

. myregress2 price mpg trunk make
no observations
r(2000);

end of do-file

r(2000);


On line 5 of myregress3, I modify varlist to accept only numeric variables. This change produces a more informative error message when I try to include a string variable in the regression.

*! version 3.0.0  30Oct2015
program define myregress3, eclass
    version 14

    syntax varlist(numeric)
    gettoken depvar : varlist

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist'
    local p : word count `varlist'
    local p = `p' + 1
    matrix `xpx'                = `zpz'[2..`p', 2..`p']
    matrix `xpy'                = `zpz'[2..`p', 1]
    matrix `xpxi'               = syminv(`xpx')
    matrix `b'                  = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b'
    quietly generate double `res'       = (`depvar' - `xbhat')
    quietly generate double `res2'      = (`res')^2
    quietly summarize `res2'
    local N                     = r(N)
    local sum                   = r(sum)
    local s2                    = `sum'/(`N'-(`p'-1))
    matrix `V'                  = `s2'*`xpxi'
    ereturn post `b' `V'
    ereturn local         cmd   "myregress3"
    ereturn display
end


Example 5: myregress3 with a string variable

. set trace on

. myregress3 price mpg trunk make
--------------------------------------------------------- begin myregress3 --
- version 14
- syntax varlist(numeric)
string variables not allowed in varlist;
make is a string variable
----------------------------------------------------------- end myregress3 --
r(109);

end of do-file

r(109);


On line 5 of myregress4, I modify the varlist to accept time-series (ts) variables. The syntax command puts time-series variables in a canonical form that is stored in the local macro varlist, as illustrated in the display on line 6, whose output appears in example 6.

*! version 4.0.0  31Oct2015
program define myregress4, eclass
    version 14

    syntax varlist(numeric ts)
    display "varlist is `varlist'"
    gettoken depvar : varlist

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist'
    local p : word count `varlist'
    local p = `p' + 1
    matrix `xpx'                = `zpz'[2..`p', 2..`p']
    matrix `xpy'                = `zpz'[2..`p', 1]
    matrix `xpxi'               = syminv(`xpx')
    matrix `b'                  = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b'
    quietly generate double `res'       = (`depvar' - `xbhat')
    quietly generate double `res2'      = (`res')^2
    quietly summarize `res2'
    local N                     = r(N)
    local sum                   = r(sum)
    local s2                    = `sum'/(`N'-(`p'-1))
    matrix `V'                  = `s2'*`xpxi'
    ereturn post `b' `V'
    ereturn local         cmd   "myregress4"
    ereturn display
end


Example 6: myregress4 with time-series variables

. sysuse gnp96

. myregress4  L(0/3).gnp
varlist is gnp96 L.gnp96 L2.gnp96 L3.gnp96
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
gnp96 |
L1. |   1.277086   .0860652    14.84   0.000     1.108402    1.445771
L2. |   -.135549   .1407719    -0.96   0.336    -.4114568    .1403588
L3. |  -.1368326   .0871645    -1.57   0.116    -.3076719    .0340067
|
_cons |   -2.94825   14.36785    -0.21   0.837    -31.10871    25.21221
------------------------------------------------------------------------------


Done and undone

I used the syntax command to improve how myregress2 handles the variables specified by the user. I showed how to require that all variables be numeric variables and how to make the command accept time-series operated variables. In the next post, I show how to make the command allow for sample restrictions, how to handle missing values, how to allow for factor-operated variables, and how to deal with perfectly collinear variables.

Categories: Programming Tags:

## Programming an estimation command in Stata: A first command for OLS

$$\newcommand{\betab}{\boldsymbol{\beta}} \newcommand{\xb}{{\bf x}} \newcommand{\yb}{{\bf y}} \newcommand{\Xb}{{\bf X}}$$I show how to write a Stata estimation command that implements the ordinary least-squares (OLS) estimator by explaining the code. I use concepts that I introduced in previous #StataProgramming posts. In particular, I build on Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects, in which I recalled the OLS formulas and showed how to compute them using Stata matrix commands and functions and on
Programming an estimation command in Stata: A first ado command, in which I introduced some ado-programming concepts. I also introduce some local macro tricks that I use all the time, and I build on Programming an estimation command in Stata: Where to store your stuff.

This is the sixth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Local macro tricks

I use lots of local macro tricks in my ado-files. In this section, I illustrate ones that I use in the commands that I develop in this post. In every ado-file that I write, I ask questions about lists of variable names stored in local macros. I frequently use the extended macro functions and the gettoken command to ask these questions and store the results in a local macro.

The syntax for storing the result of an extended macro function in a local macro is

local localname : extended_fcn

Below, I use the extended macro function word count to count the number of elements in the list and store the result in the local macro count.

Example 1: Storing and extracting the result of an extended macro function

. local count : word count a b c

. display "count contains `count'"
count contains 3


There are many extended macro functions, but I illustrate just the one I use in this post; type help extended fcn for a complete list.

A token is an element in a list. I frequently use the gettoken command to split lists apart. The gettoken command has the syntax

gettoken localname1 [localname2] : localname3

gettoken stores the first token in the list stored in the local macro localname3 into the local macro localname1. If the optional localname2 is specified, the remaining tokens are stored in the local macro localname2.

I use gettoken to store the first token stored in mylist into the local macro first, whose contents I subsequently extract and display.

Example 2: Using gettoken to store first token only

. local mylist y x1 x2

. display "mylist contains `mylist'"
mylist contains y x1 x2

. gettoken first : mylist

. display "first contains `first'"
first contains y


Now, I use gettoken to store the first token stored in mylist into the local macro first and the remaining tokens into the local macro left. I subsequently extract and display the contents of first and left.

Example 3: Using gettoken to store first and remaining tokens

. gettoken first left : mylist

. display "first contains `first'"
first contains y

. display "left  contains `left'"
left  contains  x1 x2


I frequently want to increase the value of a local macro by some fixed amount, say, $$3$$. I now illustrate a solution that I use.

Example 4: Local macro update

. local p = 1

. local p = `p' + 3

. display "p is now `p'"
p is now 4


When the update value, also known as the increment value, is $$1$$, we can use the increment operator, as below:

Example 5: Local macro increment

. local p = 1

. local ++p

. display "p is now `p'"
p is now 2


A first version of myregress

The code in myregress1 implements a version of the OLS formulas. I use myregress1 in example 6. Below example 6, I discuss the code and the output.

*! version 1.0.0  23Oct2015
program define myregress1, eclass
    version 14

    syntax varlist
    display "The syntax command puts the variables specified by the "
    display "    user into the local macro varlist"
    display "    varlist contains `varlist'"

    gettoken depvar : varlist
    display "The dependent variable is `depvar'"

    matrix accum zpz = `varlist'
    display "matrix accum forms Z'Z"
    matrix list zpz

    local p : word count `varlist'
    local p = `p' + 1

    matrix xpx                = zpz[2..`p', 2..`p']
    matrix xpy                = zpz[2..`p', 1]
    matrix xpxi               = syminv(xpx)
    matrix b                  = (xpxi*xpy)'

    matrix score double xbhat = b
    generate double res       = (`depvar' - xbhat)
    generate double res2      = res^2
    summarize res2
    local N                   = r(N)
    local sum                 = r(sum)
    local s2                  = `sum'/(`N'-(`p'-1))
    matrix V                  = `s2'*xpxi
    ereturn post b V
    ereturn local         cmd   "myregress1"
    ereturn display
end


Example 6: myregress1 output

. sysuse auto
(1978 Automobile Data)

. myregress1 price mpg trunk
The syntax command puts the variables specified by the
user into the local macro varlist
varlist contains price mpg trunk
The dependent variable is price
(obs=74)
matrix accum forms Z'Z

symmetric zpz[4,4]
price        mpg      trunk      _cons
price  3.448e+09
mpg    9132716      36008
trunk    6565725      20630      15340
_cons     456229       1576       1018         74

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
res2 |         74     6674851    1.30e+07   11.24372   9.43e+07
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   65.59262    -3.36   0.001    -348.7241    -91.6057
trunk |   43.55851   88.71884     0.49   0.623    -130.3272    217.4442
_cons |   10254.95   2349.084     4.37   0.000      5650.83    14859.07
------------------------------------------------------------------------------


Here are my comments on the code and the output in example 6.

• Line 2 specifies that myregress1 is an e-class command that stores its results in e().
• Lines 5–8 illustrate that the syntax command stores the names of the variables specified by the user in the local macro varlist. This behavior is also illustrated in example 6.
• Line 10 uses the gettoken command to store the first variable name stored in the local macro varlist in the local macro depvar. Line 11 displays this name and the usage is illustrated in example 6.
• Line 13 uses matrix accum to put $$(\Xb’\Xb)$$ and $$(\Xb’\yb)$$ into a Stata matrix named zpz, as discussed in Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects and further illustrated in lines 14–15 and example 6.
• Line 17 stores the number of variables in the local macro varlist into the local macro p.
• Line 18 increments the local macro p by $$1$$ to account for the constant term included by matrix accum by default.
• Lines 20–23 extract $$(\Xb’\Xb)$$ and $$(\Xb’\yb)$$ from zpz and put the vector of point estimates $$\widehat{\betab}$$ into the Stata row vector b.
• Line 25 puts $$\Xb\widehat{\betab}$$ into the variable xbhat.
• Lines 26 and 27 calculate the residuals and the squared residuals, respectively.
• Lines 28–32 calculate the estimated variance-covariance matrix of the estimator (VCE) from the sum of squared residuals.
• Line 33 stores b and V into e(b) and e(V), respectively.
• Line 34 stores the name of the estimation command (myregress1) in e(cmd).
• Line 35 produces a standard Stata output table from the results in e(b) and e(V).

myregress1 contains code that illustrates how it works, and it uses hard-coded names for global objects like Stata variables and Stata matrices. Users do not want to see the output from the illustration lines, so they must be removed. Nor do users want their global Stata matrices overwritten by a command they use, which is what myregress1 would do to a matrix named zpz, xpx, xpy, xpxi, b, or V.

The code in myregress2 fixes these problems.

*! version 2.0.0  26Oct2015
program define myregress2, eclass
    version 14

    syntax varlist
    gettoken depvar : varlist

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    quietly matrix accum `zpz' = `varlist'
    local p : word count `varlist'
    local p = `p' + 1
    matrix `xpx'                = `zpz'[2..`p', 2..`p']
    matrix `xpy'                = `zpz'[2..`p', 1]
    matrix `xpxi'               = syminv(`xpx')
    matrix `b'                  = (`xpxi'*`xpy')'
    quietly matrix score double `xbhat' = `b'
    quietly generate double `res'       = (`depvar' - `xbhat')
    quietly generate double `res2'      = (`res')^2
    quietly summarize `res2'
    local N                     = r(N)
    local sum                   = r(sum)
    local s2                    = `sum'/(`N'-(`p'-1))
    matrix `V'                  = `s2'*`xpxi'
    ereturn post `b' `V'
    ereturn local         cmd   "myregress2"
    ereturn display
end

• Line 8 uses tempname to put safe names into the local macros zpz, xpx, xpy, xpxi, b, and V.
• Line 9 uses tempvar to put safe names into the local macros xbhat, res, and res2.
• Lines 11, 14–18, and 25–26 use the safe names in the local macros created by tempname instead of the hard-coded names for the matrices.
• Lines 18–20 use the safe names in the local macros created by tempvar instead of the hard-coded names for the variables it creates.

The output below shows the output produced by myregress2.

Example 7: myregress2 output

. myregress2 price mpg trunk
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   65.59262    -3.36   0.001    -348.7241    -91.6057
trunk |   43.55851   88.71884     0.49   0.623    -130.3272    217.4442
_cons |   10254.95   2349.084     4.37   0.000      5650.83    14859.07
------------------------------------------------------------------------------


Done and undone

After reviewing some tricks with local macros that I use in most of the ado-files that I write, I discussed two versions of an ado-command that implements the (OLS) estimator. In the next post, I extend this command so that the user may request a robust VCE, or that the constant term be suppressed, or both.

Categories: Programming Tags:

## Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects

$$\newcommand{\epsilonb}{\boldsymbol{\epsilon}} \newcommand{\ebi}{\boldsymbol{\epsilon}_i} \newcommand{\Sigmab}{\boldsymbol{\Sigma}} \newcommand{\betab}{\boldsymbol{\beta}} \newcommand{\eb}{{\bf e}} \newcommand{\xb}{{\bf x}} \newcommand{\zb}{{\bf z}} \newcommand{\yb}{{\bf y}} \newcommand{\Xb}{{\bf X}} \newcommand{\Mb}{{\bf M}} \newcommand{\Eb}{{\bf E}} \newcommand{\Xtb}{\tilde{\bf X}} \newcommand{\Vb}{{\bf V}}$$I present the formulas for computing the ordinary least-squares (OLS) estimator, and I discuss some do-file implementations of them. I discuss the formulas and the computation of independence-based standard errors, robust standard errors, and cluster-robust standard errors. I introduce the Stata matrix commands and matrix functions that I use in ado-commands that I discuss in upcoming posts.

This is the fifth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

OLS formulas

Recall that the OLS point estimates are given by

$\widehat{\betab} = \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1} \left( \sum_{i=1}^N \xb_i'y_i \right)$

where $${\bf x}_i$$ is the $$1\times k$$ vector of independent variables, $$y_i$$ is the dependent variable for each of the $$N$$ sample observations, and the model for $$y_i$$ is

$y_i = \xb_i\betab' + \epsilon_i$

If the $$\epsilon_i$$ are independently and identically distributed, we estimate the variance-covariance matrix of the estimator (VCE) by

$\widehat{\Vb} = \widehat{s} \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}$

where $$\widehat{s} = 1/(N-k)\sum_{i=1}^N e_i^2$$ and $$e_i=y_i-{\bf x}_i\widehat{{\boldsymbol \beta}}$$.

See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2015) for introductions to OLS.
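Before turning to the Stata matrix commands, the formulas above can be sketched outside Stata as well. Here is a minimal numpy rendering of the point estimates and the iid VCE (an illustrative sketch only; the data and names here are made up and are not part of the Stata examples below).

```python
import numpy as np

def ols(X, y):
    """OLS point estimates and iid VCE from the formulas above."""
    N, k = X.shape
    xpx = X.T @ X                  # sum_i x_i' x_i
    xpy = X.T @ y                  # sum_i x_i' y_i
    b = np.linalg.solve(xpx, xpy)  # betahat = (X'X)^{-1} X'y
    e = y - X @ b                  # residuals e_i = y_i - x_i*betahat
    s2 = (e @ e) / (N - k)         # s-hat = (1/(N-k)) sum_i e_i^2
    V = s2 * np.linalg.inv(xpx)    # VCE under iid errors
    return b, V

# Toy data; the last column of ones plays the role of _cons
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=(50, 2)), np.ones(50)])
y = X @ np.array([1.0, -2.0, 3.0]) + rng.normal(size=50)
b, V = ols(X, y)
```

The solve-based computation mirrors the matrix inversion used in the Stata examples below.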

Stata matrix implementation

I use the matrix accum command to compute the sum of the products over the observations. Typing

.  matrix accum zpz = z1 z2 z3

puts $$\left( \sum_{i=1}^N {\bf z}_i'{\bf z}_i \right)$$ into the Stata matrix zpz, where $${\bf z}_i=( {\tt z1}_i, {\tt z2}_i, {\tt z3}_i, 1)$$. The $$1$$ appears because matrix accum, like almost all estimation commands, includes the constant term by default.
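In numpy terms, what matrix accum computes is simply Z'Z after a column of ones is appended for the constant (a hypothetical sketch; the variable names are illustrative, not Stata variables):

```python
import numpy as np

# matrix accum zpz = z1 z2 z3 corresponds to Z'Z with a column of
# ones appended for _cons
rng = np.random.default_rng(1)
z = rng.normal(size=(74, 3))           # stand-ins for z1, z2, z3
Z = np.column_stack([z, np.ones(74)])  # append the constant column
zpz = Z.T @ Z                          # 4 x 4, like the Stata matrix zpz
```

The (4,4) element is the number of observations, just as the (_cons, _cons) cell of zpz is 74 in example 1 below.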

Below, I use matrix accum to compute $$\left( \sum_{i=1}^N {\bf z}_i'{\bf z}_i \right)$$, which contains $$\left( \sum_{i=1}^N {\bf x}_i'{\bf x}_i \right)$$ and $$\left( \sum_{i=1}^N {\bf x}_i'y_i \right)$$.

Example 1: Using matrix accum

. sysuse auto
(1978 Automobile Data)

. matrix accum zpz = price mpg trunk
(obs=74)

. matrix list zpz

symmetric zpz[4,4]
price        mpg      trunk      _cons
price  3.448e+09
mpg    9132716      36008
trunk    6565725      20630      15340
_cons     456229       1576       1018         74


Now, I extract $$\left( \sum_{i=1}^N {\bf x}_i'{\bf x}_i \right)$$ from rows 2–4 and columns 2–4 of zpz and $$\left( \sum_{i=1}^N {\bf x}_i'y_i \right)$$ from rows 2–4 and column 1 of zpz.

Example 2: Extracting submatrices

. matrix xpx       = zpz[2..4, 2..4]

. matrix xpy       = zpz[2..4, 1]

. matrix list xpx

symmetric xpx[3,3]
mpg  trunk  _cons
mpg  36008
trunk  20630  15340
_cons   1576   1018     74

. matrix list xpy

xpy[3,1]
price
mpg  9132716
trunk  6565725
_cons   456229


I now compute $$\widehat{{\boldsymbol \beta}}$$ from the matrices formed in example 2.

Example 3: Computing $$\widehat{\betab}$$

. matrix xpxi      = syminv(xpx)

. matrix b         = xpxi*xpy

. matrix list b

b[3,1]
price
mpg  -220.16488
trunk    43.55851
_cons    10254.95

. matrix b         = b'

. matrix list b

b[1,3]
mpg       trunk       _cons
price  -220.16488    43.55851    10254.95


I transposed b to make it a row vector because point estimates in Stata are stored as row vectors.

Example 3 illustrates that the Stata matrix b contains both the estimated coefficients and the names of the variables to which those coefficients belong. To clarify, our model is
$\Eb[{\tt price}|{\tt mpg}, {\tt trunk} ] = {\tt mpg}*\beta_{\tt mpg} + {\tt trunk}*\beta_{\tt trunk} + {\tt \_cons}$

and b contains the information that $$-220.16$$ is the estimated coefficient on mpg, that $$43.56$$ is the estimated coefficient on trunk, and that $$10254.95$$ is the estimated constant. We can compute the linear combination $$\xb_i\widehat{\betab}$$ over the observations using the information in b, because b contains both the value and the name for each coefficient.

I use matrix score to compute this linear combination for each observation, and I use generate to reiterate what this linear combination is.

Example 4: Using matrix score to compute $$\xb_i\widehat{\betab}'$$

. matrix score double xbhat1 = b

. generate     double xbhat2 = mpg*(-220.16488) + trunk*(43.55851) + 10254.95

. list xbhat1 xbhat2 in 1/4

+-----------------------+
|    xbhat1      xbhat2 |
|-----------------------|
1. | 5890.4661   5890.4663 |
2. | 6991.2905   6991.2907 |
3. | 5934.0246   5934.0248 |
4. | 6548.5884   6548.5886 |
+-----------------------+


I use the predictions for $$\Eb[{\tt price}|{\tt mpg}, {\tt trunk} ]$$ in xbhat1 to compute the residuals and the estimated VCE.

Example 5: Computing the estimated VCE

. generate double res       = (price - xbhat1)

. generate double res2      = res^2

. summarize res2

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
res2 |         74     6674851    1.30e+07   11.24372   9.43e+07

. return list

scalars:
r(N) =  74
r(sum_w) =  74
r(mean) =  6674850.504745401
r(Var) =  168983977867533.1
r(sd) =  12999383.74952956
r(min) =  11.24371634723049
r(max) =  94250157.2111593
r(sum) =  493938937.3511598

. local N                   = r(N)

. local sum                 = r(sum)

. local s2                  = `sum'/(`N'-3)

. matrix V                  = (`s2')*xpxi


(See Programing an estimation command in Stata: Where to store your stuff for discussions of using results from r-class commands and using local macros.)

I verify that my computations for $$\widehat{\betab}$$ and the VCE match those of regress.

Example 6: Comparing against regress

. regress price mpg trunk

Source |       SS           df       MS      Number of obs   =        74
-------------+----------------------------------   F(2, 71)        =     10.14
Model |   141126459         2  70563229.4   Prob > F        =    0.0001
Residual |   493938937        71  6956886.44   R-squared       =    0.2222
Total |   635065396        73  8699525.97   Root MSE        =    2637.6

------------------------------------------------------------------------------
price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   65.59262    -3.36   0.001    -350.9529    -89.3769
trunk |   43.55851   88.71884     0.49   0.625    -133.3418    220.4589
_cons |   10254.95   2349.084     4.37   0.000      5571.01    14938.89
------------------------------------------------------------------------------

. matrix list e(b)

e(b)[1,3]
mpg       trunk       _cons
y1  -220.16488    43.55851    10254.95

. matrix list b

b[1,3]
mpg       trunk       _cons
price  -220.16488    43.55851    10254.95

. matrix list e(V)

symmetric e(V)[3,3]
mpg       trunk       _cons
mpg   4302.3924
trunk   3384.4186   7871.0326
_cons  -138187.95  -180358.85   5518194.7

. matrix list V

symmetric V[3,3]
mpg       trunk       _cons
mpg   4302.3924
trunk   3384.4186   7871.0326
_cons  -138187.95  -180358.85   5518194.7


Robust standard errors

The frequently used robust estimator of the VCE is given by

$\widehat{V}_{robust}=\frac{N}{N-k} \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1} \Mb \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}$

where

$\Mb=\sum_{i=1}^N \widehat{e}_i^2\xb_i'\xb_i$

See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2015) for derivations and discussions.
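For intuition, the robust formula can also be sketched in numpy (again an illustrative sketch with made-up data, not the Stata implementation used below):

```python
import numpy as np

def robust_vce(X, y):
    """Robust VCE from the formula above: (N/(N-k)) * A^{-1} M A^{-1},
    with A = X'X and M = sum_i e_i^2 x_i' x_i."""
    N, k = X.shape
    A_inv = np.linalg.inv(X.T @ X)
    e = y - X @ (A_inv @ (X.T @ y))   # OLS residuals
    M = (X * e[:, None]**2).T @ X     # rows of X weighted by e_i^2
    return (N / (N - k)) * A_inv @ M @ A_inv

rng = np.random.default_rng(2)
X = np.column_stack([rng.normal(size=(74, 2)), np.ones(74)])
y = X @ np.array([1.0, 0.5, 2.0]) + rng.normal(size=74)
V_robust = robust_vce(X, y)
```

Weighting the rows of X by $$\widehat{e}_i^2$$ before forming the cross product is exactly what matrix accum with iweights does in example 7 below.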

matrix accum with weights $$\widehat{e}_i^2$$ computes the formula for $$\Mb$$. Below, I use matrix accum to compute $$\Mb$$ and $$\widehat{V}_{robust}$$.

Example 7: A robust VCE

. matrix accum M    = mpg trunk [iweight=res2]
(obs=493938937.4)

. matrix V2         = (`N'/(`N'-3))*xpxi*M*xpxi


I now verify that my computations match those reported by regress.

Example 8: Comparing computations of robust VCE

. regress price mpg trunk, robust

Linear regression                               Number of obs     =         74
F(2, 71)          =      11.59
Prob > F          =     0.0000
R-squared         =     0.2222
Root MSE          =     2637.6

------------------------------------------------------------------------------
|               Robust
price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   72.45388    -3.04   0.003    -364.6338   -75.69595
trunk |   43.55851    71.4537     0.61   0.544    -98.91613    186.0331
_cons |   10254.95   2430.641     4.22   0.000      5408.39    15101.51
------------------------------------------------------------------------------

. matrix list e(V)

symmetric e(V)[3,3]
mpg       trunk       _cons
mpg   5249.5646
trunk   3569.5316   5105.6316
_cons  -169049.76  -147284.49   5908013.8

. matrix list V2

symmetric V2[3,3]
mpg       trunk       _cons
mpg   5249.5646
trunk   3569.5316   5105.6316
_cons  -169049.76  -147284.49   5908013.8


Cluster-robust standard errors

The cluster-robust estimator of the VCE is frequently used when the data have a panel structure, also known as a longitudinal structure. This VCE accounts for the within-group correlation of the errors, and it is given by

$\widehat{V}_{cluster}=\frac{N-1}{N-k}\frac{g}{g-1} \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1} \Mb_c \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}$

where

$\Mb_c=\sum_{j=1}^g \Xb_j' (\widehat{\eb}_j \widehat{\eb}_j') \Xb_j$

$$\Xb_j$$ is the $$n_j\times k$$ matrix of observations on $$\xb_i$$ in group $$j$$, $$\widehat{\eb}_j$$ is the $$n_j\times 1$$ vector of residuals in group $$j$$, and $$g$$ is the number of groups. See Cameron and Trivedi (2005), Wooldridge (2010), and [R] regress for derivations and discussions.
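The group-by-group sum in $$\Mb_c$$ can likewise be sketched in numpy (an illustrative sketch under made-up data; the function and variable names are hypothetical):

```python
import numpy as np

def cluster_vce(X, y, groups):
    """Cluster-robust VCE from the formula above, with
    M_c = sum_j X_j' e_j e_j' X_j over the g groups."""
    N, k = X.shape
    A_inv = np.linalg.inv(X.T @ X)
    e = y - X @ (A_inv @ (X.T @ y))  # OLS residuals
    labels = np.unique(groups)
    g = len(labels)
    M = np.zeros((k, k))
    for j in labels:
        sj = X[groups == j].T @ e[groups == j]  # X_j' e_j, a k-vector
        M += np.outer(sj, sj)                   # X_j' e_j e_j' X_j
    c = ((N - 1) / (N - k)) * (g / (g - 1))     # small-sample factor
    return c * A_inv @ M @ A_inv

rng = np.random.default_rng(3)
X = np.column_stack([rng.normal(size=(74, 2)), np.ones(74)])
y = X @ np.array([1.0, 0.5, 2.0]) + rng.normal(size=74)
groups = rng.integers(0, 6, size=74)  # 6 groups, like cvar below
V_cluster = cluster_vce(X, y, groups)
```

Note that each group contributes the rank-one term $$\Xb_j'\widehat{\eb}_j\widehat{\eb}_j'\Xb_j$$, which is what matrix opaccum accumulates in example 9 below.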

matrix opaccum computes the formula for $$\Mb_c$$. Below, I create the group variable cvar from rep78 and use matrix opaccum to compute $$\Mb_c$$ and $$\widehat{V}_{cluster}$$.

Example 9: A cluster-robust VCE

. generate cvar = cond( missing(rep78), 6, rep78)

. tab cvar

cvar |      Freq.     Percent        Cum.
------------+-----------------------------------
1 |          2        2.70        2.70
2 |          8       10.81       13.51
3 |         30       40.54       54.05
4 |         18       24.32       78.38
5 |         11       14.86       93.24
6 |          5        6.76      100.00
------------+-----------------------------------
Total |         74      100.00

. local Nc = r(r)

. sort cvar

. matrix opaccum M2     = mpg trunk , group(cvar) opvar(res)

. matrix V2          = ((`N'-1)/(`N'-3))*(`Nc'/(`Nc'-1))*xpxi*M2*xpxi


I now verify that my computations match those reported by regress.

Example 10: Comparing computations of cluster-robust VCE

. regress price mpg trunk, vce(cluster cvar)

Linear regression                               Number of obs     =         74
F(2, 5)           =       9.54
Prob > F          =     0.0196
R-squared         =     0.2222
Root MSE          =     2637.6

(Std. Err. adjusted for 6 clusters in cvar)
------------------------------------------------------------------------------
|               Robust
price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg |  -220.1649   93.28127    -2.36   0.065     -459.952    19.62226
trunk |   43.55851   58.89644     0.74   0.493    -107.8396    194.9566
_cons |   10254.95   2448.547     4.19   0.009     3960.758    16549.14
------------------------------------------------------------------------------

. matrix list e(V)

symmetric e(V)[3,3]
mpg       trunk       _cons
mpg   8701.3957
trunk   4053.5381   3468.7911
_cons     -223021  -124190.97   5995384.3

. matrix list V2

symmetric V2[3,3]
mpg       trunk       _cons
mpg   8701.3957
trunk   4053.5381   3468.7911
_cons     -223021  -124190.97   5995384.3

Done and undone

I reviewed the formulas that underlie the OLS estimator and showed how to compute them using Stata matrix commands and functions. In the next two posts, I write an ado-command that implements these formulas.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Stock, J. H., and M. W. Watson. 2010. Introduction to Econometrics. 3rd ed. Boston, MA: Addison-Wesley.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Wooldridge, J. M. 2015. Introductory Econometrics: A Modern Approach. 6th ed. Cincinnati, Ohio: South-Western.

Categories: Programming Tags: