We often use probit and logit models to analyze binary outcomes. A case can be made that the logit model is easier to interpret than the probit model, but Stata’s margins command makes any estimator easy to interpret. Ultimately, estimates from both models produce similar results, and using one or the other is a matter of habit or preference.

I show that the estimates from a probit and logit model are similar for the computation of a set of effects that are of interest to researchers. I focus on the effects of changes in the covariates on the probability of a positive outcome for continuous and discrete covariates. I evaluate these effects on average and at the mean value of the covariates. In other words, I study the average marginal effects (AME), the average treatment effects (ATE), the marginal effects at the mean values of the covariates (MEM), and the treatment effects at the mean values of the covariates (TEM).
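All four of these effects can be obtained from margins after fitting either model; here is a minimal sketch (the variable names are hypothetical), where y is the binary outcome, x is a continuous covariate, and d is a binary treatment:

```
. probit y x i.d
. margins, dydx(x)             // AME of x
. margins, dydx(x) atmeans     // MEM of x
. margins, dydx(i.d)           // ATE of d
. margins, dydx(i.d) atmeans   // TEM of d
```

Replacing probit with logit in the first line leaves the margins commands unchanged, which is the sense in which margins makes either estimator easy to interpret.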

First, I present the results. Second, I discuss the code used for the simulations.

Results

In Table 1, I present the results of a simulation with 4,000 replications when the true data generating process (DGP) satisfies the assumptions of a probit model. I show the Read more…

Categories: Statistics Tags:

## Programming an estimation command in Stata: Computing OLS objects in Mata


This is the fourteenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

OLS formulas

Recall that the OLS point estimates are given by

$\widehat{\betab} = \left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1} \left( \sum_{i=1}^N \xb_i'y_i \right)$

where $$\xb_i$$ is the $$1\times k$$ vector of independent variables, $$y_i$$ is the dependent variable for each of the $$N$$ sample observations, and the model for $$y_i$$ is

$y_i = \xb_i\betab + \epsilon_i$

If the $$\epsilon_i$$ are independently and identically distributed (IID), we estimate Read more…
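As a preview of where the post goes, the point-estimate formula translates almost line for line into Mata; a sketch, assuming the covariate matrix X (including a constant column) and the column vector y are already in Mata:

```
mata:
XpX = quadcross(X, X)      // sum of x_i'x_i, accumulated in quad precision
Xpy = quadcross(X, y)      // sum of x_i'y_i
b   = invsym(XpX)*Xpy      // OLS point estimates
end
```

The post itself builds these objects, and the corresponding VCE, step by step.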

Categories: Programming Tags:

## Programming an estimation command in Stata: A first ado-command using Mata

I discuss a sequence of ado-commands that use Mata to estimate the mean of a variable. The commands illustrate a general structure for Stata/Mata programs. This post builds on Programming an estimation command in Stata: Mata 101, Programming an estimation command in Stata: Mata functions, and Programming an estimation command in Stata: A first ado-command.

This is the thirteenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

I begin by reviewing the structure in mymean5.ado, which I discussed Read more…

Categories: Programming Tags:

## Programming an estimation command in Stata: Mata functions

I show how to write a function in Mata, the matrix programming language that is part of Stata. This post uses concepts introduced in Programming an estimation command in Stata: Mata 101.

This is the twelfth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Mata functions

Commands do work in Stata. Functions do work in Mata. Commands operate on Stata objects, like variables, and users specify options to alter the behavior. Mata functions accept arguments, operate on the arguments, and may return a result or alter the value of an argument to contain a result.

mata:
function myadd(X, Y)
{
A = X + Y
return(A)
}
end


myadd() accepts two arguments, X and Y, puts the sum of X and Y into A, and returns A. For example, Read more…

Categories: Programming Tags:

## A tour of datetime in Stata

Converting a string date

Stata has a wide array of tools to work with dates. You can have dates in years, months, or even milliseconds. In this post, I will provide a brief tour of working with dates that will help you get started using all of Stata’s tools.

When you load a dataset, you will notice that every variable has a display format. For date variables, the display format is %td for daily dates, %tm for monthly dates, etc. Let’s load the wpi1 dataset as Read more…
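To preview the string-to-date conversion in the section title, here is a minimal sketch, using a hypothetical string variable sdate containing values such as "15jan2015":

```
. generate admit = date(sdate, "DMY")   // convert the string to a daily date
. format admit %td                      // display it as a readable date
```

The date() function returns the number of days since 01jan1960; the %td display format makes that integer readable.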

Categories: Data Management Tags:

## What’s new from Stata Press

Reflecting on the year, Stata has a lot to be thankful for—we released Stata 14, celebrated 30 years of Stata, and had the pleasure of meeting and working with many great people, including our Stata Press authors.

Are you interested in writing a book about Stata or just a book on statistics? We’d love to work with you too. Stata Press offers books with clear, step-by-step examples that make learning and teaching easier. Read more about our submission guidelines, or contact us to get started.

If you’re searching for a good book to read during the holidays, check out our full list of books or our most recent ones below. If you’d like to be notified when new books are released, sign up for Stata Press email alerts.

I hope you all have a great New Year!

### Stata for the Behavioral Sciences

Michael N. Mitchell’s Stata for the Behavioral Sciences is an ideal reference for Read more…

Categories: Tags:

## Programming an estimation command in Stata: Mata 101

I introduce Mata, the matrix programming language that is part of Stata.

This is the eleventh post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Meeting Mata

Mata is a matrix programming language that is part of Stata. Mata code is fast because it is compiled to object code that runs on a virtual machine; type help m1_how for details.

The easiest way to learn Mata is to use it. I begin with an interactive session. (You might find it useful to type along.)

Example 1: A first interactive Mata session

. mata:
------------------------------------------------- mata (type end to exit) ------
: X = J(3, 4, 5)

: X
1   2   3   4
+-----------------+
1 |  5   5   5   5  |
2 |  5   5   5   5  |
3 |  5   5   5   5  |
+-----------------+

: w = (1::4)

: w
1
+-----+
1 |  1  |
2 |  2  |
3 |  3  |
4 |  4  |
+-----+

: v = X*w

: v
1
+------+
1 |  50  |
2 |  50  |
3 |  50  |
+------+

: v'
1    2    3
+----------------+
1 |  50   50   50  |
+----------------+

: end
--------------------------------------------------------------------------------


Typing mata: causes Stata to drop down to a Mata session. Typing end ends the Mata session, thereby popping back up to Stata. The dot prompt . is Stata asking for something to do. After you type mata:, the colon prompt : is the Mata compiler asking for something to do.

Typing X = J(3, 4, 5) at the colon prompt causes Mata to compile and execute this code. J(r, c, v) is the Mata function that creates an r$$\times$$c matrix, each of whose elements is v. The expression on the right-hand side of the assignment operator = is assigned to the symbol on the left-hand side.

Typing X by itself causes Mata to display what X contains, which is a 3$$\times$$4 matrix of 5s. Unassigned expressions display their results. Type help m2_exp for details about expressions.

Typing w = (1::4) causes Mata to use the column range operator to create the 4$$\times$$1 column vector that was assigned to w and displayed when I typed w by itself. Type help m2_op_range for details and a discussion of the row range operator.

Typing v = X*w causes Mata to assign the matrix product of X times w to v, which I subsequently display. I then illustrate that ' is the transpose operator. Type help m2_exp, marker(remarks7) for a list of operators.

Again, typing end ends the Mata session.

In almost all the work I do, I extract submatrices from a matrix.

Example 2: Extracting submatrices from a matrix

. mata:
------------------------------------------------- mata (type end to exit) ------
: rseed(1234)

: W = runiform(4,4)

: W
1             2             3             4
+---------------------------------------------------------+
1 |  .9472316166   .0522233748   .9743182755   .9457483679  |
2 |  .1856478315   .9487333737   .8825376215   .9440776079  |
3 |  .0894258515   .7505444902   .9484983174   .1121626508  |
4 |  .4809064012   .9763447517   .1254975307   .7655025515  |
+---------------------------------------------------------+

: v = (2, 4)

: u = (1\ 3)

: v
1   2
+---------+
1 |  2   4  |
+---------+

: u
1
+-----+
1 |  1  |
2 |  3  |
+-----+

: W[u, v]
1             2
+-----------------------------+
1 |  .0522233748   .9457483679  |
2 |  .7505444902   .1121626508  |
+-----------------------------+

: W[| 1,1 \ 3,3 |]
1             2             3
+-------------------------------------------+
1 |  .9472316166   .0522233748   .9743182755  |
2 |  .1856478315   .9487333737   .8825376215  |
3 |  .0894258515   .7505444902   .9484983174  |
+-------------------------------------------+

: end
--------------------------------------------------------------------------------


I use rseed() to set the seed for the random-number generator and then use runiform(r,c) to create a 4$$\times$$4 matrix of uniform deviates, which I subsequently display.

Next, I use the row-join operator , to create the row vector v and I use the column-join operator \ to create the column vector u. Type help m2_op_join for details.

Typing W[u,v] extracts from W the rows specified in the vector u and the columns specified in the vector v.

I frequently extract rectangular blocks defined by a top-left element and a bottom-right element. I illustrate this syntax by typing

W[| 1,1 \ 3,3 |]

In detail, [| opens a range-subscript extraction, 1,1 is the address of the top-left element, \ separates the top-left element from the bottom-right element, 3,3 is the address of the bottom-right element, and |] closes a range-subscript extraction. Type help m2_subscripts for details.

Often, when I am doing matrix programming, I want the element-by-element operator instead of the matrix operator. Preface any matrix operator in Mata with a colon (:) to obtain the element-by-element equivalent.

Example 3: Element-wise operators

. mata:
------------------------------------------------- mata (type end to exit) ------
: W = W[| 2,1 \ 4,4 |]

: W
1             2             3             4
+---------------------------------------------------------+
1 |  .1856478315   .9487333737   .8825376215   .9440776079  |
2 |  .0894258515   .7505444902   .9484983174   .1121626508  |
3 |  .4809064012   .9763447517   .1254975307   .7655025515  |
+---------------------------------------------------------+

: v = .1*(4::6)

: v
1
+------+
1 |  .4  |
2 |  .5  |
3 |  .6  |
+------+

: v:*W
1             2             3             4
+---------------------------------------------------------+
1 |  .0742591326   .3794933495   .3530150486   .3776310432  |
2 |  .0447129257   .3752722451   .4742491587   .0560813254  |
3 |  .2885438407    .585806851   .0752985184   .4593015309  |
+---------------------------------------------------------+

: v'*W
1             2             3             4
+---------------------------------------------------------+
1 |  .4075158991   1.340572446   .9025627257   .8930138994  |
+---------------------------------------------------------+

: end
--------------------------------------------------------------------------------


I extract the bottom three rows of W, store this matrix in W, and display this new W. I then create a conformable column vector v, perform element-wise multiplication of v across the columns of W, and display the result. I cannot type v*W because the 3$$\times$$1 v is not conformable with the 3$$\times$$4 W. But I can, and do, type v'*W because the 1$$\times$$3 v' is conformable with the 3$$\times$$4 W.

Example 4 uses an element-wise logical operator.

Example 4: Element-wise logical operator

. mata:
------------------------------------------------- mata (type end to exit) ------
: W :< v
1   2   3   4
+-----------------+
1 |  1   0   0   0  |
2 |  1   0   0   1  |
3 |  1   0   1   0  |
+-----------------+

: end
--------------------------------------------------------------------------------


I display the result of comparing the element-wise conformable v with W. Type help m2_op_colon for details.

Stata data in Mata

The Mata function st_data() creates a Mata matrix containing a copy of the data from the Stata dataset in memory. The Mata function st_view() creates a Mata view of the data in the Stata dataset in memory. Views act like matrices, but there is a speed-space tradeoff. Copies are fast at the cost of using twice as much memory. Views are slower, but they use little extra memory.

Copying the data from Stata into Mata doubles the memory used, but the values are stored in Mata memory. Every time a Mata function asks for a value from a matrix, it finds it immediately. In contrast, a view of the data in Stata barely increases the memory used, but the values are in Stata memory. Every time a Mata function asks for a value from a view, it finds a sign telling it where in Stata to get the value.

Example 5: Data from Stata into Mata

. sysuse auto
(1978 Automobile Data)

. list mpg headroom trunk rep78 turn foreign in 1/3 , nolabel

+-------------------------------------------------+
| mpg   headroom   trunk   rep78   turn   foreign |
|-------------------------------------------------|
1. |  22        2.5      11       3     40         0 |
2. |  17        3.0      11       3     40         0 |
3. |  22        3.0      12       .     35         0 |
+-------------------------------------------------+

. mata:
------------------------------------------------- mata (type end to exit) ------
: Y = st_data(., "mpg headroom trunk")

: st_view(X=., ., "rep78 turn foreign")

: V = Y,X

: V[| 1,1 \ 3,6 |]
1     2     3     4     5     6
+-------------------------------------+
1 |   22   2.5    11     3    40     0  |
2 |   17     3    11     3    40     0  |
3 |   22     3    12     .    35     0  |
+-------------------------------------+

: X[3,1] = 7

: X[| 1,1 \ 3,3 |]
1    2    3
+----------------+
1 |   3   40    0  |
2 |   3   40    0  |
3 |   7   35    0  |
+----------------+

: end
--------------------------------------------------------------------------------

. list rep78 turn foreign in 1/3 , nolabel

+------------------------+
| rep78   turn   foreign |
|------------------------|
1. |     3     40         0 |
2. |     3     40         0 |
3. |     7     35         0 |
+------------------------+


After I list out the first three observations on six variables in the auto dataset, I drop down to Mata, use st_data() to put a copy of all the observations on mpg, headroom, and trunk into the Mata matrix Y, and use st_view() to create the Mata view X on to all the observations on rep78, turn, and foreign.

After row-joining Y and X to create V, I display the first 3 rows of V. Note that the third observation on rep78 is missing and that Mata matrices and views can contain missing values.

Changing the value of an element in a view changes the data in Stata. I illustrate this point by replacing the (3,1) element of the view X with 7, displaying the first three rows of the view, and listing out the first three observations on rep78, turn, and foreign.

Copying matrices between Mata and Stata

The Mata function st_matrix() puts a copy of a Stata matrix into a Mata matrix, or it puts a copy of a Mata matrix into a Stata matrix. In example 6, V = st_matrix("B") puts a copy of the Stata matrix B into the Mata matrix V.

Example 6: Creating a copy of a Stata matrix in a Mata matrix

. matrix B = (1, 2\ 3, 4)

. matrix list B

B[2,2]
c1  c2
r1   1   2
r2   3   4

. mata:
------------------------------------------------- mata (type end to exit) ------
: V = st_matrix("B")

: V
1   2
+---------+
1 |  1   2  |
2 |  3   4  |
+---------+

: end
--------------------------------------------------------------------------------


In example 7, st_matrix("Z", W) puts a copy of the Mata matrix W into the Stata matrix Z.

Example 7: Creating a copy of a Mata matrix in a Stata matrix

. mata:
------------------------------------------------- mata (type end to exit) ------
: W = (4..6\7..9)

: W
1   2   3
+-------------+
1 |  4   5   6  |
2 |  7   8   9  |
+-------------+

: st_matrix("Z", W)

: end
--------------------------------------------------------------------------------

. matrix list Z

Z[2,3]
c1  c2  c3
r1   4   5   6
r2   7   8   9


Strings

Mata matrices can be string matrices.

In my work, I frequently have a list of variables in a string scalar that is easier to work with as a string vector.

Example 8: Turning a string scalar list into a string vector

. mata:
------------------------------------------------- mata (type end to exit) ------
: s1 = "price mpg trunk"

: s1
price mpg trunk

: s2 = tokens(s1)

: s2
1       2       3
+-------------------------+
1 |  price     mpg   trunk  |
+-------------------------+

: end
--------------------------------------------------------------------------------


I use tokens() to create the string vector s2 from the string scalar s1.

Flow of control

Mata has constructs for looping over a block of code enclosed between curly braces or only executing it if an expression is true.

I frequently use the for() construction to loop over a block of code.

Code block 1: for()

mata:
for(i=1; i<=3; i=i+1) {
i
}
end


In this example, I set i to the initial value of 1. The loop will continue as long as i is less than or equal to 3. Each time through the loop, the block of code enclosed between the curly braces is executed, and 1 is added to the current value of i. The code block displays the value of i. Example 9 illustrates these points.

Example 9: A for loop

. mata:
------------------------------------------------- mata (type end to exit) ------
: for(i=1; i<=3; i=i+1) {
>         i
> }
1
2
3

: end
--------------------------------------------------------------------------------


Sometimes, I want to execute a block of code as long as a condition is true, in which case I use a while loop, as in code block 2 and example 10.

Code block 2: while()

i = 7
while (i>5) {
i
i = i - 1
}


I set i to 7 and repeat the block of code between the curly braces while i is greater than 5. The block of code displays the current value of i, then subtracts 1 from i.

Example 10: A while loop

. mata:
------------------------------------------------- mata (type end to exit) ------
: i = 7

: while (i>5) {
>     i
>     i = i - 1
> }
7
6

: end
--------------------------------------------------------------------------------


The if construct only executes a code block if an expression is true. I usually use the if-else construct that executes one code block if an expression is true and another code block if the expression is false.

Example 11: An if-else construct

. mata:
-------------------------------------------- mata (type end to exit) ---
: for(i=2; i<10; i=i+5) {
>         i
>         if (i<3) {
>                 "i is less than 3"
>         }
>         else {
>                 "i is not less than 3"
>         }
> }
2
i is less than 3
7
i is not less than 3

: end
-------------------------------------------------------------------------


One-line calls to Mata

I frequently make one-line calls to Mata from Stata. A one-line call to Mata causes Stata to drop to Mata, compile and execute the line of Mata code, and pop back up to Stata.

Example 12: One-line calls to Mata

. mata: st_matrix("Q", I(3))

. matrix list Q

symmetric Q[3,3]
c1  c2  c3
r1   1
r2   0   1
r3   0   0   1


In example 12, I use the one-line call to Mata mata: st_matrix("Q", I(3)) to put a copy of the matrix returned by the Mata expression I(3) into the Stata matrix Q. After the one-line call to Mata, I am back in Stata, so I use matrix list Q to show that the Stata matrix Q contains a copy of the 3$$\times$$3 identity matrix.

Done and undone

I used an interactive session to introduce Mata, the matrix programming language that is part of Stata.

In the next post, I show how to define Mata functions.

Categories: Programming Tags:

## Using mlexp to estimate endogenous treatment effects in a heteroskedastic probit model

I use features new to Stata 14.1 to estimate an average treatment effect (ATE) for a heteroskedastic probit model with an endogenous treatment. In 14.1, we added new prediction statistics after mlexp that margins can use to estimate an ATE.

I am building on a previous post, Using mlexp to estimate endogenous treatment effects in a probit model, in which I demonstrated how to use mlexp to estimate the parameters of a probit model with an endogenous treatment and used margins to estimate the ATE for that model. Currently, no official command estimates the heteroskedastic probit model with an endogenous treatment, so in this post I show how mlexp can be used to extend the models estimated by Stata.

Heteroskedastic probit model

For binary outcome $$y_i$$ and regressors $${\bf x}_i$$, the probit model assumes

$y_i = {\bf 1}({\bf x}_i{\boldsymbol \beta} + \epsilon_i > 0)$

The indicator function $${\bf 1}(\cdot)$$ outputs 1 when its input is true and outputs 0 otherwise. The error $$\epsilon_i$$ is standard normal.

Assuming that the error has constant variance may not always be wise. Suppose we are studying a certain business decision. Large firms, because they have the resources to take chances, may exhibit more variation in the factors that affect their decision than small firms.

In the heteroskedastic probit model, regressors $${\bf w}_i$$ determine the variance of $$\epsilon_i$$. Following Harvey (1976), we have

$\mbox{Var}\left(\epsilon_i\right) = \left\{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)\right\}^2 \nonumber$

Heteroskedastic probit model with treatment

In this section, I review the potential-outcome framework used to define an ATE and extend it for the heteroskedastic probit model. For each treatment level, there is an outcome that we would observe if a person were to select that treatment level. When the outcome is binary and there are two treatment levels, we can specify how the potential outcomes $$y_{0i}$$ and $$y_{1i}$$ are generated from the regressors $${\bf x}_i$$ and the error terms $$\epsilon_{0i}$$ and $$\epsilon_{1i}$$:

$\begin{eqnarray*} y_{0i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_0 + \epsilon_{0i} > 0) \cr y_{1i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_1 + \epsilon_{1i} > 0) \end{eqnarray*}$

We assume a heteroskedastic probit model for the potential outcomes. The errors are normal with mean $$0$$ and conditional variance generated by regressors $${\bf w}_i$$. In this post, we assume equal variance of the potential outcome errors.

$\mbox{Var}\left(\epsilon_{0i}\right) = \mbox{Var}\left(\epsilon_{1i}\right) = \left\{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)\right\}^2 \nonumber$

The heteroskedastic probit model for potential outcomes $$y_{0i}$$ and $$y_{1i}$$ with treatment $$t_i$$ assumes that we observe the outcome

$y_i = (1-t_i) y_{0i} + t_i y_{1i} \nonumber$

So we observe $$y_{1i}$$ under the treatment ($$t_{i}=1$$) and $$y_{0i}$$ when the treatment is withheld ($$t_{i}=0$$).

The treatment $$t_i$$ is determined by regressors $${\bf z}_i$$ and error $$u_i$$:

$t_i = {\bf 1}({\bf z}_i{\boldsymbol \psi} + u_i > 0) \nonumber$

The treatment error $$u_i$$ is normal with mean zero, and we allow its variance to be determined by another set of regressors $${\bf v}_i$$:

$\mbox{Var}\left(u_i\right) = \left\{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)\right\}^2 \nonumber$

Heteroskedastic probit model with endogenous treatment

In the previous post, I described how to model endogeneity for the treatment $$t_i$$ by correlating the outcome errors $$\epsilon_{0i}$$ and $$\epsilon_{1i}$$ with the treatment error $$u_i$$. We use the same framework for modeling endogeneity here. The variance of the errors may change depending on the heteroskedasticity regressors $${\bf w}_i$$ and $${\bf v}_i$$, but their correlation remains constant. The errors $$\epsilon_{0i}$$, $$\epsilon_{1i}$$, and $$u_i$$ are trivariate normal with correlation

$\left[\begin{matrix} 1 & \rho_{01} & \rho_{t} \cr \rho_{01} & 1 & \rho_{t} \cr \rho_{t} & \rho_{t} & 1 \end{matrix}\right] \nonumber$

Now we have all the pieces we need to write the log likelihood of the heteroskedastic probit model with an endogenous treatment. The form of the likelihood is similar to what was given in the previous post. Now the inputs to the bivariate normal cumulative distribution function, $$\Phi_2$$, are standardized by dividing by the conditional standard deviations of the errors.

The log likelihood for observation $$i$$ is

$\begin{eqnarray*} \ln L_i = & & {\bf 1}(y_i =1 \mbox{ and } t_i = 1) \ln \Phi_2\left\{\frac{{\bf x}_i{\boldsymbol \beta}_1}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},\rho_t\right\} + \cr & & {\bf 1}(y_i=0 \mbox{ and } t_i=1)\ln \Phi_2\left\{\frac{-{\bf x}_i{\boldsymbol \beta}_1}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},-\rho_t\right\} + \cr & & {\bf 1}(y_i=1 \mbox{ and } t_i=0) \ln \Phi_2\left\{\frac{{\bf x}_i{\boldsymbol \beta}_0}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{-{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},-\rho_t\right\} + \cr & & {\bf 1}(y_i=0 \mbox{ and } t_i = 0)\ln \Phi_2\left\{\frac{-{\bf x}_i{\boldsymbol \beta}_0}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{-{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},\rho_t\right\} \end{eqnarray*}$

The data

We will simulate data from a heteroskedastic probit model with an endogenous treatment and then estimate the parameters of the model with mlexp. Then, we will use margins to estimate the ATE.

. set seed 323

. set obs 10000
number of observations (_N) was 0, now 10,000

. generate x = .8*rnormal() + 4

. generate b = rpoisson(1)

. generate z = rnormal()

. matrix cm = (1, .3,.7 \ .3, 1, .7 \ .7, .7, 1)

. drawnorm ey0 ey1 et, corr(cm)


We simulate a random sample of 10,000 observations. The treatment and outcome regressors are generated in a similar manner to their creation in the last post. As in the last post, we generate the errors with drawnorm to have correlation $$0.7$$.

. generate g = runiform()

. generate h = rnormal()

. quietly replace ey0 = ey0*exp(.5*g)

. quietly replace ey1 = ey1*exp(.5*g)

. quietly replace et = et*exp(.1*h)

. generate t = .5*x - .1*b + .5*z - 2.4 + et > 0

. generate y0 = .6*x - .8 + ey0 > 0

. generate y1 = .3*x - 1.3 + ey1 > 0

. generate y = (1-t)*y0 + t*y1


The uniform variable g is generated as a regressor for the outcome error variance, while h is a regressor for the treatment error variance. We scale the errors by using the variance regressors so that they are heteroskedastic, and then we generate the treatment and outcome indicators.

Estimating the model parameters

Now, we will use mlexp to estimate the parameters of the heteroskedastic probit model with an endogenous treatment. As in the previous post, we use the cond() function to calculate different values of the likelihood based on the different values of $$y$$ and $$t$$. We use the factor-variable operator ibn on $$t$$ in equation y to allow for a different intercept at each level of $$t$$. An interaction between $$t$$ and $$x$$ is also specified in equation y. This allows for a different coefficient on $$x$$ at each level of $$t$$.

. mlexp (ln(cond(t, ///
>         cond(y,binormal({y: i.t#c.x ibn.t}/exp({g:g}), ///
>             {t: x b z _cons}/exp({h:h}),{rho}), ///
>                 binormal(-{y:}/exp({g:}),{t:}/exp({h:}),-{rho})), ///
>         cond(y,binormal({y:}/exp({g:}),-{t:}/exp({h:}),-{rho}), ///
>                 binormal(-{y:}/exp({g:}),-{t:}/exp({h:}),{rho}) ///
>         )))), vce(robust)

initial:       log pseudolikelihood = -13862.944
alternative:   log pseudolikelihood = -16501.619
rescale:       log pseudolikelihood = -13858.877
rescale eq:    log pseudolikelihood = -11224.877
Iteration 0:   log pseudolikelihood = -11224.877  (not concave)
Iteration 1:   log pseudolikelihood = -10644.625
Iteration 2:   log pseudolikelihood = -10074.998
Iteration 3:   log pseudolikelihood = -9976.6027
Iteration 4:   log pseudolikelihood = -9973.0988
Iteration 5:   log pseudolikelihood = -9973.0913
Iteration 6:   log pseudolikelihood = -9973.0913

Maximum likelihood estimation

Log pseudolikelihood = -9973.0913               Number of obs     =     10,000

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
y            |
t#c.x |
0  |   .6178115   .0334521    18.47   0.000     .5522467    .6833764
1  |   .2732094   .0365742     7.47   0.000     .2015253    .3448936
|
t |
0  |  -.8403294   .1130197    -7.44   0.000    -1.061844   -.6188149
1  |  -1.215177   .1837483    -6.61   0.000    -1.575317   -.8550371
-------------+----------------------------------------------------------------
g            |
g |   .4993187   .0513297     9.73   0.000     .3987143    .5999232
-------------+----------------------------------------------------------------
t            |
x |   .4985802   .0183033    27.24   0.000     .4627065    .5344539
b |  -.1140255   .0132988    -8.57   0.000    -.1400908   -.0879603
z |   .4993995   .0150844    33.11   0.000     .4698347    .5289643
_cons |  -2.402772   .0780275   -30.79   0.000    -2.555703   -2.249841
-------------+----------------------------------------------------------------
h            |
h |   .1011185   .0199762     5.06   0.000     .0619658    .1402713
-------------+----------------------------------------------------------------
/rho |   .7036964   .0326734    21.54   0.000     .6396577    .7677351
------------------------------------------------------------------------------


Our parameter estimates are close to their true values.

Estimating the ATE

The ATE of $$t$$ is the expected value of the difference between $$y_{1i}$$ and $$y_{0i}$$, the average difference between the potential outcomes. Using the law of iterated expectations, we have

$\begin{eqnarray*} E(y_{1i}-y_{0i})&=& E\left\{ E\left(y_{1i}-y_{0i}|{\bf x}_i,{\bf w}_i\right)\right\} \cr &=& E\left\lbrack\Phi\left\{\frac{{\bf x}_i{\boldsymbol \beta}_1}{ \exp\left({\bf w}_i{\boldsymbol \gamma}\right)}\right\}- \Phi\left\{\frac{{\bf x}_i{\boldsymbol \beta}_0}{ \exp\left({\bf w}_i{\boldsymbol \gamma}\right)}\right\}\right\rbrack \cr \end{eqnarray*}$

This can be estimated as a mean of predictions.

Now, we estimate the ATE by using margins. We specify the normal probability expression in the expression() option. We use the expression function xb() to get the linear predictions for the outcome equation and the outcome error variance equation. We can now predict these linear forms after mlexp in Stata 14.1. We specify r.t so that margins will take the difference of the expression under t=1 and t=0. We specify vce(unconditional) to obtain standard errors for the population ATE rather than the sample ATE; we specified vce(robust) for mlexp so that we could specify vce(unconditional) for margins. The contrast(nowald) option is specified to omit the Wald test for the difference.

. margins r.t, expression(normal(xb(y)/exp(xb(g)))) ///
>     vce(unconditional) contrast(nowald)

Contrasts of predictive margins

Expression   : normal(xb(y)/exp(xb(g)))

--------------------------------------------------------------
             |            Unconditional
             |   Contrast   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           t |
   (1 vs 0)  |  -.4183043   .0202635     -.4580202   -.3785885
--------------------------------------------------------------


We estimate that the ATE of $$t$$ on $$y$$ is $$-0.42$$. So taking the treatment decreases the probability of a positive outcome by $$0.42$$ on average over the population.

We will compare this estimate to the average difference of $$y_{1}$$ and $$y_{0}$$ in the sample. We can do this because we simulated the data. In practice, only one potential outcome is observed for every observation, and this average difference cannot be computed.

. generate diff = y1 - y0

. sum diff

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
        diff |     10,000      -.4164    .5506736         -1          1


In our sample, the average difference of $$y_{1}$$ and $$y_{0}$$ is also $$-0.42$$.

Conclusion

I have demonstrated how to use mlexp to estimate the parameters of a model that is not available in Stata: the heteroskedastic probit model with an endogenous treatment. See [R] mlexp for more details about mlexp. I have also demonstrated how to use margins to estimate the ATE for the heteroskedastic probit model with an endogenous treatment. See [R] margins for more details about margins.

Reference

Harvey, A. C. 1976. Estimating regression models with multiplicative heteroscedasticity. Econometrica 44: 461-465.

Categories: Statistics Tags:

## Programming an estimation command in Stata: Using a subroutine to parse a complex option

I make two improvements to the command that implements the ordinary least-squares (OLS) estimator that I discussed in Programming an estimation command in Stata: Allowing for options. First, I add an option for a cluster-robust estimator of the variance-covariance of the estimator (VCE). Second, I make the command accept the modern syntax for either a robust or a cluster-robust estimator of the VCE. In the process, I use subroutines in my ado-program to facilitate the parsing, and I discuss some advanced parsing tricks.

This is the tenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Allowing for a robust or a cluster-robust VCE

The syntax of myregress9, which I discussed in Programming an estimation command in Stata: Allowing for options, is

myregress9 depvar [indepvars] [if] [in] [, robust noconstant]

The syntax of myregress10, which I discuss here, is

myregress10 depvar [indepvars] [if] [in] [, vce(robust | cluster clustervar) noconstant]

By default, myregress10 estimates the VCE assuming that the errors are independently and identically distributed (IID). If the option vce(robust) is specified, myregress10 uses the robust estimator of the VCE. If the option vce(cluster clustervar) is specified, myregress10 uses the cluster-robust estimator of the VCE. See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2010, 2015) for introductions to OLS; see Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects for the formulas and Stata matrix implementations.
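For readers who want the three VCE formulas in a language-neutral form, here is a sketch in Python/NumPy of the IID, robust, and cluster-robust estimators that myregress10 computes. The simulated data and variable names are mine, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with 20 equal-sized clusters (illustration only).
n, G = 200, 20
g = np.repeat(np.arange(G), n // G)          # cluster identifiers
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

XtXi = np.linalg.inv(X.T @ X)
b = XtXi @ X.T @ y                           # OLS point estimates
e = y - X @ b                                # residuals
N, k = X.shape

# IID VCE: s2 * (X'X)^{-1}
V_iid = (e @ e / (N - k)) * XtXi

# Robust VCE: (N/(N-k)) * (X'X)^{-1} M (X'X)^{-1}, M = sum_i e_i^2 x_i'x_i
M = (X * (e**2)[:, None]).T @ X
V_rob = (N / (N - k)) * XtXi @ M @ XtXi

# Cluster-robust VCE: M = sum_g (X_g'e_g)(X_g'e_g)', with the
# finite-sample factor ((N-1)/(N-k)) * (G/(G-1))
Mc = np.zeros((k, k))
for j in range(G):
    s = (X[g == j] * e[g == j, None]).sum(axis=0)
    Mc += np.outer(s, s)
fac = (N - 1) / (N - k) * G / (G - 1)
V_cl = fac * XtXi @ Mc @ XtXi

print(np.sqrt(np.diag(V_iid)), np.sqrt(np.diag(V_rob)), np.sqrt(np.diag(V_cl)))
```

The loop over clusters plays the role of Stata's matrix opaccum in the ado-code below.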

I recommend that you click on the file name to download the code for my myregress10.ado. To avoid scrolling, view the code in the do-file editor, or your favorite text editor, to see the line numbers.

*! version 10.0.0  02Dec2015
program define myregress10, eclass sortpreserve
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, vce(string) noCONStant ]
    marksample touse

    gettoken depvar indeps : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    if `"`vce'"' != "" {
        my_vce_parse , vce(`vce')
        local vcetype     "robust"
        local clustervar  "`r(clustervar)'"
        if "`clustervar'" != "" {
            markout `touse' `clustervar'
            sort `clustervar'
        }
    }

    quietly matrix accum `zpz' = `varlist' if `touse' , `constant'
    local N                    = r(N)
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    matrix `b'                 = (`xpxi'*`xpy')'
    local k                    = `p' - diag0cnt(`xpxi') - 1
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'

    if "`vcetype'" == "robust" {
        if "`clustervar'" == "" {
            tempname M
            quietly matrix accum `M' = `indeps'         ///
                [iweight=`res2'] if `touse' , `constant'
            local fac                = (`N'/(`N'-`k'))
            local df_r               = (`N'-`k')
        }
        else  {
            tempvar idvar
            tempname M
            quietly egen `idvar' = group(`clustervar') if `touse'
            quietly summarize `idvar' if `touse', meanonly
            local Nc   = r(max)
            local fac  = ((`N'-1)/(`N'-`k')*(`Nc'/(`Nc'-1)))
            local df_r = (`Nc'-1)
            matrix opaccum `M' = `indeps' if `touse'     ///
                , group(`clustervar') opvar(`res')
        }
        matrix `V' = (`fac')*`xpxi'*`M'*`xpxi'
        local vce                   "robust"
        local vcetype               "Robust"
    }
    else {                            // IID Case
        quietly summarize `res2' if `touse' , meanonly
        local sum           = r(sum)
        local s2            = `sum'/(`N'-`k')
        local df_r          = (`N'-`k')
        matrix `V'          = `s2'*`xpxi'
    }

    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `k'
    ereturn scalar df_r    = `df_r'
    ereturn local  vce     "`vce'"
    ereturn local  vcetype "`vcetype'"
    ereturn local  clustvar "`clustervar'"
    ereturn local  cmd     "myregress10"
    ereturn display
end

program define my_vce_parse, rclass
    syntax  [, vce(string) ]

    local case : word count `vce'

    if `case' > 2 {
        my_vce_error , typed(`vce')
    }

    local 0 `", `vce'"'
    syntax  [, Robust CLuster * ]

    if `case' == 2 {
        if "`robust'" == "robust" | "`cluster'" == "" {
            my_vce_error , typed(`vce')
        }

        capture confirm numeric variable `options'
        if _rc {
            my_vce_error , typed(`vce')
        }

        local clustervar "`options'"
    }
    else {    // case = 1
        if "`robust'" == "" {
            my_vce_error , typed(`vce')
        }

    }

    return clear
    return local clustervar "`clustervar'"
end

program define my_vce_error
    syntax , typed(string)

    display `"{red}{bf:vce(`typed')} invalid"'
    error 498
end


The syntax command on line 5 puts whatever the user encloses in vce() into a local macro called vce. For example, if the user types

. myregress10 price mpg trunk , vce(hello there)


the local macro vce will contain “hello there”. If the user does not specify something in the vce() option, the local macro vce will be empty. Line 14 uses this condition to execute lines 15–21 only if the user has specified something in option vce().

When the user specifies something in the vce() option, line 15 calls the ado subroutine my_vce_parse to parse what is in the local macro vce. my_vce_parse stores the name of the cluster variable in r(clustervar) and deals with error conditions, as I discuss below. Line 16 stores “robust” in the local macro vcetype, and line 17 stores the contents of the local macro r(clustervar), created by my_vce_parse, in the local macro clustervar.

If the user does not specify something in vce(), the local macro vcetype will be empty and line 36 ensures that myregress10 will compute an IID estimator of the VCE.

Lines 19 and 20 are only executed if the local macro clustervar is not empty. Line 19 updates the touse variable, whose name is stored in the local macro touse, to account for missing values in the cluster variable, whose name is stored in clustervar. Line 20 sorts the dataset in the ascending order of the cluster variable. Users do not want estimation commands resorting their datasets. On line 2, I specified the sortpreserve option on program define to keep the dataset in the order it was in when myregress10 was executed by the user.

Lines 36–65 compute the requested estimator for the VCE. Recall that the local macro vcetype is empty or it contains “robust” and that the local macro clustervar is empty or it contains the name of the cluster variable. The if and else statements use the values stored in vcetype and clustervar to execute one of three blocks of code.

1. Lines 38–42 compute a robust estimator of the VCE when vcetype contains “robust” and clustervar is empty.
2. Lines 45–53 compute a cluster-robust of the VCE when vcetype contains “robust” and clustervar contains the name of the cluster variable.
3. Lines 60–64 compute an IID estimator of the VCE when vcetype does not contain “robust”.

Line 73 stores the name of the cluster variable in e(clustervar), if the local macro clustervar is not empty.

Lines 78–111 define the rclass ado-subroutine my_vce_parse, which performs two tasks. First, it stores the name of the cluster variable in the local macro r(clustervar) when the user specifies vce(cluster clustervar). Second, it catches cases in which the contents of vce() contain a syntax error and issues an error in those cases.

Putting these parsing details into a subroutine makes the main command much easier to follow. I recommend that you encapsulate details in subroutines.

The ado-subroutine my_vce_parse is local to the ado-command myregress10; the name my_vce_parse is in a namespace local to myregress10, and my_vce_parse can only be executed from within myregress10.

Line 79 uses syntax to store whatever the user specified in the option vce() in the local macro vce. Line 81 puts the number of words in vce into the local macro case. Line 83 causes the ado-subroutine my_vce_error to display an error message and return error code 498 when there are more than two words in vce. (Recall that vce should contain either robust or cluster clustervar.)

Having ruled out the cases with more than two words, line 87 stores what the local macro vce contains in the local macro 0. Line 88 uses syntax to parse what is in the local macro 0. If the user specified vce(robust), or a valid abbreviation thereof, syntax stores “robust” in the local macro robust; otherwise, the local macro robust is empty. If the user specified vce(cluster something), or a valid abbreviation of cluster, syntax stores “cluster” in the local macro cluster; otherwise, the local macro cluster is empty. The option * causes syntax to put any remaining options into the local macro options. In this case, syntax will store the something in the local macro options.

Remember the trick used in lines 87 and 88. Option parsing is frequently made much easier by storing what a local macro contains in the local macro 0 and using syntax to parse it.

When there are two words in the local macro vce, lines 91–100 ensure that the first word is “cluster” and that the second word, stored in the local macro options, is the name of a numeric variable. When all is well, line 100 stores the name of this numeric variable in the local macro clustervar. Lines 95–98 use a subtle construction to display a custom error message. Rather than let confirm display an error message, lines 95–98 use capture and an if condition to display our custom error message. In detail, line 95 uses confirm to confirm that the local macro options contains the name of a numeric variable. capture puts the return code produced by confirm in the scalar _rc. When options contains the name of a numeric variable, confirm produces the return code 0 and capture stores 0 in _rc; otherwise, confirm produces a positive return code, and capture stores this positive return code in _rc.
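The validation logic in my_vce_parse is compact enough to sketch in another language. Below is a hypothetical Python helper of my own, parse_vce, that mirrors the same decision tree; the numeric_vars argument stands in for Stata's confirm numeric variable check:

```python
def parse_vce(vce, numeric_vars):
    """Mimic my_vce_parse: accept 'robust' or 'cluster <varname>'.

    Returns (vcetype, clustervar); raises ValueError on invalid input,
    playing the role of my_vce_error.  (Illustration only; the real
    subroutine works on Stata local macros.)
    """
    words = vce.split()
    if len(words) > 2 or not words:
        raise ValueError(f"vce({vce}) invalid")
    if len(words) == 2:
        # first word must be (an abbreviation of) 'cluster', with at
        # least two letters, and the second word a numeric variable
        if len(words[0]) < 2 or not "cluster".startswith(words[0]):
            raise ValueError(f"vce({vce}) invalid")
        if words[1] not in numeric_vars:
            raise ValueError(f"vce({vce}) invalid")
        return "robust", words[1]
    # one word: must be (an abbreviation of) 'robust'
    if not "robust".startswith(words[0]):
        raise ValueError(f"vce({vce}) invalid")
    return "robust", None
```

For example, parse_vce("cluster id", ["id"]) returns ("robust", "id"), while parse_vce("hello there", ["id"]) raises an error, just as vce(hello there) does above.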

When all is well, line 109 clears whatever was in r(), and line 110 stores the name of the cluster variable in r(clustervar).

Lines 113–118 define the ado-subroutine my_vce_error, which displays a custom error message. Like my_vce_parse, my_vce_error is local to myregress10.ado.

Done and undone

I added an option for a cluster-robust estimator of the VCE, and I made myregress10 accept the modern syntax for either a robust or a cluster-robust estimator of the VCE. In the process, I used subroutines in myregress10.ado to facilitate the parsing, and I discussed some advanced parsing tricks.

Although it may seem that I have covered every possible nuance, I have only dealt with a few. Type help syntax for more details about parsing options using the syntax command.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Stock, J. H., and M. W. Watson. 2010. Introduction to Econometrics. 3rd ed. Boston, MA: Addison-Wesley.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Wooldridge, J. M. 2015. Introductory Econometrics: A Modern Approach. 6th ed. Cincinnati, Ohio: South-Western.

Categories: Programming Tags:

## Understanding the generalized method of moments (GMM): A simple example

$$\newcommand{\Eb}{{\bf E}}$$This post was written jointly with Enrique Pinzon, Senior Econometrician, StataCorp.

The generalized method of moments (GMM) is a method for constructing estimators, analogous to maximum likelihood (ML). GMM uses assumptions about specific moments of the random variables instead of assumptions about the entire distribution, which makes GMM more robust than ML, at the cost of some efficiency. The assumptions are called moment conditions.

GMM generalizes the method of moments (MM) by allowing the number of moment conditions to be greater than the number of parameters. Using these extra moment conditions makes GMM more efficient than MM. When there are more moment conditions than parameters, the estimator is said to be overidentified. GMM can efficiently combine the moment conditions when the estimator is overidentified.

We illustrate these points by estimating the mean of a $$\chi^2(1)$$ by MM, ML, a simple GMM estimator, and an efficient GMM estimator. This example builds on Efficiency comparisons by Monte Carlo simulation and is similar in spirit to the example in Wooldridge (2001).

GMM weights and efficiency

GMM builds on the ideas of expected values and sample averages. Moment conditions are expected values that specify the model parameters in terms of the true moments. The sample moment conditions are the sample equivalents to the moment conditions. GMM finds the parameter values that are closest to satisfying the sample moment conditions.

The mean of a $$\chi^2$$ random variable with $$d$$ degrees of freedom is $$d$$, and its variance is $$2d$$. Two moment conditions for the mean are thus

$\begin{eqnarray*} \Eb\left[Y - d \right]&=& 0 \\ \Eb\left[(Y - d )^2 - 2d \right]&=& 0 \end{eqnarray*}$

The sample moment equivalents are

$\begin{eqnarray} 1/N\sum_{i=1}^N (y_i - \widehat{d} )&=& 0 \tag{1} \\ 1/N\sum_{i=1}^N\left[(y_i - \widehat{d} )^2 - 2\widehat{d}\right] &=& 0 \tag{2} \end{eqnarray}$

We could use either sample moment condition (1) or sample moment condition (2) to estimate $$d$$. In fact, below we use each one and show that (1) provides a much more efficient estimator.
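As a quick check on this claim, each condition can be solved on its own: condition (1) is solved exactly by the sample mean, while condition (2) reduces to a quadratic in $$\widehat{d}$$. A sketch in Python of my own (the post does this with gmm below):

```python
import numpy as np

rng = np.random.default_rng(12345)
y = rng.chisquare(1, size=500)   # draw from a chi-squared(1)

# Moment (1): (1/N) sum (y_i - d) = 0  ->  d = sample mean
d1 = y.mean()

# Moment (2): (1/N) sum [(y_i - d)^2 - 2d] = 0 expands to
#   d^2 - (2*m1 + 2)*d + m2 = 0,  m1 = mean(y), m2 = mean(y^2).
m1, m2 = y.mean(), (y**2).mean()
roots = np.roots([1.0, -(2 * m1 + 2), m2])
d2 = roots.real.min()   # the smaller root is the consistent one here
print(d1, d2)
```

Both roots solve the sample condition, but only the smaller one lies near the true value of 1; in repeated samples the moment-(2) estimator is noticeably noisier than the sample mean, as the Monte Carlo below confirms.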

When we use both (1) and (2), there are two sample moment conditions and only one parameter, so we cannot solve this system of equations. GMM finds the parameters that get as close as possible to solving weighted sample moment conditions.

Uniform weights and optimal weights are two ways of weighting the sample moment conditions. The uniform weights use an identity matrix to weight the moment conditions. The optimal weights use the inverse of the covariance matrix of the moment conditions.

We begin by drawing a sample of size 500 and using gmm to estimate the parameter with sample moment condition (1); we illustrate that the result is the same as the sample average.

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate double y = rchi2(1)

. gmm (y - {d})  , instruments( ) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .82949186
Iteration 1:   GMM criterion Q(b) =  1.262e-32
Iteration 2:   GMM criterion Q(b) =  9.545e-35

note: model is exactly identified

GMM estimation

Number of parameters =   1
Number of moments    =   1
Initial weight matrix: Unadjusted                 Number of obs   =        500

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          /d |   .9107644   .0548098    16.62   0.000     .8033392     1.01819
------------------------------------------------------------------------------
Instruments for equation 1: _cons

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
             |       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------


The sample moment condition is the product of an observation-level error function that is specified inside the parentheses and an instrument, which is a vector of ones in this case. The parameter $$d$$ is enclosed in curly braces {}. We specify the onestep option because the number of parameters is the same as the number of moment conditions, which is to say that the estimator is exactly identified. When it is, each sample moment condition can be solved exactly, and there are no efficiency gains in optimally weighting the moment conditions.

We now illustrate that we could use the sample moment condition obtained from the variance to estimate $$d$$.

. gmm ((y-{d})^2 - 2*{d})  , instruments( ) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  5.4361161
Iteration 1:   GMM criterion Q(b) =  .02909692
Iteration 2:   GMM criterion Q(b) =  .00004009
Iteration 3:   GMM criterion Q(b) =  5.714e-11
Iteration 4:   GMM criterion Q(b) =  1.172e-22

note: model is exactly identified

GMM estimation

Number of parameters =   1
Number of moments    =   1
Initial weight matrix: Unadjusted                 Number of obs   =        500

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          /d |   .7620814   .1156756     6.59   0.000     .5353613    .9888015
------------------------------------------------------------------------------
Instruments for equation 1: _cons


While we cannot say anything definitive from only one draw, we note that this estimate is further from the truth and that the standard error is much larger than those based on the sample average.

Now, we use gmm to estimate the parameters using uniform weights.

. matrix I = I(2)

. gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =   6.265608
Iteration 1:   GMM criterion Q(b) =  .05343812
Iteration 2:   GMM criterion Q(b) =  .01852592
Iteration 3:   GMM criterion Q(b) =   .0185221
Iteration 4:   GMM criterion Q(b) =   .0185221

GMM estimation

Number of parameters =   1
Number of moments    =   2
Initial weight matrix: user                       Number of obs   =        500

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          /d |   .7864099   .1050692     7.48   0.000     .5804781    .9923418
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons


The first set of parentheses specifies the first sample moment condition, and the second set of parentheses specifies the second sample moment condition. The options winitial(I) and onestep specify uniform weights.

Finally, we use gmm to estimate the parameters using two-step optimal weights. The weights are calculated using first-step consistent estimates.

. gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I)

Step 1
Iteration 0:   GMM criterion Q(b) =   6.265608
Iteration 1:   GMM criterion Q(b) =  .05343812
Iteration 2:   GMM criterion Q(b) =  .01852592
Iteration 3:   GMM criterion Q(b) =   .0185221
Iteration 4:   GMM criterion Q(b) =   .0185221

Step 2
Iteration 0:   GMM criterion Q(b) =  .02888076
Iteration 1:   GMM criterion Q(b) =  .00547223
Iteration 2:   GMM criterion Q(b) =  .00546176
Iteration 3:   GMM criterion Q(b) =  .00546175

GMM estimation

Number of parameters =   1
Number of moments    =   2
Initial weight matrix: user                       Number of obs   =        500
GMM weight matrix:     Robust

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          /d |   .9566219   .0493218    19.40   0.000     .8599529    1.053291
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons


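The mechanics of the one-step uniform-weight and two-step optimal-weight estimators can be sketched directly. Below is my own Python/SciPy illustration of the two steps (not Stata's implementation): minimize the weighted quadratic form in the sample moments, first with an identity weight matrix, then with the inverse covariance of the moments evaluated at the first-step estimate.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(12345)
y = rng.chisquare(1, size=500)

def moments(d):
    """Stacked sample moment conditions as an N x 2 array."""
    return np.column_stack([y - d, (y - d)**2 - 2*d])

def Q(d, W):
    """GMM criterion: gbar' W gbar, with gbar the average moments."""
    gbar = moments(d).mean(axis=0)
    return gbar @ W @ gbar

# Step 1: uniform weights (identity matrix)
W1 = np.eye(2)
d_step1 = minimize_scalar(Q, bounds=(0.01, 5), args=(W1,),
                          method="bounded").x

# Step 2: optimal weights = inverse covariance of the moments,
# evaluated at the first-step estimate
m = moments(d_step1)
W2 = np.linalg.inv(m.T @ m / len(y))
d_step2 = minimize_scalar(Q, bounds=(0.01, 5), args=(W2,),
                          method="bounded").x
print(d_step1, d_step2)
```

The second step reweights the two moments so that the noisy variance-based moment counts for less, which is the source of the efficiency gain discussed next.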
All four estimators are consistent. Below we run a Monte Carlo simulation to see their relative efficiencies. We are most interested in the efficiency gains afforded by optimal GMM. We include the sample average, the sample variance, and the ML estimator discussed in Efficiency comparisons by Monte Carlo simulation. Theory tells us that the optimally weighted GMM estimator should be more efficient than the sample average but less efficient than the ML estimator.

The code below for the Monte Carlo builds on Efficiency comparisons by Monte Carlo simulation, Maximum likelihood estimation by mlexp: A chi-squared example, and Monte Carlo simulations using Stata. Click gmmchi2sim.do to download this code.

. clear all
. set seed 12345
. matrix I = I(2)
. postfile sim  d_a d_v d_ml d_gmm d_gmme using efcomp, replace
. forvalues i = 1/2000 {
  2.     quietly drop _all
  3.     quietly set obs 500
  4.     quietly generate double y = rchi2(1)
  5.
.     quietly mean y
  6.     local d_a         =  _b[y]
  7.
.     quietly gmm ( (y-{d=`d_a'})^2 - 2*{d}) , instruments( )  ///
>         onestep conv_maxiter(200)
  8.     if e(converged)==1 {
  9.             local d_v = _b[d:_cons]
 10.     }
 11.     else {
 12.             local d_v = .
 13.     }
 14.
.     quietly mlexp (ln(chi2den({d=`d_a'},y)))
 15.     if e(converged)==1 {
 16.             local d_ml  =  _b[d:_cons]
 17.     }
 18.     else {
 19.             local d_ml  = .
 20.     }
 21.
.     quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( )  ///
>         winitial(I) onestep conv_maxiter(200)
 22.     if e(converged)==1 {
 23.             local d_gmm = _b[d:_cons]
 24.     }
 25.     else {
 26.             local d_gmm = .
 27.     }
 28.
.     quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( )  ///
>         winitial(I) conv_maxiter(200)
 29.     if e(converged)==1 {
 30.             local d_gmme = _b[d:_cons]
 31.     }
 32.     else {
 33.             local d_gmme = .
 34.     }
 35.
.     post sim (`d_a') (`d_v') (`d_ml') (`d_gmm') (`d_gmme')
 36.
. }
. postclose sim
. use efcomp, clear
. summarize

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
         d_a |      2,000     1.00017    .0625367   .7792076    1.22256
         d_v |      1,996    1.003621    .1732559   .5623049   2.281469
        d_ml |      2,000    1.002876    .0395273   .8701175   1.120148
       d_gmm |      2,000    .9984172    .1415176   .5947328   1.589704
      d_gmme |      2,000    1.006765    .0540633   .8224731   1.188156


The simulation results indicate that the ML estimator is the most efficient (d_ml, std. dev. 0.0395), followed by the efficient GMM estimator (d_gmme, std. dev. 0.0541), followed by the sample average (d_a, std. dev. 0.0625), followed by the uniformly weighted GMM estimator (d_gmm, std. dev. 0.1415), and finally by the estimator based on the sample-variance moment condition (d_v, std. dev. 0.1733).

The estimator based on the sample-variance moment condition does not converge for 4 of 2,000 draws; this is why there are only 1,996 observations on d_v when there are 2,000 observations for the other estimators. These convergence failures occurred even though we used the sample average as the starting value of the nonlinear solver.

For a better idea about the distributions of these estimators, we graph the densities of their estimates.

Figure 1: Densities of the estimators

The density plots illustrate the efficiency ranking that we found from the standard deviations of the estimates.

The uniformly weighted GMM estimator is less efficient than the sample average because it places the same weight on the sample average as on the much less efficient estimator based on the sample variance.

In each of the overidentified cases, the GMM estimator uses a weighted average of two sample moment conditions to estimate the mean. The first sample moment condition is the sample average. The second moment condition is the sample variance. As the Monte Carlo results showed, the sample variance provides a much less efficient estimator for the mean than the sample average.

The GMM estimator that places equal weights on the efficient and the inefficient estimator is much less efficient than a GMM estimator that places much less weight on the less efficient estimator.

We display the weight matrix from our optimal GMM estimator to see how the sample moments were weighted.

. quietly gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I)

. matlist e(W), border(rows)

-------------------------------------
             |         1 |         2
             |     _cons |     _cons
-------------+-----------+-----------
1            |           |
       _cons |  1.621476 |
-------------+-----------+-----------
2            |           |
       _cons | -.2610053 |  .0707775
-------------------------------------


The diagonal elements show that the sample-mean moment condition receives more weight than the less efficient sample-variance moment condition.

Done and undone

We used a simple example to illustrate how GMM exploits having more equations than parameters to obtain a more efficient estimator. We also illustrated that optimally weighting the different moments provides important efficiency gains over an estimator that uniformly weights the moment conditions.

Our cursory introduction to GMM is best supplemented with a more formal treatment like the one in Cameron and Trivedi (2005) or Wooldridge (2010).

Graph code appendix

use efcomp
local N = _N
kdensity d_a,     n(`N') generate(x_a    den_a)    nograph
kdensity d_v,     n(`N') generate(x_v    den_v)    nograph
kdensity d_ml,    n(`N') generate(x_ml   den_ml)   nograph
kdensity d_gmm,   n(`N') generate(x_gmm  den_gmm)  nograph
kdensity d_gmme,  n(`N') generate(x_gmme den_gmme) nograph
twoway (line den_a x_a,       lpattern(solid))        ///
       (line den_v x_v,       lpattern(dash))         ///
       (line den_ml x_ml,     lpattern(dot))          ///
       (line den_gmm x_gmm,   lpattern(dash_dot))     ///
       (line den_gmme x_gmme, lpattern(shortdash))

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Wooldridge, J. M. 2001. Applications of generalized method of moments estimation. Journal of Economic Perspectives 15(4): 87-100.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Categories: Statistics Tags: