## Probit model with sample selection by mlexp

Overview

In a previous post, David Drukker demonstrated how to use mlexp to estimate the degrees-of-freedom parameter of a chi-squared distribution by maximum likelihood (ML). In this post, I am going to use mlexp to estimate the parameters of a probit model with sample selection. I will illustrate how to specify a more complex likelihood in mlexp and provide intuition for the probit model with sample selection. Our results match those of the heckprobit command; see [R] heckprobit for more details.

Probit model

For binary outcome $$y_i$$ and regressors $${\bf x}_i$$, the probit model assumes

$$$\label{eq:outcome} y_i = {\bf 1}({\bf x}_i{\boldsymbol \beta} + \epsilon_i > 0) \tag{1}$$$

where the error $$\epsilon_i$$ is standard normal. The indicator function $${\bf1}(\cdot)$$ outputs 1 when its input is true and outputs 0 otherwise.

The log likelihood of the probit model is

$\ln L = \sum_{i=1}^{N} \left[ y_i \ln \Phi({\bf x}_i{\boldsymbol \beta}) + (1-y_i)\ln\{1-\Phi({\bf x}_i{\boldsymbol \beta})\} \right]$

where $$\Phi$$ is the standard normal cumulative distribution function.
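To make the formula concrete, here is a small Python sketch (illustrative only; in Stata you would simply run probit or mlexp) that evaluates this log likelihood for given coefficients; the data values are made up for the example:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def probit_loglik(beta, X, y):
    # ln L = sum_i [ y_i ln Phi(x_i b) + (1 - y_i) ln{1 - Phi(x_i b)} ]
    ll = 0.0
    for xi, yi in zip(X, y):
        xb = sum(b * v for b, v in zip(beta, xi))
        p = norm_cdf(xb)
        ll += math.log(p) if yi else math.log(1.0 - p)
    return ll

# Tiny illustration: one regressor plus a constant (made-up numbers)
X = [(1.0, 1.0), (-0.5, 1.0), (2.0, 1.0)]   # columns: x, _cons
y = [1, 0, 1]
print(probit_loglik((0.7, 0.5), X, y))
```

The log likelihood is always negative, and maximizing it over beta yields the probit estimates.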

The probit model is widely used to model binary outcomes. But there are situations where it is not appropriate. Sometimes we observe a random sample where the outcome is missing on certain observations. If there is a relationship between the unobserved error of the outcome $$\epsilon_i$$ and the unobserved error that affects whether the outcome is observed $$\epsilon_{si}$$, then estimates made using the probit model will be inconsistent for $${\boldsymbol \beta}$$. For instance, this could happen when we model job satisfaction and our sample includes employed and unemployed individuals. The unobserved factors that affect your job satisfaction may be correlated with factors that affect your employment status. Samples like this are said to suffer from “selection on unobservables”.

Probit model with sample selection

Van de Ven and Van Praag (1981) introduced the probit model with sample selection to allow for consistent estimation of $${\boldsymbol \beta}$$ in samples that suffer from selection on unobservables. The equation for the outcome (1) remains the same, but we add another equation. The selection process for the outcome is modeled as

$s_i = {\bf 1}({\bf z}_i{\boldsymbol \gamma} + \epsilon_{si} > 0) \nonumber$

where $$s_i=1$$ if we observed $$y_i$$ and $$s_i=0$$ otherwise, and $${\bf z}_i$$ are regressors that affect the selection process.

The errors $$\epsilon_i$$ and $$\epsilon_{si}$$ are assumed to be standard normal with

$\mbox{corr}(\epsilon_i,\epsilon_{si}) = \rho \nonumber$

Let $$S$$ be the set of observations where $$y_i$$ is observed. The likelihood for the probit model with sample selection is

$\begin{eqnarray*} \ln L &=& \sum_{i\in S} \left[ y_i\ln\Phi_2({\bf x}_i{\boldsymbol \beta}, {\bf z}_i{\boldsymbol \gamma},\rho) + (1-y_i)\ln\Phi_2(-{\bf x}_i{\boldsymbol \beta}, {\bf z}_i{\boldsymbol \gamma},-\rho)\right] \cr && {}+ \sum_{i\not\in S} \ln \{1- \Phi({\bf z}_i{\boldsymbol \gamma})\} \end{eqnarray*}$

where $$\Phi_2$$ is the bivariate normal cumulative distribution function.
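The bivariate normal CDF $$\Phi_2$$ is the only nonstandard ingredient here. As a rough sketch of how the pieces fit together (not how Stata's binormal() is computed), the Python code below evaluates $$\Phi_2(a,b,\rho)$$ by reducing it to a one-dimensional integral, $$\int_{-\infty}^{a} \phi(t)\,\Phi\{(b-\rho t)/\sqrt{1-\rho^2}\}\,dt$$, and then assembles the sample-selection log likelihood; Phi2 and heckprobit_loglik are hypothetical helper names, and $$|\rho|<1$$ is assumed:

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def Phi(x):
    # Standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    # Standard normal density
    return math.exp(-0.5 * x * x) / SQRT2PI

def Phi2(a, b, rho, n=2000):
    # Bivariate normal CDF via the conditioning identity
    #   Phi2(a, b, rho) = int_{-inf}^{a} phi(t) * Phi((b - rho*t)/sqrt(1-rho^2)) dt
    # approximated with the trapezoid rule on [-8, a]; requires |rho| < 1.
    lo = -8.0
    if a <= lo:
        return 0.0
    s = math.sqrt(1.0 - rho * rho)
    h = (a - lo) / n
    total = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * phi(t) * Phi((b - rho * t) / s)
    return h * total

def heckprobit_loglik(beta, gamma, rho, X, Z, y, s):
    # Log likelihood of the probit model with sample selection;
    # y[i] is ignored when s[i] == 0 (outcome unobserved).
    ll = 0.0
    for xi, zi, yi, si in zip(X, Z, y, s):
        zg = sum(g * v for g, v in zip(gamma, zi))
        if not si:
            ll += math.log(1.0 - Phi(zg))
            continue
        xb = sum(b_ * v for b_, v in zip(beta, xi))
        if yi:
            ll += math.log(Phi2(xb, zg, rho))
        else:
            ll += math.log(Phi2(-xb, zg, -rho))
    return ll
```

With $$\rho=0$$, $$\Phi_2(a,b,0)$$ factors into $$\Phi(a)\Phi(b)$$, so the likelihood separates into an ordinary probit for the outcome and a probit for selection.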

The data

We will simulate data from a probit model with sample selection and then estimate the parameters of the model using mlexp. We simulate a random sample of 7,000 observations.

. drop _all

. set seed 441

. set obs 7000
number of observations (_N) was 0, now 7,000

. generate x = .5*rchi2(2)

. generate z = rnormal()

. generate b = rbinomial(2,.5)


First, we generate the regressors. The outcome regressor $$x$$ is a $$\chi^2$$ variable with $$2$$ degrees of freedom, scaled by $$0.5$$. A standard normal variable $$z$$ is used as a selection regressor, as is the variable $$b$$, which has a binomial($$2,0.5$$) distribution.

. matrix cm = (1,.7 \ .7,1)

. drawnorm ey es, corr(cm)


Next, we draw the unobserved errors. The outcome $$y$$ and selection indicator $$s$$ will be generated with errors that have correlation $$0.7$$. We generate the errors with the drawnorm command.

. generate s = z + 1.3*0.b + 1.b + .5*2.b + es > 0

. generate y = .7*x + ey  + .5 > 0

. replace y = .  if !s
(1,750 real changes made, 1,750 to missing)


Finally, we generate the outcome and selection indicator. We specify the effect of $$b$$ on selection by using factor-variable notation. Every value of $$b$$ provides a different intercept for $$s$$. We set the outcome to missing for observations where $$s$$ is $$0$$.
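For intuition, the same data-generating process can be sketched in Python (an illustrative translation with a different random-number generator, so it will not reproduce the Stata draws):

```python
import math
import random

random.seed(441)   # any seed; this will not match the Stata draws

def rchi2_2():
    # A chi-squared(2) variable is exponential with mean 2
    return -2.0 * math.log(1.0 - random.random())

n, rho = 7000, 0.7
data = []
for _ in range(n):
    x = 0.5 * rchi2_2()
    z = random.gauss(0.0, 1.0)
    b = sum(random.random() < 0.5 for _ in range(2))   # binomial(2, .5)
    # Correlated standard normal errors with corr(ey, es) = 0.7
    ey = random.gauss(0.0, 1.0)
    es = rho * ey + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    # Selection intercepts 1.3, 1, .5 for b = 0, 1, 2
    s = int(z + (1.3, 1.0, 0.5)[b] + es > 0)
    y = int(0.7 * x + 0.5 + ey > 0) if s else None   # outcome missing when s = 0
    data.append((x, z, b, s, y))

observed = sum(d[3] for d in data)
print(observed, "of", n, "outcomes observed")
```

As in the Stata run, roughly three-quarters of the outcomes are observed.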

Effect of ignoring sample selection

First, we will use mlexp to estimate the probit model, ignoring the sample selection. We use the cond() function to calculate different values of the likelihood based on the value of $$y$$. For cond(a,b,c), b is returned if a is true and c is returned otherwise. We use only the observations for which $$y$$ is not missing by specifying $$y$$ in the variables() option. The variables in the equation y are specified once, the first time the equation parameters are used in the likelihood. When the equation is used again, it is referred to as {y:}.

. mlexp (ln(cond(y,normal({y: x _cons}),1-normal({y:})))), variables(y)

initial:       log likelihood = -3639.0227
alternative:   log likelihood = -2342.8722
rescale:       log likelihood = -1746.0961
Iteration 0:   log likelihood = -1746.0961
Iteration 1:   log likelihood = -1503.9519
Iteration 2:   log likelihood = -1485.2935
Iteration 3:   log likelihood = -1485.1677
Iteration 4:   log likelihood = -1485.1677

Maximum likelihood estimation

Log likelihood = -1485.1677                     Number of obs     =      5,250

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
x |    .813723   .0568938    14.30   0.000     .7022132    .9252328
_cons |   .7623006   .0386929    19.70   0.000     .6864639    .8381372
------------------------------------------------------------------------------


Both parameters are overestimated, and the true values are not in the estimated confidence intervals.

Accounting for sample selection

Now, we use mlexp to estimate the probit model with sample selection. We use the cond() function twice, once for the selection indicator value and once for the outcome value. We no longer need to specify the variables() option because we will use each observation in the data. We use the factor-variable operator ibn in the selection equation so that a separate intercept is used in the equation for each level of $$b$$.

. mlexp (ln(cond(s,cond(y,binormal({y: x _cons},{s: z ibn.b}, {rho}), binormal(
> -{y:},{s:}, -{rho})),1-normal({s:}))))

initial:       log likelihood =  -8491.053
alternative:   log likelihood =  -5898.851
rescale:       log likelihood =  -5898.851
rescale eq:    log likelihood = -5654.3504
Iteration 0:   log likelihood = -5654.3504
Iteration 1:   log likelihood = -5473.5319  (not concave)
Iteration 2:   log likelihood = -4401.6027  (not concave)
Iteration 3:   log likelihood = -4340.7398  (not concave)
Iteration 4:   log likelihood = -4333.6402  (not concave)
Iteration 5:   log likelihood = -4326.1744  (not concave)
Iteration 6:   log likelihood = -4316.4936  (not concave)
Iteration 7:   log likelihood =  -4261.307
Iteration 8:   log likelihood = -4154.7548
Iteration 9:   log likelihood = -4142.7991
Iteration 10:  log likelihood = -4141.7431
Iteration 11:  log likelihood = -4141.7306
Iteration 12:  log likelihood = -4141.7305

Maximum likelihood estimation

Log likelihood = -4141.7305                     Number of obs     =      7,000

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
y            |
x |   .7643362   .0532342    14.36   0.000      .659999    .8686734
_cons |   .5259657   .0406914    12.93   0.000      .446212    .6057195
-------------+----------------------------------------------------------------
s            |
z |   1.028631   .0260977    39.41   0.000      .977481    1.079782
|
b |
0  |   1.365497   .0440301    31.01   0.000       1.2792    1.451794
1  |   1.034018   .0297178    34.79   0.000     .9757726    1.092264
2  |    .530342   .0353022    15.02   0.000      .461151    .5995331
-------------+----------------------------------------------------------------
/rho |   .6854869   .0417266    16.43   0.000     .6037043    .7672696
------------------------------------------------------------------------------


Our estimates of the coefficient on $$x$$ and the constant intercept are closer to the true values. The confidence intervals also include the true values. The correlation $$\rho$$ is estimated to be $$0.69$$, and the true value of $$0.7$$ is in the confidence interval. Accounting for the sample selection clearly improves our estimates.

Conclusion

I have demonstrated how to use mlexp to estimate the parameters of a model with a moderately complex likelihood function: the probit model with sample selection. I also illustrated how to generate data from this model and how its results differ from those of the simple probit model.

See [R] mlexp for more details about mlexp. In a future post, we will show how to make predictions after mlexp and how to estimate population average parameters using mlexp and margins.

Reference

Van de Ven, W. P. M. M., and B. M. S. Van Praag. 1981. The demand for deductibles in private health insurance: A probit model with sample selection. Journal of Econometrics 17: 229-252.


## Programming estimators in Stata: Why you should

Distributing a Stata command that implements a statistical method will get that method used by lots of people. They will thank you. And, they will cite you!

This post is the first in the series #StataProgramming about programming an estimation command in Stata that uses Mata to do the numerical work. In the process of showing you how to program an estimation command in Stata, I will discuss do-file programming, ado-file programming, and Mata programming. When the series ends, you will be able to write Stata commands.

Stata users like its predictable syntax and its estimation-postestimation structure that facilitates hypothesis testing, specification tests, and parameter interpretation. To help you write Stata commands that people want to use, I illustrate how Stata syntax is predictable and give an overview of the estimation-postestimation structure that you will want to emulate in your programs.

Stata structure by example

I use and describe some simulated data about the number of traffic accidents observed for 948 people.

Example 1: Accident data

. use http://www.stata.com/data/accident2.dta

. describe

Contains data from http://www.stata.com/data/accident2.dta
obs:           948
vars:             6                          23 Sep 2015 13:04
size:        22,752
--------------------------------------------------------------------------------
storage   display    value
variable name   type    format     label      variable label
--------------------------------------------------------------------------------
kids            float   %9.0g                 number of children
cvalue          float   %9.0g                 car value index
tickets         float   %9.0g                 number of tickets in last 2 years
traffic         float   %9.0g                 local traffic index, larger=>worse
male            float   %9.0g                 1=>man, 0=>woman
accidents       float   %9.0g                 number of traffic in last 5 years
--------------------------------------------------------------------------------
Sorted by:


Stata’s predictable syntax

I estimate the parameters of a Poisson regression model for accidents as a function of traffic conditions (traffic), an indicator for being a male driver (male), and the number of tickets received in the last two years (tickets).

Example 2: A Poisson model for accidents

. poisson accidents traffic male tickets , vce(robust)

Iteration 0:   log pseudolikelihood = -377.98594
Iteration 1:   log pseudolikelihood = -370.68001
Iteration 2:   log pseudolikelihood = -370.66527
Iteration 3:   log pseudolikelihood = -370.66527

Poisson regression                              Number of obs     =        948
Wald chi2(3)      =    1798.65
Prob > chi2       =     0.0000
Log pseudolikelihood = -370.66527               Pseudo R2         =     0.8191

------------------------------------------------------------------------------
|               Robust
accidents |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
traffic |   .0764399   .0165119     4.63   0.000     .0440772    .1088027
male |   3.228004   .1232081    26.20   0.000     2.986521    3.469488
tickets |   1.366614   .0328218    41.64   0.000     1.302284    1.430943
_cons |  -7.434478   .2413188   -30.81   0.000    -7.907454   -6.961502
------------------------------------------------------------------------------


I want to focus on the structure in this example so that you can use it to make your commands easier to use. In particular, I want to discuss the structure of the command syntax and to point out that the output is easy to read and interpret because it is a standard Stata output table. For estimators, that table almost always reports the estimates (often coefficients), standard errors, tests against zero and their $$p$$-values, and confidence intervals.

Stata syntax is predictable, which makes it easy to use. Stata users “speak Stata” and do not even notice the details. I highlight some of these details so that we can make the syntax of the commands we write predictable. Here are some of the standard syntax elements illustrated in example 2.

1. The command has four syntactic elements:
   1. the command name (poisson),
   2. a list of variable names (accidents traffic male tickets),
   3. a comma,
   4. an option (vce(robust)).
2. In the list of variable names, the name of the dependent variable is first, and it is followed by the names of the independent variables.
3. The job of the comma is to separate the command name and variable list from the option or options.

The output is also structured; it is composed of an iteration log, a header, and a standard output table.

Estimation-postestimation framework

As a Stata user, I could now use the estimation-postestimation framework. For example, I could perform a Wald test of the hypothesis that the coefficient on male is 3.

Example 3: A Wald test of a linear restriction

. test male = 3

( 1)  [accidents]male = 3

chi2(  1) =    3.42
Prob > chi2 =    0.0642


or I could perform a Wald test of the nonlinear hypothesis that the ratio of the coefficient on male to the coefficient on tickets is 2.

Example 4: A Wald test of a nonlinear restriction

. testnl _b[male]/_b[tickets] = 2

(1)  _b[male]/_b[tickets] = 2

chi2(1) =       19.65
Prob > chi2 =        0.0000


I could also predict the mean of accidents for each observation and summarize the results.

Example 5: Summarizing the predicted conditional means

. predict nhat
(option n assumed; predicted number of events)

. summarize nhat

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
nhat |        948    .8512658    2.971087   .0006086    29.0763


Finally, I could use margins to estimate conditional or population-averaged parameters that are functions of the parameters in the original model. I use margins to estimate the average number of accidents that would be observed if each individual received 0 tickets, or 1 ticket, or 2 tickets, …, or 7 tickets. See [R] margins, Long and Freese (2014, sec. 4.4.2-4.4.3), and Cameron and Trivedi (2010, sec. 10.5.6-10.6.9) for introductions to estimating functions of the model parameters by margins.

Example 6: Estimating functions of model parameters

. margins, at(tickets=(0 1 2 3 4 5 6 7))

Predictive margins                              Number of obs     =        948
Model VCE    : Robust

Expression   : Predicted number of events, predict()

1._at        : tickets         =           0

2._at        : tickets         =           1

3._at        : tickets         =           2

4._at        : tickets         =           3

5._at        : tickets         =           4

6._at        : tickets         =           5

7._at        : tickets         =           6

8._at        : tickets         =           7

------------------------------------------------------------------------------
|            Delta-method
|     Margin   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_at |
1  |   .0097252   .0015387     6.32   0.000     .0067094     .012741
2  |   .0381426   .0048762     7.82   0.000     .0285854    .0476998
3  |   .1495971   .0148157    10.10   0.000      .120559    .1786353
4  |   .5867272   .0432256    13.57   0.000     .5020066    .6714478
5  |   2.301172   .1302033    17.67   0.000     2.045978    2.556366
6  |   9.025308   .5049176    17.87   0.000     8.035688    10.01493
7  |   35.39769   2.555679    13.85   0.000     30.38865    40.40673
8  |   138.8315   13.49606    10.29   0.000     112.3797    165.2832
------------------------------------------------------------------------------


The glue

The estimation results stored in e() are the glue that holds together the estimation-postestimation framework. The poisson command stores lots of stuff in e(). I could use ereturn list to list all this stuff, but there are many stored objects that do not interest you yet.

Most of the estimation-postestimation features that I discussed were implemented using e(b), e(V), and e(predict), which are the vector of point estimates, the estimated VCE, and the name of the command that implements predict after poisson.

I will show how to store what you need in e() in the #StataProgramming series.

Structure of Stata commands

Here is an outline of the tasks performed by a Stata estimation command.

1. Parse the input to the command.
2. Compute results.
3. Store results in e().
4. Display output.

You need to write a predict command to complete the estimation-postestimation framework. After you have stored the estimation results and written the predict command, margins works.

I will explain each of these steps in the #StataProgramming series of posts.

Use this structure to your advantage. To make your command easy to use, design it to have the predictable syntax implemented in other commands and make it work in the estimation-postestimation framework. This task is far easier than it sounds. In fact, it is just plain easy. The Stata language steers you in this direction.

Done and undone

I will teach you how to program an estimation command in Stata in the #StataProgramming series. I will also show you how to do the numerical work in Mata. I discussed the following points in this first post.

1. The predictable structure of Stata syntax makes Stata easy to use. You should emulate this structure, so that your commands are easy to use.
2. The estimation-postestimation framework makes inference and advanced estimation simple. It is easy for you to make your command work with this framework.
3. The estimation results stored in e(), and the predict command, are the glue that holds the estimation-postestimation framework together.

In the next post, I discuss do-file programming tools that I will subsequently use to parse the input to the command.

References

Cameron, A. C., and P. K. Trivedi. 2010. Microeconometrics Using Stata. Revised ed. College Station, Texas: Stata Press.

Long, J. S., and J. Freese. 2014. Regression Models for Categorical Dependent Variables Using Stata. 3rd ed. College Station, Texas: Stata Press.


## Estimating parameters by maximum likelihood and method of moments using mlexp and gmm


Overview

We provide an introduction to parameter estimation by maximum likelihood and method of moments using mlexp and gmm, respectively (see [R] mlexp and [R] gmm). We include some background about these estimation techniques; see Pawitan (2001), Casella and Berger (2002), Cameron and Trivedi (2005), and Wooldridge (2010) for more details.

Maximum likelihood (ML) estimation finds the parameter values that make the observed data most probable. The parameters maximize the log of the likelihood function that specifies the probability of observing a particular set of data given a model.

Method of moments (MM) estimators specify population moment conditions and find the parameters that solve the equivalent sample moment conditions. MM estimators usually place fewer restrictions on the model than ML estimators, which implies that MM estimators are less efficient but more robust than ML estimators.

Using mlexp to estimate probit model parameters

A probit model for the binary dependent variable $$y$$ conditional on covariates $${\bf x}$$ with coefficients $${\boldsymbol \beta}$$ is

$y = \begin{cases} 1 & \mbox{ if } {\bf x}{\boldsymbol \beta}' + \epsilon > 0\\ 0 & \mbox{ otherwise } \end{cases}$

where $$\epsilon$$ has a standard normal distribution. The log-likelihood function for the probit model is

$\ln\{L({\boldsymbol \beta};{\bf x},y)\}= \sum_{i=1}^N y_i \ln\Phi({\bf x}_{i}{\boldsymbol \beta}') + (1-y_i) \ln\Phi(-{\bf x}_{i}{\boldsymbol \beta}')$

where $$\Phi$$ denotes the cumulative standard normal.

We now use mlexp to estimate the coefficients of a probit model. We have data on whether an individual belongs to a union (union), the individual’s age (age), and the highest grade completed (grade).

. webuse union
(NLS Women 14-24 in 1968)

. mlexp ( union*lnnormal({b1}*age + {b2}*grade + {b0})    ///
>         + (1-union)*lnnormal(-({b1}*age + {b2}*grade + {b0})) )

initial:       log likelihood = -18160.456
alternative:   log likelihood = -1524604.4
rescale:       log likelihood = -14097.135
rescale eq:    log likelihood =  -14063.38
Iteration 0:   log likelihood =  -14063.38
Iteration 1:   log likelihood = -13796.715
Iteration 2:   log likelihood = -13796.336
Iteration 3:   log likelihood = -13796.336

Maximum likelihood estimation

Log likelihood = -13796.336                     Number of obs     =     26,200

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/b1 |   .0051821   .0013471     3.85   0.000     .0025418    .0078224
/b2 |   .0373899   .0035814    10.44   0.000     .0303706    .0444092
/b0 |  -1.404697   .0587797   -23.90   0.000    -1.519903   -1.289491
------------------------------------------------------------------------------


Defining a linear combination of the covariates makes it easier to specify the model and to read the output:

. mlexp ( union*lnnormal({xb:age grade _cons}) + (1-union)*lnnormal(-{xb:}) )

initial:       log likelihood = -18160.456
alternative:   log likelihood = -14355.672
rescale:       log likelihood = -14220.454
Iteration 0:   log likelihood = -14220.454
Iteration 1:   log likelihood = -13797.767
Iteration 2:   log likelihood = -13796.336
Iteration 3:   log likelihood = -13796.336

Maximum likelihood estimation

Log likelihood = -13796.336                     Number of obs     =     26,200

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
age |   .0051821   .0013471     3.85   0.000     .0025418    .0078224
grade |   .0373899   .0035814    10.44   0.000     .0303706    .0444092
_cons |  -1.404697   .0587797   -23.90   0.000    -1.519903   -1.289491
------------------------------------------------------------------------------


Using gmm to estimate parameters by MM

ML specifies a functional form for the distribution of $$y$$ conditional on $${\bf x}$$. Specifying $${\rm E}[y|{\bf x}]=\Phi({\bf x}{\boldsymbol \beta}')$$ is less restrictive because it imposes structure only on the first conditional moment instead of on all the conditional moments. Under correct model specification, the ML estimator is more efficient than the MM estimator because it correctly specifies the conditional mean and all other conditional moments.

The model assumption $${\rm E}[y|{\bf x}]=\Phi({\bf x}{\boldsymbol \beta}')$$ implies the moment conditions $${\rm E}[\{y-\Phi({\bf x}{\boldsymbol \beta}')\}{\bf x}] = {\bf 0}$$. The sample moment equivalent is

$\sum_{i=1}^N [\{y_i-\Phi({\bf x}_i{\boldsymbol \beta}')\}{\bf x}_i] = {\bf 0}$

In the gmm command below, we specify the residuals $$y_i-\Phi({\bf x}_i{\boldsymbol \beta}')$$ inside the parentheses and the variables that multiply them, known as instruments, in the option instruments().

. gmm ( union - normal({xb:age grade _cons}) ), instruments(age grade) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .07831137
Iteration 1:   GMM criterion Q(b) =  .00004813
Iteration 2:   GMM criterion Q(b) =  5.333e-09
Iteration 3:   GMM criterion Q(b) =  5.789e-17

note: model is exactly identified

GMM estimation

Number of parameters =   3
Number of moments    =   3
Initial weight matrix: Unadjusted                 Number of obs   =     26,200

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
age |   .0051436   .0013349     3.85   0.000     .0025272      .00776
grade |   .0383185   .0038331    10.00   0.000     .0308058    .0458312
_cons |  -1.415623   .0609043   -23.24   0.000    -1.534994   -1.296253
------------------------------------------------------------------------------
Instruments for equation 1: age grade _cons


The point estimates are similar to the ML estimates because both estimators are consistent.
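To see what solving sample moment conditions amounts to, here is an illustrative Python sketch (not Stata's gmm algorithm) that solves the probit moment conditions for a single regressor plus a constant by Newton's method on simulated data; the true coefficients 0.8 and -0.3 are made-up values for this example:

```python
import math
import random

def Phi(t):
    # Standard normal CDF
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi(t):
    # Standard normal density
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

random.seed(1)

# Simulated probit data: y = 1(0.8*x - 0.3 + e > 0), e ~ N(0,1).
N = 10000
data = []
for _ in range(N):
    x = random.gauss(0.0, 1.0)
    y = int(0.8 * x - 0.3 + random.gauss(0.0, 1.0) > 0)
    data.append((x, y))

# Solve the sample moment conditions
#   sum_i {y_i - Phi(b1*x_i + b0)} * (x_i, 1) = 0
# by Newton's method with the analytic Jacobian.
b1 = b0 = 0.0
for _ in range(25):
    g1 = g0 = 0.0          # moment sums
    j11 = j10 = j00 = 0.0  # entries of -dg/db (symmetric, positive definite)
    for x, y in data:
        xb = b1 * x + b0
        r = y - Phi(xb)
        d = phi(xb)
        g1 += r * x
        g0 += r
        j11 += d * x * x
        j10 += d * x
        j00 += d
    det = j11 * j00 - j10 * j10
    # Newton step b <- b + J^{-1} g (plus sign because J = -dg/db)
    b1 += (j00 * g1 - j10 * g0) / det
    b0 += (-j10 * g1 + j11 * g0) / det

print("b1 =", round(b1, 3), " b0 =", round(b0, 3))
```

Because the model is exactly identified (two parameters, two moments), the solution drives the sample moments to zero, just as the exactly identified gmm run above does.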

Using gmm to estimate parameters by ML

When we maximize a log-likelihood function, we find the parameters that set the first derivative to 0. For example, setting the first derivative of the probit log-likelihood function with respect to $${\boldsymbol \beta}$$ to 0 in the sample yields

$\frac{\partial \ln\{L({\boldsymbol \beta};{\bf x},y)\}}{\partial {\boldsymbol \beta}} = \sum_{i=1}^N \left\{y_i \frac{\phi({\bf x}_{i}{\boldsymbol \beta}')}{\Phi({\bf x}_{i}{\boldsymbol \beta}')} - (1-y_i) \frac{\phi(-{\bf x}_{i}{\boldsymbol \beta}')}{\Phi(-{\bf x}_{i}{\boldsymbol \beta}')}\right\} {\bf x}_{i} = {\bf 0}$

Below, we use gmm to find the parameters that solve these sample moment conditions:

. gmm ( union*normalden({xb:age grade _cons})/normal({xb:})       ///
>         -(1-union)*normalden(-{xb:})/normal(-{xb:}) ),          ///
>         instruments(age grade) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .19941827
Iteration 1:   GMM criterion Q(b) =  .00012506
Iteration 2:   GMM criterion Q(b) =  2.260e-09
Iteration 3:   GMM criterion Q(b) =  7.369e-19

note: model is exactly identified

GMM estimation

Number of parameters =   3
Number of moments    =   3
Initial weight matrix: Unadjusted                 Number of obs   =     26,200

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
age |   .0051821    .001339     3.87   0.000     .0025577    .0078065
grade |   .0373899   .0037435     9.99   0.000     .0300528     .044727
_cons |  -1.404697   .0601135   -23.37   0.000    -1.522517   -1.286876
------------------------------------------------------------------------------
Instruments for equation 1: age grade _cons


The point estimates match those reported by mlexp. The standard errors differ because gmm reports robust standard errors.

Summary

We showed how to easily estimate the probit model parameters by ML and by MM using mlexp and gmm, respectively. We also showed that you can estimate these parameters using restrictions imposed by conditional distributions or using weaker conditional moment restrictions. Finally, we illustrated that the equations imposed by the conditional distributions can be viewed as sample moment restrictions.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and Applications. New York: Cambridge University Press.

Casella, G., and R. L. Berger. 2002. Statistical Inference. 2nd ed. Pacific Grove, CA: Duxbury.

Pawitan, Y. 2001. In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford: Oxford University Press.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, MA: MIT Press.


## Efficiency comparisons by Monte Carlo simulation

Overview

In this post, I show how to use Monte Carlo simulations to compare the efficiency of different estimators. I also illustrate what we mean by efficiency when discussing statistical estimators.

I wrote this post to continue a dialog with my friend who doubted the usefulness of the sample average as an estimator for the mean when the data-generating process (DGP) is a $$\chi^2$$ distribution with $$1$$ degree of freedom, denoted by a $$\chi^2(1)$$ distribution. The sample average is a fine estimator, even though it is not the most efficient estimator for the mean. (Some researchers prefer to estimate the median instead of the mean for DGPs that generate outliers. I will address the trade-offs between these parameters in a future post. For now, I want to stick to estimating the mean.)

In this post, I also want to illustrate how Monte Carlo simulations can help explain abstract statistical concepts. (If you are new to Monte Carlo simulations in Stata, you might want to see Monte Carlo simulations using Stata.)

Consistent estimator A is said to be more asymptotically efficient than consistent estimator B if A has a smaller asymptotic variance than B; see Wooldridge (2010, sec. 14.4.2) for an especially useful discussion. Theoretical comparisons can sometimes ascertain that A is more efficient than B, but the magnitude of the difference is rarely identified. Comparisons of Monte Carlo simulation estimates of the variances of estimators A and B give both sign and magnitude for specific DGPs and sample sizes.

The sample average versus maximum likelihood

Many books discuss the conditions under which the maximum likelihood (ML) estimator is the efficient estimator relative to other estimators; see Wooldridge (2010, sec. 14.4.2) for an accessible introduction to the modern approach. Here I compare the ML estimator with the sample average for the mean when the DGP is a $$\chi^2(1)$$ distribution.

Example 1 below contains the commands I used. For an introduction to Monte Carlo simulations see Monte Carlo simulations using Stata, and for an introduction to using mlexp to estimate the parameter of a $$\chi^2$$ distribution see Maximum likelihood estimation by mlexp: A chi-squared example. In short, the commands do the following $$5,000$$ times:

1. Draw a sample of 500 observations from a $$\chi^2(1)$$ distribution.
2. Estimate the mean of each sample by the sample average, and store this estimate in mu_a in the dataset efcomp.dta.
3. Estimate the mean of each sample by ML, and store this estimate in mu_ml in the dataset efcomp.dta.

Example 1: The distributions of the sample average and the ML estimators

. clear all
. set seed 12345
. postfile sim  mu_a mu_ml using efcomp, replace
. forvalues i = 1/5000 {
2.     quietly drop _all
3.     quietly set obs 500
4.     quietly generate double y = rchi2(1)
5.     quietly mean y
6.     local mu_a         =  _b[y]
7.     quietly mlexp (ln(chi2den({d=1},y)))
8.     local mu_ml   =  _b[d:_cons]
9.     post sim (`mu_a') (`mu_ml')
10. }
. postclose sim
. use efcomp, clear
. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mu_a |      5,000    .9989277    .0620524   .7792076   1.232033
mu_ml |      5,000    1.000988    .0401992   .8660786   1.161492


The mean of the $$5,000$$ sample average estimates and the mean of the $$5,000$$ ML estimates are each close to the true value of $$1.0$$. The standard deviation of the $$5,000$$ sample average estimates is $$0.062$$, and it approximates the standard deviation of the sampling distribution of the sample average for this DGP and sample size. Similarly, the standard deviation of the $$5,000$$ ML estimates is $$0.040$$, and it approximates the standard deviation of the sampling distribution of the ML estimator for this DGP and sample size.

We conclude that the ML estimator has a lower variance than the sample average for this DGP and this sample size, because $$0.040$$ is smaller than $$0.062$$.
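The comparison can also be replicated outside Stata. Below is a minimal Python sketch of the same Monte Carlo experiment, assuming NumPy and SciPy are available; I use fewer replications to keep the run time short, and all names are my own, not Stata's.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(12345)

def ml_mean(y):
    # ML estimate of d in a chi2(d) model; d is also the mean of the distribution
    nll = lambda d: -stats.chi2.logpdf(y, d).sum()
    return optimize.minimize_scalar(nll, bounds=(0.01, 10), method="bounded").x

reps, n = 500, 500
avg = np.empty(reps)
ml = np.empty(reps)
for r in range(reps):
    y = rng.chisquare(1, n)   # draw a sample of 500 from chi2(1)
    avg[r] = y.mean()         # sample-average estimate of the mean
    ml[r] = ml_mean(y)        # ML estimate of the mean

# the ML estimates should be less spread out than the sample averages
```

As in the Stata results, the standard deviation of the ML estimates comes out noticeably smaller than that of the sample averages for this DGP and sample size.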

To get a picture of this difference, we plot the density of the sample average and the density of the ML estimator. (Each of these densities is estimated from $$5,000$$ observations, but estimation error can be ignored because more data would not change the key results.)

Example 2: Plotting the densities of the estimators

. kdensity mu_a,   n(5000) generate(x_a   den_a)   nograph

. kdensity mu_ml,  n(5000) generate(x_ml  den_ml)  nograph

. twoway (line den_a x_a) (line den_ml x_ml)


Densities of the sample average and ML estimators

The plots show that the ML estimator is more tightly distributed around the true value than the sample average.

That the ML estimator is more tightly distributed around the true value than the sample average is what it means for one consistent estimator to be more efficient than another.

Done and undone

I used Monte Carlo simulation to illustrate what it means for one estimator to be more efficient than another. In particular, we saw that the ML estimator is more efficient than the sample average for the mean of a $$\chi^2(1)$$ distribution.

Many other estimators fall between these two estimators in an efficiency ranking. Generalized method of moments estimators and some quasi-maximum likelihood estimators come to mind and might be worth adding to these simulations.

Reference

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Categories: Statistics Tags:

## Maximum likelihood estimation by mlexp: A chi-squared example

Overview

In this post, I show how to use mlexp to estimate the degree of freedom parameter of a chi-squared distribution by maximum likelihood (ML). One example is unconditional, and another example models the parameter as a function of covariates. I also show how to generate data from chi-squared distributions and I illustrate how to use simulation methods to understand an estimation technique.

The data

I want to show how to draw data from a $$\chi^2$$ distribution, and I want to illustrate that the ML estimator produces estimates close to the truth, so I use simulated data.

In the output below, I draw a $$2,000$$ observation random sample of data from a $$\chi^2$$ distribution with $$2$$ degrees of freedom, denoted by $$\chi^2(2)$$, and I summarize the results.

Example 1: Generating $$\chi^2(2)$$ data

. drop _all

. set obs 2000
number of observations (_N) was 0, now 2,000

. set seed 12345

. generate y = rchi2(2)

. summarize y

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
y |      2,000    2.030865    1.990052   .0028283   13.88213


The mean and variance of the $$\chi^2(2)$$ distribution are $$2$$ and $$4$$, respectively. The sample mean of $$2.03$$ and the sample variance of $$3.96=1.99^2$$ are close to the true values. I set the random-number seed to $$12345$$ so that you can replicate my example; type help seed for details.

mlexp and the log-likelihood function

The log-likelihood function for the ML estimator for the degree of freedom parameter $$d$$ of a $$\chi^2(d)$$ distribution is

${\mathcal L}(d) = \sum_{i=1}^N \ln[f(y_i,d)]$

where $$f(y_i,d)$$ is the density function for the $$\chi^2(d)$$ distribution. See Cameron and Trivedi (2005) and Wooldridge (2010) for introductions to ML.

The mlexp command estimates parameters by maximizing the specified log-likelihood function. You specify the contribution of an observation to the log-likelihood function inside parentheses, and you enclose parameters inside the curly braces $$\{$$ and $$\}$$. I use mlexp to estimate $$d$$ in example 2.

Example 2: Using mlexp to estimate $$d$$

. mlexp ( ln(chi2den({d},y)) )

initial:       log likelihood =     -&lt;inf&gt;  (could not be evaluated)
feasible:      log likelihood = -5168.1594
rescale:       log likelihood = -3417.1592
Iteration 0:   log likelihood = -3417.1592
Iteration 1:   log likelihood = -3416.7063
Iteration 2:   log likelihood = -3416.7063

Maximum likelihood estimation

Log likelihood = -3416.7063                     Number of obs     =      2,000

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   2.033457   .0352936    57.62   0.000     1.964283    2.102631
------------------------------------------------------------------------------


The estimate of $$d$$ is very close to the true value of $$2.0$$, as expected.
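As a cross-check, the same log-likelihood function can be maximized in Python; the sketch below assumes SciPy is available and uses my own variable names, not mlexp's.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(12345)
y = rng.chisquare(2, 2000)       # simulated sample with true d = 2

# each observation contributes ln f(y_i, d); minimize the negative sum
nll = lambda d: -stats.chi2.logpdf(y, d).sum()
d_hat = optimize.minimize_scalar(nll, bounds=(0.1, 20), method="bounded").x
```

The resulting estimate of $$d$$ lands close to the true value of $$2$$, just as the mlexp output does.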

Modeling the degree of freedom as a function of a covariate

When using ML in applied research, we almost always want to model the parameters of a distribution as a function of covariates. Below, I draw a covariate $$x$$ from a uniform(0,3) distribution, specify that $$d=1+x$$, and draw $$y$$ from a $$\chi^2(d)$$ distribution conditional on $$x$$. Having drawn data from the DGP, I estimate the parameters using mlexp.

Example 3: Using mlexp to estimate $$d=a+b x_i$$

. drop _all

. set obs 2000
number of observations (_N) was 0, now 2,000

. set seed 12345

. generate x = runiform(0,3)

. generate d = 1 + x

. generate y = rchi2(d)

. mlexp ( ln(chi2den({b}*x +{a},y)) )

initial:       log likelihood =     -&lt;inf&gt;  (could not be evaluated)
feasible:      log likelihood = -4260.0685
rescale:       log likelihood = -3597.6271
rescale eq:    log likelihood = -3597.6271
Iteration 0:   log likelihood = -3597.6271
Iteration 1:   log likelihood = -3596.5383
Iteration 2:   log likelihood =  -3596.538

Maximum likelihood estimation

Log likelihood =  -3596.538                     Number of obs     =      2,000

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/b |   1.061621   .0430846    24.64   0.000     .9771766    1.146065
/a |   .9524136   .0545551    17.46   0.000     .8454876     1.05934
------------------------------------------------------------------------------


The estimates of $$1.06$$ and $$0.95$$ are close to their true values.

mlexp makes this process easier by forming a linear combination of variables that you specify.

Example 4: A linear combination in mlexp

. mlexp ( ln(chi2den({xb: x _cons},y)) )

initial:       log likelihood =     -&lt;inf&gt;  (could not be evaluated)
feasible:      log likelihood = -5916.7648
rescale:       log likelihood = -3916.6106
Iteration 0:   log likelihood = -3916.6106
Iteration 1:   log likelihood = -3621.2905
Iteration 2:   log likelihood = -3596.5845
Iteration 3:   log likelihood =  -3596.538
Iteration 4:   log likelihood =  -3596.538

Maximum likelihood estimation

Log likelihood =  -3596.538                     Number of obs     =      2,000

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
x |   1.061621   .0430846    24.64   0.000     .9771766    1.146065
_cons |   .9524138   .0545551    17.46   0.000     .8454878     1.05934
------------------------------------------------------------------------------


The estimates are the same as in example 3, but the command was easier to write and the output is easier to read.
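The same conditional model can be fit outside Stata by maximizing the log likelihood over the two parameters directly; a minimal Python sketch, assuming SciPy and with variable names of my own choosing:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(12345)
n = 2000
x = rng.uniform(0, 3, n)
y = rng.chisquare(1 + x)         # d = 1 + x, drawn conditional on x

def nll(theta):
    a, b = theta
    df = a + b * x               # the linear combination, as in {xb: x _cons}
    if np.any(df <= 0):          # the degree of freedom must stay positive
        return np.inf
    return -stats.chi2.logpdf(y, df).sum()

a_hat, b_hat = optimize.minimize(nll, x0=[0.5, 0.5], method="Nelder-Mead").x
```

Both estimates come out close to their true values of $$1$$, matching the mlexp results in spirit.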

Done and undone

I have shown how to generate data from a $$\chi^2(d)$$ distribution when $$d$$ is a fixed number or a linear function of a covariate and how to estimate $$d$$ or the parameters of the model for $$d$$ by using mlexp.

The examples above illustrate how to use mlexp, including an example of conditional maximum likelihood estimation.

mlexp can do much more than I have discussed here; see [R] mlexp for more details. Estimating the parameters of a conditional distribution is only the beginning of any research project. I will discuss interpreting these parameters in a future post.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Categories: Statistics Tags:

## Monte Carlo simulations using Stata

Overview

A Monte Carlo simulation (MCS) of an estimator approximates the sampling distribution of an estimator by simulation methods for a particular data-generating process (DGP) and sample size. I use an MCS to learn how well estimation techniques perform for specific DGPs. In this post, I show how to perform an MCS study of an estimator in Stata and how to interpret the results.

Large-sample theory tells us that the sample average is a good estimator for the mean when the true DGP is a random sample from a $$\chi^2$$ distribution with 1 degree of freedom, denoted by $$\chi^2(1)$$. But a friend of mine claims this estimator will not work well for this DGP because the $$\chi^2(1)$$ distribution will produce outliers. In this post, I use an MCS to see if the large-sample theory works well for this DGP in a sample of 500 observations.

A first pass at an MCS

I begin by showing how to draw a random sample of size 500 from a $$\chi^2(1)$$ distribution and how to estimate the mean and a standard error for the mean.

Example 1: The mean of simulated data

. drop _all
. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345
. generate y = rchi2(1)
. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------


I specified set seed 12345 to set the seed of the random-number generator so that the results will be reproducible. The sample average estimate of the mean from this random sample is $$0.91$$, and the estimated standard error is $$0.055$$.

If I had many estimates, each from an independently drawn random sample, I could estimate the mean and the standard deviation of the sampling distribution of the estimator. To obtain many estimates, I need to repeat the following process many times:

1. Draw from the DGP
2. Compute the estimate
3. Store the estimate.

I need to know how to store the many estimates to proceed with this process. I also need to know how to repeat the process many times and how to access Stata estimates, but I put these details into appendices I and II, respectively, because many readers are already familiar with these topics and I want to focus on how to store the results from many draws.

I want to put the many estimates someplace where they will become part of a dataset that I can subsequently analyze. I use the commands postfile, post, and postclose to store the estimates in memory and write all the stored estimates out to a dataset when I am done. Example 2 illustrates the process, when there are three draws.

Example 2: Estimated means of three draws

. set seed 12345

. postfile buffer mhat using mcs, replace

. forvalues i=1/3 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         post buffer (_b[y])
7. }

. postclose buffer

. use mcs, clear

. list

+----------+
|     mhat |
|----------|
1. | .9107645 |
2. |  1.03821 |
3. | 1.039254 |
+----------+


The command

postfile buffer mhat using mcs, replace


creates a place in memory called buffer in which I can store the results that will eventually be written out to a dataset. mhat is the name of the variable that will hold the estimates in the new dataset called mcs.dta. The keyword using separates the new variable name from the name of the new dataset. I specified the option replace to replace any previous version of mcs.dta with the one created here.

I used

forvalues i=1/3 {


to repeat the process three times. (See appendix I if you want a refresher on this syntax.) The commands

quietly drop _all
quietly set obs 500
quietly generate y = rchi2(1)
quietly mean y


drop the previous data, draw a sample of size 500 from a $$\chi^2(1)$$ distribution, and estimate the mean. (The quietly before each command suppresses the output.) The command

post buffer (_b[y])


stores the estimated mean for the current draw in buffer for what will be the next observation on mhat. The command

postclose buffer


writes the stuff stored in buffer to the file mcs.dta. The commands

use mcs, clear
list


drop the last $$\chi^2(1)$$ sample from memory, read in the mcs dataset, and list out the dataset.
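The postfile/post/postclose pattern, accumulating results in a buffer and writing them out once at the end, can be mimicked in any language. Here is a minimal Python sketch of the same idea, purely illustrative and with names of my own choosing:

```python
import csv
import io
import numpy as np

rng = np.random.default_rng(12345)

buffer = []                      # plays the role of the postfile buffer
for _ in range(3):               # three draws, as in example 2
    y = rng.chisquare(1, 500)    # draw a sample of 500 from chi2(1)
    buffer.append(y.mean())      # "post" the sample-average estimate

out = io.StringIO()              # stands in for the dataset mcs.dta
writer = csv.writer(out)
writer.writerow(["mhat"])        # the variable name, as in postfile
writer.writerows([[m] for m in buffer])   # "postclose": write everything out
```

The design choice is the same in both languages: keep the per-draw work cheap, and defer all output to a single write at the end.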

Example 3 below is a modified version of example 2; I increased the number of draws and summarized the results.

Example 3: The mean of 2,000 estimated means

. set seed 12345

. postfile buffer mhat using mcs, replace

. forvalues i=1/2000 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         post buffer (_b[y])
7. }

. postclose buffer

. use mcs, clear

. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mhat |      2,000     1.00017    .0625367   .7792076    1.22256


The average of the $$2,000$$ estimates is an estimator for the mean of the sampling distribution of the estimator, and it is close to the true value of $$1.0$$. The sample standard deviation of the $$2,000$$ estimates is an estimator for the standard deviation of the sampling distribution of the estimator, and it is close to the true value of $$\sqrt{\sigma^2/N}=\sqrt{2/500}\approx 0.0632$$, where $$\sigma^2$$ is the variance of the $$\chi^2(1)$$ random variable.

Including standard errors

The standard error of the estimator reported by mean is an estimate of the standard deviation of the sampling distribution of the estimator. If the large-sample distribution is doing a good job of approximating the sampling distribution of the estimator, the mean of the estimated standard
errors should be close to the sample standard deviation of the many mean estimates.

To compare the standard deviation of the estimates with the mean of the estimated standard errors, I modify example 3 to also store the standard errors.

Example 4: The mean of 2,000 standard errors

. set seed 12345

. postfile buffer mhat sehat using mcs, replace

. forvalues i=1/2000 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         post buffer (_b[y]) (_se[y])
7. }

. postclose buffer

. use mcs, clear

. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mhat |      2,000     1.00017    .0625367   .7792076    1.22256
sehat |      2,000    .0629644    .0051703   .0464698   .0819693


Mechanically, the command

postfile buffer mhat sehat using mcs, replace


makes room in buffer for the new variables mhat and sehat, and

post buffer (_b[y]) (_se[y])


stores each estimated mean in the memory for mhat and each estimated standard error in the memory for sehat. (As in example 3, the command postclose buffer writes what is stored in memory to the new dataset.)

The sample standard deviation of the $$2,000$$ estimates is $$0.0625$$, and it is close to the mean of the $$2,000$$ estimated standard errors, which is $$0.0630$$.

You may be thinking I should have written “very close”, but how close is $$0.0625$$ to $$0.0630$$? Honestly, I cannot tell if these two numbers are sufficiently close to each other because the distance between them does not automatically tell me how reliable the resulting inference will be.

Estimating a rejection rate

In frequentist statistics, we reject a null hypothesis if the p-value is below a specified size. If the large-sample distribution approximates the finite-sample distribution well, the rejection rate of the test against the true null hypothesis should be close to the specified size.

To compare the rejection rate with the size of 5%, I modify example 4 to compute and store an indicator for whether I reject a Wald test against the true null hypothesis. (See appendix III for a discussion of the mechanics.)

Example 5: Estimating the rejection rate

. set seed 12345

. postfile buffer mhat sehat reject using mcs, replace

. forvalues i=1/2000 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         quietly test _b[y]=1
7.         local r = (r(p)<.05)
8.         post buffer (_b[y]) (_se[y]) (`r')
9. }

. postclose buffer

. use mcs, clear

. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mhat |      2,000     1.00017    .0625367   .7792076    1.22256
sehat |      2,000    .0629644    .0051703   .0464698   .0819693
reject |      2,000       .0475     .212759          0          1


The rejection rate of $$0.048$$ is very close to the size of $$0.05$$.
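The rejection-rate calculation is easy to reproduce in Python. The sketch below uses SciPy's one-sample t test in place of Stata's Wald test after mean; for a single mean, the two are equivalent up to the F-versus-t parameterization. Names are my own.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)
reps, n = 2000, 500
reject = 0
for _ in range(reps):
    y = rng.chisquare(1, n)                      # draw from the DGP
    p = stats.ttest_1samp(y, popmean=1).pvalue   # test the true null H0: mean = 1
    reject += p < 0.05                           # record a rejection at the 5% size
rate = reject / reps
```

The estimated rejection rate should land close to the nominal size of $$0.05$$, as in the Stata output.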

Done and undone

In this post, I have shown how to perform an MCS of an estimator in Stata. I discussed the mechanics of using the post commands to store the many estimates and how to interpret the mean of the many estimates and the mean of the many estimated standard errors. I also recommended using an estimated rejection rate to evaluate the usefulness of the large-sample approximation to the sampling distribution of an estimator for a given DGP and sample size.

The example illustrates that the sample average performs as predicted by large-sample theory as an estimator for the mean. This conclusion does not mean that my friend's concerns about outliers were entirely misplaced. Other estimators that are more robust to outliers may have better properties. I plan to illustrate some of the trade-offs in future posts.

Appendix I: Repeating a process many times

This appendix provides a quick introduction to local macros and how to use them to repeat some commands many times; see [P] macro and [P] forvalues for more details.

I can store and access string information in local macros. Below, I store the string "hello" in the local macro named value.

local value "hello"


To access the stored information, I adorn the name of the local macro. Specifically, I precede it with the single left quote (`) and follow it with the single right quote ('). Below, I access and display the value stored in the local macro value.

. display "`value'"
hello


I can also store numbers as strings, as follows

. local value "2.134"
. display "`value'"
2.134


To repeat some commands many times, I put them in a forvalues loop. For example, the code below repeats the display command three times.

. forvalues i=1/3 {
2.    display "i is now `i'"
3. }
i is now 1
i is now 2
i is now 3


The above example illustrates that forvalues defines a local macro that takes on each value in the specified list of values. In the above example, the name of the local macro is i, and the specified value list 1/3 expands to $$\{1, 2, 3\}$$.

Appendix II: Accessing estimates

After a Stata estimation command, you can access the point estimate of a parameter named y by typing _b[y], and you can access the estimated standard error by typing _se[y]. The example below illustrates this process.

Example 6: Accessing estimated values

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate y = rchi2(1)

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------

. display  _b[y]
.91076444

. display _se[y]
.05486467


Appendix III: Getting a p-value computed by test

This appendix explains the mechanics of creating an indicator for whether a Wald test rejects the null hypothesis at a specific size.

I begin by generating some data and performing a Wald test against the true null hypothesis.

Example 7: Wald test results

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate y = rchi2(1)

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------

. test _b[y]=1

( 1)  y = 1

F(  1,   499) =    2.65
Prob > F =    0.1045


The results reported by test are stored in r(). Below, I use return list to see them; type help return list for details.

Example 8: Results stored by test

. return list

scalars:
r(drop) =  0
r(df_r) =  499
r(F) =  2.645393485924886
r(df) =  1
r(p) =  .1044817353734439


The p-value reported by test is stored in r(p). Below, I store a 0/1 indicator for whether the p-value is less than $$0.05$$ in the local macro r. (See appendix II for an introduction to local macros.) I complete the illustration by displaying that the local macro contains the value $$0$$.

. local r = (r(p)<.05)
. display "`r'"
0

Categories: Programming Tags:

## Introduction to treatment effects in Stata: Part 2

This post was written jointly with David Drukker, Director of Econometrics, StataCorp.

In our last post, we introduced the concept of treatment effects and demonstrated four of the treatment-effects estimators that were introduced in Stata 13.  Today, we will talk about two more treatment-effects estimators that use matching.

Introduction

Last time, we introduced four estimators for estimating the average treatment effect (ATE) from observational data.  Each of these estimators has a different way of solving the missing-data problem that arises because we observe only the potential outcome for the treatment level received.  Today, we introduce estimators for the ATE that solve the missing-data problem by matching.

Matching pairs the observed outcome of a person in one treatment group with the outcome of the “closest” person in the other treatment group. The outcome of the closest person is used as a prediction for the missing potential outcome. The average difference between the observed outcome and the predicted outcome estimates the ATE.

What we mean by “closest” depends on our data. Matching subjects based on a single binary variable, such as sex, is simple: males are paired with males and females are paired with females. Matching on two categorical variables, such as sex and race, isn’t much more difficult. Matching on continuous variables, such as age or weight, can be trickier because of the sparsity of the data. It is unlikely that there are two 45-year-old white males who weigh 193 pounds in a sample. It is even less likely that one of those men self-selected into the treated group and the other self-selected into the untreated group. So, in such cases, we match subjects who have approximately the same weight and approximately the same age.

This example illustrates two points. First, there is a cost to matching on continuous covariates; the inability to find good matches with more than one continuous covariate causes large-sample bias in our estimator because our matches become increasingly poor.

Second, we must specify a measure of similarity. When matching directly on the covariates, distance measures are used and the nearest neighbor selected. An alternative is to match on an estimated probability of treatment, known as the propensity score.

Before we discuss estimators for observational data, we note that matching is sometimes used in experimental data to define pairs, with the treatment subsequently randomly assigned within each pair. This use of matching is related but distinct.

Nearest-neighbor matching

Nearest-neighbor matching (NNM) uses distance between covariate patterns to define “closest”. There are many ways to define the distance between two covariate patterns. We could use squared differences as a distance measure, but this measure ignores problems with scale and covariance. Weighting the differences by the inverse of the sample covariance matrix handles these issues. Other measures are also used, but these details are less important than the costs and benefits of NNM dropping the functional-form assumptions (linear, logit, probit, etc.) used in the estimators discussed last time.

Dropping the functional-form assumptions makes the NNM estimator much more flexible; it estimates the ATE for a much wider class of models. The cost of this flexibility is that the NNM estimator requires much more data and the amount of data it needs grows with each additional continuous covariate.
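The matching logic itself is simple enough to sketch. Below is a toy Python implementation of 1-nearest-neighbor matching with Mahalanobis distance on simulated data with a known ATE. This illustrates the idea only; it is not teffects nnmatch's exact algorithm, which also handles ties, exact matching, and bias adjustment. All names and the toy DGP are my own.

```python
import numpy as np

def nnm_ate(y, t, X):
    """ATE by 1-nearest-neighbor matching with Mahalanobis distance (toy sketch)."""
    Vinv = np.linalg.inv(np.cov(X, rowvar=False))  # inverse sample covariance
    treated = np.flatnonzero(t == 1)
    control = np.flatnonzero(t == 0)
    effects = []
    for i in range(len(y)):
        pool = control if t[i] == 1 else treated   # match into the other group
        diff = X[pool] - X[i]
        dist = np.einsum("ij,jk,ik->i", diff, Vinv, diff)  # squared Mahalanobis
        j = pool[np.argmin(dist)]                  # closest covariate pattern
        # the match's outcome imputes the missing potential outcome
        effects.append(y[i] - y[j] if t[i] == 1 else y[j] - y[i])
    return float(np.mean(effects))

# toy data with a known ATE of 2 (purely illustrative)
rng = np.random.default_rng(12345)
n = 400
X = rng.normal(size=(n, 2))
t = rng.integers(0, 2, n)
y = X.sum(axis=1) + 2 * t + rng.normal(0, 0.5, n)
ate = nnm_ate(y, t, X)
```

With randomly assigned treatment in this toy DGP, the matching estimate lands near the true ATE of 2.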

In the previous blog entry, we used an example of the effect of a mother's smoking status on her infant's birthweight. Let's reconsider that example.

. webuse cattaneo2.dta, clear


Now, we use teffects nnmatch to estimate the ATE by NNM.

. teffects nnmatch (bweight mmarried mage fage medu prenatal1) (mbsmoke)

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : nearest-neighbor matching      Matches: requested =         1
Outcome model  : matching                                      min =         1
Distance metric: Mahalanobis                                   max =        16
------------------------------------------------------------------------------
|              AI Robust
bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ATE          |
mbsmoke |
(smoker  |
vs  |
nonsmoker)  |  -210.5435   29.32969    -7.18   0.000    -268.0286   -153.0584
------------------------------------------------------------------------------


The estimated ATE is -211, meaning that infants would weigh 211 grams less when all mothers smoked than when no mothers smoked.

The output also indicates that ties in distance caused at least one observation to be matched with 16 other observations, even though we requested only one match. NNM averages the outcomes of all the tied-in-distance observations, as it should. (They are all equally good and using all of them will reduce bias.)

NNM on discrete covariates does not guarantee exact matching. For example, some married women could be matched with single women. We probably prefer exact matching on discrete covariates, which we do now.

. teffects nnmatch (bweight mmarried mage fage medu prenatal1) (mbsmoke), ///
ematch(mmarried prenatal1)

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : nearest-neighbor matching      Matches: requested =         1
Outcome model  : matching                                      min =         1
Distance metric: Mahalanobis                                   max =        16
------------------------------------------------------------------------------
|              AI Robust
bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ATE          |
mbsmoke |
(smoker  |
vs  |
nonsmoker)  |  -209.5726   29.32603    -7.15   0.000    -267.0506   -152.0946
------------------------------------------------------------------------------


Exact matching on mmarried and prenatal1 changed the results a little bit.

Using more than one continuous covariate introduces large-sample bias, and we have three. The option biasadj() uses a linear model to remove the large-sample bias, as suggested by Abadie and Imbens (2006, 2011).

. teffects nnmatch (bweight mmarried mage fage medu prenatal1) (mbsmoke), ///
ematch(mmarried prenatal1)  biasadj(mage fage medu)

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : nearest-neighbor matching      Matches: requested =         1
Outcome model  : matching                                      min =         1
Distance metric: Mahalanobis                                   max =        16
------------------------------------------------------------------------------
|              AI Robust
bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ATE          |
mbsmoke |
(smoker  |
vs  |
nonsmoker)  |  -210.0558   29.32803    -7.16   0.000    -267.5377   -152.5739
------------------------------------------------------------------------------


In this case, the results changed by a small amount. In general, they can change a lot, and the amount increases with the number of continuous
covariates.

Propensity-score matching

NNM uses bias adjustment to remove the bias caused by matching on more than one continuous covariate. The generality of this approach makes it very appealing, but it can be difficult to think about issues of fit and model specification. Propensity-score matching (PSM) matches on an estimated probability of treatment known as the propensity score. There is no need for bias adjustment because we match on only one continuous covariate. PSM has the added benefit that we can use all the standard methods for checking the fit of binary regression models prior to matching.
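A toy Python sketch of the PSM idea follows: fit a logit treatment model by ML, then match each subject to the opposite-group subject with the closest fitted probability. Again, this illustrates the concept rather than teffects psmatch's exact algorithm, and the names and toy DGP are my own.

```python
import numpy as np
from scipy import optimize, special

def psm_ate(y, t, X):
    """ATE by 1-NN matching on an estimated logit propensity score (toy sketch)."""
    Z = np.column_stack([np.ones(len(y)), X])
    # fit the logit treatment model by maximum likelihood
    nll = lambda b: -np.sum(t * (Z @ b) - np.logaddexp(0, Z @ b))
    b = optimize.minimize(nll, np.zeros(Z.shape[1]), method="BFGS").x
    ps = special.expit(Z @ b)                      # estimated propensity scores
    treated = np.flatnonzero(t == 1)
    control = np.flatnonzero(t == 0)
    effects = []
    for i in range(len(y)):
        pool = control if t[i] == 1 else treated
        j = pool[np.argmin(np.abs(ps[pool] - ps[i]))]  # closest score
        effects.append(y[i] - y[j] if t[i] == 1 else y[j] - y[i])
    return float(np.mean(effects))

# toy data: treatment probability depends on an observable x; true ATE = 2
rng = np.random.default_rng(12345)
n = 1000
x = rng.normal(size=n)
t = (rng.random(n) < special.expit(0.8 * x)).astype(int)
y = x + 2 * t + rng.normal(0, 0.5, n)
ate = psm_ate(y, t, x)
```

Because treatment depends on x, a raw difference in means would be biased; matching on the single estimated score removes that bias without matching on x directly.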

We estimate the ATE by PSM using teffects psmatch.

. teffects psmatch (bweight) (mbsmoke mmarried mage fage medu prenatal1 )

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : propensity-score matching      Matches: requested =         1
Outcome model  : matching                                      min =         1
Treatment model: logit                                         max =        16
------------------------------------------------------------------------------
|              AI Robust
bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ATE          |
mbsmoke |
(smoker  |
vs  |
nonsmoker)  |  -229.4492   25.88746    -8.86   0.000    -280.1877   -178.7107
------------------------------------------------------------------------------


The estimated ATE is now -229, larger in magnitude than the NNM estimates but not significantly so.

How to choose among the six estimators

We now have six estimators:

1. RA: Regression adjustment
2. IPW: Inverse probability weighting
3. IPWRA: Inverse probability weighting with regression adjustment
4. AIPW: Augmented inverse probability weighting
5. NNM: Nearest-neighbor matching
6. PSM: Propensity-score matching

The ATEs we estimated are

1. RA: -277.06
2. IPW: -275.56
3. IPWRA: -229.97
4. AIPW: -230.99
5. NNM: -210.06
6. PSM: -229.45

Which estimator should we use?

We would never suggest searching the above table for the result that most closely fits your wishes and biases. The choice of estimator needs to be made beforehand.

So, how do we choose?

Here are some rules of thumb:

1. Under correct specification, all the estimators should produce similar results. (Similar estimates do not guarantee correct specification because all the specifications could be wrong.)
2. When you know the determinants of treatment status, IPW is a natural base-case estimator.
3. When you instead know the determinants of the outcome, RA is a natural base-case estimator.
4. The doubly robust estimators, AIPW and IPWRA, give us an extra shot at correct specification.
5. When you have many continuous covariates, NNM hinges crucially on the bias adjustment, and the computation becomes extremely difficult.
6. When you know the determinants of treatment status, PSM is another base-case estimator.
7. The IPW estimators are not reliable when the estimated treatment probabilities get too close to 0 or 1.

Final thoughts

Before we go, we reiterate the cautionary note from our last entry. Nothing about the mathematics of treatment-effects estimators magically extracts causal relationships from observational data. We cannot thoughtlessly analyze our data using Stata’s teffects commands and infer a causal relationship. The models must be supported by scientific theory.

If you would like to learn more about treatment effects in Stata, there is an entire manual devoted to the treatment-effects features in Stata 14; it includes a basic introduction, an advanced introduction, and many worked examples. In Stata, type help teffects:

.  help teffects


Title

[TE] teffects—Treatment-effects estimation for observational data

Syntax

… <output omitted> …

The title [TE] teffects will be in blue, which means it’s clickable. Click on it to go to the Treatment-Effects Reference Manual.

Or download the manual from our website; visit

http://www.stata.com/manuals14/te/

References

Abadie, A., and Imbens, G. W. 2006. Large sample properties of matching estimators for average treatment effects. Econometrica 74: 235–267.

Abadie, A., and Imbens, G. W. 2011. Bias-corrected matching estimators for average treatment effects. Journal of Business and Economic Statistics 29: 1–11.

Cattaneo, M. D. 2010. Efficient semiparametric estimation of multi-valued treatment effects under ignorability. Journal of Econometrics 155: 138–154.

Categories: Statistics Tags:

## 2015 Stata Conference recap

We are happy to report another successful Stata Conference is in the books! Attendees had the opportunity to network, learn, and share their experiences with the Stata community.

We’d like to thank the organizers and everyone who participated in making this year’s conference one of the best yet. Here’s what attendees had to say on social media.

As the conference approached, the countdown began.

Guests attended several presentations led by Stata experts and mingled with fellow researchers and Stata developers during breaks.


And sadly, all good things must come to an end.

If you missed this year, save the date for the 2016 Stata Conference in Chicago on July 28 and 29.

We look forward to seeing you next year!

Categories: Meetings Tags:

## Spotlight on irt

New to Stata 14 is a suite of commands to fit item response theory (IRT) models. IRT models are used to analyze the relationship between the latent trait of interest and the items intended to measure the trait. Stata’s irt commands provide easy access to some of the commonly used IRT models for binary and polytomous responses, and irtgraph commands can be used to plot item characteristic functions and information functions.

To learn more about Stata’s IRT features, I refer you to the [IRT] manual; here I want to go beyond the manual and show you a couple of examples of what you can do with a little bit of Stata code.

Example 1

To get started, I want to show you how simple IRT analysis is in Stata.

When I use the nine binary items q1-q9, all I need to type to fit a 1PL model is

irt 1pl q*

Equivalently, I can use a dash notation or explicitly spell out the variable names:

irt 1pl q1-q9
irt 1pl q1 q2 q3 q4 q5 q6 q7 q8 q9

I can also use parenthetical notation:

irt (1pl q1-q9)

Parenthetical notation is not very useful for a simple IRT model, but comes in handy when you want to fit a single IRT model to combinations of binary, ordinal, and nominal items:

irt (1pl q1-q5) (1pl q6-q9) (pcm x1-x10) ...

IRT graphs are equally simple to create in Stata; for example, to plot item characteristic curves (ICCs) for all the items in a model, I type

irtgraph icc

Yes, that’s it!

Example 2

Sometimes, I want to fit the same IRT model on two different groups and see how the estimated parameters differ between the groups. The exercise can be part of investigating differential item functioning (DIF) or parameter invariance.

I split the data into two groups, fit two separate 2PL models, and create two scatterplots to see how close the parameter estimates for discrimination and difficulty are for the two groups. For simplicity, my group variable is 1 for odd-numbered observations and 0 for even-numbered observations.

We see that the estimated parameters for item q8 appear to differ between the two groups.

Here is the code used in this example.

webuse masc1, clear

gen odd = mod(_n,2)

irt 2pl q* if odd
mat b_odd = e(b)'

irt 2pl q* if !odd
mat b_even = e(b)'

svmat double b_odd, names(group1)
svmat double b_even, names(group2)
replace group11 = . in 19
replace group21 = . in 19

gen lab1 = ""
replace lab1 = "q8" in 15

gen lab2 = ""
replace lab2 = "q8" in 16

corr group11 group21 if mod(_n,2)
local c1 : display %4.2f `r(rho)'

twoway (scatter group11 group21, mlabel(lab1) mlabsize(large) mlabpos(7)) ///
(function x, range(0 2)) if mod(_n,2), ///
name(discr,replace) title("Discrimination parameter; {&rho} = `c1'") ///
xtitle("Group 1 observations") ytitle("Group 2 observations") ///
legend(off)

corr group11 group21 if !mod(_n,2)
local c2 : display %4.2f `r(rho)'

twoway (scatter group11 group21, mlabel(lab2) mlabsize(large) mlabpos(7)) ///
(function x, range(-2 3)) if !mod(_n,2), ///
name(diff,replace) title("Difficulty parameter; {&rho} = `c2'") ///
xtitle("Group 1 observations") ytitle("Group 2 observations") ///
legend(off)

graph combine discr diff, xsize(8)


Example 3

Continuing with the example above, I want to show you how to use a likelihood-ratio test to test for item parameter differences between groups.

Using item q8 as an example, I want to fit one model that constrains item q8 parameters to be the same between the two groups and fit another model that allows these parameters to vary.

The first model is easy. I can fit a 2PL model for the entire dataset, which implicitly constrains the parameters to be equal for both groups. I store the estimates under the name equal.

. webuse masc1, clear
(Data from De Boeck & Wilson (2004))

. generate odd = mod(_n,2)
. quietly irt 2pl q*
. estimates store equal


To estimate the second model, I need the following:

. irt (2pl q1-q7 q9) (2pl q8 if odd) (2pl q8 if !odd)


Unfortunately, this is illegal syntax. I can, however, split the item into two new variables where each variable is restricted to the required subsample:

. generate q8_1 = q8 if odd
(400 missing values generated)

. generate q8_2 = q8 if !odd
(400 missing values generated)


I estimate the second IRT model, this time with items q8_1 and q8_2 taking the place of the original q8:

. quietly irt 2pl q1-q7 q8_1 q8_2 q9
. estat report q8_1 q8_2

Two-parameter logistic model                    Number of obs     =        800
Log likelihood = -4116.2064
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
q8_1         |
Discrim |   1.095867   .2647727     4.14   0.000     .5769218    1.614812
Diff |  -1.886126   .3491548    -5.40   0.000    -2.570457   -1.201795
-------------+----------------------------------------------------------------
q8_2         |
Discrim |    1.93005   .4731355     4.08   0.000     1.002721    2.857378
Diff |  -1.544908   .2011934    -7.68   0.000     -1.93924   -1.150577
------------------------------------------------------------------------------


Now, I can perform the likelihood-ratio test:

. lrtest equal ., force

Likelihood-ratio test                                 LR chi2(2)  =      4.53
(Assumption: equal nested in .)                       Prob > chi2 =    0.1040
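The reported p-value is easy to verify by hand. For 2 degrees of freedom, the chi-squared survival function has the closed form P(X > x) = exp(-x/2), so a few lines of Python (shown here just as an arithmetic check) reproduce it from the LR statistic:

```python
import math

# For 2 degrees of freedom, the chi-squared survival function is
# P(X > x) = exp(-x/2), so the reported p-value follows directly
# from the LR statistic.
lr_stat = 4.53
p_value = math.exp(-lr_stat / 2)
print(round(p_value, 4))   # approximately 0.104, matching the output
```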


The test suggests the first model is preferable even though the two ICCs clearly differ:

. irtgraph icc q8_1 q8_2, ylabel(0(.25)1)


Summary

IRT models are used to analyze the relationship between the latent trait of interest and the items intended to measure the trait. Stata’s irt commands provide easy access to some of the commonly used IRT models, and irtgraph commands implement the most commonly used IRT plots. With just a few extra steps, you can easily create customized graphs, such as the ones demonstrated above, which incorporate information from separate IRT models.

Categories: Statistics Tags:

## Introduction to treatment effects in Stata: Part 1

This post was written jointly with David Drukker, Director of Econometrics, StataCorp.

The topic for today is the treatment-effects features in Stata.

Treatment-effects estimators estimate the causal effect of a treatment on an outcome based on observational data.

In today’s posting, we will discuss four treatment-effects estimators:

1. RA: Regression adjustment
2. IPW: Inverse probability weighting
3. IPWRA: Inverse probability weighting with regression adjustment
4. AIPW: Augmented inverse probability weighting

We’ll save the matching estimators for part 2.

We should note that nothing about treatment-effects estimators magically extracts causal relationships. As with any regression analysis of observational data, the causal interpretation must be based on a reasonable underlying scientific rationale.

Introduction

We are going to discuss treatments and outcomes.

A treatment could be a new drug and the outcome blood pressure or cholesterol levels. A treatment could be a surgical procedure and the outcome patient mobility. A treatment could be a job training program and the outcome employment or wages. A treatment could even be an ad campaign designed to increase the sales of a product.

Consider whether a mother’s smoking affects the weight of her baby at birth. Questions like this one can only be answered using observational data. Experiments would be unethical.

The problem with observational data is that the subjects choose whether to get the treatment. For example, a mother decides to smoke or not to smoke. The subjects are said to have self-selected into the treated and untreated groups.

In an ideal world, we would design an experiment to test cause-and-effect and treatment-and-outcome relationships. We would randomly assign subjects to the treated or untreated groups. Randomly assigning the treatment guarantees that the treatment is independent of the outcome, which greatly simplifies the analysis.

Causal inference requires the estimation of the unconditional means of the outcomes for each treatment level. We only observe the outcome of each subject conditional on the received treatment regardless of whether the data are observational or experimental. For experimental data, random assignment of the treatment guarantees that the treatment is independent of the outcome; so averages of the outcomes conditional on observed treatment estimate the unconditional means of interest. For observational data, we model the treatment assignment process. If our model is correct, the treatment assignment process is considered as good as random conditional on the covariates in our model.

Let’s consider an example. Figure 1 is a scatterplot of observational data similar to those used by Cattaneo (2010). The treatment variable is the mother’s smoking status during pregnancy, and the outcome is the birthweight of her baby.

The red points represent the mothers who smoked during pregnancy, while the green points represent the mothers who did not. The mothers themselves chose whether to smoke, and that complicates the analysis.

We cannot estimate the effect of smoking on birthweight by comparing the mean birthweights of babies of mothers who did and did not smoke. Why not? Look again at our graph. Older mothers tend to have heavier babies regardless of whether they smoked while pregnant. In these data, older mothers were also more likely to be smokers. Thus, mother’s age is related to both treatment status and outcome. So how should we proceed?

RA: The regression adjustment estimator

RA estimators model the outcome to account for the nonrandom treatment assignment.

We might ask, “How would the outcomes have changed had the mothers who smoked chosen not to smoke?” or “How would the outcomes have changed had the mothers who didn’t smoke chosen to smoke?”. If we knew the answers to these counterfactual questions, analysis would be easy: we would just subtract the observed outcomes from the counterfactual outcomes.

The counterfactual outcomes are called unobserved potential outcomes in the treatment-effects literature. Sometimes the word unobserved is dropped.

We can construct measurements of these unobserved potential outcomes, and our data might look like this:

In figure 2, the observed data are shown using solid points and the unobserved potential outcomes are shown using hollow points. The hollow red points represent the potential outcomes for the smokers had they not smoked. The hollow green points represent the potential outcomes for the nonsmokers had they smoked.

We can then estimate the unobserved potential outcomes by fitting separate linear regression models to the two treatment groups, using the observed data (solid points).

In figure 3, we have one regression line for nonsmokers (the green line) and a separate regression line for smokers (the red line).

Let’s understand what the two lines mean:

The green point on the left in figure 4, labeled Observed, is an observation for a mother who did not smoke. The point labeled E(y0) on the green regression line is the expected birthweight of the baby given the mother’s age and that she didn’t smoke. The point labeled E(y1) on the red regression line is the expected birthweight of the baby for the same mother had she smoked.

The difference between these expectations estimates the covariate-specific treatment effect for those who did not get the treatment.

Now, let’s look at the other counterfactual question.

The red point on the right in figure 4, labeled Observed in red, is an observation for a mother who smoked during pregnancy. The points on the green and red regression lines again represent the expected birthweights — the potential outcomes — of the mother’s baby under the two treatment conditions.

The difference between these expectations estimates the covariate-specific treatment effect for those who got the treatment.

Note that we estimate a treatment effect, conditional on covariate values, for each subject, and we do so regardless of which treatment was actually received. The average of these effects over all the subjects in the data estimates the ATE.

We could also use figure 4 to motivate a prediction of the outcome that each subject would obtain for each treatment level, regardless of the treatment received. The story is analogous to the one above. Averages of these predictions over all the subjects in the data estimate the potential-outcome means (POMs) for each treatment level.

It is reassuring that the difference in the estimated POMs is the same as the estimate of the ATE discussed above.

The ATE on the treated (ATET) is like the ATE, but it uses only the subjects who were observed in the treatment group. This approach to calculating treatment effects is called regression adjustment (RA).
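The RA recipe (fit a separate regression in each treatment group, predict both potential outcomes for every subject, average the differences) can be sketched outside Stata in a few lines of Python. This is an illustrative toy with made-up data and a deliberately minimal one-covariate OLS, not the teffects implementation:

```python
# Regression adjustment with one covariate (toy sketch, made-up data).
# Fit a separate least-squares line in each treatment group, predict both
# potential outcomes for every subject, and average the differences.

def ols(x, y):
    """Return (intercept, slope) of a simple least-squares fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

def ra_ate(y, treat, x):
    a0, b0 = ols([xi for xi, t in zip(x, treat) if not t],
                 [yi for yi, t in zip(y, treat) if not t])
    a1, b1 = ols([xi for xi, t in zip(x, treat) if t],
                 [yi for yi, t in zip(y, treat) if t])
    # both potential-outcome predictions for every subject
    return sum((a1 + b1 * xi) - (a0 + b0 * xi) for xi in x) / len(x)

print(ra_ate([3200, 3400, 3000, 3100],   # outcome (birthweight, say)
             [0, 0, 1, 1],               # treatment indicator
             [20, 30, 20, 30]))          # covariate (mother's age, say)
```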

Let’s open a dataset and try this using Stata.

. webuse cattaneo2.dta, clear
(Excerpt from Cattaneo (2010) Journal of Econometrics 155: 138-154)

To estimate the POMs in the two treatment groups, we type

. teffects ra (bweight mage) (mbsmoke), pomeans


We specify the outcome model in the first set of parentheses with the outcome variable followed by its covariates. In this example, the outcome variable is bweight and the only covariate is mage.

We specify the treatment model — simply the treatment variable — in the second set of parentheses. In this example, we specify only the treatment variable mbsmoke. We’ll talk about covariates in the next section.

The result of typing the command is

. teffects ra (bweight mage) (mbsmoke), pomeans

Iteration 0:   EE criterion =  7.878e-24
Iteration 1:   EE criterion =  8.468e-26

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : regression adjustment
Outcome model  : linear
Treatment model: none
------------------------------------------------------------------------------
|               Robust
bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
POmeans      |
mbsmoke |
nonsmoker  |   3409.435   9.294101   366.84   0.000     3391.219    3427.651
smoker  |   3132.374   20.61936   151.91   0.000     3091.961    3172.787
------------------------------------------------------------------------------


The output reports that the average birthweight would be 3,132 grams if all mothers smoked and 3,409 grams if no mother smoked.

We can estimate the ATE of smoking on birthweight by subtracting the POMs: 3132.374 – 3409.435 = -277.061. Or we can reissue our teffects ra command with the ate option and get standard errors and confidence intervals:

. teffects ra (bweight mage) (mbsmoke), ate

Iteration 0:   EE criterion =  7.878e-24
Iteration 1:   EE criterion =  5.185e-26

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : regression adjustment
Outcome model  : linear
Treatment model: none
-------------------------------------------------------------------------------
|               Robust
bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
--------------+----------------------------------------------------------------
ATE           |
mbsmoke |
(smoker vs    |
nonsmoker)  |  -277.0611   22.62844   -12.24   0.000    -321.4121   -232.7102
--------------+----------------------------------------------------------------
POmean        |
mbsmoke |
nonsmoker  |   3409.435   9.294101   366.84   0.000     3391.219    3427.651
-------------------------------------------------------------------------------


The output reports the same ATE we calculated by hand: -277.061. The ATE is the average of the differences between the birthweights when each mother smokes and the birthweights when no mother smokes.

We can also estimate the ATET by using the teffects ra command with option atet, but we will not do so here.

IPW: The inverse probability weighting estimator

RA estimators model the outcome to account for the nonrandom treatment assignment. Some researchers prefer to model the treatment assignment process and not specify a model for the outcome.

We know that smokers tend to be older than nonsmokers in our data. We also hypothesize that mother’s age directly affects birthweight. We observed this in figure 1, which we show again below.

This figure shows that treatment assignment depends on mother’s age. We would like to have a method of adjusting for this dependence. In particular, we wish we had more upper-age green points and lower-age red points. If we did, the mean birthweight for each group would change. We don’t know how that would affect the difference in means, but we do know it would be a better estimate of the difference.

To achieve a similar result, we are going to weight smokers in the lower-age range and nonsmokers in the upper-age range more heavily, and weight smokers in the upper-age range and nonsmokers in the lower-age range less heavily.

We will fit a probit or logit model of the form

Pr(woman smokes) = F(a + b*age)

teffects uses logit by default, but we will specify the probit option for illustration.

Once we have fit that model, we can obtain the prediction Pr(woman smokes) for each observation in the data; we'll call this pi. Then, in making our POM calculations (each of which is just a weighted mean), we will use those probabilities to weight the observations. We will weight observations on smokers by 1/pi so that weights will be large when the probability of being a smoker is small. We will weight observations on nonsmokers by 1/(1-pi) so that weights will be large when the probability of being a nonsmoker is small.

That results in the following graph replacing figure 1:

In figure 5, larger circles indicate larger weights.
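The weighted-mean computation at the heart of IPW is simple enough to write out by hand. Here is a pure-Python sketch with made-up numbers, where pi stands in for the fitted treatment probabilities from the probit above:

```python
# IPW potential-outcome means (toy sketch, made-up numbers).
# pi stands in for fitted Pr(treated) from a probit or logit model.

def ipw_pomeans(y, treat, pi):
    # weight treated units by 1/pi and untreated units by 1/(1-pi)
    w1 = [t / p for t, p in zip(treat, pi)]
    w0 = [(1 - t) / (1 - p) for t, p in zip(treat, pi)]
    pom1 = sum(w * yi for w, yi in zip(w1, y)) / sum(w1)
    pom0 = sum(w * yi for w, yi in zip(w0, y)) / sum(w0)
    return pom0, pom1

pom0, pom1 = ipw_pomeans([3400, 3300, 3100, 3000],
                         [0, 0, 1, 1],
                         [0.2, 0.5, 0.5, 0.8])
print(round(pom1 - pom0, 1))   # the implied ATE for the toy data
```

Note how a weight like 1/pi blows up as pi approaches 0; this is why, as discussed below, IPW estimators are unreliable when estimated treatment probabilities get close to 0 or 1.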

To estimate the POMs with this IPW estimator, we can type

. teffects ipw (bweight) (mbsmoke mage, probit), pomeans


The first set of parentheses specifies the outcome model, which is simply the outcome variable in this case; there are no covariates. The second set of parentheses specifies the treatment model, which includes the treatment variable (mbsmoke) followed by covariates (in this case, just mage) and the kind of model (probit).

The result is

. teffects ipw (bweight) (mbsmoke mage, probit), pomeans

Iteration 0:   EE criterion =  3.615e-15
Iteration 1:   EE criterion =  4.381e-25

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : inverse-probability weights
Outcome model  : weighted mean
Treatment model: probit
------------------------------------------------------------------------------
|               Robust
bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
POmeans      |
mbsmoke |
nonsmoker  |   3408.979   9.307838   366.25   0.000     3390.736    3427.222
smoker  |   3133.479   20.66762   151.61   0.000     3092.971    3173.986
------------------------------------------------------------------------------


Our output reports that the average birthweight would be 3,133 grams if all the mothers smoked and 3,409 grams if none of the mothers smoked.

This time, the ATE is -275.5, and if we typed

. teffects ipw (bweight) (mbsmoke mage, probit), ate
(Output omitted)


we would learn that the standard error is 22.68 and the 95% confidence interval is [-319.9, -231.0].

Just as with teffects ra, if we wanted ATET, we could specify the teffects ipw command with the atet option.

IPWRA: The IPW with regression adjustment estimator

RA estimators model the outcome to account for the nonrandom treatment assignment. IPW estimators model the treatment to account for the nonrandom treatment assignment. IPWRA estimators model both the outcome and the treatment to account for the nonrandom treatment assignment.

IPWRA uses IPW weights to estimate corrected regression coefficients that are subsequently used to perform regression adjustment.

The covariates in the outcome model and the treatment model do not have to be the same, and they often are not because the variables that influence a subject’s selection of treatment group are often different from the variables associated with the outcome. The IPWRA estimator has the double-robust property: the estimates of the effects are consistent if either the treatment model or the outcome model, but not both, is misspecified.
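The two-step logic, IPW weights first, then regression adjustment with the weighted fits, can be sketched in Python. Again this is an illustrative toy with invented data and a minimal one-covariate weighted OLS, not the teffects implementation:

```python
# IPWRA sketch (toy data): weighted least squares in each treatment
# group using IPW weights, then regression adjustment with the fitted
# lines.  pi stands in for fitted treatment probabilities.

def wols(x, y, w):
    """Weighted simple least squares: return (intercept, slope)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    slope = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
             / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    return my - slope * mx, slope

def ipwra_ate(y, treat, x, pi):
    # IPW weights: 1/pi in the treated group, 1/(1-pi) in the untreated
    g0 = [(xi, yi, 1 / (1 - p)) for xi, yi, t, p in zip(x, y, treat, pi) if not t]
    g1 = [(xi, yi, 1 / p) for xi, yi, t, p in zip(x, y, treat, pi) if t]
    a0, b0 = wols([g[0] for g in g0], [g[1] for g in g0], [g[2] for g in g0])
    a1, b1 = wols([g[0] for g in g1], [g[1] for g in g1], [g[2] for g in g1])
    # regression adjustment using the IPW-corrected coefficients
    return sum((a1 + b1 * xi) - (a0 + b0 * xi) for xi in x) / len(x)

# with a constant pi the weights are equal, so this reduces to plain RA
print(ipwra_ate([3200, 3400, 3000, 3100], [0, 0, 1, 1],
                [20, 30, 20, 30], [0.5, 0.5, 0.5, 0.5]))
```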

Let’s consider a situation with more complex outcome and treatment models but still using our low-birthweight data.

The outcome model will include

1. mage: the mother’s age
2. prenatal1: an indicator for prenatal visit during the first trimester
3. mmarried: an indicator for marital status of the mother
4. fbaby: an indicator for being first born

The treatment model will include

1. all the covariates of the outcome model
2. mage^2
3. medu: years of maternal education

We will also specify the aequations option to report the coefficients of the outcome and treatment models.

. teffects ipwra (bweight mage prenatal1 mmarried fbaby)                ///
(mbsmoke mmarried c.mage##c.mage fbaby medu, probit)   ///
, pomeans aequations

Iteration 0:   EE criterion =  1.001e-20
Iteration 1:   EE criterion =  1.134e-25

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : IPW regression adjustment
Outcome model  : linear
Treatment model: probit
-------------------------------------------------------------------------------
|               Robust
bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
--------------+----------------------------------------------------------------
POmeans       |
mbsmoke |
nonsmoker  |   3403.336    9.57126   355.58   0.000     3384.576    3422.095
smoker  |   3173.369   24.86997   127.60   0.000     3124.624    3222.113
--------------+----------------------------------------------------------------
OME0          |
mage |   2.893051   2.134788     1.36   0.175    -1.291056    7.077158
prenatal1 |   67.98549   28.78428     2.36   0.018     11.56933    124.4017
mmarried |   155.5893   26.46903     5.88   0.000      103.711    207.4677
fbaby |   -71.9215   20.39317    -3.53   0.000    -111.8914   -31.95162
_cons |   3194.808   55.04911    58.04   0.000     3086.913    3302.702
--------------+----------------------------------------------------------------
OME1          |
mage |  -5.068833   5.954425    -0.85   0.395    -16.73929    6.601626
prenatal1 |   34.76923   43.18534     0.81   0.421    -49.87248    119.4109
mmarried |   124.0941   40.29775     3.08   0.002     45.11193    203.0762
fbaby |   39.89692   56.82072     0.70   0.483    -71.46966    151.2635
_cons |   3175.551   153.8312    20.64   0.000     2874.047    3477.054
--------------+----------------------------------------------------------------
TME1          |
mmarried |  -.6484821   .0554173   -11.70   0.000     -.757098   -.5398663
mage |   .1744327   .0363718     4.80   0.000     .1031452    .2457202
|
c.mage#c.mage |  -.0032559   .0006678    -4.88   0.000    -.0045647   -.0019471
|
fbaby |  -.2175962   .0495604    -4.39   0.000    -.3147328   -.1204595
medu |  -.0863631   .0100148    -8.62   0.000    -.1059917   -.0667345
_cons |  -1.558255   .4639691    -3.36   0.001    -2.467618   -.6488926
-------------------------------------------------------------------------------


The POmeans section of the output displays the POMs for the two treatment groups. The ATE is now calculated to be 3173.369 – 3403.336 = -229.967.

The OME0 and OME1 sections display the RA coefficients for the untreated and treated groups, respectively.

The TME1 section of the output displays the coefficients for the probit treatment model.

Just as in the two previous cases, if we wanted the ATE with standard errors, etc., we would specify the ate option. If we wanted ATET, we would specify the atet option.

AIPW: The augmented IPW estimator

IPWRA estimators model both the outcome and the treatment to account for the nonrandom treatment assignment. So do AIPW estimators.

The AIPW estimator adds a bias-correction term to the IPW estimator. If the treatment model is correctly specified, the bias-correction term is 0 and the model is reduced to the IPW estimator. If the treatment model is misspecified but the outcome model is correctly specified, the bias-correction term corrects the estimator. Thus, the bias-correction term gives the AIPW estimator the same double-robust property as the IPWRA estimator.
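One standard textbook form of the AIPW estimator (not necessarily the exact form teffects computes) makes the bias-correction term explicit. In the Python sketch below, with made-up data, pi are fitted treatment probabilities and m0, m1 are outcome-model predictions for each unit under control and treatment:

```python
# One standard form of the AIPW estimator (toy sketch, made-up data).
# pi: fitted treatment probabilities; m0, m1: outcome-model predictions
# for each unit under control and under treatment, respectively.

def aipw_ate(y, treat, pi, m0, m1):
    terms = []
    for yi, t, p, mu0, mu1 in zip(y, treat, pi, m0, m1):
        # IPW term plus a bias correction built from the outcome model;
        # the correction term vanishes when the treatment model is exact
        po1 = t * yi / p - (t - p) / p * mu1
        po0 = (1 - t) * yi / (1 - p) + (t - p) / (1 - p) * mu0
        terms.append(po1 - po0)
    return sum(terms) / len(terms)

# double robustness in action: the outcome model below is exact (y equals
# m1 for treated units and m0 for untreated units), so the estimate
# recovers the mean of m1 - m0 even though pi is badly misspecified
print(round(aipw_ate([3000, 3200, 3100, 3300],
                     [1, 0, 1, 0],
                     [0.3, 0.6, 0.5, 0.2],
                     [3300, 3200, 3400, 3300],
                     [3000, 2900, 3100, 3000]), 1))
```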

The syntax and output for the AIPW estimator is almost identical to that for the IPWRA estimator.

. teffects aipw (bweight mage prenatal1 mmarried fbaby)                 ///
(mbsmoke mmarried c.mage##c.mage fbaby medu, probit)    ///
, pomeans aequations

Iteration 0:   EE criterion =  4.632e-21
Iteration 1:   EE criterion =  5.810e-26

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : augmented IPW
Outcome model  : linear by ML
Treatment model: probit
-------------------------------------------------------------------------------
|               Robust
bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
--------------+----------------------------------------------------------------
POmeans       |
mbsmoke |
nonsmoker  |   3403.355   9.568472   355.68   0.000     3384.601    3422.109
smoker  |   3172.366   24.42456   129.88   0.000     3124.495    3220.237
--------------+----------------------------------------------------------------
OME0          |
mage |   2.546828   2.084324     1.22   0.222    -1.538373    6.632028
prenatal1 |   64.40859   27.52699     2.34   0.019     10.45669    118.3605
mmarried |   160.9513    26.6162     6.05   0.000     108.7845    213.1181
fbaby |   -71.3286   19.64701    -3.63   0.000     -109.836   -32.82117
_cons |   3202.746   54.01082    59.30   0.000     3096.886    3308.605
--------------+----------------------------------------------------------------
OME1          |
mage |  -7.370881    4.21817    -1.75   0.081    -15.63834    .8965804
prenatal1 |   25.11133   40.37541     0.62   0.534    -54.02302    104.2457
mmarried |   133.6617   40.86443     3.27   0.001      53.5689    213.7545
fbaby |   41.43991   39.70712     1.04   0.297    -36.38461    119.2644
_cons |   3227.169   104.4059    30.91   0.000     3022.537    3431.801
--------------+----------------------------------------------------------------
TME1          |
mmarried |  -.6484821   .0554173   -11.70   0.000     -.757098   -.5398663
mage |   .1744327   .0363718     4.80   0.000     .1031452    .2457202
|
c.mage#c.mage |  -.0032559   .0006678    -4.88   0.000    -.0045647   -.0019471
|
fbaby |  -.2175962   .0495604    -4.39   0.000    -.3147328   -.1204595
medu |  -.0863631   .0100148    -8.62   0.000    -.1059917   -.0667345
_cons |  -1.558255   .4639691    -3.36   0.001    -2.467618   -.6488926
-------------------------------------------------------------------------------


The ATE is 3172.366 – 3403.355 = -230.989.

Final thoughts

The example above used a continuous outcome: birthweight. teffects can also be used with binary, count, and nonnegative continuous outcomes.

The estimators also allow multiple treatment categories.

An entire manual is devoted to the treatment-effects features in Stata 13, and it includes a basic introduction, advanced discussion, and worked examples. If you would like to learn more, you can download the [TE] Treatment-effects Reference Manual from the Stata website.

More to come

Next time, in part 2, we will cover the matching estimators.

Reference

Cattaneo, M. D. 2010. Efficient semiparametric estimation of multi-valued treatment effects under ignorability. Journal of Econometrics 155: 138–154.