## Programming an estimation command in Stata: Where to store your stuff

If you tell me “I program in Stata”, it makes me happy, but I do not know what you mean. Do you write scripts to make your research reproducible, or do you write Stata commands that anyone can use and reuse? In the series #StataProgramming, I will show you how to write your own commands, but I start at the beginning. Discussing the difference between scripts and commands here introduces some essential programming concepts and constructions that I use to write scripts and commands.

Scripts versus commands

A script is a program that always performs the same tasks on the same inputs and produces exactly the same results. Scripts in Stata are known as do-files and the files containing them end in .do. For example, I could write a do-file to

1. read in the National Longitudinal Study of Youth (NLSY) dataset,
2. clean the data,
3. form a sample for some population, and
4. run a bunch of regressions on the sample.

This structure is at the heart of reproducible research: produce the same results from the same inputs every time. Do-files have a one-off structure. For example, I could not somehow tell this do-file to perform the analogous tasks on the Panel Study of Income Dynamics (PSID). Commands are reusable programs that take arguments to perform a task on any data of a certain type. For example, regress performs ordinary least squares on the specified variables regardless of whether they come from the NLSY, the PSID, or any other dataset. Stata commands are written in the automatic do-file (ado) language; the files containing them end in .ado. Stata commands written in the ado language are known as ado-commands.
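As a concrete sketch, the four steps listed above might look like the do-file below. The file name, variable names, and cleaning rules are hypothetical stand-ins, not the actual NLSY processing.

```stata
// nlsy_analysis.do -- illustrative only; dataset and variable names are made up
version 14
use nlsy_raw.dta, clear            // 1. read in the NLSY data
drop if missing(wage)              // 2. clean the data
keep if age >= 25 & age <= 55      // 3. form the sample for some population
regress wage education tenure      // 4. run a regression on the sample
```

Running this do-file twice produces exactly the same results twice, which is the point of a script.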

An example do-file

The commands in code block 1 are contained in the file doex.do in the current working directory of my computer.

Code block 1: doex.do

// version 1.0.0  04Oct2015 (This line is comment)
version 14                     // version #.# fixes the version of Stata
use http://www.stata.com/data/accident2.dta
summarize accidents tickets


We execute the commands by typing do doex, which produces

Example 1: Output from do doex

. do doex

. // version 1.0.0  04Oct2015 (This line is comment)
. version 14                     // version #.# fixes the version of Stata

. use http://www.stata.com/data/accident2.dta

. summarize accidents tickets

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
accidents |        948    .8512658    2.851856          0         20
tickets |        948    1.436709    1.849456          0          7

.
.
end of do-file

1. Line 1 in doex.do is a comment that helps to document the code but is not executed by Stata. The // initiates a comment. Anything following the // on that line is ignored by Stata.
2. In the comment on line 1, I put a version number and the date that I last changed this file. The date and the version help me keep track of the changes that I make as I work on the project. This information also helps me answer questions from others with whom I have shared a version of this file.
3. Line 2 specifies the definition of the Stata language that I use. Stata changes over time. Setting the version ensures that the do-file continues to run and that the results do not change as the Stata language evolves.
4. Line 3 reads in the accident2.dta dataset.
5. Line 4 summarizes the variables accidents and tickets.

Storing stuff in Stata

Programming in Stata is like putting stuff into boxes, making Stata change the stuff in the boxes, and getting the changed stuff out of the boxes. For example, code block 2 contains the code for doex2.do, whose output I display in example 2.

Code block 2: doex2.do

// version 1.0.0  04Oct2015 (This line is comment)
version 14                     // version #.# fixes the version of Stata
use http://www.stata.com/data/accident2.dta
generate ln_traffic = ln(traffic)
summarize ln_traffic


Example 2: Output from do doex2

. do doex2

. // version 1.0.0  04Oct2015 (This line is comment)
. version 14                     // version #.# fixes the version of Stata

. use http://www.stata.com/data/accident2.dta

. generate ln_traffic = ln(traffic)

. summarize ln_traffic

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
ln_traffic |        948    1.346907    1.004952  -5.261297   2.302408

.
.
end of do-file


In line 4 of code block 2, I generate the new variable ln_traffic, which I summarize on line 5. doex2.do uses generate to change what is in the box ln_traffic and uses summarize to get a function of the changed stuff out of the box. Stata variables are the most frequently used type of box in Stata, but when you are programming, you will also rely on Stata matrices.

There can be only one variable named traffic in a Stata dataset, and its contents can be viewed or changed interactively, by a do-file, or by an ado-file command. Similarly, there can be only one Stata matrix named beta in a Stata session, and its contents can likewise be viewed or changed interactively, by a do-file, or by an ado-file command. Stata variables and Stata matrices are global boxes: only one box with a given name exists in a Stata session, and its contents can be viewed or changed anywhere in that session.
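A minimal sketch makes the point for matrices; the name beta and its values are illustrative, not taken from any estimation.

```stata
matrix beta = (0.5, 1.2, -7.4)   // create the global box named beta
matrix list beta                 // view its contents from any Stata level
matrix beta[1,2] = 1.3           // change one element; every level sees the change
```

Any do-file or ado-file run in the same session reads and writes this same matrix, which is exactly what makes it a global box.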

The opposite of global is local. If something is local in Stata, its contents can be accessed or changed only in the interactive session, in a particular do-file, or in a particular ado-file.

Although I am discussing do-files at the moment, remember that we are learning techniques to write commands. It is essential to understand the differences between global boxes and local boxes to program commands in Stata. Global boxes, like variables, could contain data that the users of your command do not want changed. For example, a command you write should never change a user’s variable in a way that was not requested.
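One standard way to honor this rule is to do scratch work in temporary variables rather than in the user's variables. Here is a sketch; the computation itself is hypothetical.

```stata
// inside a command, use a tempvar for intermediate results
tempvar work
quietly generate double `work' = ln(traffic)   // scratch computation
summarize `work', meanonly
// `work' is dropped automatically when the program or do-file ends,
// and the user's variables are left untouched
```

The tempvar mechanism guarantees the scratch name never collides with a variable the user already has.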

Levels of Stata

The notion that there are levels of Stata can help explain the difference between global boxes and local boxes. Suppose that I run two do-files or ado-files. Think of the interactive Stata session as level 0 of Stata, and think of the two do-files or ado-files as Stata levels 1 and 2. Global boxes like variables and matrices live in global memory that can be accessed or changed from a Stata command executed in level 0, 1, or 2. Local boxes can be accessed or changed only by a Stata command within a particular level of Stata. (This description is not exactly how Stata works, but the details about how Stata really handles levels are not important here.)

Figure 1 depicts this structure.

Figure 1: Memory by Stata level

Figure 1 clarifies

• that commands executed at all Stata levels can access and change the objects in global memory,
• that only commands executed at Stata level 0 can access and change the objects local to Stata level 0,
• that only commands executed at Stata level 1 can access and change the objects local to Stata level 1, and
• that only commands executed at Stata level 2 can access and change the objects local to Stata level 2.

Global and local macros: Storing and extracting

Macros are Stata boxes that hold information as characters, also known as strings. Stata has both global macros and local macros. Global macros are global and local macros are local. Global macros can be accessed and changed by a command executed at any Stata level. Local macros can be accessed and changed only by a command executed at a specific Stata level.
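The level distinction is easy to see with two macros: a global macro defined inside a do-file survives after the do-file ends, while a local macro does not. The file name defs.do and its contents below are hypothetical.

```stata
// defs.do contains:
//     global gv "still here"
//     local  lv "gone when the do-file ends"
do defs.do
display "$gv"     // displays: still here
display "`lv'"    // the local died with level 1; this displays an empty line
```

The local macro lv existed only at level 1, so nothing remains of it back at level 0.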

The easiest way to begin to understand global macros is to put something into a global macro and then to get it back out. Code block 3 contains the code for global1.do, which stores and then retrieves information from a global macro.

Code block 3: global1.do

// version 1.0.0  04Oct2015
version 14
global vlist "y x1 x2"
display "vlist contains $vlist"


Example 3: Output from do global1

. do global1

. // version 1.0.0  04Oct2015

. version 14

. global vlist "y x1 x2"

. display "vlist contains $vlist"
vlist contains y x1 x2

.
end of do-file


Line 3 of code block 3 puts the string y x1 x2 into the global macro named vlist. To extract what I put into a global macro, I prefix the name of the global macro with a $. Line 4 of the code block and its output in example 3 illustrate this usage by extracting and displaying the contents of vlist.

Code block 4 contains the code for local1.do, and its output is given in example 4. They illustrate how to put something into a local macro and how to extract something from it.

Code block 4: local1.do

// version 1.0.0  04Oct2015
version 14
local vlist "y x1 x2"
display "vlist contains `vlist'"


Example 4: Output from do local1

. do local1

. // version 1.0.0  04Oct2015

. version 14

. local vlist "y x1 x2"

. display "vlist contains `vlist'"
vlist contains y x1 x2

.
end of do-file


Line 3 of code block 4 puts the string y x1 x2 into the local macro named vlist. To extract what I put into a local macro, I enclose the name of the local macro between a single left quote (`) and a single right quote ('). Line 4 of code block 4 displays what is contained in the local macro vlist, and its output in example 4 illustrates this usage.

Getting stuff from Stata commands

Now that we have boxes, I will show you how to store stuff computed by Stata in these boxes. Analysis commands, like summarize, store their results in r(). Estimation commands, like regress, store their results in e(). Somewhat tautologically, commands that store their results in r() are also known as r-class commands, and commands that store their results in e() are also known as e-class commands.

I can use return list to see results stored by an r-class command. Below, I list out what summarize has stored in r() and compute the mean from the stored results.

Example 5: Getting results from an r-class command

. use http://www.stata.com/data/accident2.dta, clear

. summarize accidents

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
accidents |        948    .8512658    2.851856          0         20

. return list

scalars:
r(N) =  948
r(sum_w) =  948
r(mean) =  .8512658227848101
r(Var) =  8.133081817331211
r(sd) =  2.851855854935732
r(min) =  0
r(max) =  20
r(sum) =  807

. local sum = r(sum)

. local N = r(N)

. display "The mean is " `sum'/`N'
The mean is .85126582


Estimation commands are more formal than analysis commands, so they save more stuff. Official Stata estimation commands save lots of stuff, because they follow lots of rules that make postestimation easy for users. Do not be alarmed by the number of things stored by poisson. Below, I list out the results stored by poisson and create a Stata matrix that contains the coefficient estimates.

Example 6: Getting results from an e-class command

. poisson accidents traffic tickets male

Iteration 0:   log likelihood = -377.98594
Iteration 1:   log likelihood = -370.68001
Iteration 2:   log likelihood = -370.66527
Iteration 3:   log likelihood = -370.66527

Poisson regression                              Number of obs     =        948
LR chi2(3)        =    3357.64
Prob > chi2       =     0.0000
Log likelihood = -370.66527                     Pseudo R2         =     0.8191

------------------------------------------------------------------------------
accidents |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
traffic |   .0764399   .0129856     5.89   0.000     .0509887    .1018912
tickets |   1.366614   .0380641    35.90   0.000      1.29201    1.441218
male |   3.228004   .1145458    28.18   0.000     3.003499     3.45251
_cons |  -7.434478   .2590086   -28.70   0.000    -7.942126    -6.92683
------------------------------------------------------------------------------

. ereturn list

scalars:
e(rank) =  4
e(N) =  948
e(ic) =  3
e(k) =  4
e(k_eq) =  1
e(k_dv) =  1
e(converged) =  1
e(rc) =  0
e(ll) =  -370.6652697757637
e(k_eq_model) =  1
e(ll_0) =  -2049.485325326086
e(df_m) =  3
e(chi2) =  3357.640111100644
e(p) =  0
e(r2_p) =  .8191422669899876

macros:
e(cmdline) : "poisson accidents traffic tickets male"
e(cmd) : "poisson"
e(predict) : "poisso_p"
e(estat_cmd) : "poisson_estat"
e(chi2type) : "LR"
e(opt) : "moptimize"
e(vce) : "oim"
e(title) : "Poisson regression"
e(user) : "poiss_lf"
e(ml_method) : "e2"
e(technique) : "nr"
e(which) : "max"
e(depvar) : "accidents"
e(properties) : "b V"

matrices:
e(b) :  1 x 4
e(V) :  4 x 4
e(ilog) :  1 x 20
e(gradient) :  1 x 4

functions:
e(sample)

. matrix b = e(b)

. matrix list b

b[1,4]
accidents:  accidents:  accidents:  accidents:
traffic     tickets        male       _cons
y1    .07643992    1.366614   3.2280044   -7.434478


Done and undone

In this second post in the series #StataProgramming, I discussed the difference between scripts and commands, provided an introduction to the concepts of global and local memory objects, discussed global macros and local macros, and showed how to access results stored by other commands. In the next post in the series #StataProgramming, I discuss an example that further illustrates the differences between global macros and local macros.

Categories: Programming Tags:

## Probit model with sample selection by mlexp

Overview

In a previous post, David Drukker demonstrated how to use mlexp to estimate the degree of freedom parameter in a chi-squared distribution by maximum likelihood (ML). In this post, I am going to use mlexp to estimate the parameters of a probit model with sample selection. I will illustrate how to specify a more complex likelihood in mlexp and provide intuition for the probit model with sample selection. Our results match the heckprobit command; see [R] heckprobit for more details.
Probit model

For binary outcome \(y_i\) and regressors \({\bf x}_i\), the probit model assumes

$y_i = {\bf 1}({\bf x}_i{\boldsymbol \beta} + \epsilon_i > 0) \tag{1}$

where the error \(\epsilon_i\) is standard normal. The indicator function \({\bf 1}(\cdot)\) outputs 1 when its input is true and outputs 0 otherwise. The log likelihood of the probit model is

$\ln L = \sum_{i=1}^{N} y_i \ln \Phi({\bf x}_i{\boldsymbol \beta}) + (1-y_i)\ln\{1-\Phi({\bf x}_i{\boldsymbol \beta})\}$

where \(\Phi\) is the standard normal cumulative distribution function.

The probit model is widely used to model binary outcomes, but there are situations where it is not appropriate. Sometimes we observe a random sample where the outcome is missing on certain observations. If there is a relationship between the unobserved error of the outcome \(\epsilon_i\) and the unobserved error that affects whether the outcome is observed \(\epsilon_{si}\), then estimates made using the probit model will be inconsistent for \({\boldsymbol \beta}\). For instance, this could happen when we model job satisfaction and our sample includes employed and unemployed individuals. The unobserved factors that affect your job satisfaction may be correlated with factors that affect your employment status. Samples like this are said to suffer from “selection on unobservables”.

Probit model with sample selection

Van de Ven and Van Praag (1981) introduced the probit model with sample selection to allow for consistent estimation of \({\boldsymbol \beta}\) in samples that suffer from selection on unobservables. The equation for the outcome (1) remains the same, but we add another equation. The selection process for the outcome is modeled as

$s_i = {\bf 1}({\bf z}_i{\boldsymbol \gamma} + \epsilon_{si} > 0)$

where \(s_i=1\) if we observed \(y_i\) and \(s_i=0\) otherwise, and \({\bf z}_i\) are regressors that affect the selection process.

The errors \(\epsilon_i\) and \(\epsilon_{si}\) are assumed to be standard normal with

$\mbox{corr}(\epsilon_i,\epsilon_{si}) = \rho$

Let \(S\) be the set of observations where \(y_i\) is observed. The likelihood for the probit model with sample selection is

$\begin{eqnarray*} \ln L &=& \sum_{i\in S} y_i\ln\Phi_2({\bf x}_i{\boldsymbol \beta}, {\bf z}_i{\boldsymbol \gamma},\rho) + (1-y_i)\ln\Phi_2(-{\bf x}_i{\boldsymbol \beta}, {\bf z}_i{\boldsymbol \gamma},-\rho) + \cr & & \sum_{i\not\in S} \ln \{1- \Phi({\bf z}_i{\boldsymbol \gamma})\} \end{eqnarray*}$

where \(\Phi_2\) is the bivariate normal cumulative distribution function.

The data

We will simulate data from a probit model with sample selection and then estimate the parameters of the model using mlexp. We simulate a random sample of 7,000 observations.

. drop _all

. set seed 441

. set obs 7000
number of observations (_N) was 0, now 7,000

. generate x = .5*rchi2(2)

. generate z = rnormal()

. generate b = rbinomial(2,.5)


First, we generate the regressors. We use a \(\chi^2\) variable \(x\) with \(2\) degrees of freedom, scaled by \(0.5\), as a regressor for the outcome. A standard normal variable \(z\) is used as a selection regressor. The variable \(b\) has a binomial(\(2,0.5\)) distribution and will also be used as a selection regressor.

. matrix cm = (1,.7 \ .7,1)

. drawnorm ey es, corr(cm)


Next, we draw the unobserved errors. The outcome \(y\) and selection indicator \(s\) will be generated with errors that have correlation \(0.7\). We generate the errors with the drawnorm command.

. generate s = z + 1.3*0.b + 1.b + .5*2.b + es > 0

. generate y = .7*x + ey + .5 > 0

. replace y = . if !s
(1,750 real changes made, 1,750 to missing)


Finally, we generate the outcome and selection indicator. We specify the effect of \(b\) on selection by using factor-variable notation. Every value of \(b\) provides a different intercept for \(s\). We set the outcome to missing for observations where \(s\) is \(0\).
Effect of ignoring sample selection

First, we will use mlexp to estimate the probit model, ignoring the sample selection. We use the cond() function to calculate different values of the likelihood based on the value of \(y\). For cond(a,b,c), b is returned if a is true and c is returned otherwise. We use only the observations for which \(y\) is not missing by specifying y in the variables() option. The variables in the equation y are specified once, the first time the equation parameters are used in the likelihood. When the equation is used again, it is referred to as {y:}.

. mlexp (ln(cond(y,normal({y: x _cons}),1-normal({y:})))), variables(y)

initial:       log likelihood = -3639.0227
alternative:   log likelihood = -2342.8722
rescale:       log likelihood = -1746.0961
Iteration 0:   log likelihood = -1746.0961
Iteration 1:   log likelihood = -1503.9519
Iteration 2:   log likelihood = -1485.2935
Iteration 3:   log likelihood = -1485.1677
Iteration 4:   log likelihood = -1485.1677

Maximum likelihood estimation

Log likelihood = -1485.1677                     Number of obs     =      5,250

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
x |    .813723   .0568938    14.30   0.000     .7022132    .9252328
_cons |   .7623006   .0386929    19.70   0.000     .6864639    .8381372
------------------------------------------------------------------------------


Both parameters are overestimated, and the true values are not in the estimated confidence intervals.

Accounting for sample selection

Now, we use mlexp to estimate the probit model with sample selection. We use the cond() function twice, once for the selection indicator value and once for the outcome value. We no longer need to specify the variables() option because we will use each observation in the data. We use the factor-variable operator ibn in the selection equation so that a separate intercept is used in the equation for each level of \(b\).

. mlexp (ln(cond(s,cond(y,binormal({y: x _cons},{s: z ibn.b}, {rho}), binormal(
> -{y:},{s:}, -{rho})),1-normal({s:}))))

initial:       log likelihood =  -8491.053
alternative:   log likelihood =  -5898.851
rescale:       log likelihood =  -5898.851
rescale eq:    log likelihood = -5654.3504
Iteration 0:   log likelihood = -5654.3504
Iteration 1:   log likelihood = -5473.5319  (not concave)
Iteration 2:   log likelihood = -4401.6027  (not concave)
Iteration 3:   log likelihood = -4340.7398  (not concave)
Iteration 4:   log likelihood = -4333.6402  (not concave)
Iteration 5:   log likelihood = -4326.1744  (not concave)
Iteration 6:   log likelihood = -4316.4936  (not concave)
Iteration 7:   log likelihood =  -4261.307
Iteration 8:   log likelihood = -4154.7548
Iteration 9:   log likelihood = -4142.7991
Iteration 10:  log likelihood = -4141.7431
Iteration 11:  log likelihood = -4141.7306
Iteration 12:  log likelihood = -4141.7305

Maximum likelihood estimation

Log likelihood = -4141.7305                     Number of obs     =      7,000

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
y            |
x |   .7643362   .0532342    14.36   0.000      .659999    .8686734
_cons |   .5259657   .0406914    12.93   0.000      .446212    .6057195
-------------+----------------------------------------------------------------
s            |
z |   1.028631   .0260977    39.41   0.000      .977481    1.079782
|
b |
0  |   1.365497   .0440301    31.01   0.000       1.2792    1.451794
1  |   1.034018   .0297178    34.79   0.000     .9757726    1.092264
2  |    .530342   .0353022    15.02   0.000      .461151    .5995331
-------------+----------------------------------------------------------------
/rho |   .6854869   .0417266    16.43   0.000     .6037043    .7672696
------------------------------------------------------------------------------


Our estimates of the coefficient on \(x\) and the constant intercept are closer to the true values. The confidence intervals also include the true values. The correlation \(\rho\) is estimated to be \(0.69\), and the true value of \(0.7\) is in the confidence interval. This model obviously works better.

Conclusion

I have demonstrated how to estimate the parameters of a model with a moderately complex likelihood function, the probit model with sample selection, using mlexp. I also illustrated how to generate data from this model and how its results differ from the simple probit model. See [R] mlexp for more details about mlexp. In a future post, we will show how to make predictions after mlexp and how to estimate population-averaged parameters using mlexp and margins.

Reference

Van de Ven, W. P. M. M., and B. M. S. Van Praag. 1981. The demand for deductibles in private health insurance: A probit model with sample selection. Journal of Econometrics 17: 229-252.

Categories: Statistics Tags:

## Programming estimators in Stata: Why you should

Distributing a Stata command that implements a statistical method will get that method used by lots of people. They will thank you. And, they will cite you!
This post is the first in the series #StataProgramming about programming an estimation command in Stata that uses Mata to do the numerical work. In the process of showing you how to program an estimation command in Stata, I will discuss do-file programming, ado-file programming, and Mata programming. When the series ends, you will be able to write Stata commands.

Stata users like its predictable syntax and its estimation-postestimation structure that facilitates hypothesis testing, specification tests, and parameter interpretation. To help you write Stata commands that people want to use, I illustrate how Stata syntax is predictable and give an overview of the estimation-postestimation structure that you will want to emulate in your programs.

Stata structure by example

I use and describe some simulated data about the number of traffic accidents observed on 948 people.

Example 1: Accident data

. use http://www.stata.com/data/accident2.dta

. describe

Contains data from http://www.stata.com/data/accident2.dta
obs:           948
vars:             6                          23 Sep 2015 13:04
size:        22,752
--------------------------------------------------------------------------------
storage   display    value
variable name   type    format     label      variable label
--------------------------------------------------------------------------------
kids            float   %9.0g                 number of children
cvalue          float   %9.0g                 car value index
tickets         float   %9.0g                 number of tickets in last 2 years
traffic         float   %9.0g                 local traffic index, larger=>worse
male            float   %9.0g                 1=>man, 0=>woman
accidents       float   %9.0g                 number of accidents in last 5 years
--------------------------------------------------------------------------------
Sorted by:


Stata’s predictable syntax

I estimate the parameters of a Poisson regression model for accidents as a function of traffic conditions (traffic), an indicator for being a male driver (male), and the number of tickets received in the last two years (tickets).

Example 2: A Poisson model for accidents

. poisson accidents traffic male tickets , vce(robust)

Iteration 0:   log pseudolikelihood = -377.98594
Iteration 1:   log pseudolikelihood = -370.68001
Iteration 2:   log pseudolikelihood = -370.66527
Iteration 3:   log pseudolikelihood = -370.66527

Poisson regression                              Number of obs     =        948
Wald chi2(3)      =    1798.65
Prob > chi2       =     0.0000
Log pseudolikelihood = -370.66527               Pseudo R2         =     0.8191

------------------------------------------------------------------------------
|               Robust
accidents |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
traffic |   .0764399   .0165119     4.63   0.000     .0440772    .1088027
male |   3.228004   .1232081    26.20   0.000     2.986521    3.469488
tickets |   1.366614   .0328218    41.64   0.000     1.302284    1.430943
_cons |  -7.434478   .2413188   -30.81   0.000    -7.907454   -6.961502
------------------------------------------------------------------------------


I want to focus on the structure in this example so that you can use it to make your commands easier to use. In particular, I want to discuss the structure of the command syntax and to point out that the output is easy to read and interpret because it is a standard Stata output table. For estimators, that table almost always reports estimates (often coefficients), standard errors, tests against zero and their \(p\)-values, and confidence intervals.

Stata syntax is predictable, which makes it easy to use. Stata users “speak Stata” and do not even notice the details. I highlight some of these details so that we can make the syntax of the commands we write predictable. Here are some of the standard syntax elements illustrated in example 2.

1. The command has four syntactical elements:
    1. the command name (poisson),
    2. a list of variable names (accidents traffic male tickets),
    3. a comma, and
    4. an option (vce(robust)).
2. In the list of variable names, the name of the dependent variable comes first, followed by the names of the independent variables.
3. The job of the comma is to separate the command name and variable list from the option or options.

The output is also structured; it is composed of an iteration log, a header, and a standard output table.
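The syntactical elements listed above are exactly what your own command must parse. Later posts in the series develop this fully, but a minimal sketch shows how an ado-program can parse this structure with the syntax command; mypoisson is a hypothetical command name.

```stata
program define mypoisson
    version 14
    // accept a numeric varlist and an optional vce() option,
    // mirroring the structure of official estimation commands
    syntax varlist(min=2 numeric) [if] [in] [, vce(string)]
    gettoken depvar indepvars : varlist
    display "dependent variable:    `depvar'"
    display "independent variables: `indepvars'"
    display "vce option:            `vce'"
end
```

Typing mypoisson accidents traffic male tickets, vce(robust) would then split the input into the same four elements discussed above.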

Estimation-postestimation framework

As a Stata user, I could now use the estimation-postestimation framework. For example, I could perform a Wald test of the hypothesis that the coefficient on male is 3.

Example 3: A Wald test of a linear restriction

. test male = 3

( 1)  [accidents]male = 3

chi2(  1) =    3.42
Prob > chi2 =    0.0642


or I could perform a Wald test of the nonlinear hypothesis that the ratio of the coefficient on male to the coefficient on tickets is 2.

Example 4: A Wald test of a nonlinear restriction

. testnl _b[male]/_b[tickets] = 2

(1)  _b[male]/_b[tickets] = 2

chi2(1) =       19.65
Prob > chi2 =        0.0000


I could also predict the mean of accidents for each observation and summarize the results.

Example 5: Summarizing the predicted conditional means

. predict nhat
(option n assumed; predicted number of events)

. summarize nhat

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
nhat |        948    .8512658    2.971087   .0006086    29.0763


Finally, I could use margins to estimate conditional or population-averaged parameters that are functions of the parameters in the original model. I use margins to estimate the average number of accidents that would be observed if each individual received 0 tickets, or 1 ticket, or 2 tickets, …, or 7 tickets. See [R] margins, Long and Freese (2014, sec. 4.4.2-4.4.3), and Cameron and Trivedi (2010, sec. 10.5.6-10.6.9) for introductions to estimating functions of the model parameters by margins.

Example 6: Estimating functions of model parameters

. margins, at(tickets=(0 1 2 3 4 5 6 7))

Predictive margins                              Number of obs     =        948
Model VCE    : Robust

Expression   : Predicted number of events, predict()

1._at        : tickets         =           0

2._at        : tickets         =           1

3._at        : tickets         =           2

4._at        : tickets         =           3

5._at        : tickets         =           4

6._at        : tickets         =           5

7._at        : tickets         =           6

8._at        : tickets         =           7

------------------------------------------------------------------------------
|            Delta-method
|     Margin   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_at |
1  |   .0097252   .0015387     6.32   0.000     .0067094     .012741
2  |   .0381426   .0048762     7.82   0.000     .0285854    .0476998
3  |   .1495971   .0148157    10.10   0.000      .120559    .1786353
4  |   .5867272   .0432256    13.57   0.000     .5020066    .6714478
5  |   2.301172   .1302033    17.67   0.000     2.045978    2.556366
6  |   9.025308   .5049176    17.87   0.000     8.035688    10.01493
7  |   35.39769   2.555679    13.85   0.000     30.38865    40.40673
8  |   138.8315   13.49606    10.29   0.000     112.3797    165.2832
------------------------------------------------------------------------------


The glue

The estimation results stored in e() are the glue that holds together the estimation-postestimation framework. The poisson command stores lots of stuff in e(). I could use ereturn list to list all this stuff, but there are many stored objects that do not interest you yet.

Most of the estimation-postestimation features that I discussed were implemented using e(b), e(V), and e(predict), which are the vector of point estimates, the estimated VCE, and the name of the command that implements predict after poisson.
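For example, after poisson the first two of these can be pulled into Stata matrices or accessed through _b[] and _se[]. A short sketch, using the same model as example 2:

```stata
poisson accidents traffic male tickets, vce(robust)
matrix b = e(b)          // 1 x 4 row vector of point estimates
matrix V = e(V)          // 4 x 4 estimated VCE
display _b[male]         // coefficient on male, read from e(b)
display _se[male]        // its standard error, read from e(V)
```

Commands like test, testnl, and predict are built on exactly these stored objects.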

I will show how to store what you need in e() in the #StataProgramming series.

Structure of Stata commands

Here is an outline of the tasks performed by a Stata estimation command.

1. Parse the input to the command.
2. Compute results.
3. Store results in e().
4. Display output.

You need to write a predict command to complete the estimation-postestimation framework. After you have stored the estimation results and written the predict command, margins works.

I will explain each of these steps in the #StataProgramming series of posts.

Use this structure to your advantage. To make your command easy to use, design it to have the predictable syntax implemented in other commands and make it work in the estimation-postestimation framework. This task is far easier than it sounds. In fact, it is just plain easy. The Stata language steers you in this direction.

Done and undone

I will teach you how to program an estimation command in Stata in the #StataProgramming series. I will also show you how to do the numerical work in Mata. In this first post, I discussed the following points.

1. The predictable structure of Stata syntax makes Stata easy to use. You should emulate this structure, so that your commands are easy to use.
2. The estimation-postestimation framework makes inference and advanced estimation simple. It is easy for you to make your command work with this framework.
3. The estimation results stored in e(), and the predict command, are the glue that holds the estimation-postestimation framework together.

In the next post, I discuss do-file programming tools that I will subsequently use to parse the input to the command.

References

Cameron, A. C., and P. K. Trivedi. 2010. Microeconometrics Using Stata. Revised ed. College Station, Texas: Stata Press.

Long, J. S., and J. Freese. 2014. Regression Models for Categorical Dependent Variables Using Stata. 3rd ed. College Station, Texas: Stata Press.


## Estimating parameters by maximum likelihood and method of moments using mlexp and gmm


Overview

We provide an introduction to parameter estimation by maximum likelihood and method of moments using mlexp and gmm, respectively (see [R] mlexp and [R] gmm). We include some background about these estimation techniques; see Pawitan (2001), Casella and Berger (2002), Cameron and Trivedi (2005), and Wooldridge (2010) for more details.

Maximum likelihood (ML) estimation finds the parameter values that make the observed data most probable. These values maximize the log of the likelihood function, which specifies the probability of observing a particular set of data given a model.

Method of moments (MM) estimators specify population moment conditions and find the parameters that solve the equivalent sample moment conditions. MM estimators usually place fewer restrictions on the model than ML estimators, which implies that MM estimators are less efficient but more robust than ML estimators.

Using mlexp to estimate probit model parameters

A probit model for the binary dependent variable $$y$$ conditional on covariates $$\xb$$ with coefficients $$\betab$$ is

$y = \begin{cases} 1 & \mbox{ if } \xb\betab' + \epsilon > 0\\ 0 & \mbox{ otherwise } \end{cases}$

where $$\epsilon$$ has a standard normal distribution. The log-likelihood function for the probit model is

$\label{E:b1} \ln\{L(\betab;\xb,y)\}= \sum_{i=1}^N \left\{ y_i \ln\Phi(\xb_{i}\betab') + (1-y_i) \ln\Phi(-\xb_{i}\betab')\right\}$

where $$\Phi$$ denotes the cumulative standard normal.
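This log-likelihood can be handed to any numerical optimizer. As a language-neutral illustration, here is a minimal Python sketch that maximizes it on synthetic data; the sample size, covariate distribution, and coefficient values are made up for the illustration and have nothing to do with the union data used below.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic probit data; the DGP here is hypothetical
rng = np.random.default_rng(12345)
n = 20000
X = np.column_stack([rng.uniform(0, 3, size=n), np.ones(n)])  # covariate and constant
beta_true = np.array([0.5, -1.0])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def neg_loglik(beta):
    xb = X @ beta
    # minus the probit log likelihood; logcdf is numerically stable in the tails
    return -np.sum(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb))

res = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(res.x)  # close to beta_true
```

The probit log likelihood is globally concave, so a quasi-Newton method from any starting point recovers the maximizer.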

We now use mlexp to estimate the coefficients of a probit model. We have data on whether an individual belongs to a union (union), the individual’s age (age), and the highest grade completed (grade).

. webuse union
(NLS Women 14-24 in 1968)

. mlexp ( union*lnnormal({b1}*age + {b2}*grade + {b0})    ///
>         + (1-union)*lnnormal(-({b1}*age + {b2}*grade + {b0})) )

initial:       log likelihood = -18160.456
alternative:   log likelihood = -1524604.4
rescale:       log likelihood = -14097.135
rescale eq:    log likelihood =  -14063.38
Iteration 0:   log likelihood =  -14063.38
Iteration 1:   log likelihood = -13796.715
Iteration 2:   log likelihood = -13796.336
Iteration 3:   log likelihood = -13796.336

Maximum likelihood estimation

Log likelihood = -13796.336                     Number of obs     =     26,200

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/b1 |   .0051821   .0013471     3.85   0.000     .0025418    .0078224
/b2 |   .0373899   .0035814    10.44   0.000     .0303706    .0444092
/b0 |  -1.404697   .0587797   -23.90   0.000    -1.519903   -1.289491
------------------------------------------------------------------------------


Defining a linear combination of the covariates makes it easier to specify the model and to read the output:

. mlexp ( union*lnnormal({xb:age grade _cons}) + (1-union)*lnnormal(-{xb:}) )

initial:       log likelihood = -18160.456
alternative:   log likelihood = -14355.672
rescale:       log likelihood = -14220.454
Iteration 0:   log likelihood = -14220.454
Iteration 1:   log likelihood = -13797.767
Iteration 2:   log likelihood = -13796.336
Iteration 3:   log likelihood = -13796.336

Maximum likelihood estimation

Log likelihood = -13796.336                     Number of obs     =     26,200

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
age |   .0051821   .0013471     3.85   0.000     .0025418    .0078224
grade |   .0373899   .0035814    10.44   0.000     .0303706    .0444092
_cons |  -1.404697   .0587797   -23.90   0.000    -1.519903   -1.289491
------------------------------------------------------------------------------


Using gmm to estimate parameters by MM

ML specifies a functional form for the distribution of $$y$$ conditional on $$\xb$$. Specifying $$\Eb[y|\xb]=\Phi(\xb\betab')$$ is less restrictive because it imposes structure only on the first conditional moment instead of on all the conditional moments. Under correct model specification, the ML estimator is more efficient than the MM estimator because it correctly specifies the conditional mean and all other conditional moments.

The model assumption $$\Eb[y|\xb]=\Phi(\xb\betab')$$ implies the moment conditions $$\Eb[\{y-\Phi(\xb\betab')\}\xb] = {\bf 0}$$. The sample moment equivalent is

$\sum_{i=1}^N [\{y_i-\Phi(\xb_i\betab')\}\xb_i] = {\bf 0}$

In the gmm command below, we specify the residuals $$y_i-\Phi(\xb_i\betab')$$ inside the parentheses and the variables that multiply them, known as instruments, in the option instruments().

. gmm ( union - normal({xb:age grade _cons}) ), instruments(age grade) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .07831137
Iteration 1:   GMM criterion Q(b) =  .00004813
Iteration 2:   GMM criterion Q(b) =  5.333e-09
Iteration 3:   GMM criterion Q(b) =  5.789e-17

note: model is exactly identified

GMM estimation

Number of parameters =   3
Number of moments    =   3
Initial weight matrix: Unadjusted                 Number of obs   =     26,200

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
age |   .0051436   .0013349     3.85   0.000     .0025272      .00776
grade |   .0383185   .0038331    10.00   0.000     .0308058    .0458312
_cons |  -1.415623   .0609043   -23.24   0.000    -1.534994   -1.296253
------------------------------------------------------------------------------
Instruments for equation 1: age grade _cons


The point estimates are similar to the ML estimates because both estimators are consistent.
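Because the model is exactly identified, the MM estimator simply solves the sample moment conditions. The mechanics that gmm automates can be sketched in Python with a root-finder; the data and coefficients below are synthetic and hypothetical, chosen only to illustrate the algebra.

```python
import numpy as np
from scipy.optimize import root
from scipy.stats import norm

# Synthetic probit data; the DGP here is hypothetical
rng = np.random.default_rng(12345)
n = 20000
X = np.column_stack([rng.uniform(0, 3, size=n), np.ones(n)])
beta_true = np.array([0.5, -1.0])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def sample_moments(beta):
    # (1/N) sum_i {y_i - Phi(x_i beta)} x_i : one equation per instrument
    resid = y - norm.cdf(X @ beta)
    return X.T @ resid / len(y)

sol = root(sample_moments, x0=np.zeros(2))
print(sol.x)  # consistent for beta_true
```

With as many moment equations as parameters, the root-finder drives every sample moment to zero, which is exactly the "exactly identified" case reported by gmm.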

Using gmm to estimate parameters by ML

When we maximize a log-likelihood function, we find the parameters that set the first derivative to 0. For example, setting the first derivative of the probit log-likelihood function with respect to $$\betab$$ to 0 in the sample yields

$\label{E:b2} \frac{\partial \ln\{L(\betab;\xb,y)\}}{\partial \betab} = \sum_{i=1}^N \left\{y_i \frac{\phi(\xb_{i}\betab')}{\Phi(\xb_{i}\betab')} - (1-y_i) \frac{\phi(-\xb_{i}\betab')}{\Phi(-\xb_{i}\betab')}\right\} \xb_{i} = {\bf 0}$

Below, we use gmm to find the parameters that solve these sample moment conditions:

. gmm ( union*normalden({xb:age grade _cons})/normal({xb:})       ///
>         -(1-union)*normalden(-{xb:})/normal(-{xb:}) ),          ///
>         instruments(age grade) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .19941827
Iteration 1:   GMM criterion Q(b) =  .00012506
Iteration 2:   GMM criterion Q(b) =  2.260e-09
Iteration 3:   GMM criterion Q(b) =  7.369e-19

note: model is exactly identified

GMM estimation

Number of parameters =   3
Number of moments    =   3
Initial weight matrix: Unadjusted                 Number of obs   =     26,200

------------------------------------------------------------------------------
|               Robust
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
age |   .0051821    .001339     3.87   0.000     .0025577    .0078065
grade |   .0373899   .0037435     9.99   0.000     .0300528     .044727
_cons |  -1.404697   .0601135   -23.37   0.000    -1.522517   -1.286876
------------------------------------------------------------------------------
Instruments for equation 1: age grade _cons


The point estimates match those reported by mlexp. The standard errors differ because gmm reports robust standard errors.
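This equivalence is easy to verify numerically: solving the score equations with a root-finder and maximizing the log likelihood return the same point estimates. A Python sketch on synthetic data (the DGP is hypothetical, and the score function implements the first-derivative equations displayed above):

```python
import numpy as np
from scipy.optimize import minimize, root
from scipy.stats import norm

# Synthetic probit data; the DGP here is hypothetical
rng = np.random.default_rng(12345)
n = 10000
X = np.column_stack([rng.uniform(0, 3, size=n), np.ones(n)])
y = (X @ np.array([0.5, -1.0]) + rng.standard_normal(n) > 0).astype(float)

def neg_loglik(b):
    xb = X @ b
    return -np.sum(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb))

def score(b):
    # the score (first-derivative) equations of the probit log likelihood
    xb = X @ b
    w = y * norm.pdf(xb) / norm.cdf(xb) - (1 - y) * norm.pdf(-xb) / norm.cdf(-xb)
    return X.T @ w

b_ml = minimize(neg_loglik, np.zeros(2), jac=lambda b: -score(b), method="BFGS").x
b_mm = root(score, np.zeros(2)).x
print(b_ml, b_mm)  # the two solutions agree
```

The same score function serves both roles: as the gradient handed to the maximizer and as the system of moment equations handed to the root-finder.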

Summary

We showed how to easily estimate the probit model parameters by ML and by MM using mlexp and gmm, respectively. We also showed that you can estimate these parameters using restrictions imposed by conditional distributions or using weaker conditional moment restrictions. Finally, we illustrated that the equations imposed by the conditional distributions can be viewed as sample moment restrictions.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and Applications. 1st ed. New York: Cambridge University Press.

Casella, G., and R. L. Berger. 2002. Statistical Inference. 2nd ed. Pacific Grove, CA: Duxbury.

Pawitan, Y. 2001. In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford: Oxford University Press.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.


## Efficiency comparisons by Monte Carlo simulation

Overview

In this post, I show how to use Monte Carlo simulations to compare the efficiency of different estimators. I also illustrate what we mean by efficiency when discussing statistical estimators.

I wrote this post to continue a dialog with my friend who doubted the usefulness of the sample average as an estimator for the mean when the data-generating process (DGP) is a $$\chi^2$$ distribution with $$1$$ degree of freedom, denoted by a $$\chi^2(1)$$ distribution. The sample average is a fine estimator, even though it is not the most efficient estimator for the mean. (Some researchers prefer to estimate the median instead of the mean for DGPs that generate outliers. I will address the trade-offs between these parameters in a future post. For now, I want to stick to estimating the mean.)

In this post, I also want to illustrate that Monte Carlo simulations can help explain abstract statistical concepts; here, the abstract concept is the relative efficiency of two estimators. (If you are new to Monte Carlo simulations in Stata, you might want to see Monte Carlo simulations using Stata.)

Consistent estimator A is said to be more asymptotically efficient than consistent estimator B if A has a smaller asymptotic variance than B; see Wooldridge (2010, sec. 14.4.2) for an especially useful discussion. Theoretical comparisons can sometimes ascertain that A is more efficient than B, but the magnitude of the difference is rarely identified. Comparisons of Monte Carlo simulation estimates of the variances of estimators A and B give both sign and magnitude for specific DGPs and sample sizes.

The sample average versus maximum likelihood

Many books discuss the conditions under which the maximum likelihood (ML) estimator is the efficient estimator relative to other estimators; see Wooldridge (2010, sec. 14.4.2) for an accessible introduction to the modern approach. Here I compare the ML estimator with the sample average for the mean when the DGP is a $$\chi^2(1)$$ distribution.

Example 1 below contains the commands I used. For an introduction to Monte Carlo simulations see Monte Carlo simulations using Stata, and for an introduction to using mlexp to estimate the parameter of a $$\chi^2$$ distribution see Maximum likelihood estimation by mlexp: A chi-squared example. In short, the commands do the following $$5,000$$ times:

1. Draw a sample of 500 observations from a $$\chi^2(1)$$ distribution.
2. Estimate the mean of each sample by the sample average, and store this estimate in m_a in the dataset efcomp.dta.
3. Estimate the mean of each sample by ML, and store this estimate in m_ml in the dataset efcomp.dta.

Example 1: The distributions of the sample average and the ML estimators

. clear all
. set seed 12345
. postfile sim  mu_a mu_ml using efcomp, replace
. forvalues i = 1/5000 {
2.     quietly drop _all
3.     quietly set obs 500
4.     quietly generate double y = rchi2(1)
5.     quietly mean y
6.     local mu_a         =  _b[y]
7.     quietly mlexp (ln(chi2den({d=1},y)))
8.     local mu_ml   =  _b[d:_cons]
9.     post sim (`mu_a') (`mu_ml')
10. }
. postclose sim
. use efcomp, clear
. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mu_a |      5,000    .9989277    .0620524   .7792076   1.232033
mu_ml |      5,000    1.000988    .0401992   .8660786   1.161492


The mean of the $$5,000$$ sample average estimates and the mean of the $$5,000$$ ML estimates are each close to the true value of $$1.0$$. The standard deviation of the $$5,000$$ sample average estimates is $$0.062$$, and it approximates the standard deviation of the sampling distribution of the sample average for this DGP and sample size. Similarly, the standard deviation of the $$5,000$$ ML estimates is $$0.040$$, and it approximates the standard deviation of the sampling distribution of the ML estimator for this DGP and sample size.

We conclude that the ML estimator has a lower variance than the sample average for this DGP and this sample size, because $$0.040$$ is smaller than $$0.062$$.
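The size of the gap can be summarized as a relative efficiency, the ratio of the two simulation variances; a quick calculation from the standard deviations reported in the summarize output above:

```python
# Standard deviations of the 5,000 estimates, from the summarize output above
sd_avg = 0.0620524  # sample average
sd_ml = 0.0401992   # ML estimator

# Relative efficiency: variance of the sample average over variance of ML
rel_eff = (sd_avg / sd_ml) ** 2
print(round(rel_eff, 2))
```

Roughly speaking, a ratio near 2.4 means the sample average needs about 2.4 times as many observations to match the precision of the ML estimator for this DGP.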

To get a picture of this difference, we plot the density of the sample average and the density of the ML estimator. (Each of these densities is estimated from $$5,000$$ observations, but estimation error can be ignored because more data would not change the key results.)

Example 2: Plotting the densities of the estimators

. kdensity mu_a,   n(5000) generate(x_a   den_a)   nograph

. kdensity mu_ml,  n(5000) generate(x_ml  den_ml)  nograph

. twoway (line den_a x_a) (line den_ml x_ml)


Densities of the sample average and ML estimators

The plots show that the ML estimator is more tightly distributed around the true value than the sample average, and this tighter distribution around the true value is what it means for one consistent estimator to be more efficient than another.

Done and undone

I used Monte Carlo simulation to illustrate what it means for one estimator to be more efficient than another. In particular, we saw that the ML estimator is more efficient than the sample average for the mean of a $$\chi^2(1)$$ distribution.

Many other estimators fall between these two estimators in an efficiency ranking. Generalized method of moments estimators and some quasi-maximum likelihood estimators come to mind and might be worth adding to these simulations.

Reference

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.


## Maximum likelihood estimation by mlexp: A chi-squared example

Overview

In this post, I show how to use mlexp to estimate the degree of freedom parameter of a chi-squared distribution by maximum likelihood (ML). One example is unconditional, and another example models the parameter as a function of covariates. I also show how to generate data from chi-squared distributions and I illustrate how to use simulation methods to understand an estimation technique.

The data

I want to show how to draw data from a $$\chi^2$$ distribution, and I want to illustrate that the ML estimator produces estimates close to the truth, so I use simulated data.

In the output below, I draw a $$2,000$$ observation random sample of data from a $$\chi^2$$ distribution with $$2$$ degrees of freedom, denoted by $$\chi^2(2)$$, and I summarize the results.

Example 1: Generating $$\chi^2(2)$$ data

. drop _all

. set obs 2000
number of observations (_N) was 0, now 2,000

. set seed 12345

. generate y = rchi2(2)

. summarize y

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
y |      2,000    2.030865    1.990052   .0028283   13.88213


The mean and variance of the $$\chi^2(2)$$ distribution are $$2$$ and $$4$$, respectively. The sample mean of $$2.03$$ and the sample variance of $$3.96=1.99^2$$ are close to the true values. I set the random-number seed to $$12345$$ so that you can replicate my example; type help seed for details.

mlexp and the log-likelihood function

The log-likelihood function for the ML estimator for the degree of freedom parameter $$d$$ of a $$\chi^2(d)$$ distribution is

${\mathcal L}(d) = \sum_{i=1}^N \ln[f(y_i,d)]$

where $$f(y_i,d)$$ is the density function for the $$\chi^2(d)$$ distribution. See Cameron and Trivedi (2005) and Wooldridge (2010) for introductions to ML.

The mlexp command estimates parameters by maximizing the specified log-likelihood function. You specify the contribution of an observation to the log-likelihood function inside parentheses, and you enclose parameters inside the curly braces $$\{$$ and $$\}$$. I use mlexp to estimate $$d$$ in example 2.

Example 2: Using mlexp to estimate $$d$$

. mlexp ( ln(chi2den({d},y)) )

initial:       log likelihood =     -  (could not be evaluated)
feasible:      log likelihood = -5168.1594
rescale:       log likelihood = -3417.1592
Iteration 0:   log likelihood = -3417.1592
Iteration 1:   log likelihood = -3416.7063
Iteration 2:   log likelihood = -3416.7063

Maximum likelihood estimation

Log likelihood = -3416.7063                     Number of obs     =      2,000

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d |   2.033457   .0352936    57.62   0.000     1.964283    2.102631
------------------------------------------------------------------------------


The estimate of $$d$$ is very close to the true value of $$2.0$$, as expected.
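The same maximization is easy to reproduce outside Stata. Here is a minimal Python sketch of the log-likelihood function given above, on a freshly simulated sample (the random draws differ from Stata's, so the estimate will differ slightly from the mlexp output):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

# Draw 2,000 observations from a chi-squared distribution with 2 degrees of freedom
rng = np.random.default_rng(12345)
y = rng.chisquare(df=2, size=2000)

# Minus the log likelihood in the degree-of-freedom parameter d
def neg_loglik(d):
    return -np.sum(chi2.logpdf(y, df=d))

res = minimize_scalar(neg_loglik, bounds=(0.1, 20), method="bounded")
print(res.x)  # close to the true value of 2
```

Because $$d$$ is a single scalar parameter, a bounded one-dimensional search is all the optimization required.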

Modeling the degree of freedom as a function of a covariate

When using ML in applied research, we almost always want to model the parameters of a distribution as a function of covariates. Below, I draw a covariate $$x$$ from a Uniform(0,3) distribution, specify that $$d=1+x$$, and draw $$y$$ from a $$\chi^2(d)$$ distribution conditional on $$x$$. Having drawn data from the DGP, I estimate the parameters using mlexp.

Example 3: Using mlexp to estimate $$d=a+b x_i$$

. drop _all

. set obs 2000
number of observations (_N) was 0, now 2,000

. set seed 12345

. generate x = runiform(0,3)

. generate d = 1 + x

. generate y = rchi2(d)

. mlexp ( ln(chi2den({b}*x +{a},y)) )

initial:       log likelihood =     -  (could not be evaluated)
feasible:      log likelihood = -4260.0685
rescale:       log likelihood = -3597.6271
rescale eq:    log likelihood = -3597.6271
Iteration 0:   log likelihood = -3597.6271
Iteration 1:   log likelihood = -3596.5383
Iteration 2:   log likelihood =  -3596.538

Maximum likelihood estimation

Log likelihood =  -3596.538                     Number of obs     =      2,000

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
/b |   1.061621   .0430846    24.64   0.000     .9771766    1.146065
/a |   .9524136   .0545551    17.46   0.000     .8454876     1.05934
------------------------------------------------------------------------------


The estimates of $$1.06$$ and $$0.95$$ are close to their true values.

mlexp makes this process easier by forming a linear combination of variables that you specify.

Example 4: A linear combination in mlexp

. mlexp ( ln(chi2den({xb: x _cons},y)) )

initial:       log likelihood =     -  (could not be evaluated)
feasible:      log likelihood = -5916.7648
rescale:       log likelihood = -3916.6106
Iteration 0:   log likelihood = -3916.6106
Iteration 1:   log likelihood = -3621.2905
Iteration 2:   log likelihood = -3596.5845
Iteration 3:   log likelihood =  -3596.538
Iteration 4:   log likelihood =  -3596.538

Maximum likelihood estimation

Log likelihood =  -3596.538                     Number of obs     =      2,000

------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
x |   1.061621   .0430846    24.64   0.000     .9771766    1.146065
_cons |   .9524138   .0545551    17.46   0.000     .8454878     1.05934
------------------------------------------------------------------------------


The estimates are the same as in example 3, but the command was easier to write and the output is easier to read.

Done and undone

I have shown how to generate data from a $$\chi^2(d)$$ distribution when $$d$$ is a fixed number or a linear function of a covariate and how to estimate $$d$$ or the parameters of the model for $$d$$ by using mlexp.

The examples discussed above show how to use mlexp and illustrate an example of conditional maximum likelihood estimation.

mlexp can do much more than I have discussed here; see [R] mlexp for more details. Estimating the parameters of a conditional distribution is only the beginning of any research project. I will discuss interpreting these parameters in a future post.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and Applications. Cambridge: Cambridge University Press.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.


## Monte Carlo simulations using Stata

Overview

A Monte Carlo simulation (MCS) of an estimator approximates the sampling distribution of an estimator by simulation methods for a particular data-generating process (DGP) and sample size. I use an MCS to learn how well estimation techniques perform for specific DGPs. In this post, I show how to perform an MCS study of an estimator in Stata and how to interpret the results.

Large-sample theory tells us that the sample average is a good estimator for the mean when the true DGP is a random sample from a $$\chi^2$$ distribution with 1 degree of freedom, denoted by $$\chi^2(1)$$. But a friend of mine claims this estimator will not work well for this DGP because the $$\chi^2(1)$$ distribution will produce outliers. In this post, I use an MCS to see if the large-sample theory works well for this DGP in a sample of 500 observations.

A first pass at an MCS

I begin by showing how to draw a random sample of size 500 from a $$\chi^2(1)$$ distribution and how to estimate the mean and a standard error for the mean.

Example 1: The mean of simulated data

. drop _all
. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345
. generate y = rchi2(1)
. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
|       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------


I specified set seed 12345 to set the seed of the random-number generator so that the results will be reproducible. The sample average estimate of the mean from this random sample is $$0.91$$, and the estimated standard error is $$0.055$$.

If I had many estimates, each from an independently drawn random sample, I could estimate the mean and the standard deviation of the sampling distribution of the estimator. To obtain many estimates, I need to repeat the following process many times:

1. Draw from the DGP
2. Compute the estimate
3. Store the estimate.

I need to know how to store the many estimates to proceed with this process. I also need to know how to repeat the process many times and how to access Stata estimates, but I put these details into appendices I and II, respectively, because many readers are already familiar with these topics and I want to focus on how to store the results from many draws.

I want to put the many estimates someplace where they will become part of a dataset that I can subsequently analyze. I use the commands postfile, post, and postclose to store the estimates in memory and write all the stored estimates out to a dataset when I am done. Example 2 illustrates the process, when there are three draws.

Example 2: Estimated means of three draws

. set seed 12345

. postfile buffer mhat using mcs, replace

. forvalues i=1/3 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         post buffer (_b[y])
7. }

. postclose buffer

. use mcs, clear

. list

+----------+
|     mhat |
|----------|
1. | .9107645 |
2. |  1.03821 |
3. | 1.039254 |
+----------+


The command

postfile buffer mhat using mcs, replace


creates a place in memory called buffer in which I can store the results that will eventually be written out to a dataset. mhat is the name of the variable that will hold the estimates in the new dataset called mcs.dta. The keyword using separates the new variable name from the name of the new dataset. I specified the option replace to replace any previous versions of mcs.dta with the one created here.

I used

forvalues i=1/3 {


to repeat the process three times. (See appendix I if you want a refresher on this syntax.) The commands

quietly drop _all
quietly set obs 500
quietly generate y = rchi2(1)
quietly mean y


drop the previous data, draw a sample of size 500 from a $$\chi^2(1)$$ distribution, and estimate the mean. (The quietly before each command suppresses the output.) The command

post buffer (_b[y])


stores the estimated mean for the current draw in buffer for what will be the next observation on mhat. The command

postclose buffer


writes the stuff stored in buffer to the file mcs.dta. The commands

use mcs, clear
list


drop the last $$\chi^2(1)$$ sample from memory, read in the mcs dataset, and list out the dataset.
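The draw, compute, store pattern itself is not specific to Stata. As a point of reference, here is the same three-draw loop sketched in Python, with a plain list standing in for the buffer created by postfile:

```python
import numpy as np

rng = np.random.default_rng(12345)
estimates = []  # plays the role of the postfile buffer

for _ in range(3):
    y = rng.chisquare(df=1, size=500)  # draw a sample of 500 from the DGP
    estimates.append(y.mean())         # compute and store the estimate

print(estimates)  # three sample-average estimates, one per draw
```

Whatever the language, the essential ingredients are the same: a fresh draw each iteration, an estimate computed on that draw, and a container that accumulates the estimates for later analysis.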

Example 3 below is a modified version of example 2; I increased the number of draws and summarized the results.

Example 3: The mean of 2,000 estimated means

. set seed 12345

. postfile buffer mhat using mcs, replace

. forvalues i=1/2000 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         post buffer (_b[y])
7. }

. postclose buffer

. use mcs, clear

. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mhat |      2,000     1.00017    .0625367   .7792076    1.22256


The average of the $$2,000$$ estimates is an estimator for the mean of the sampling distribution of the estimator, and it is close to the true value of $$1.0$$. The sample standard deviation of the $$2,000$$ estimates is an estimator for the standard deviation of the sampling distribution of the estimator, and it is close to the true value of $$\sqrt{\sigma^2/N}=\sqrt{2/500}\approx 0.0632$$, where $$\sigma^2$$ is the variance of the $$\chi^2(1)$$ random variable.

Including standard errors

The standard error of the estimator reported by mean is an estimate of the standard deviation of the sampling distribution of the estimator. If the large-sample distribution is doing a good job of approximating the sampling distribution of the estimator, the mean of the estimated standard errors should be close to the sample standard deviation of the many mean estimates.

To compare the standard deviation of the estimates with the mean of the estimated standard errors, I modify example 3 to also store the standard errors.

Example 4: The mean of 2,000 standard errors

. set seed 12345

. postfile buffer mhat sehat using mcs, replace

. forvalues i=1/2000 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         post buffer (_b[y]) (_se[y])
7. }

. postclose buffer

. use mcs, clear

. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mhat |      2,000     1.00017    .0625367   .7792076    1.22256
sehat |      2,000    .0629644    .0051703   .0464698   .0819693


Mechanically, the command

postfile buffer mhat sehat using mcs, replace


makes room in buffer for the new variables mhat and sehat, and

post buffer (_b[y]) (_se[y])


stores each estimated mean in the memory for mhat and each estimated standard error in the memory for sehat. (As in example 3, the command postclose buffer writes what is stored in memory to the new dataset.)

The sample standard deviation of the $$2,000$$ estimates is $$0.0625$$, and it is close to the mean of the $$2,000$$ estimated standard errors, which is $$0.0630$$.

You may be thinking I should have written “very close”, but how close is $$0.0625$$ to $$0.0630$$? Honestly, I cannot tell if these two numbers are sufficiently close to each other because the distance between them does not automatically tell me how reliable the resulting inference will be.

Estimating a rejection rate

In frequentist statistics, we reject a null hypothesis if the p-value is below a specified size. If the large-sample distribution approximates the finite-sample distribution well, the rejection rate of the test against the true null hypothesis should be close to the specified size.
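The idea can be sketched compactly in Python (an illustration using a z-based Wald test, not the Stata commands used in this post): simulate repeatedly under the true null hypothesis, record how often the test rejects at the 5% level, and compare the rejection rate with 0.05.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(12345)
reps, n, mu0 = 2000, 500, 1.0  # the true mean of a chi-squared(1) variable is 1
reject = 0

for _ in range(reps):
    y = rng.chisquare(df=1, size=n)
    se = y.std(ddof=1) / np.sqrt(n)
    z = (y.mean() - mu0) / se        # Wald statistic for H0: mean = 1
    if 2 * norm.sf(abs(z)) < 0.05:   # two-sided p-value below the 5% size
        reject += 1

print(reject / reps)  # should be close to 0.05
```

A rate far from 0.05 would signal that the large-sample approximation is unreliable for this DGP and sample size.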

To compare the rejection rate with the size of 5%, I modify example 4 to compute and store an indicator for whether I reject a Wald test against the true null hypothesis. (See appendix III for a discussion of the mechanics.)

Example 5: Estimating the rejection rate

. set seed 12345

. postfile buffer mhat sehat reject using mcs, replace

. forvalues i=1/2000 {
2.         quietly drop _all
3.         quietly set obs 500
4.         quietly generate y = rchi2(1)
5.         quietly mean y
6.         quietly test _b[y]=1
7.         local r = (r(p)<.05)
8.         post buffer (_b[y]) (_se[y]) (`r')
9. }

. postclose buffer

. use mcs, clear

. summarize

Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
mhat |      2,000     1.00017    .0625367   .7792076    1.22256
sehat |      2,000    .0629644    .0051703   .0464698   .0819693
reject |      2,000       .0475     .212759          0          1


The rejection rate of $$0.048$$ is very close to the size of $$0.05$$.

Done and undone

In this post, I have shown how to perform an MCS of an estimator in Stata. I discussed the mechanics of using the post commands to store the many estimates and how to interpret the mean of the many estimates and the mean of the many estimated standard errors. I also recommended using an estimated rejection rate to evaluate the usefulness of the large-sample approximation to the sampling distribution of an estimator for a given DGP and sample size.

The example illustrates that the sample average performs as predicted by large-sample theory as an estimator for the mean. This conclusion does not mean that my friend's concerns about outliers were entirely misplaced. Other estimators that are more robust to outliers may have better properties. I plan to illustrate some of the trade-offs in future posts.

Appendix I: Repeating a process many times

This appendix provides a quick introduction to local macros and how to use them to repeat some commands many times; see [P] macro and [P] forvalues for more details.

I can store and access string information in local macros. Below, I store the string "hello" in the local macro named value.

local value "hello"


To access the stored information, I adorn the name of the local macro. Specifically, I precede it with the single left quote (`) and follow it with the single right quote ('). Below, I access and display the value stored in the local macro value.

. display "`value'"
hello


I can also store numbers as strings, as follows

. local value "2.134"
. display "`value'"
2.134


To repeat some commands many times, I put them in a forvalues loop. For example, the code below repeats the display command three times.

. forvalues i=1/3 {
2.    display "i is now `i'"
3. }
i is now 1
i is now 2
i is now 3


The above example illustrates that forvalues defines a local macro that takes on each value in the specified list of values. In the above example, the name of the local macro is i, and the specified values are 1/3 = {1, 2, 3}.

Appendix II: Accessing estimates

After a Stata estimation command, you can access the point estimate of a parameter named y by typing _b[y], and you can access the estimated standard error by typing _se[y]. The example below illustrates this process.

Example 6: Accessing estimated values

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate y = rchi2(1)

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
             |       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------

. display  _b[y]
.91076444

. display _se[y]
.05486467


Appendix III: Getting a p-value computed by test

This appendix explains the mechanics of creating an indicator for whether a Wald test rejects the null hypothesis at a specific size.

I begin by generating some data and performing a Wald test against the true null hypothesis.

Example 7: Wald test results

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate y = rchi2(1)

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
             |       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------

. test _b[y]=1

 ( 1)  y = 1

       F(  1,   499) =    2.65
            Prob > F =    0.1045


The results reported by test are stored in r(). Below, I use return list to see them; type help return list for details.

Example 8: Results stored by test

. return list

scalars:
              r(drop) =  0
              r(df_r) =  499
                 r(F) =  2.645393485924886
                r(df) =  1
                 r(p) =  .1044817353734439


The p-value reported by test is stored in r(p). Below, I store a 0/1 indicator for whether the p-value is less than 0.05 in the local macro r. (See appendix I for an introduction to local macros.) I complete the illustration by displaying that the local macro contains the value 0.

. local r = (r(p)<.05)
. display "`r'"
0


## Introduction to treatment effects in Stata: Part 2

This post was written jointly with David Drukker, Director of Econometrics, StataCorp.

In our last post, we introduced the concept of treatment effects and demonstrated four of the treatment-effects estimators that were introduced in Stata 13.  Today, we will talk about two more treatment-effects estimators that use matching.

Introduction

Last time, we introduced four estimators for estimating the average treatment effect (ATE) from observational data.  Each of these estimators has a different way of solving the missing-data problem that arises because we observe only the potential outcome for the treatment level received.  Today, we introduce estimators for the ATE that solve the missing-data problem by matching.

Matching pairs the observed outcome of a person in one treatment group with the outcome of the “closest” person in the other treatment group. The outcome of the closest person is used as a prediction for the missing potential outcome. The average difference between the observed outcome and the predicted outcome estimates the ATE.
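
The logic of this paragraph can be sketched in a few lines of code. The following Python toy is purely illustrative (the name matching_ate is ours, and this is not what teffects computes): it matches each unit to the nearest opposite-group unit on a single covariate, imputes the missing potential outcome, and averages the differences.

```python
import numpy as np

# Toy sketch of matching estimation of the ATE: each unit's missing
# potential outcome is imputed with the outcome of the closest unit
# (here: nearest in a single covariate x) in the other treatment group.
def matching_ate(y, x, treated):
    y, x, treated = map(np.asarray, (y, x, treated))
    effects = []
    for i in range(len(y)):
        other = np.flatnonzero(treated != treated[i])        # opposite group
        j = other[np.argmin(np.abs(x[other] - x[i]))]        # closest match
        y1 = y[i] if treated[i] else y[j]                    # outcome if treated
        y0 = y[j] if treated[i] else y[i]                    # outcome if untreated
        effects.append(y1 - y0)
    return float(np.mean(effects))

# Two treated and two untreated units with identical covariate values,
# so every match is exact and the ATE is recovered exactly.
print(matching_ate([3, 5, 1, 3], [1.0, 3.0, 1.0, 3.0], [1, 1, 0, 0]))  # 2.0
```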

What we mean by “closest” depends on our data. Matching subjects based on a single binary variable, such as sex, is simple: males are paired with males and females are paired with females. Matching on two categorical variables, such as sex and race, isn’t much more difficult. Matching on continuous variables, such as age or weight, can be trickier because of the sparsity of the data. It is unlikely that there are two 45-year-old white males who weigh 193 pounds in a sample. It is even less likely that one of those men self-selected into the treated group and the other self-selected into the untreated group. So, in such cases, we match subjects who have approximately the same weight and approximately the same age.

This example illustrates two points. First, there is a cost to matching on continuous covariates; the inability to find good matches with more than one continuous covariate causes large-sample bias in our estimator because our matches become increasingly poor.

Second, we must specify a measure of similarity. When matching directly on the covariates, distance measures are used and the nearest neighbor selected. An alternative is to match on an estimated probability of treatment, known as the propensity score.

Before we discuss estimators for observational data, we note that matching is sometimes used in experimental data to define pairs, with the treatment subsequently randomly assigned within each pair. This use of matching is related but distinct.

Nearest-neighbor matching

Nearest-neighbor matching (NNM) uses distance between covariate patterns to define “closest”. There are many ways to define the distance between two covariate patterns. We could use squared differences as a distance measure, but this measure ignores problems with scale and covariance. Weighting the differences by the inverse of the sample covariance matrix handles these issues. Other measures are also used, but these details are less important than the costs and benefits of NNM dropping the functional-form assumptions (linear, logit, probit, etc.) used in the estimators discussed last time.
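
The weighting idea described above is the Mahalanobis distance, which the output below reports as its distance metric. Here is a small Python sketch (numpy assumed; the helper name is ours), purely to make the formula concrete:

```python
import numpy as np

# Distance between two covariate patterns: squared differences weighted by
# the inverse of the sample covariance matrix (the Mahalanobis distance).
def mahalanobis(xi, xj, cov_inv):
    d = np.asarray(xi, float) - np.asarray(xj, float)
    return float(np.sqrt(d @ cov_inv @ d))

# With an identity weight matrix this reduces to Euclidean distance;
# weighting by the inverse covariance corrects for scale and covariance.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [0.0, 4.0]])
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
print(mahalanobis(X[0], X[1], cov_inv))
```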

Dropping the functional-form assumptions makes the NNM estimator much more flexible; it estimates the ATE for a much wider class of models. The cost of this flexibility is that the NNM estimator requires much more data and the amount of data it needs grows with each additional continuous covariate.

In the previous blog entry, we used an example of mother’s smoking status on birthweight. Let’s reconsider that example.

. webuse cattaneo2.dta, clear


Now, we use teffects nnmatch to estimate the ATE by NNM.

. teffects nnmatch (bweight mmarried mage fage medu prenatal1) (mbsmoke)

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : nearest-neighbor matching      Matches: requested =         1
Outcome model  : matching                                      min =         1
Distance metric: Mahalanobis                                   max =        16
------------------------------------------------------------------------------
             |              AI Robust
     bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ATE          |
     mbsmoke |
    (smoker  |
         vs  |
  nonsmoker) |  -210.5435   29.32969    -7.18   0.000    -268.0286   -153.0584
------------------------------------------------------------------------------


The estimated ATE is -211, meaning that average birthweight would be 211 grams less if all mothers smoked than if no mothers smoked.

The output also indicates that ties in distance caused at least one observation to be matched with 16 other observations, even though we requested only one match. NNM averages the outcomes of all the tied-in-distance observations, as it should; they are all equally good matches, and using all of them reduces bias.

NNM on discrete covariates does not guarantee exact matching. For example, some married women could be matched with single women. We probably prefer exact matching on discrete covariates, which we do now.

. teffects nnmatch (bweight mmarried mage fage medu prenatal1) (mbsmoke), ///
ematch(mmarried prenatal1)

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : nearest-neighbor matching      Matches: requested =         1
Outcome model  : matching                                      min =         1
Distance metric: Mahalanobis                                   max =        16
------------------------------------------------------------------------------
             |              AI Robust
     bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ATE          |
     mbsmoke |
    (smoker  |
         vs  |
  nonsmoker) |  -209.5726   29.32603    -7.15   0.000    -267.0506   -152.0946
------------------------------------------------------------------------------


Exact matching on mmarried and prenatal1 changed the results a little bit.

Using more than one continuous covariate introduces large-sample bias, and we have three. The option biasadj() uses a linear model to remove the large-sample bias, as suggested by Abadie and Imbens (2006, 2011).

. teffects nnmatch (bweight mmarried mage fage medu prenatal1) (mbsmoke), ///
         ematch(mmarried prenatal1) biasadj(mage fage medu)
Treatment-effects estimation                    Number of obs      =      4642
Estimator      : nearest-neighbor matching      Matches: requested =         1
Outcome model  : matching                                      min =         1
Distance metric: Mahalanobis                                   max =        16
------------------------------------------------------------------------------
             |              AI Robust
     bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ATE          |
     mbsmoke |
    (smoker  |
         vs  |
  nonsmoker) |  -210.0558   29.32803    -7.16   0.000    -267.5377   -152.5739
------------------------------------------------------------------------------


In this case, the results changed by a small amount. In general, they can change a lot, and the amount increases with the number of continuous covariates.

Propensity-score matching

NNM uses bias adjustment to remove the bias caused by matching on more than one continuous covariate. The generality of this approach makes it very appealing, but it can be difficult to think about issues of fit and model specification. Propensity-score matching (PSM) matches on an estimated probability of treatment known as the propensity score. There is no need for bias adjustment because we match on only one continuous covariate. PSM has the added benefit that we can use all the standard methods for checking the fit of binary regression models prior to matching.
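
The structure of PSM can be sketched in toy Python code (illustrative only; estimate_pscore and psm_ate are invented names, and teffects psmatch does far more, including the Abadie-Imbens standard errors): fit a small logistic model for the probability of treatment, then match on that single estimated score.

```python
import numpy as np

# Toy sketch of propensity-score matching: estimate P(treated | x) with a
# small logistic regression fit by gradient ascent, then match each unit to
# the opposite-group unit with the closest estimated score.
def estimate_pscore(X, treated, steps=2000, lr=0.5):
    X1 = np.column_stack([np.ones(len(X)), X])      # add an intercept
    beta = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        beta += lr * X1.T @ (treated - p) / len(X)  # log-likelihood gradient
    return 1.0 / (1.0 + np.exp(-X1 @ beta))

def psm_ate(y, pscore, treated):
    # The score is a single continuous covariate, so nearest matching works
    # without the bias adjustment NNM needs for several continuous covariates.
    effects = []
    for i in range(len(y)):
        other = np.flatnonzero(treated != treated[i])
        j = other[np.argmin(np.abs(pscore[other] - pscore[i]))]
        y1, y0 = (y[i], y[j]) if treated[i] else (y[j], y[i])
        effects.append(y1 - y0)
    return float(np.mean(effects))
```

Matching on the one-dimensional score is what lets PSM skip the bias adjustment, at the cost of relying on the treatment model's specification.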

We estimate the ATE by PSM using teffects psmatch.

. teffects psmatch (bweight) (mbsmoke mmarried mage fage medu prenatal1 )

Treatment-effects estimation                    Number of obs      =      4642
Estimator      : propensity-score matching      Matches: requested =         1
Outcome model  : matching                                      min =         1
Treatment model: logit                                         max =        16
------------------------------------------------------------------------------
             |              AI Robust
     bweight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ATE          |
     mbsmoke |
    (smoker  |
         vs  |
  nonsmoker) |  -229.4492   25.88746    -8.86   0.000    -280.1877   -178.7107
------------------------------------------------------------------------------


The estimated ATE is now -229, larger in magnitude than the NNM estimates but not significantly so.

How to choose among the six estimators

We now have six estimators:

1. RA: Regression adjustment
2. IPW: Inverse probability weighting
3. IPWRA: Inverse probability weighting with regression adjustment
4. AIPW: Augmented inverse probability weighting
5. NNM: Nearest-neighbor matching
6. PSM: Propensity-score matching

The ATEs we estimated are

1. RA: -277.06
2. IPW: -275.56
3. IPWRA: -229.97
4. AIPW: -230.99
5. NNM: -210.06
6. PSM: -229.45

Which estimator should we use?

We would never suggest searching the above table for the result that most closely fits your wishes and biases. The choice of estimator needs to be made beforehand.

So, how do we choose?

Here are some rules of thumb:

1. Under correct specification, all the estimators should produce similar results. (Similar estimates do not guarantee correct specification because all the specifications could be wrong.)
2. When you know the determinants of treatment status, IPW is a natural base-case estimator.
3. When you instead know the determinants of the outcome, RA is a natural base-case estimator.
4. The doubly robust estimators, AIPW and IPWRA, give us an extra shot at correct specification.
5. When you have lots of continuous covariates, NNM will crucially hinge on the bias adjustment, and the computation gets to be extremely difficult.
6. When you know the determinants of treatment status, PSM is another base-case estimator.
7. The IPW estimators are not reliable when the estimated treatment probabilities get too close to 0 or 1.

Final thoughts

Before we go, we reiterate the cautionary note from our last entry. Nothing about the mathematics of treatment-effects estimators magically extracts causal relationships from observational data. We cannot thoughtlessly analyze our data using Stata’s teffects commands and infer a causal relationship. The models must be supported by scientific theory.

If you would like to learn more about treatment effects in Stata, there is an entire manual devoted to the treatment-effects features in Stata 14; it includes a basic introduction, an advanced introduction, and many worked examples. In Stata, type help teffects:

.  help teffects


Title

[TE] teffects—Treatment-effects estimation for observational data

Syntax

… <output omitted> …

The title [TE] teffects will be in blue, which means it’s clickable. Click on it to go to the Treatment-Effects Reference Manual.

http://www.stata.com/manuals14/te/

References

Abadie, A., and Imbens, G. W. 2006. Large sample properties of matching estimators for average treatment effects. Econometrica 74: 235–267.

Abadie, A., and Imbens, G. W. 2011. Bias-corrected matching estimators for average treatment effects. Journal of Business and Economic Statistics 29: 1–11.

Cattaneo, M. D. 2010. Efficient semiparametric estimation of multi-valued treatment effects under ignorability. Journal of Econometrics 155: 138–154.


## 2015 Stata Conference recap

We are happy to report another successful Stata Conference is in the books! Attendees had the opportunity to network, learn, and share their experiences with the Stata community.

We’d like to thank the organizers and everyone who participated in making this year’s conference one of the best yet. Here’s what attendees had to say on social media.

As the conference approached, the countdown began.

Guests attended several presentations led by Stata experts and mingled with fellow researchers and Stata developers during breaks.


And sadly, all good things must come to an end.

If you missed this year, save the date for the 2016 Stata Conference in Chicago on July 28 and 29.

We look forward to seeing you next year!


## Spotlight on irt

New to Stata 14 is a suite of commands to fit item response theory (IRT) models. IRT models are used to analyze the relationship between the latent trait of interest and the items intended to measure the trait. Stata’s irt commands provide easy access to some of the commonly used IRT models for binary and polytomous responses, and irtgraph commands can be used to plot item characteristic functions and information functions.

To learn more about Stata’s IRT features, I refer you to the [IRT] manual; here I want to go beyond the manual and show you a couple of examples of what you can do with a little bit of Stata code.

Example 1

To get started, I want to show you how simple IRT analysis is in Stata.

When I use the nine binary items q1-q9, all I need to type to fit a 1PL model is

irt 1pl q*

Equivalently, I can use a dash notation or explicitly spell out the variable names:

irt 1pl q1-q9
irt 1pl q1 q2 q3 q4 q5 q6 q7 q8 q9

I can also use parenthetical notation:

irt (1pl q1-q9)

Parenthetical notation is not very useful for a simple IRT model, but it comes in handy when you want to fit a single IRT model to combinations of binary, ordinal, and nominal items:

irt (1pl q1-q5) (1pl q6-q9) (pcm x1-x10) ...

IRT graphs are equally simple to create in Stata; for example, to plot item characteristic curves (ICCs) for all the items in a model, I type

irtgraph icc

Yes, that’s it!

Example 2

Sometimes, I want to fit the same IRT model on two different groups and see how the estimated parameters differ between the groups. The exercise can be part of investigating differential item functioning (DIF) or parameter invariance.

I split the data into two groups, fit two separate 2PL models, and create two scatterplots to see how close the parameter estimates for discrimination and difficulty are for the two groups. For simplicity, my group variable is 1 for odd-numbered observations and 0 for even-numbered observations.

We see that the estimated parameters for item q8 appear to differ between the two groups.

Here is the code used in this example.

webuse masc1, clear

gen odd = mod(_n,2)

irt 2pl q* if odd
mat b_odd = e(b)'

irt 2pl q* if !odd
mat b_even = e(b)'

svmat double b_odd, names(group1)
svmat double b_even, names(group2)
replace group11 = . in 19
replace group21 = . in 19

gen lab1 = ""
replace lab1 = "q8" in 15

gen lab2 = ""
replace lab2 = "q8" in 16

corr group11 group21 if mod(_n,2)
local c1 : display %4.2f `r(rho)'

twoway (scatter group11 group21, mlabel(lab1) mlabsize(large) mlabpos(7)) ///
(function x, range(0 2)) if mod(_n,2), ///
name(discr,replace) title("Discrimination parameter; {&rho} = `c1'") ///
xtitle("Group 1 observations") ytitle("Group 2 observations") ///
legend(off)

corr group11 group21 if !mod(_n,2)
local c2 : display %4.2f `r(rho)'

twoway (scatter group11 group21, mlabel(lab2) mlabsize(large) mlabpos(7)) ///
(function x, range(-2 3)) if !mod(_n,2), ///
name(diff,replace) title("Difficulty parameter; {&rho} = `c2'") ///
xtitle("Group 1 observations") ytitle("Group 2 observations") ///
legend(off)

graph combine discr diff, xsize(8)


Example 3

Continuing with the example above, I want to show you how to use a likelihood-ratio test to test for item parameter differences between groups.

Using item q8 as an example, I want to fit one model that constrains item q8 parameters to be the same between the two groups and fit another model that allows these parameters to vary.

The first model is easy. I can fit a 2PL model for the entire dataset, which implicitly constrains the parameters to be equal for both groups. I store the estimates under the name equal.

. webuse masc1, clear
(Data from De Boeck & Wilson (2004))

. generate odd = mod(_n,2)
. quietly irt 2pl q*
. estimates store equal


To estimate the second model, I need the following:

. irt (2pl q1-q7 q9) (2pl q8 if odd) (2pl q8 if !odd)


Unfortunately, this is illegal syntax. I can, however, split the item into two new variables where each variable is restricted to the required subsample:

. generate q8_1 = q8 if odd
(400 missing values generated)

. generate q8_2 = q8 if !odd
(400 missing values generated)


I estimate the second IRT model, this time with items q8_1 and q8_2 taking the place of the original q8:

. quietly irt 2pl q1-q7 q8_1 q8_2 q9
. estat report q8_1 q8_2

Two-parameter logistic model                    Number of obs     =        800
Log likelihood = -4116.2064
------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
q8_1         |
     Discrim |   1.095867   .2647727     4.14   0.000     .5769218    1.614812
        Diff |  -1.886126   .3491548    -5.40   0.000    -2.570457   -1.201795
-------------+----------------------------------------------------------------
q8_2         |
     Discrim |    1.93005   .4731355     4.08   0.000     1.002721    2.857378
        Diff |  -1.544908   .2011934    -7.68   0.000     -1.93924   -1.150577
------------------------------------------------------------------------------


Now, I can perform the likelihood-ratio test:

. lrtest equal ., force

Likelihood-ratio test                                 LR chi2(2)  =      4.53
(Assumption: equal nested in .)                       Prob > chi2 =    0.1040
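
The reported p-value is just the upper tail of a chi-squared distribution with 2 degrees of freedom (the unrestricted model adds two free parameters: a separate discrimination and difficulty for q8 in the second group) evaluated at the LR statistic. A quick check in Python (scipy assumed):

```python
from scipy import stats

# LR statistic = 2 * (log likelihood of the unrestricted model
#                     - log likelihood of the restricted model).
# Stata reports LR chi2(2) = 4.53; compare it with chi2, df = 2.
lr_stat = 4.53
p_value = stats.chi2.sf(lr_stat, df=2)
print(round(p_value, 3))  # about 0.104, agreeing with Stata's Prob > chi2
```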


The test fails to reject the null hypothesis that the q8 parameters are equal across the two groups, suggesting the first (constrained) model is preferable, even though the two estimated ICCs appear to differ:

. irtgraph icc q8_1 q8_2, ylabel(0(.25)1)


Summary

IRT models are used to analyze the relationship between the latent trait of interest and the items intended to measure the trait. Stata’s irt commands provide easy access to some of the commonly used IRT models, and irtgraph commands implement the most commonly used IRT plots. With just a few extra steps, you can easily create customized graphs, such as the ones demonstrated above, which incorporate information from separate IRT models.
