Bayesian binary item response theory models using bayesmh

This post was written jointly with Yulia Marchenko, Executive Director of Statistics, StataCorp.

Table of Contents

Overview
1PL model
2PL model
3PL model
4PL model
5PL model
Conclusion

Overview

Item response theory (IRT) is used for modeling the relationship between the latent abilities of a group of subjects and the examination items used for measuring their abilities. Stata 14 introduced a suite of commands for fitting IRT models using maximum likelihood; see, for example, the blog post Spotlight on irt by Rafal Raciborski and the [IRT] Item Response Theory manual for more details. In this post, we demonstrate how to fit Bayesian binary IRT models by using the redefine() option introduced for the bayesmh command in Stata 14.1. If you are not familiar with the concepts and jargon of Bayesian statistics, you may want to watch the introductory videos on the Stata YouTube channel before proceeding.

Introduction to Bayesian analysis, part 1: The basic concepts
Introduction to Bayesian analysis, part 2: MCMC and the Metropolis-Hastings algorithm

We use the abridged version of the mathematics and science data from De Boeck and Wilson (2004), masc1. The dataset includes the responses of 800 students to 9 test questions intended to measure mathematical ability.

The irt suite fits IRT models using data in wide form – one observation per subject, with items recorded in separate variables. To fit IRT models using bayesmh, we need the data in long form, where items are recorded as multiple observations per subject. We thus reshape the dataset into long form: we have a single binary response variable, y, and two index variables, item and id, which identify the items and subjects, respectively. This allows us to formulate our IRT models as multilevel models. The following commands load and prepare the dataset.

. webuse masc1
(Data from De Boeck & Wilson (2004))

. generate id = _n

. quietly reshape long q, i(id) j(item)

. rename q y

To ensure that we include all levels of item and id in our models, we use fvset base none so that no base categories are omitted.

. fvset base none id item

In what follows, we present eight Bayesian binary IRT models of increasing complexity and explanatory power. We perform Bayesian model comparison to gain insight into which model is most appropriate for the data at hand.

For high-dimensional models such as IRT models, you may see differences in the estimation results between different platforms or different flavors of Stata because of the nature of the Markov chain Monte Carlo (MCMC) sampling and finite numerical precision. These differences are not a source of concern; they will be within the range of the MCMC variability and will lead to similar inferential conclusions. The differences will diminish as the MCMC sample size increases. The results in this post are obtained from Stata/SE on the 64-bit Linux platform using the default 10,000 MCMC sample size.

Let the items be indexed by \(i=1,\dots,9\) and the subjects by \(j=1,\dots,800\). Let \(\theta_j\) be the latent mathematical ability of subject \(j\), and let \(Y_{ij}\) be the response of subject \(j\) to item \(i\).

Back to table of contents

1PL model

In the one-parameter logistic (1PL) model, the probability of getting a correct response is modeled as an inverse-logit function of location parameters \(b_i\), also called item difficulties, and a common slope parameter \(a\), also called item discrimination:

\[
P(Y_{ij}=1) = {\rm InvLogit}\{a(\theta_j-b_i)\} =
\frac{\exp\{a(\theta_j-b_i)\}}{1+\exp\{a(\theta_j-b_i)\}}
\]

Typically, the abilities are assumed to be normally distributed:
\[
\theta_j \sim {\rm N}(0,1)
\]
In a multilevel framework, the \(\theta_j\)’s represent random effects. In a Bayesian framework, we use the term “random effects” to refer to the parameters corresponding to levels of grouping variables identifying the hierarchy of the data.

A Bayesian formulation of the 1PL model also requires prior specification for the model parameters \(a\) and \(b_i\). The discrimination parameter \(a\) is assumed to be positive and is often modeled on the log scale. Because we have no prior knowledge about the discrimination and difficulty parameters, we assume that the prior distributions of \(\ln(a)\) and \(b_i\) have support on the whole real line, are symmetric, and are centered at 0. A normal prior distribution is thus a natural choice. We furthermore assume that \(\ln(a)\) and \(b_i\) are close to 0 and have prior variance of 1, which is an entirely subjective decision. We thus assign \(\ln(a)\) and \(b_i\) standard normal prior distributions:

\[\ln(a) \sim {\rm N}(0, 1)\] \[b_i \sim {\rm N}(0, 1) \]

To specify the likelihood function of the 1PL model in bayesmh, we use a nonlinear equation specification for the response variable y. The direct nonlinear specification for this model is

bayesmh y = ({discrim}*({subj:i.id}-{diff:i.item})), likelihood(logit) ...

where {discrim} is the discrimination parameter \(a\), {subj:i.id} are latent abilities \(\theta_j\), and {diff:i.item} are item difficulties \(b_i\). The logit model is used for the probability of a success, \(P(Y_{ij}=1)\). Specification {subj:i.id} in the above nonlinear expression is viewed as a substitutable expression for linear combinations of indicators associated with the id variable and parameters \(\theta_j\). This specification may be computationally prohibitive with a large number of subjects. A more efficient solution is to use the redefine() option to include subject random effects \(\theta_j\) in the model. The same argument may apply to the {diff:i.item} specification when there are many items. Thus, it may be computationally convenient to treat the \(b_i\) parameters as “random effects” in the specification and use the redefine() option to include them in the model.

A more efficient specification is thus

bayesmh y = ({discrim}*({subj:}-{diff:})), likelihood(logit) ///
               redefine(subj:i.id) redefine(diff:i.item) ...

where {subj:} and {diff:} in the nonlinear specification now represent the \(\theta_j\) and \(b_i\) parameters, respectively, without using expansions into linear combinations of indicator variables.

Below, we show the full bayesmh specification of the 1PL model and the output summary. In our examples, we treat the abilities {subj:i.id} as nuisance parameters and exclude them from the final results. The discrimination model parameter {discrim} must be positive and is thus initialized with 1. A longer burn-in period, burnin(5000), allows for longer adaptation of the MCMC sampler, which is needed given the large number of parameters in the model. Finally, the estimation results are stored for later model comparison.

. set seed 14

. bayesmh y = ({discrim}*({subj:}-{diff:})), likelihood(logit) ///
>         redefine(diff:i.item) redefine(subj:i.id)            ///
>         prior({subj:i.id},    normal(0, 1))                  ///
>         prior({discrim},      lognormal(0, 1))               ///
>         prior({diff:i.item},  normal(0, 1))                  ///
>         init({discrim} 1) exclude({subj:i.id})               ///
>         burnin(5000) saving(sim1pl, replace)
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ logit({discrim}*(xb_subj-xb_diff))

Priors: 
  {diff:i.item} ~ normal(0,1)                                              (1)
    {subj:i.id} ~ normal(0,1)                                              (2)
      {discrim} ~ lognormal(0,1)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_diff.
(2) Parameters are elements of the linear form xb_subj.

Bayesian logistic regression                     MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3074
                                                 Efficiency:  min =     .02691
                                                              avg =     .06168
Log marginal likelihood =          .                          max =     .09527
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
diff         |
        item |
          1  | -.6934123   .0998543   .003576  -.6934789  -.8909473  -.4917364
          2  | -.1234553   .0917187   .002972  -.1241642  -.3030341   .0597863
          3  | -1.782762   .1323252    .00566  -1.781142   -2.05219  -1.534451
          4  |  .3152835   .0951978   .003289   .3154714   .1279147   .4981263
          5  |  1.622545    .127213   .005561   1.619388   1.377123   1.883083
          6  |  .6815517   .0978777   .003712   .6788345   .4911366    .881128
          7  |  1.303482   .1173994   .005021   1.302328   1.084295   1.544913
          8  | -2.353975   .1620307   .008062  -2.351207  -2.672983  -2.053112
          9  | -1.168668   .1120243   .004526  -1.163922  -1.392936  -.9549209
-------------+----------------------------------------------------------------
     discrim |  .8644787   .0439804   .002681   .8644331   .7818035   .9494433
------------------------------------------------------------------------------

file sim1pl.dta saved

. estimates store est1pl

The sampling efficiency is acceptable, about 6% on average, with no indication of convergence problems. Although detailed convergence inspection of all parameters is outside the scope of this post, we recommend that you do so by using, for example, the bayesgraph diagnostics command.
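For example, one could examine the trace, autocorrelation, histogram, and density plots for individual parameters:

bayesgraph diagnostics {discrim}
bayesgraph diagnostics {diff:1.item}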

Though we used informative priors for the model parameters, the estimation results from our Bayesian model are not that different from the maximum likelihood estimates obtained using the irt 1pl command (see example 1 in [IRT] irt 1pl). For example, the posterior mean estimate for {discrim} is 0.86 with an MCMC standard error of 0.003, whereas irt 1pl reports 0.85 with a standard error of 0.05.
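For reference, the maximum likelihood results can be reproduced on the original wide-form data; a quick sketch (this reloads the dataset, because we reshaped it to long form above):

webuse masc1, clear
irt 1pl q1-q9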

The log marginal likelihood is reported as missing because we have excluded the {subj:i.id} parameters from the simulation results, and the Laplace-Metropolis estimator of the log marginal likelihood is not available in such cases. This estimator requires simulation results for all model parameters to compute the log marginal likelihood.

Back to table of contents

2PL model

The two-parameter logistic (2PL) model extends the 1PL model by allowing for item-specific discrimination. The probability of correct response is now modeled as a function of item-specific slope parameters \(a_i\):
\[
P(Y_{ij}=1) = {\rm InvLogit}\{a_i(\theta_j-b_i)\} =
\frac{\exp\{a_i(\theta_j-b_i)\}}{1+\exp\{a_i(\theta_j-b_i)\}}
\]

The prior specification for \(\theta_j\) stays the same as in the 1PL model. We will, however, apply more elaborate prior specifications for the \(a_i\)’s and \(b_i\)’s. It is a good practice to use proper prior specifications without overwhelming the evidence from the data. The impact of the priors can be controlled by introducing additional hyperparameters. For example, Kim and Bolt (2007) proposed the use of a normal prior for the difficulty parameters with unknown mean and variance. Extending this approach to the discrimination parameters as well, we apply a hierarchical Bayesian model in which the \(\ln(a_i)\) and \(b_i\) parameters have the following prior specifications:

\[ \ln(a_i) \sim {\rm N}(\mu_a, \sigma_a^2) \] \[ b_i \sim {\rm N}(\mu_b, \sigma_b^2) \]

The mean hyperparameters, \(\mu_a\) and \(\mu_b\), and variance hyperparameters, \(\sigma_a^2\) and \(\sigma_b^2\), require informative prior specifications. We assume that the means are centered at 0 with a variance of 0.1:
\[
\mu_a, \mu_b \sim {\rm N}(0, 0.1)
\]

To lower the variability of the \(\ln(a_i)\) and \(b_i\) parameters, we apply an inverse-gamma prior with shape 10 and scale 1 for the variance parameters:

\[
\sigma_a^2, \sigma_b^2 \sim {\rm InvGamma}(10, 1)
\]

Thus, the prior mean of \(\sigma_a^2\) and \(\sigma_b^2\) is about 0.1: the mean of an inverse-gamma distribution with shape \(\alpha\) and scale \(\beta\) is \(\beta/(\alpha-1)\), here \(1/9 \approx 0.11\).

In the bayesmh specification, the hyperparameters \(\mu_a\), \(\mu_b\), \(\sigma_a^2\), and \(\sigma_b^2\) are denoted as {mu_a}, {mu_b}, {var_a}, and {var_b}, respectively. We use the redefine(discrim:i.item) option to include in the model the discrimination parameters \(a_i\), referred to as {discrim:} in the likelihood specification.

Regarding the MCMC simulation, we change some of the default options. The hyperparameters {mu_a}, {mu_b}, {var_a}, and {var_b} are placed in separate blocks to improve the simulation efficiency. The discrimination parameters {discrim:i.item} must be positive and are thus initialized with 1s.

. set seed 14

. bayesmh y = ({discrim:}*({subj:}-{diff:})), likelihood(logit) ///
>         redefine(discrim:i.item) redefine(diff:i.item)        ///
>         redefine(subj:i.id)                                   ///
>         prior({subj:i.id},      normal(0, 1))                 ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))   ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))      ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))               ///
>         prior({var_a} {var_b},  igamma(10, 1))                ///
>         block({mu_a mu_b var_a var_b}, split)                 ///
>         init({discrim:i.item} 1)                              ///
>         exclude({subj:i.id}) burnin(5000) saving(sim2pl, replace)
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ logit(xb_discrim*(xb_subj-xb_diff))

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
       {subj:i.id} ~ normal(0,1)                                           (3)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_subj.

Bayesian logistic regression                     MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3711
                                                 Efficiency:  min =     .01617
                                                              avg =     .04923
Log marginal likelihood =          .                          max =      .1698
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
discrim      |
        item |
          1  |  1.430976   .1986011   .010953   1.413063   1.089405   1.850241
          2  |  .6954823   .1081209   .004677   .6897267   .4985004   .9276975
          3  |  .9838528   .1343908   .009079   .9780275   .7506566   1.259427
          4  |  .8167792   .1169157   .005601   .8136229   .5992495   1.067578
          5  |  .9402715   .1351977   .010584   .9370298   .6691103   1.214885
          6  |  .9666747   .1420065   .008099   .9616285   .7038868   1.245007
          7  |  .5651287   .0864522   .006201   .5617302   .3956216   .7431265
          8  |  1.354053   .2048404   .015547   1.344227   .9791096   1.761437
          9  |  .7065096   .1060773   .006573   .6999745   .5102749   .9271799
-------------+----------------------------------------------------------------
diff         |
        item |
          1  | -.5070314   .0784172   .003565   -.507922   -.671257  -.3596057
          2  | -.1467198    .117422   .003143  -.1456633  -.3895978   .0716841
          3  | -1.630259   .1900103   .013494  -1.612534  -2.033169  -1.304171
          4  |  .3273735   .1073891   .003565   .3231703   .1248782   .5492114
          5  |  1.529584   .1969554    .01549   1.507982   1.202271   1.993196
          6  |  .6325194    .115724   .005613   .6243691   .4272131   .8851649
          7  |  1.827013   .2884057   .019582    1.79828   1.349654   2.490633
          8  | -1.753744   .1939559   .014743  -1.738199  -2.211475  -1.438146
          9  | -1.384486   .2059005   .012105  -1.361195  -1.838918  -1.059687
-------------+----------------------------------------------------------------
        mu_a | -.1032615   .1148176   .003874   -.102376  -.3347816   .1277031
       var_a |  .1129835   .0356735   .001269   .1056105    .063403   .1981331
        mu_b | -.0696525   .2039387   .004949   -.072602  -.4641566   .3298393
       var_b |  .6216005   .2023137   .008293   .5843444   .3388551   1.101153
------------------------------------------------------------------------------

file sim2pl.dta saved

. estimates store est2pl

The average simulation efficiency is about 5%, but some of the parameters converge more slowly than others, such as {diff:7.item}, which has the largest MCMC standard error (0.02) among the difficulty parameters. If this were a rigorous study, we would recommend longer simulations, with MCMC sample sizes of at least 50,000, to lower the MCMC standard errors.
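For example, the 2PL model could be refit with a larger MCMC sample size by adding the mcmcsize() option; in this sketch, only the mcmcsize() option and the (arbitrary) sim2pl_long filename differ from the command shown above:

set seed 14

bayesmh y = ({discrim:}*({subj:}-{diff:})), likelihood(logit)  ///
        redefine(discrim:i.item) redefine(diff:i.item)         ///
        redefine(subj:i.id)                                    ///
        prior({subj:i.id},      normal(0, 1))                  ///
        prior({discrim:i.item}, lognormal({mu_a}, {var_a}))    ///
        prior({diff:i.item},    normal({mu_b}, {var_b}))       ///
        prior({mu_a} {mu_b},    normal(0, 0.1))                ///
        prior({var_a} {var_b},  igamma(10, 1))                 ///
        block({mu_a mu_b var_a var_b}, split)                  ///
        init({discrim:i.item} 1)                               ///
        exclude({subj:i.id}) burnin(5000) mcmcsize(50000)      ///
        saving(sim2pl_long, replace)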

We can compare the 1PL and 2PL models by using the deviance information criterion (DIC) available with the bayesstats ic command.

. bayesstats ic est1pl est2pl, diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
      est1pl |  8122.428
      est2pl |  8055.005
------------------------

DIC is often used in Bayesian model selection as an alternative to AIC and BIC criteria and can be easily obtained from an MCMC sample. Larger MCMC samples produce more reliable DIC estimates. Because different MCMC samples produce different sample DIC values and the sample approximation error in calculating DIC is not known, one should not rely solely on DIC when choosing a model.
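For reference, DIC is defined in terms of the deviance \(D(\theta) = -2\ln L(\theta)\) as

\[
{\rm DIC} = \overline{D} + p_D, \qquad p_D = \overline{D} - D(\bar{\theta}),
\]

where \(\overline{D}\) is the posterior mean deviance, \(D(\bar{\theta})\) is the deviance evaluated at the posterior mean of the parameters, and \(p_D\) is the effective number of parameters.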

Lower DIC values indicate better fit. The DIC of the 2PL model (8,055) is markedly lower than the DIC of the 1PL model (8,122), implying better fit of the 2PL model.

Back to table of contents

3PL model

The three-parameter logistic (3PL) model introduces lower asymptote parameters \(c_i\), also called guessing parameters. The probability of giving a correct response is given by

\[
P(Y_{ij}=1) = c_i + (1-c_i){\rm InvLogit}\{a_i(\theta_j-b_i)\} ,\ c_i > 0
\]

The guessing parameters may be difficult to estimate using maximum likelihood. Indeed, the irt 3pl command with the sepguessing option fails to converge, as you can verify by typing

. irt 3pl q1-q9, sepguessing

on the original wide-form dataset (reload it by typing webuse masc1 first).

It is thus important to specify an informative prior for \(c_i\). We assume that the prior mean of the guessing parameters is about 0.1 and thus apply
\[
c_i \sim {\rm InvGamma}(10, 1)
\]

Similarly to the discrimination and difficulty parameters, the \(c_i\)’s are introduced as random-effects parameters in the bayesmh specification and are referred to as {gues:} in the likelihood specification.

Unlike for the 1PL and 2PL models, we cannot use the likelihood(logit) option to model the probability of success, because the probability of a correct response is no longer an inverse-logit transformation of the parameters. Instead, we use likelihood(binlogit(1), noglmtransform) to model the probability of success of a Bernoulli outcome directly.

To have a valid initialization of the MCMC sampler, we assign the \(c_i\)’s positive starting values, 0.1.

. set seed 14

. bayesmh y = ({gues:}+(1-{gues:})*invlogit({discrim:}*({subj:}-{diff:}))), ///
>                 likelihood(binlogit(1), noglmtransform)                   ///
>         redefine(discrim:i.item) redefine(diff:i.item)                    ///
>         redefine(gues:i.item)    redefine(subj:i.id)                      ///
>         prior({subj:i.id},      normal(0, 1))                             ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))               ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                  ///
>         prior({gues:i.item},    igamma(10, 1))                            ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                           ///
>         prior({var_a} {var_b},  igamma(10, 1))                            ///
>         block({mu_a mu_b var_a var_b}, split)                             ///
>         init({discrim:i.item} 1 {gues:i.item} 0.1)                        ///
>         exclude({subj:i.id}) burnin(5000) saving(sim3pls, replace)
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial(xb_gues+(1-xb_gues)*invlogit(xb_discrim*(xb_subj-xb_diff)),1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
     {gues:i.item} ~ igamma(10,1)                                          (3)
       {subj:i.id} ~ normal(0,1)                                           (4)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_gues.
(4) Parameters are elements of the linear form xb_subj.

Bayesian binomial regression                     MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3496
                                                 Efficiency:  min =      .0148
                                                              avg =     .03748
Log marginal likelihood =          .                          max =      .2044
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
discrim      |
        item |
          1  |  1.712831   .2839419   .018436   1.681216   1.232644   2.351383
          2  |  .8540871   .1499645   .008265   .8414399   .6058463   1.165732
          3  |  1.094723   .1637954    .01126   1.081756    .817031   1.454845
          4  |  1.090891   .2149095   .013977   1.064651   .7488589   1.588164
          5  |  1.363236   .2525573   .014858   1.338075   .9348136   1.954695
          6  |  1.388325   .3027436   .024245   1.336303   .9466695   2.068181
          7  |  .9288217   .2678741   .021626   .8750048   .5690308   1.603375
          8  |  1.457763   .2201065    .01809   1.438027   1.068937   1.940431
          9  |  .7873631    .127779   .007447   .7796568    .563821    1.06523
-------------+----------------------------------------------------------------
diff         |
        item |
          1  | -.2933734   .0976177   .006339  -.2940499  -.4879558  -.0946848
          2  |  .2140365    .157158   .008333   .2037788  -.0553537   .5550411
          3  | -1.326351   .1981196   .013101  -1.326817  -1.706671  -.9307443
          4  |  .6367877   .1486799   .007895   .6277349   .3791045   .9509913
          5  |  1.616056   .1799378    .00966   1.606213   1.303614   2.006817
          6  |  .8354059    .124184    .00656   .8191839    .614221   1.097801
          7  |  2.066205   .3010858   .018377   2.034757   1.554484   2.709601
          8  | -1.555583   .1671435   .012265   -1.54984   -1.89487  -1.267001
          9  | -.9775626   .2477279   .016722  -.9936727  -1.431964  -.4093629
-------------+----------------------------------------------------------------
gues         |
        item |
          1  |  .1078598   .0337844     .0019   .1020673   .0581353   .1929404
          2  |  .1128113   .0372217   .002162   .1065996   .0596554   .2082417
          3  |   .123031   .0480042   .002579   .1127147   .0605462   .2516237
          4  |  .1190103   .0390721   .002369   .1123544   .0617698   .2095427
          5  |  .0829503   .0185785   .001275   .0807116   .0514752   .1232547
          6  |  .1059315   .0289175   .001708   .1022741   .0584959   .1709483
          7  |  .1235553   .0382661   .002964   .1186648   .0626495   .2067556
          8  |  .1142118   .0408348   .001733   .1062507   .0592389   .2134006
          9  |  .1270767   .0557821   .003939    .113562   .0621876   .2825752
-------------+----------------------------------------------------------------
        mu_a |   .109161   .1218499   .005504   .1126253   -.135329   .3501061
       var_a |   .108864   .0331522   .001053   .1030106   .0604834   .1860996
        mu_b |  .0782094   .1974657   .004367   .0755023  -.3067717   .4638104
       var_b |  .5829738   .1803167   .006263   .5562159   .3260449   1.034225
------------------------------------------------------------------------------

file sim3pls.dta saved

. estimates store est3pls

The estimated posterior means of the \(c_i\)’s range between 0.08 and 0.13. Clearly, the introduction of guessing parameters has an impact on the item discrimination and difficulty parameters. For example, the estimated posterior means of \(\mu_a\) and \(\mu_b\) shift from -0.10 and -0.07, respectively, for the 2PL model to 0.11 and 0.08, respectively, for the 3PL model.

Because the estimated guessing parameters are not that different, one may ask whether item-specific guessing parameters are really necessary. To answer this question, we fit a model with a common guessing parameter, {gues}, and compare it with the previous model.

. set seed 14

. bayesmh y = ({gues}+(1-{gues})*invlogit({discrim:}*({subj:}-{diff:}))), ///
>                 likelihood(binlogit(1), noglmtransform)                 ///
>         redefine(discrim:i.item) redefine(diff:i.item)                  ///
>         redefine(subj:i.id)                                             ///
>         prior({subj:i.id},      normal(0, 1))                           ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))             ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                ///
>         prior({gues},           igamma(10, 1))                          ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                         ///
>         prior({var_a} {var_b},  igamma(10, 1))                          ///
>         block({mu_a mu_b var_a var_b gues}, split)                      ///
>         init({discrim:i.item} 1 {gues} 0.1)                             ///
>         exclude({subj:i.id}) burnin(5000) saving(sim3pl, replace)
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial({gues}+(1-{gues})*invlogit(xb_discrim*(xb_subj-xb_diff)),1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
       {subj:i.id} ~ normal(0,1)                                           (3)
            {gues} ~ igamma(10,1)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_subj.

Bayesian binomial regression                     MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3753
                                                 Efficiency:  min =     .01295
                                                              avg =     .03714
Log marginal likelihood =          .                          max =      .1874
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
discrim      |
        item |
          1  |  1.692894   .2748163   .021944   1.664569   1.232347   2.299125
          2  |  .8313512   .1355267    .00606   .8218212   .5928602   1.125729
          3  |  1.058833   .1611742   .014163   1.054126   .7676045   1.393611
          4  |  1.041808   .1718472   .008782   1.029867   .7398569   1.397073
          5  |  1.534997   .3208687   .023965   1.497019   1.019998   2.266078
          6  |   1.38296   .2581948   .019265   1.355706   .9559487   1.979358
          7  |  .8310222   .1698206   .012896   .8107371   .5736484   1.248736
          8  |  1.442949   .2266268   .017562   1.431204   1.066646   1.930829
          9  |    .77944   .1159669   .007266   .7750891   .5657258   1.014941
-------------+----------------------------------------------------------------
diff         |
        item |
          1  | -.3043161   .0859905   .005373  -.2968324  -.4870583  -.1407109
          2  |  .1814508   .1289251   .006543   .1832146  -.0723988   .4313265
          3  | -1.391216   .1924384   .014986  -1.373093  -1.809343  -1.050919
          4  |  .5928491   .1262631   .006721   .5829347    .356614    .857743
          5  |  1.617348   .1929263   .011604   1.601534   1.293032   2.061096
          6  |   .817635   .1172884   .006125    .812838   .5990503   1.064322
          7  |  2.006949   .2743517    .01785   1.981052   1.556682   2.594236
          8  | -1.576235   .1747855   .013455  -1.559435  -1.952676  -1.272108
          9  | -1.039362   .1840773    .01138   -1.02785  -1.432058  -.7160181
-------------+----------------------------------------------------------------
        gues |  .1027336   .0214544   .001753   .1022211   .0627299   .1466367
        mu_a |  .1009741    .123915   .006567   .0965353  -.1343028   .3510697
       var_a |  .1121003   .0344401   .001154   .1059563   .0628117   .1970842
        mu_b |  .0632173   .1979426   .004572   .0666684  -.3292497   .4482957
       var_b |  .5861236   .1818885   .006991   .5574743   .3239369   1.053172
------------------------------------------------------------------------------

file sim3pl.dta saved

. estimates store est3pl

We can again compare the two 3PL models by using the bayesstats ic command:

. bayesstats ic est3pls est3pl, diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
     est3pls |  8049.425
      est3pl |  8049.426
------------------------

Although the estimated DICs of the two 3PL models are essentially the same, we decide for demonstration purposes to proceed with the model with item-specific guessing parameters.

Back to table of contents

4PL model

The four-parameter logistic (4PL) model extends the 3PL model by adding item-specific upper asymptote parameters \(d_i\):
\[
P(Y_{ij}=1) = c_i + (d_i-c_i){\rm InvLogit}\{a_i(\theta_j-b_i)\}
,\ c_i < d_i < 1
\]

The \(d_i\) parameter can be viewed as an upper limit on the probability of a correct response to the \(i\)th item: even subjects with very high ability answer item \(i\) correctly with probability no greater than \(d_i\). We restrict the \(d_i\)'s to the (0.8,1) range and assign them a \({\rm Uniform}(0.8,1)\) prior. For the other parameters, we use the same priors as in the 3PL model. In the bayesmh specification of the model, the condition \(c_i < d_i\) is incorporated in the likelihood, and the condition \(d_i < 1\) is implied by the specified prior for the \(d_i\)'s. We initialize the \(d_i\)'s to 0.9. We use the notable option to suppress the long table output.

. set seed 14

. bayesmh y =                                                              ///
>       (({gues:}+({d:}-{gues:})*invlogit({discrim:}*({subj:}-{diff:})))*  ///
>         cond({gues:}<{d:},1,.)), likelihood(binlogit(1), noglmtransform) ///
>         redefine(discrim:i.item) redefine(diff:i.item)                   ///
>         redefine(gues:i.item)    redefine(d:i.item)  redefine(subj:i.id) ///
>         prior({subj:i.id},      normal(0, 1))                            ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))              ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                 ///
>         prior({gues:i.item},    igamma(10, 1))                           ///
>         prior({d:i.item},       uniform(0.8, 1))                         ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                          ///
>         prior({var_a} {var_b},  igamma(10, 1))                           ///
>         block({mu_a mu_b var_a var_b}, split)                            ///
>         init({discrim:i.item} 1 {gues:i.item} 0.1 {d:i.item} 0.9)        ///
>         exclude({subj:i.id}) burnin(5000) saving(sim4pls, replace) notable
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial(<expr1>,1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
     {gues:i.item} ~ igamma(10,1)                                          (3)
        {d:i.item} ~ uniform(0.8,1)                                        (4)
       {subj:i.id} ~ normal(0,1)                                           (5)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)

Expression: 
  expr1 : (xb_gues+(xb_d-xb_gues)*invlogit(xb_discrim*(xb_subj-xb_diff)))* con
          d(xb_gues<xb_d,1,.)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_gues.
(4) Parameters are elements of the linear form xb_d.
(5) Parameters are elements of the linear form xb_subj.

Bayesian binomial regression                     MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3639
                                                 Efficiency:  min =    .004825
                                                              avg =      .0219
Log marginal likelihood =          .                          max =      .1203
Note: There is a high autocorrelation after 500 lags.

file sim4pls.dta saved

. estimates store est4pls
 

We use bayesstats summary to display results of selected model parameters.

. bayesstats summary {d:i.item} {mu_a var_a mu_b var_b}

Posterior summary statistics                      MCMC sample size =    10,000
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
d            |
        item |
          1  |  .9598183   .0255321   .001948   .9621874   .9044441   .9981723
          2  |  .9024564   .0565702   .007407   .9019505   .8066354   .9944216
          3  |  .9525519   .0281878   .002845   .9551054   .8972454   .9971564
          4  |  .8887963   .0561697   .005793   .8859503   .8036236   .9916784
          5  |  .8815547   .0588907   .007215   .8708021   .8031737   .9926549
          6  |  .8891188   .0586482   .006891    .881882   .8024593   .9935512
          7  |   .874271   .0561718   .008087   .8635082   .8018176   .9880433
          8  |  .9663644   .0147606   .001121   .9667563   .9370666   .9950912
          9  |   .889164   .0486038   .005524   .8834207   .8084921   .9857415
-------------+----------------------------------------------------------------
        mu_a |  .3336887   .1436216   .009742    .334092   .0562924   .6164115
       var_a |  .1221547   .0406908   .002376   .1144729   .0642768   .2229326
        mu_b | -.0407488   .1958039   .005645  -.0398847  -.4220523   .3323791
       var_b |  .4991736   .1612246    .00629   .4660071   .2802531   .9023824
------------------------------------------------------------------------------

The bayesmh command issued a note indicating high autocorrelation for some of the model parameters. This may be related to slower MCMC convergence or to more substantial problems with the model specification. It is thus worthwhile to inspect the autocorrelation of the individual parameters. We can do so by using the bayesstats ess command. Parameters with lower effective sample size (ESS) have higher autocorrelation, and vice versa.

. bayesstats ess {d:i.item} {mu_a var_a mu_b var_b}

Efficiency summaries    MCMC sample size =    10,000
 
----------------------------------------------------
             |        ESS   Corr. time    Efficiency
-------------+--------------------------------------
d            |
        item |
          1  |     171.82        58.20        0.0172
          2  |      58.33       171.43        0.0058
          3  |      98.17       101.87        0.0098
          4  |      94.02       106.36        0.0094
          5  |      66.62       150.11        0.0067
          6  |      72.44       138.05        0.0072
          7  |      48.25       207.26        0.0048
          8  |     173.30        57.70        0.0173
          9  |      77.41       129.19        0.0077
-------------+--------------------------------------
        mu_a |     217.35        46.01        0.0217
       var_a |     293.34        34.09        0.0293
        mu_b |    1203.20         8.31        0.1203
       var_b |     656.92        15.22        0.0657
----------------------------------------------------

We observe that the parameters with ESS lower than 200 are all among the upper asymptote parameters \(d_i\). This may be caused, for example, by overparameterization of the likelihood model and subsequent nonidentifiability, which is not resolved by the specified priors.
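For example, {d:7.item} has the lowest ESS (about 48) in the table above and is thus a natural candidate for closer inspection:

bayesgraph diagnostics {d:7.item}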

We can also fit a model with a common upper asymptote parameter, \(d\), and compare it with the model with the item-specific upper asymptote.

. set seed 14

. bayesmh y =                                                              ///
>         (({gues:}+({d}-{gues:})*invlogit({discrim:}*({subj:}-{diff:})))* ///
>         cond({gues:}<{d},1,.)), likelihood(binlogit(1), noglmtransform)  ///
>         redefine(discrim:i.item) redefine(diff:i.item)                   ///
>         redefine(gues:i.item)    redefine(subj:i.id)                     ///
>         prior({subj:i.id},      normal(0, 1))                            ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))              ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                 ///
>         prior({gues:i.item},    igamma(10, 1))                           ///
>         prior({d},              uniform(0.8, 1))                         ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                          ///
>         prior({var_a} {var_b},  igamma(10, 1))                           ///
>         block({mu_a mu_b var_a var_b d}, split)                          ///
>         init({discrim:i.item} 1 {gues:i.item} 0.1 {d} 0.9)               ///
>         exclude({subj:i.id}) burnin(5000) saving(sim4pl, replace) notable
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial(<expr1>,1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
     {gues:i.item} ~ igamma(10,1)                                          (3)
       {subj:i.id} ~ normal(0,1)                                           (4)
               {d} ~ uniform(0.8,1)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)

Expression: 
  expr1 : (xb_gues+({d}-xb_gues)*invlogit(xb_discrim*(xb_subj-xb_diff)))* cond
          (xb_gues<{d},1,.)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_gues.
(4) Parameters are elements of the linear form xb_subj.

Bayesian binomial regression                     MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3877
                                                 Efficiency:  min =      .0107
                                                              avg =     .03047
Log marginal likelihood =          .                          max =      .1626

file sim4pl.dta saved

. estimates store est4pl

. bayesstats summary {d mu_a var_a mu_b var_b}

Posterior summary statistics                      MCMC sample size =    10,000
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
           d |  .9664578   .0144952   .001293   .9668207   .9371181   .9924572
        mu_a |  .2206696   .1387873    .01113   .2208302  -.0483587   .4952625
       var_a |  .1245785   .0391551   .001806   .1188779   .0658243   .2187058
        mu_b |  .0371722   .2020157    .00501   .0331742  -.3481366   .4336587
       var_b |  .5603447   .1761812   .006817   .5279243   .3157048   .9805077
------------------------------------------------------------------------------

We now compare the two 4PL models by using the bayesstats ic command:

. bayesstats ic est4pls est4pl, diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
     est4pls |  8050.805
      est4pl |  8037.075
------------------------

The DIC of the more complex 4PL model (8,051) is substantially higher than the DIC of the simpler model (8,037). This and the potential nonidentifiability of the more complex est4pls model, indicated by high autocorrelation in the simulated MCMC sample, compel us to proceed with the model with a common upper asymptote, est4pl.

The posterior distribution of \(d\) has an estimated 95% equal-tailed credible interval of (0.94, 0.99) and is concentrated around 0.97. The \({\rm Uniform}(0.8,1)\) prior on \(d\) does not seem to be too restrictive. The estimated DIC of the est4pl model (8,037) is lower than the DIC of the est3pls 3PL model from the previous section (8,049), implying that the introduction of the upper asymptote parameter \(d\) does improve the model fit.
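For example, assuming the est4pl results are still the active estimation results, a quick look at the posterior distribution of \(d\) can be obtained with

bayesgraph histogram {d}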

Back to table of contents

5PL model

The five-parameter logistic (5PL) model extends the 4PL model by adding item-specific asymmetry parameters \(e_i\):
\[
P(Y_{ij}=1) = c_i + (d_i-c_i)\big[{\rm InvLogit}\{a_i(\theta_j-b_i)\}\big]^{e_i}
,\ c_i < d_i < 1, \ 0 < e_i < 1
\]

In the previous section, we found the 4PL model with common upper asymptote \(d\), est4pl, to be the best one so far. We thus consider here a 5PL model with a common upper asymptote \(d\).

Typically, we expect the \(e_i\) parameters to be close to 1. Similarly to the upper asymptote parameter \(d\), the \(e_i\) parameters are assumed to be in the (0.8,1) range and are assigned a \({\rm Uniform}(0.8,1)\) prior. We initialize the \(e_i\)'s to 0.9. We again use the notable option to suppress the long table output, and we display a subset of results by using bayesstats summary. (We could have used bayesmh's noshow() option instead to achieve the same result.)

. set seed 14

. bayesmh y =                                                               ///
>   (({gues:}+({d}-{gues:})*(invlogit({discrim:}*({subj:}-{diff:})))^{e:})* ///
>         cond({gues:}<{d},1,.)), likelihood(binlogit(1), noglmtransform)   ///
>         redefine(discrim:i.item) redefine(diff:i.item)                    ///
>         redefine(gues:i.item)    redefine(e:i.item)  redefine(subj:i.id)  ///
>         prior({subj:i.id},      normal(0, 1))                             ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))               ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                  ///
>         prior({gues:i.item},    igamma(10, 1))                            ///
>         prior({d},              uniform(0.8, 1))                          ///
>         prior({e:i.item},       uniform(0.8, 1))                          ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                           ///
>         prior({var_a} {var_b},  igamma(10, 1))                            ///
>         block({mu_a mu_b var_a var_b d}, split)                           ///
>         init({discrim:i.item} 1 {gues:i.item} 0.1 {d} {e:i.item} 0.9)     ///
>         exclude({subj:i.id}) burnin(5000) saving(sim5pls, replace) notable
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial(<expr1>,1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
     {gues:i.item} ~ igamma(10,1)                                          (3)
        {e:i.item} ~ uniform(0.8,1)                                        (4)
       {subj:i.id} ~ normal(0,1)                                           (5)
               {d} ~ uniform(0.8,1)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)

Expression: 
  expr1 : (xb_gues+({d}-xb_gues)*(invlogit(xb_discrim*(xb_subj-xb_diff)))^xb_e
          )* cond(xb_gues<{d},1,.)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_gues.
(4) Parameters are elements of the linear form xb_e.
(5) Parameters are elements of the linear form xb_subj.

Bayesian binomial regression                     MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3708
                                                 Efficiency:  min =    .007341
                                                              avg =     .02526
Log marginal likelihood =          .                          max =      .1517

file sim5pls.dta saved

. estimates store est5pls

. bayesstats summary {e:i.item} {d mu_a var_a mu_b var_b}

Posterior summary statistics                      MCMC sample size =    10,000
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
e            |
        item |
          1  |   .897859   .0578428   .006083   .8939272   .8050315   .9957951
          2  |  .9042669   .0585023   .005822     .90525   .8053789   .9956565
          3  |    .88993   .0562398   .005013    .887011    .803389   .9930454
          4  |  .9010241   .0574186   .006492   .9042044   .8030981   .9925598
          5  |  .9126369   .0545625    .00521   .9178927   .8098596   .9964487
          6  |  .9037269   .0583833   .006814   .9086704   .8054932   .9961268
          7  |  .9136308   .0558911   .005373   .9203899   .8112029    .996217
          8  |   .889775   .0568656   .005119   .8849938    .803912   .9938777
          9  |  .8808435    .056257   .004743   .8727194   .8030522   .9904972
-------------+----------------------------------------------------------------
           d |  .9671374   .0144004   .001165   .9670598   .9382404   .9933374
        mu_a |  .2770211   .1353777    .00832   .2782552   .0141125   .5418087
       var_a |   .122635   .0404159   .002148   .1160322   .0666951   .2208711
        mu_b |  .1211885   .1929743   .004955   .1199136  -.2515431    .503733
       var_b |  .5407642   .1747674   .006353   .5088269   .3016315   .9590086
------------------------------------------------------------------------------

We also want to compare the above model with a simpler one using a common asymmetry parameter \(e\).

. set seed 14

. bayesmh y =                                                               ///
>    (({gues:}+({d}-{gues:})*(invlogit({discrim:}*({subj:}-{diff:})))^{e})* ///
>         cond({gues:}<{d},1,.)), likelihood(binlogit(1), noglmtransform)   ///
>         redefine(discrim:i.item) redefine(diff:i.item)                    ///
>         redefine(gues:i.item)    redefine(subj:i.id)                      ///
>         prior({subj:i.id},      normal(0, 1))                             ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))               ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                  ///
>         prior({gues:i.item},    igamma(10, 1))                            ///
>         prior({d} {e},          uniform(0.8, 1))                          ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                           ///
>         prior({var_a} {var_b},  igamma(10, 1))                            ///
>         block({mu_a mu_b var_a var_b d e}, split)                         ///
>         init({discrim:i.item} 1 {gues:i.item} 0.1 {d e} 0.9)              ///
>         exclude({subj:i.id}) burnin(5000) saving(sim5pl, replace) notable
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial(<expr1>,1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
     {gues:i.item} ~ igamma(10,1)                                          (3)
       {subj:i.id} ~ normal(0,1)                                           (4)
             {d e} ~ uniform(0.8,1)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)

Expression: 
  expr1 : (xb_gues+({d}-xb_gues)*(invlogit(xb_discrim*(xb_subj-xb_diff)))^{e})
          * cond(xb_gues<{d},1,.)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_gues.
(4) Parameters are elements of the linear form xb_subj.

Bayesian binomial regression                     MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3805
                                                 Efficiency:  min =    .008179
                                                              avg =     .02768
Log marginal likelihood =          .                          max =     .08904

file sim5pl.dta saved

. estimates store est5pl

. bayesstats summary {e d mu_a var_a mu_b var_b}

Posterior summary statistics                      MCMC sample size =    10,000
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
           e |  .9118363   .0558178   .004194   .9175841   .8063153   .9960286
           d |  .9655166   .0147373   .001495   .9659029   .9354708   .9924492
        mu_a |  .2674271   .1368926   .008485    .270597   .0102798   .5443345
       var_a |  .1250759   .0428095   .002635   .1173619   .0654135   .2340525
        mu_b |  .1015121   .2048178   .006864    .103268  -.3052377   .4934158
       var_b |  .5677309   .1824591   .006981   .5331636   .3079868   1.016762
------------------------------------------------------------------------------

We use bayesstats ic to compare the DIC values of the two 5PL models:

. bayesstats ic est5pls est5pl, diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
     est5pls |  8030.894
      est5pl |  8034.517
------------------------

The estimated DIC of the more complex est5pls model (8,031) is lower than the DIC of the simpler model (8,035), suggesting a better fit.

Back to table of contents

Conclusion

Finally, we compare all eight fitted models.

. bayesstats ic est1pl est2pl est3pl est3pls est4pl est4pls est5pl est5pls, ///
>         diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
      est1pl |  8122.428
      est2pl |  8055.005
      est3pl |  8049.426
     est3pls |  8049.425
      est4pl |  8037.075
     est4pls |  8050.805
      est5pl |  8034.517
     est5pls |  8030.894
------------------------

The est5pls model has the lowest overall DIC. To confirm this result, we run another set of simulations with a larger MCMC sample size of 50,000. (We simply added the mcmcsize(50000) option to the bayesmh specification of the above eight models.) The following DIC values, based on the larger MCMC sample size, are more reliably estimated.

. bayesstats ic est1pl est2pl est3pl est3pls est4pl est4pls est5pl est5pls, ///
>         diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
      est1pl |  8124.015
      est2pl |  8052.068
      est3pl |  8047.067
     est3pls |  8047.738
      est4pl |  8032.417
     est4pls |  8049.712
      est5pl |  8031.375
     est5pls |  8031.905
------------------------

Again, the 5PL models have the lowest DIC values and seem to provide the best fit. However, the DIC differences between models est4pl, est5pl, and est5pls are minimal and may very well be within the estimation error. Regardless, these three models appear to be better than the simpler 1PL, 2PL, and 3PL models.

Additional model checking may be needed to assess the models' fit, and we should not rely solely on the DIC values to make our final model selection. A practitioner may still prefer the simpler est4pl 4PL model to the 5PL models even though it has a slightly higher DIC. In fact, given that the posterior mean estimate of the upper asymptote parameter \(d\) is 0.96 with a 95% equal-tailed credible interval of (0.94, 0.99), some practitioners may prefer the even simpler est3pl 3PL model.

References

De Boeck, P., and M. Wilson, ed. 2004. Explanatory Item Response Models: A Generalized Linear and Nonlinear Approach. New York: Springer.

Kim, J.-S., and D. M. Bolt. 2007. Estimating item response theory models using Markov chain Monte Carlo methods. Educational Measurement: Issues and Practice 26: 38-51.

regress, probit, or logit?


In a previous post I illustrated that the probit model and the logit model produce statistically equivalent estimates of marginal effects. In this post, I compare the marginal effect estimates from a linear probability model (linear regression) with marginal effect estimates from probit and logit models.

My simulations show that when the true model is a probit or a logit, using a linear probability model can produce inconsistent estimates of the marginal effects of interest to researchers. The conclusions hinge on the probit or logit model being the true model.

Simulation results

For all simulations below, I use a sample size of 10,000 and 5,000 replications. The true data-generating processes (DGPs) are constructed using one discrete covariate and one continuous covariate. I study the average effect of a change in the continuous variable on the conditional probability (AME) and the average effect of a change in the discrete covariate on the conditional probability (ATE). I also look at the effect of a change in the continuous variable on the conditional probability, evaluated at the mean value of the covariates (MEM), and the effect of a change in the discrete covariate on the conditional probability, evaluated at the mean value of the covariates (TEM).
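In Stata, these four quantities correspond to margins with and without the atmeans option after fitting the binary model. The sketch below, using the logit case and the same variable names as the simulation code shown later, is only an illustration of that mapping and is not part of the simulation code itself.

* illustration only: mapping the four effects to margins options
logit y x1 i.x2, vce(robust)
margins, dydx(*)           // AME of x1 and ATE of x2
margins, dydx(*) atmeans   // MEM of x1 and TEM of x2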

In Table 1, I present the results of a simulation when the true DGP satisfies the assumptions of a logit model. I show the average of the AME and the ATE estimates and the 5% rejection rate of the true null hypotheses. I also provide an approximate true value of the AME and ATE. I obtain the approximate true values by computing the ATE and AME, at the true values of the coefficients, using a sample of 20 million observations. I will provide more details on the simulation in a later section.

Table 1: Average Marginal and Treatment Effects: True DGP Logit

Simulation Results for N=10,000 and 5,000 Replications
Statistic              Approximate True Value      Logit   Regress (LPM)
AME of x1                              -.084       -.084           -.094
5% Rejection Rate                                   .050             .99
ATE of x2                               .092        .091            .091
5% Rejection Rate                                   .058            .058

From Table 1, we see that the logit model estimates are close to the true value and that the rejection rate of the true null hypothesis is close to 5%. For the linear probability model, the rejection rate is 99% for the AME. For the ATE, the rejection rate and point estimates are close to what is estimated using a logit.

For the MEM and TEM, we have the following:

Table 2: Marginal and Treatment Effects at Mean Values: True DGP Logit

Simulation Results for N=10,000 and 5,000 Replications
Statistic              Approximate True Value      Logit   Regress (LPM)
MEM of x1                              -.099       -.099           -.094
5% Rejection Rate                                   .054            .618
TEM of x2                               .109        .109            .092
5% Rejection Rate                                   .062            .073

Again, logit estimates behave as expected. For the linear probability model, the rejection rate of the true null hypothesis is 62% for the MEM. For the TEM the rejection rate is 7.3%, and the estimated effect is smaller than the true effect.

For the AME and ATE, when the true DGP is a probit, we have the following:

Table 3: Average Marginal and Treatment Effects: True DGP Probit

Simulation Results for N=10,000 and 5,000 Replications
Statistic              Approximate True Value     Probit   Regress (LPM)
AME of x1                              -.094       -.094           -.121
5% Rejection Rate                                   .047               1
ATE of x2                               .111        .111            .111
5% Rejection Rate                                   .065            .061

The probit model estimates are close to the true value, and the rejection rate of the true null hypothesis is close to 5%. For the linear probability model, the rejection rate is 100% for the AME. For the ATE, the rejection rate and point estimates are close to what is estimated using a probit.

For the MEM and TEM, we have the following:

Table 4: Marginal and Treatment Effects at Mean Values: True DGP Probit

Simulation Results for N=10,000 and 5,000 Replications
Statistic              Approximate True Value     Probit   Regress (LPM)
MEM of x1                              -.121       -.122           -.121
5% Rejection Rate                                   .063            .054
TEM of x2                               .150        .150            .110
5% Rejection Rate                                   .059            .158

For the MEM, the probit and linear probability model produce reliable inference. For the TEM, the probit marginal effects behave as expected, but the linear probability model has a rejection rate of 16%, and the point estimates are not close to the true value.

Simulation design

Below is the code I used to generate the data for my simulations. In the first part, lines 6 to 13, I generate outcome variables that satisfy the assumptions of the logit model, y, and the probit model, yp. In the second part, lines 15 to 19, I compute the marginal effects for the logit and probit models. I have a continuous and a discrete covariate. For the discrete covariate, the marginal effect is a treatment effect. In the third part, lines 21 to 29, I compute the marginal effects evaluated at the means. I will use these estimates later to compute approximations to the true values of the effects.

program define mkdata
    syntax, [n(integer 1000)]
    clear
    quietly set obs `n'
    // 1. Generating data from probit, logit, and misspecified 
    generate x1    = rchi2(2)-2
    generate x2    = rbeta(4,2)>.2
    generate u     = runiform()
    generate e     = ln(u) -ln(1-u) 
    generate ep    = rnormal()
    generate xb    = .5*(1 - x1 + x2)
    generate y     =  xb + e > 0
    generate yp    = xb + ep > 0 
    // 2. Computing probit & logit marginal and treatment effects 
    generate m1   = exp(xb)*(-.5)/(1+exp(xb))^2
    generate m2   = exp(1 -.5*x1)/(1+ exp(1 -.5*x1 )) - ///
	              exp(.5 -.5*x1)/(1+ exp(.5 -.5*x1 ))
    generate m1p  = normalden(xb)*(-.5)
    generate m2p  = normal(1 -.5*x1 ) - normal(.5 -.5*x1)
    // 3. Computing marginal and treatment effects at means
    quietly mean x1 x2 
    matrix A        = r(table)
    scalar a        = .5 -.5*A[1,1] + .5*A[1,2]
    scalar b1       =  1 -.5*A[1,1]
    scalar b0       = .5 -.5*A[1,1]
    generate mean1  = exp(a)*(-.5)/(1+exp(a))^2
    generate mean2  = exp(b1)/(1+ exp(b1)) - exp(b0)/(1+ exp(b0))
    generate mean1p = normalden(a)*(-.5)
    generate mean2p = normal(b1) - normal(b0)
end

I approximate the true marginal effects using a sample of 20 million observations. This is a reasonable strategy in this case. For example, take the average marginal effect for a continuous covariate, \(x_{k}\), in the case of the probit model:

\[\begin{equation*}
\frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}{\boldsymbol \beta}\right)\beta_{k}
\end{equation*}\]

The expression above is an approximation of \(E\left(\phi\left(x_{i}{\boldsymbol \beta}\right)\beta_{k}\right)\). To obtain this expected value, we would need to integrate over the distribution of all the covariates. This is not practical and would limit my choice of covariates. Instead, I draw a sample of 20 million observations, compute \(\frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}{\boldsymbol \beta}\right)\beta_{k}\), and take it to be the true value. I follow the same logic for the other marginal effects.

Below is the code I use to compute the approximate true marginal effects. I draw the 20 million observations, compute the averages that I will use in my simulation, and create locals for each approximate true value.

. mkdata, n(`L')
(2 missing values generated)

. local values "m1 m2 mean1 mean2 m1p m2p mean1p mean2p"

. local means  "mx1 mx2 meanx1 meanx2 mx1p mx2p meanx1p meanx2p"

. local n : word count `values'

. 
. forvalues i= 1/`n' {
  2.         local a: word `i' of `values'
  3.         local b: word `i' of `means'
  4.         sum `a', meanonly
  5.         local `b' = r(mean)
  6. }

Now, I am ready to run all the simulations that I used to produce the results in the previous section. The code that I used for the simulations for the TEM and the MEM when the true DGP is a logit is given by:

. postfile lpm y1l y1l_r y1lp y1lp_r y2l y2l_r y2lp y2lp_r ///
>                 using simslpm, replace 

. forvalues i=1/`R' {
  2.         quietly {
  3.                 mkdata, n(`N')
  4.                 logit  y x1 i.x2, vce(robust) 
  5.                 margins, dydx(*) atmeans post  vce(unconditional)
  6.                 local y1l = _b[x1]
  7.                 test _b[x1] = `meanx1'
  8.                 local y1l_r   = (r(p)<.05) 
  9.                 local y2l = _b[1.x2]
 10.                 test _b[1.x2] = `meanx2'
 11.                 local y2l_r   = (r(p)<.05) 
 12.                 regress  y x1 i.x2, vce(robust) 
 13.                 margins, dydx(*) atmeans post  vce(unconditional)
 14.                 local y1lp = _b[x1]
 15.                 test _b[x1] = `meanx1'
 16.                 local y1lp_r   = (r(p)<.05) 
 17.                 local y2lp = _b[1.x2]
 18.                 test _b[1.x2] = `meanx2'
 19.                 local y2lp_r   = (r(p)<.05) 
 20.                 post lpm (`y1l') (`y1l_r') (`y1lp') (`y1lp_r') ///
>                          (`y2l') (`y2l_r') (`y2lp') (`y2lp_r')
 21.         }
 22. }

. postclose lpm

. use simslpm, clear 

. sum 

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
         y1l |      5,000   -.0985646      .00288  -.1083639  -.0889075
       y1l_r |      5,000       .0544     .226828          0          1
        y1lp |      5,000   -.0939211    .0020038  -.1008612  -.0868043
      y1lp_r |      5,000       .6182    .4858765          0          1
         y2l |      5,000    .1084959     .065586  -.1065291   .3743112
-------------+---------------------------------------------------------
       y2l_r |      5,000       .0618     .240816          0          1
        y2lp |      5,000    .0915894     .055462  -.0975456   .3184061
      y2lp_r |      5,000       .0732    .2604906          0          1

For the results for the AME and the ATE when the true DGP is a logit, I use margins without the atmeans option. The other cases are similar. I use robust standard errors for all computations because my likelihood model is an approximation to the true likelihood, and I use the option vce(unconditional) to account for the fact that I am using two-step M-estimation. See Wooldridge (2010) for more details on two-step M-estimation.
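For concreteness, the inner step of the AME/ATE version of the loop would look something like the sketch below, with the test targets being the mx1 and mx2 locals computed above rather than meanx1 and meanx2. This is an assumption about the variant that is not listed in this post, not a listing from the original do-file.

* sketch of the AME/ATE variant of the inner step (assumed, not shown above)
logit y x1 i.x2, vce(robust)
margins, dydx(*) post vce(unconditional)
test _b[x1] = `mx1'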

You can obtain the code used to produce these results here.

Conclusion

Using a probit or a logit model yields statistically equivalent estimates of marginal effects. I provide evidence that the same cannot be said of the marginal effect estimates from a linear probability model when compared with those from the logit and probit models.

Acknowledgment

This post was inspired by a question posed by Stephen Jenkins after my previous post.

Reference

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

probit or logit: ladies and gentlemen, pick your weapon

We often use probit and logit models to analyze binary outcomes. A case can be made that the logit model is easier to interpret than the probit model, but Stata’s margins command makes any estimator easy to interpret. Ultimately, estimates from both models produce similar results, and using one or the other is a matter of habit or preference.

I show that the estimates from a probit and logit model are similar for the computation of a set of effects that are of interest to researchers. I focus on the effects of changes in the covariates on the probability of a positive outcome for continuous and discrete covariates. I evaluate these effects on average and at the mean value of the covariates. In other words, I study the average marginal effects (AME), the average treatment effects (ATE), the marginal effects at the mean values of the covariates (MEM), and the treatment effects at the mean values of the covariates (TEM).

First, I present the results. Second, I discuss the code used for the simulations.

Results

In Table 1, I present the results of a simulation with 4,000 replications when the true data generating process (DGP) satisfies the assumptions of a probit model. I show the average of the AME and the ATE estimates and the 5% rejection rate of the true null hypothesis that arise after probit and logit estimation. I also provide an approximate true value of the AME and ATE. I obtain the approximate true values by computing the ATE and AME, at the true values of the coefficients, using a sample of 20 million observations. I will provide more details on the simulation in a later section.

Table 1: Average Marginal and Treatment Effects: True DGP Probit

Simulation Results for N=10,000 and 4,000 Replications
Statistic              Approximate True Value     Probit      Logit
AME of x1                             -.1536      -.1537     -.1537
5% Rejection Rate                                   .050       .052
ATE of x2                              .1418       .1417      .1417
5% Rejection Rate                                   .050       .049

For the MEM and TEM, we have the following:

Table 2: Marginal and Treatment Effects at Mean Values: True DGP Probit

Simulation Results for N=10,000 and 4,000 Replications
Statistic              Approximate True Value     Probit      Logit
MEM of x1                             -.1672      -.1673     -.1665
5% Rejection Rate                                   .056        .06
TEM of x2                              .1499       .1498      .1471
5% Rejection Rate                                   .053       .058

The logit estimates are close to the true value and have a rejection rate that is close to 5%. Fitting the parameters of our model using logit when the true DGP satisfies the assumptions of a probit model does not lead us astray.

If the true DGP satisfies the assumptions of the logit model, the conclusions are the same. I present the results in the next two tables.

Table 3: Average Marginal and Treatment Effects: True DGP Logit

Simulation Results for N=10,000 and 4,000 Replications
Statistic              Approximate True Value     Probit      Logit
AME of x1                             -.1090      -.1088     -.1089
5% Rejection Rate                                   .052       .052
ATE of x2                              .1046       .1044      .1045
5% Rejection Rate                                   .053       .051

Table 4: Marginal and Treatment Effects at Mean Values: True DGP Logit

Simulation Results for N=10,000 and 4,000 Replications
Statistic              Approximate True Value     Probit      Logit
MEM of x1                             -.1146      -.1138     -.1146
5% Rejection Rate                                   .050       .051
TEM of x2                              .1086       .1081      .1085
5% Rejection Rate                                   .058       .058

Why?

Maximum likelihood estimators find the parameters that maximize the likelihood that our data will fit the distributional assumptions that we make. The likelihood chosen is an approximation to the true likelihood, and it is a helpful approximation if the true likelihood and our approximation are close to each other. Viewing likelihood-based models as useful approximations, instead of as models of a true likelihood, is the basis of quasilikelihood theory. For more details, see White (1996) and Wooldridge (2010).

It is assumed that the unobservable random variable in the probit model and logit model comes from a standard normal and logistic distribution, respectively. The cumulative distribution functions (CDFs) in these two cases are close to each other, especially around the mean. Therefore, estimators under these two sets of assumptions produce similar results. To illustrate these arguments, we can plot the two CDFs and their differences as follows:

Graph 1: Normal and Logistic CDFs and their Difference

The difference between the CDFs approaches zero as you get closer to the mean, from the right or from the left, and it is always smaller than .15.
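The comparison can be drawn with twoway function plots of the two CDFs and their difference. The following is a sketch of one way to produce such a graph, not necessarily the code behind the graph above.

* standard normal and standard logistic CDFs and their difference
twoway (function y = normal(x), range(-4 4))                 ///
       (function y = invlogit(x), range(-4 4))               ///
       (function y = normal(x) - invlogit(x), range(-4 4)),  ///
       legend(order(1 "Normal CDF" 2 "Logistic CDF" 3 "Difference"))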

Simulation design

Below is the code I used to generate the data for my simulations. In the first part, lines 4 to 12, I generate outcome variables that satisfy the assumptions of the probit model, y1, and the logit model, y2. In the second part, lines 13 to 16, I compute the marginal effects for the logit and probit models. I have a continuous and a discrete covariate. For the discrete covariate, the marginal effect is a treatment effect. In the third part, lines 17 to 25, I compute the marginal effects evaluated at the means. I will use these estimates later to compute approximations to the true values of the effects.

program define mkdata
        syntax, [n(integer 1000)]
        clear
        quietly set obs `n'
        // 1. Generating data from probit, logit, and misspecified 
        generate x1    = rnormal()
        generate x2    = rbeta(2,4)>.5
        generate e1    = rnormal()
        generate u     = runiform()
        generate e2    = ln(u) -ln(1-u)
        generate xb    = .5*(1 -x1 + x2)
        generate y1    =  xb + e1 > 0
        generate y2    =  xb + e2 > 0
        // 2. Computing probit & logit marginal and treatment effects 
        generate m1    = normalden(xb)*(-.5)
        generate m2    = normal(1 -.5*x1 ) - normal(.5 -.5*x1)
        generate m1l   = exp(xb)*(-.5)/(1+exp(xb))^2
        generate m2l   = exp(1 -.5*x1)/(1+ exp(1 -.5*x1 )) - ///
                         exp(.5 -.5*x1)/(1+ exp(.5 -.5*x1 ))
        // 3. Computing probit & logit marginal and treatment effects at means
        quietly mean x1 x2 
        matrix A         = r(table)
        scalar a         = .5 -.5*A[1,1] + .5*A[1,2]
        scalar b1        =  1 -.5*A[1,1]
        scalar b0        = .5 -.5*A[1,1]
        generate mean1   = normalden(a)*(-.5)
        generate mean2   = normal(b1) - normal(b0)
        generate mean1l  = exp(a)*(-.5)/(1+exp(a))^2
        generate mean2l  = exp(b1)/(1+ exp(b1)) - exp(b0)/(1+ exp(b0))
end

I approximate the true marginal effects using a sample of 20 million observations. This is a reasonable strategy in this case. For example, take the average marginal effect for a continuous covariate, \(x_{k}\), in the case of the probit model:

\[\begin{equation*}
\frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}{\boldsymbol \beta}\right)\beta_{k}
\end{equation*}\]

The expression above is an approximation to \(E\left(\phi\left(x_{i}{\boldsymbol \beta}\right)\beta_{k}\right)\). To obtain this expected value, we would need to integrate over the distribution of all the covariates. This is not practical and would limit my choice of covariates. Instead, I draw a sample of 20 million observations, compute \(\frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}{\boldsymbol \beta}\right)\beta_{k}\), and take it to be the true value. I follow the same logic for the other marginal effects.

Below is the code I use to compute the approximate true marginal effects. I draw the 20 million observations, then I compute the averages that I am going to use in my simulation, and I create locals for each approximate true value.

. mkdata, n(20000000)

. local values "m1 m2 m1l m2l mean1 mean2 mean1l mean2l"

. local means  "mx1 mx2 mx1l mx2l meanx1 meanx2 meanx1l meanx2l"

. local n : word count `values'

. forvalues i= 1/`n' {
  2.         local a: word `i' of `values'
  3.         local b: word `i' of `means'
  4.         sum `a', meanonly
  5.         local `b' = r(mean)
  6. }

Now I am ready to run all the simulations that I used to produce the results in the previous section. The code that I used for the simulations for the ATE and the AME when the true DGP is a probit is given by

. postfile mprobit y1p y1p_r y1l y1l_r y2p y2p_r y2l y2l_r ///
>                 using simsmprobit, replace 

. forvalues i=1/4000 {
  2.         quietly {
  3.                 mkdata, n(10000)
  4.                 probit y1 x1 i.x2, vce(robust)
  5.                 margins, dydx(*) post 
  6.                 local y1p = _b[x1]
  7.                 test _b[x1] = `mx1'
  8.                 local y1p_r   = (r(p)<.05) 
  9.                 local y2p = _b[1.x2]
 10.                 test _b[1.x2] = `mx2'
 11.                 local y2p_r   = (r(p)<.05) 
 12.                 logit  y1 x1 i.x2, vce(robust)
 13.                 margins, dydx(*) post 
 14.                 local y1l = _b[x1]
 15.                 test _b[x1] = `mx1'
 16.                 local y1l_r   = (r(p)<.05) 
 17.                 local y2l = _b[1.x2]
 18.                 test _b[1.x2] = `mx2'
 19.                 local y2l_r   = (r(p)<.05) 
 20.                 post mprobit (`y1p') (`y1p_r') (`y1l') (`y1l_r') ///
>                            (`y2p') (`y2p_r') (`y2l') (`y2l_r')
 21.         }
 22. }
. postclose mprobit

. use simsmprobit, clear
. summarize

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
         y1p |      4,000   -.1536812    .0038952  -.1697037  -.1396532
       y1p_r |      4,000         .05    .2179722          0          1
         y1l |      4,000   -.1536778    .0039179  -.1692524  -.1396366
       y1l_r |      4,000      .05175    .2215496          0          1
         y2p |      4,000     .141708    .0097155   .1111133   .1800973
-------------+---------------------------------------------------------
       y2p_r |      4,000       .0495    .2169367          0          1
         y2l |      4,000    .1416983    .0097459   .1102069   .1789895
       y2l_r |      4,000        .049     .215895          0          1

For the results in the case of the MEM and the TEM when the true DGP is a probit, I use margins with the option atmeans. The other cases are similar. I use robust standard errors for all computations because my likelihood model is an approximation to the true likelihood, and I use the option vce(unconditional) to account for the fact that I am using two-step M-estimation. See Wooldridge (2010) for more details on two-step M-estimation.
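A sketch of the corresponding inner step for the MEM/TEM case, which is an assumption about the variant that is not listed here, is

* sketch of the MEM/TEM variant of the inner step (assumed, not shown above)
probit y1 x1 i.x2, vce(robust)
margins, dydx(*) atmeans post
test _b[x1] = `meanx1'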

You can download the code used to produce the results by clicking this link: pvsl.do

Concluding Remarks

I provided simulation evidence illustrating that the differences between estimates of effects obtained after probit and logit estimation are negligible. The reason lies in the theory of quasilikelihood and, specifically, in that the cumulative distribution functions of the probit and logit models are similar, especially around the mean.

References

White, H. 1996. Estimation, Inference, and Specification Analysis. Cambridge: Cambridge University Press.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Using mlexp to estimate endogenous treatment effects in a heteroskedastic probit model

I use features new to Stata 14.1 to estimate an average treatment effect (ATE) for a heteroskedastic probit model with an endogenous treatment. In 14.1, we added new prediction statistics after mlexp that margins can use to estimate an ATE.

I am building on the previous post Using mlexp to estimate endogenous treatment effects in a probit model, in which I demonstrated how to use mlexp to estimate the parameters of a probit model with an endogenous treatment and used margins to estimate the ATE for that model. Currently, no official commands estimate the heteroskedastic probit model with an endogenous treatment, so in this post I show how mlexp can be used to extend the models estimated by Stata.

Heteroskedastic probit model

For binary outcome \(y_i\) and regressors \({\bf x}_i\), the probit model assumes

\[\begin{equation}
y_i = {\bf 1}({\bf x}_i{\boldsymbol \beta} + \epsilon_i > 0)
\end{equation}\]

The indicator function \({\bf 1}(\cdot)\) outputs 1 when its input is true and outputs 0 otherwise. The error \(\epsilon_i\) is standard normal.

Assuming that the error has constant variance may not always be wise. Suppose we are studying a certain business decision. Large firms, because they have the resources to take chances, may exhibit more variation in the factors that affect their decision than small firms.

In the heteroskedastic probit model, regressors \({\bf w}_i\) determine the variance of \(\epsilon_i\). Following Harvey (1976), we have

\[\begin{equation} \mbox{Var}\left(\epsilon_i\right) = \left\{\exp\left({\bf
w}_i{\boldsymbol \gamma}\right)\right\}^2 \nonumber \end{equation}\]

Heteroskedastic probit model with treatment

In this section, I review the potential-outcome framework used to define an ATE and extend it for the heteroskedastic probit model. For each treatment level, there is an outcome that we would observe if a person were to select that treatment level. When the outcome is binary and there are two treatment levels, we can specify how the potential outcomes \(y_{0i}\) and \(y_{1i}\) are generated from the regressors \({\bf x}_i\) and the error terms \(\epsilon_{0i}\) and \(\epsilon_{1i}\):

\[\begin{eqnarray*}
y_{0i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_0 + \epsilon_{0i} > 0) \cr
y_{1i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_1 + \epsilon_{1i} > 0)
\end{eqnarray*}\]

We assume a heteroskedastic probit model for the potential outcomes. The errors are normal with mean \(0\) and conditional variance generated by regressors \({\bf w}_i\). In this post, we assume equal variance of the potential outcome errors.

\[\begin{equation}
\mbox{Var}\left(\epsilon_{0i}\right) = \mbox{Var}\left(\epsilon_{1i}\right) =
\left\{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)\right\}^2 \nonumber
\end{equation}\]

The heteroskedastic probit model for potential outcomes \(y_{0i}\) and \(y_{1i}\) with treatment \(t_i\) assumes that we observe the outcome

\[\begin{equation}
y_i = (1-t_i) y_{0i} + t_i y_{1i}
\nonumber
\end{equation}\]

So we observe \(y_{1i}\) under the treatment (\(t_{i}=1\)) and \(y_{0i}\) when the treatment is withheld (\(t_{i}=0\)).

The treatment \(t_i\) is determined by regressors \({\bf z}_i\) and error \(u_i\):

\[\begin{equation}
t_i = {\bf 1}({\bf z}_i{\boldsymbol \psi} + u_i > 0)
\nonumber
\end{equation}\]

The treatment error \(u_i\) is normal with mean zero, and we allow its variance to be determined by another set of regressors \({\bf v}_i\):

\[\begin{equation}
\mbox{Var}\left(u_i\right) =
\left\{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)\right\}^2 \nonumber
\end{equation}\]

Heteroskedastic probit model with endogenous treatment

In the previous post, I described how to model endogeneity for the treatment \(t_i\) by correlating the outcome errors \(\epsilon_{0i}\) and \(\epsilon_{1i}\) with the treatment error \(u_i\). We use the same framework for modeling endogeneity here. The variance of the errors may change depending on the heteroskedasticity regressors \({\bf w}_i\) and \({\bf v}_i\), but their correlation remains constant. The errors \(\epsilon_{0i}\), \(\epsilon_{1i}\), and \(u_i\) are trivariate normal with correlation

\[\begin{equation}
\left[\begin{matrix}
1 & \rho_{01} & \rho_{t} \cr
\rho_{01} & 1 & \rho_{t} \cr
\rho_{t} & \rho_{t} & 1
\end{matrix}\right]
\nonumber
\end{equation}\]

Now we have all the pieces we need to write the log likelihood of the heteroskedastic probit model with an endogenous treatment. The form of the likelihood is similar to what was given in the previous post. Now the inputs to the bivariate normal cumulative distribution function, \(\Phi_2\), are standardized by dividing by the conditional standard deviations of the errors.

The log likelihood for observation \(i\) is

\[\begin{eqnarray*}
\ln L_i = & & {\bf 1}(y_i =1 \mbox{ and } t_i = 1) \ln \Phi_2\left\{\frac{{\bf x}_i{\boldsymbol \beta}_1}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},\rho_t\right\} + \cr
& & {\bf 1}(y_i=0 \mbox{ and } t_i=1)\ln \Phi_2\left\{\frac{-{\bf x}_i{\boldsymbol \beta}_1}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},-\rho_t\right\} + \cr
& & {\bf 1}(y_i=1 \mbox{ and } t_i=0) \ln \Phi_2\left\{\frac{{\bf x}_i{\boldsymbol \beta}_0}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{-{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},-\rho_t\right\} + \cr
& & {\bf 1}(y_i=0 \mbox{ and } t_i = 0)\ln \Phi_2\left\{\frac{-{\bf x}_i{\boldsymbol \beta}_0}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{-{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},\rho_t\right\}
\end{eqnarray*}\]

The data

We will simulate data from a heteroskedastic probit model with an endogenous treatment and then estimate the parameters of the model with mlexp. Then, we will use margins to estimate the ATE.

. set seed 323

. set obs 10000
number of observations (_N) was 0, now 10,000

. generate x = .8*rnormal() + 4

. generate b = rpoisson(1)

. generate z = rnormal()

. matrix cm = (1, .3,.7 \ .3, 1, .7 \ .7, .7, 1)

. drawnorm ey0 ey1 et, corr(cm)

We simulate a random sample of 10,000 observations. The treatment and outcome regressors are generated in a similar manner to their creation in the last post. As in the last post, we generate the errors with drawnorm to have correlation \(0.7\).

. generate g = runiform()

. generate h = rnormal()

. quietly replace ey0 = ey0*exp(.5*g)

. quietly replace ey1 = ey1*exp(.5*g)

. quietly replace et = et*exp(.1*h)

. generate t = .5*x - .1*b + .5*z - 2.4 + et > 0

. generate y0 = .6*x - .8 + ey0 > 0

. generate y1 = .3*x - 1.3 + ey1 > 0

. generate y = (1-t)*y0 + t*y1

The uniform variable g is generated as a regressor for the outcome error variance, while h is a regressor for the treatment error variance. We scale the errors by using the variance regressors so that they are heteroskedastic, and then we generate the treatment and outcome indicators.

Estimating the model parameters

Now, we will use mlexp to estimate the parameters of the heteroskedastic probit model with an endogenous treatment. As in the previous post, we use the cond() function to calculate different values of the likelihood based on the different values of \(y\) and \(t\). We use the factor-variable operator ibn on \(t\) in equation y to allow for a different intercept at each level of \(t\). An interaction between \(t\) and \(x\) is also specified in equation y. This allows for a different coefficient on \(x\) at each level of \(t\).

. mlexp (ln(cond(t, ///                                          
>         cond(y,binormal({y: i.t#c.x ibn.t}/exp({g:g}), ///
>             {t: x b z _cons}/exp({h:h}),{rho}), /// 
>                 binormal(-{y:}/exp({g:}),{t:}/exp({h:}),-{rho})), ///
>         cond(y,binormal({y:}/exp({g:}),-{t:}/exp({h:}),-{rho}), ///
>                 binormal(-{y:}/exp({g:}),-{t:}/exp({h:}),{rho}) ///
>         )))), vce(robust)

initial:       log pseudolikelihood = -13862.944
alternative:   log pseudolikelihood = -16501.619
rescale:       log pseudolikelihood = -13858.877
rescale eq:    log pseudolikelihood = -11224.877
Iteration 0:   log pseudolikelihood = -11224.877  (not concave)
Iteration 1:   log pseudolikelihood = -10644.625  
Iteration 2:   log pseudolikelihood = -10074.998  
Iteration 3:   log pseudolikelihood = -9976.6027  
Iteration 4:   log pseudolikelihood = -9973.0988  
Iteration 5:   log pseudolikelihood = -9973.0913  
Iteration 6:   log pseudolikelihood = -9973.0913  

Maximum likelihood estimation

Log pseudolikelihood = -9973.0913               Number of obs     =     10,000

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
y            |
       t#c.x |
          0  |   .6178115   .0334521    18.47   0.000     .5522467    .6833764
          1  |   .2732094   .0365742     7.47   0.000     .2015253    .3448936
             |
           t |
          0  |  -.8403294   .1130197    -7.44   0.000    -1.061844   -.6188149
          1  |  -1.215177   .1837483    -6.61   0.000    -1.575317   -.8550371
-------------+----------------------------------------------------------------
g            |
           g |   .4993187   .0513297     9.73   0.000     .3987143    .5999232
-------------+----------------------------------------------------------------
t            |
           x |   .4985802   .0183033    27.24   0.000     .4627065    .5344539
           b |  -.1140255   .0132988    -8.57   0.000    -.1400908   -.0879603
           z |   .4993995   .0150844    33.11   0.000     .4698347    .5289643
       _cons |  -2.402772   .0780275   -30.79   0.000    -2.555703   -2.249841
-------------+----------------------------------------------------------------
h            |
           h |   .1011185   .0199762     5.06   0.000     .0619658    .1402713
-------------+----------------------------------------------------------------
        /rho |   .7036964   .0326734    21.54   0.000     .6396577    .7677351
------------------------------------------------------------------------------

Our parameter estimates are close to their true values.

Estimating the ATE

The ATE of \(t\) is the expected value of the difference between \(y_{1i}\) and \(y_{0i}\), the average difference between the potential outcomes. Using the law of iterated expectations, we have

\[\begin{eqnarray*}
E(y_{1i}-y_{0i})&=& E\left\{ E\left(y_{1i}-y_{0i}|{\bf x}_i,{\bf w}_i\right)\right\} \cr
&=& E\left\lbrack\Phi\left\{\frac{{\bf x}_i{\boldsymbol \beta}_1}{
\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}\right\}-
\Phi\left\{\frac{{\bf x}_i{\boldsymbol \beta}_0}{
\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}\right\}\right\rbrack \cr
\end{eqnarray*}\]

This can be estimated as a mean of predictions.

Now, we estimate the ATE by using margins. We specify the normal probability expression in the expression() option. We use the expression function xb() to get the linear predictions for the outcome equation and the outcome error variance equation. We can now predict these linear forms after mlexp in Stata 14.1. We specify r.t so that margins will take the difference of the expression under t=1 and t=0. We specify vce(unconditional) to obtain standard errors for the population ATE rather than the sample ATE; we specified vce(robust) for mlexp so that we could specify vce(unconditional) for margins. The contrast(nowald) option is specified to omit the Wald test for the difference.

. margins r.t, expression(normal(xb(y)/exp(xb(g)))) ///
>     vce(unconditional) contrast(nowald)

Contrasts of predictive margins

Expression   : normal(xb(y)/exp(xb(g)))

--------------------------------------------------------------
             |            Unconditional
             |   Contrast   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           t |
   (1 vs 0)  |  -.4183043   .0202635     -.4580202   -.3785885
--------------------------------------------------------------

We estimate that the ATE of \(t\) on \(y\) is \(-0.42\). So taking the treatment decreases the probability of a positive outcome by \(0.42\) on average over the population.

We will compare this estimate to the average difference of \(y_{1}\) and \(y_{0}\) in the sample. We can do this because we simulated the data. In practice, only one potential outcome is observed for every observation, and this average difference cannot be computed.

. generate diff = y1 - y0

. sum diff

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
        diff |     10,000      -.4164    .5506736         -1          1

In our sample, the average difference of \(y_{1}\) and \(y_{0}\) is also \(-0.42\).

Conclusion

Using mlexp, I have demonstrated how to estimate the parameters of a model that is not otherwise available in Stata: the heteroskedastic probit model with an endogenous treatment. See [R] mlexp for more details about mlexp. I have also demonstrated how to use margins to estimate the ATE for the heteroskedastic probit model with an endogenous treatment. See [R] margins for more details about margins.

Reference

Harvey, A. C. 1976. Estimating regression models with multiplicative heteroscedasticity. Econometrica 44: 461-465.

Understanding the generalized method of moments (GMM): A simple example

\(\newcommand{\Eb}{{\bf E}}\)This post was written jointly with Enrique Pinzon, Senior Econometrician, StataCorp.

The generalized method of moments (GMM) is a method for constructing estimators, analogous to maximum likelihood (ML). GMM uses assumptions about specific moments of the random variables instead of assumptions about the entire distribution, which makes GMM more robust than ML, at the cost of some efficiency. The assumptions are called moment conditions.

GMM generalizes the method of moments (MM) by allowing the number of moment conditions to be greater than the number of parameters. Using these extra moment conditions makes GMM more efficient than MM. When there are more moment conditions than parameters, the estimator is said to be overidentified. GMM can efficiently combine the moment conditions when the estimator is overidentified.

We illustrate these points by estimating the mean of a \(\chi^2(1)\) by MM, ML, a simple GMM estimator, and an efficient GMM estimator. This example builds on Efficiency comparisons by Monte Carlo simulation and is similar in spirit to the example in Wooldridge (2001).

GMM weights and efficiency

GMM builds on the ideas of expected values and sample averages. Moment conditions are expected values that specify the model parameters in terms of the true moments. The sample moment conditions are the sample equivalents to the moment conditions. GMM finds the parameter values that are closest to satisfying the sample moment conditions.

The mean of a \(\chi^2\) random variable with \(d\) degrees of freedom is \(d\), and its variance is \(2d\). Two moment conditions for the mean are thus

\[\begin{eqnarray*}
\Eb\left[Y - d \right]&=& 0 \\
\Eb\left[(Y - d )^2 - 2d \right]&=& 0
\end{eqnarray*}\]

The sample moment equivalents are

\[\begin{eqnarray}
1/N\sum_{i=1}^N (y_i - \widehat{d} )&=& 0 \tag{1} \\
1/N\sum_{i=1}^N\left[(y_i - \widehat{d} )^2 - 2\widehat{d}\right] &=& 0 \tag{2}
\end{eqnarray}\]

We could use either sample moment condition (1) or sample moment condition (2) to estimate \(d\). In fact, below we use each one and show that (1) provides a much more efficient estimator.

When we use both (1) and (2), there are two sample moment conditions and only one parameter, so we cannot solve this system of equations. GMM finds the parameters that get as close as possible to solving weighted sample moment conditions.

Uniform weights and optimal weights are two ways of weighting the sample moment conditions. The uniform weights use an identity matrix to weight the moment conditions. The optimal weights use the inverse of the covariance matrix of the moment conditions.
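One way to make this precise is to stack the two sample moment conditions into a vector and choose the parameter that minimizes a quadratic form in that vector. In the notation below, introduced here only for illustration, \({\bf W}\) is the identity matrix for uniform weights and an estimate of the inverse of the covariance matrix of the moment conditions for optimal weights:

\[\begin{equation*}
\overline{m}(d) = \left[\begin{matrix}
1/N\sum_{i=1}^N (y_i - d) \cr
1/N\sum_{i=1}^N \left\{(y_i - d)^2 - 2d\right\}
\end{matrix}\right], \qquad
\widehat{d} = \arg\min_{d}\ \overline{m}(d)'\,{\bf W}\,\overline{m}(d)
\end{equation*}\]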

We begin by drawing a sample of size 500 and using gmm to estimate the parameters using sample moment condition (1), which we illustrate is the same as the sample average.

. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate double y = rchi2(1)

. gmm (y - {d})  , instruments( ) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .82949186  
Iteration 1:   GMM criterion Q(b) =  1.262e-32  
Iteration 2:   GMM criterion Q(b) =  9.545e-35  

note: model is exactly identified

GMM estimation 

Number of parameters =   1
Number of moments    =   1
Initial weight matrix: Unadjusted                 Number of obs   =        500

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          /d |   .9107644   .0548098    16.62   0.000     .8033392     1.01819
------------------------------------------------------------------------------
Instruments for equation 1: _cons

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
             |       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------

The sample moment condition is the product of an observation-level error function that is specified inside the parentheses and an instrument, which is a vector of ones in this case. The parameter \(d\) is enclosed in curly braces {}. We specify the onestep option because the number of parameters is the same as the number of moment conditions, which is to say that the estimator is exactly identified. When it is, each sample moment condition can be solved exactly, and there are no efficiency gains in optimally weighting the moment conditions.

We now illustrate that we could use the sample moment condition obtained from the variance to estimate \(d\).

. gmm ((y-{d})^2 - 2*{d})  , instruments( ) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  5.4361161  
Iteration 1:   GMM criterion Q(b) =  .02909692  
Iteration 2:   GMM criterion Q(b) =  .00004009  
Iteration 3:   GMM criterion Q(b) =  5.714e-11  
Iteration 4:   GMM criterion Q(b) =  1.172e-22  

note: model is exactly identified

GMM estimation 

Number of parameters =   1
Number of moments    =   1
Initial weight matrix: Unadjusted                 Number of obs   =        500

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          /d |   .7620814   .1156756     6.59   0.000     .5353613    .9888015
------------------------------------------------------------------------------
Instruments for equation 1: _cons

While we cannot say anything definitive from only one draw, we note that this estimate is further from the truth and that the standard error is much larger than those based on the sample average.

Now, we use gmm to estimate the parameters using uniform weights.

. matrix I = I(2)

. gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =   6.265608  
Iteration 1:   GMM criterion Q(b) =  .05343812  
Iteration 2:   GMM criterion Q(b) =  .01852592  
Iteration 3:   GMM criterion Q(b) =   .0185221  
Iteration 4:   GMM criterion Q(b) =   .0185221  

GMM estimation 

Number of parameters =   1
Number of moments    =   2
Initial weight matrix: user                       Number of obs   =        500

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          /d |   .7864099   .1050692     7.48   0.000     .5804781    .9923418
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons

The first set of parentheses specifies the first sample moment condition, and the second set of parentheses specifies the second sample moment condition. The options winitial(I) and onestep specify uniform weights.

Finally, we use gmm to estimate the parameters using two-step optimal weights. The weights are calculated using first-step consistent estimates.

. gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I)

Step 1
Iteration 0:   GMM criterion Q(b) =   6.265608  
Iteration 1:   GMM criterion Q(b) =  .05343812  
Iteration 2:   GMM criterion Q(b) =  .01852592  
Iteration 3:   GMM criterion Q(b) =   .0185221  
Iteration 4:   GMM criterion Q(b) =   .0185221  

Step 2
Iteration 0:   GMM criterion Q(b) =  .02888076  
Iteration 1:   GMM criterion Q(b) =  .00547223  
Iteration 2:   GMM criterion Q(b) =  .00546176  
Iteration 3:   GMM criterion Q(b) =  .00546175  

GMM estimation 

Number of parameters =   1
Number of moments    =   2
Initial weight matrix: user                       Number of obs   =        500
GMM weight matrix:     Robust

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          /d |   .9566219   .0493218    19.40   0.000     .8599529    1.053291
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons

All four estimators are consistent. Below we run a Monte Carlo simulation to see their relative efficiencies. We are most interested in the efficiency gains afforded by optimal GMM. We include the sample average, the sample variance, and the ML estimator discussed in Efficiency comparisons by Monte Carlo simulation. Theory tells us that the optimally weighted GMM estimator should be more efficient than the sample average but less efficient than the ML estimator.

The code below for the Monte Carlo builds on Efficiency comparisons by Monte Carlo simulation, Maximum likelihood estimation by mlexp: A chi-squared example, and Monte Carlo simulations using Stata. Click gmmchi2sim.do to download this code.

. clear all
. set seed 12345
. matrix I = I(2)
. postfile sim  d_a d_v d_ml d_gmm d_gmme using efcomp, replace
. forvalues i = 1/2000 {
  2.     quietly drop _all
  3.     quietly set obs 500
  4.     quietly generate double y = rchi2(1)
  5. 
.     quietly mean y 
  6.     local d_a         =  _b[y]
  7. 
.     quietly gmm ( (y-{d=`d_a'})^2 - 2*{d}) , instruments( )  ///
>       winitial(unadjusted) onestep conv_maxiter(200) 
  8.     if e(converged)==1 {
  9.             local d_v = _b[d:_cons]
 10.     }
 11.     else {
 12.             local d_v = .
 13.     }
 14. 
.     quietly mlexp (ln(chi2den({d=`d_a'},y)))
 15.     if e(converged)==1 {
 16.             local d_ml  =  _b[d:_cons]
 17.     }
 18.     else {
 19.             local d_ml  = .
 20.     }
 21. 
.     quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( )  ///
>         winitial(I) onestep conv_maxiter(200) 
 22.     if e(converged)==1 {
 23.             local d_gmm = _b[d:_cons]
 24.     }
 25.     else {
 26.             local d_gmm = .
 27.     }
 28. 
.     quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( )  ///
>        winitial(unadjusted, independent) conv_maxiter(200) 
 29.     if e(converged)==1 {
 30.             local d_gmme = _b[d:_cons]
 31.     }
 32.     else {
 33.             local d_gmme = .
 34.     }
 35. 
.     post sim (`d_a') (`d_v') (`d_ml') (`d_gmm') (`d_gmme') 
 36. 
. }
. postclose sim
. use efcomp, clear 
. summarize

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
         d_a |      2,000     1.00017    .0625367   .7792076    1.22256
         d_v |      1,996    1.003621    .1732559   .5623049   2.281469
        d_ml |      2,000    1.002876    .0395273   .8701175   1.120148
       d_gmm |      2,000    .9984172    .1415176   .5947328   1.589704
      d_gmme |      2,000    1.006765    .0540633   .8224731   1.188156

The simulation results indicate that the ML estimator is the most efficient (d_ml, std. dev. 0.0395), followed by the efficient GMM estimator (d_gmme, std. dev. 0.0541), followed by the sample average (d_a, std. dev. 0.0625), followed by the uniformly weighted GMM estimator (d_gmm, std. dev. 0.1415), and finally followed by the sample-variance moment condition (d_v, std. dev. 0.1732).

The estimator based on the sample-variance moment condition does not converge for 4 of 2,000 draws; this is why there are only 1,996 observations on d_v when there are 2,000 observations for the other estimators. These convergence failures occurred even though we used the sample average as the starting value of the nonlinear solver.

For a better idea about the distributions of these estimators, we graph the densities of their estimates.

Figure 1: Densities of the estimators

The density plots illustrate the efficiency ranking that we found from the standard deviations of the estimates.

The uniformly weighted GMM estimator is less efficient than the sample average because it places the same weight on the sample average as on the much less efficient estimator based on the sample variance.

In each of the overidentified cases, the GMM estimator uses a weighted average of two sample moment conditions to estimate the mean. The first sample moment condition is the sample average. The second moment condition is the sample variance. As the Monte Carlo results showed, the sample variance provides a much less efficient estimator for the mean than the sample average.

The GMM estimator that places equal weights on the efficient and the inefficient estimator is much less efficient than a GMM estimator that places much less weight on the less efficient estimator.

We display the weight matrix from our optimal GMM estimator to see how the sample moments were weighted.

. quietly gmm ( y - {d}) ( (y-{d})^2 - 2*{d})  , instruments( ) winitial(I)

. matlist e(W), border(rows)

-------------------------------------
             | 1         | 2         
             |     _cons |     _cons 
-------------+-----------+-----------
1            |           |           
       _cons |  1.621476 |           
-------------+-----------+-----------
2            |           |           
       _cons | -.2610053 |  .0707775 
-------------------------------------

The diagonal elements show that the sample-mean moment condition receives more weight than the less efficient sample-variance moment condition.

Done and undone

We used a simple example to illustrate how GMM exploits having more equations than parameters to obtain a more efficient estimator. We also illustrated that optimally weighting the different moments provides important efficiency gains over an estimator that uniformly weights the moment conditions.

Our cursory introduction to GMM is best supplemented with a more formal treatment like the one in Cameron and Trivedi (2005) or Wooldridge (2010).

Graph code appendix

use efcomp
local N = _N
kdensity d_a,     n(`N') generate(x_a    den_a)    nograph
kdensity d_v,     n(`N') generate(x_v    den_v)    nograph
kdensity d_ml,    n(`N') generate(x_ml   den_ml)   nograph
kdensity d_gmm,   n(`N') generate(x_gmm  den_gmm)  nograph
kdensity d_gmme,  n(`N') generate(x_gmme den_gmme) nograph
twoway (line den_a x_a,       lpattern(solid))        ///
       (line den_v x_v,       lpattern(dash))         ///
       (line den_ml x_ml,     lpattern(dot))          ///
       (line den_gmm x_gmm,   lpattern(dash_dot))     ///
       (line den_gmme x_gmme, lpattern(shortdash))

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Wooldridge, J. M. 2001. Applications of generalized method of moments estimation. Journal of Economic Perspectives 15(4): 87-100.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

xtabond Cheat Sheet

Random-effects and fixed-effects panel-data models do not allow me to use observable information from previous periods in my model. They are static. Dynamic panel-data models use both current and past information. For instance, I may model current health outcomes as a function of past health outcomes (a sensible modeling assumption) and of past observable and unobservable characteristics.

Today I will provide information that will help you interpret the estimation and postestimation results from Stata’s Arellano–Bond estimator xtabond, the most common linear dynamic panel-data estimator.

The instruments and the regressors

We have fictional data for 1,000 people from 1991 to 2000. The outcome of interest is income (income), and the explanatory variables are years of schooling (educ) and an indicator for marital status (married). Below, we fit an Arellano–Bond model using xtabond.
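Before calling xtabond, the dataset must be declared to be panel data. A minimal sketch, not shown in the original output but using the id and year variables reported in the xtabond header below, is

* declare the panel structure (id and year as in the output header)
xtset id year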

. xtabond income married educ, vce(robust)

Arellano-Bond dynamic panel-data estimation     Number of obs     =      8,000
Group variable: id                              Number of groups  =      1,000
Time variable: year
                                                Obs per group:
                                                              min =          8
                                                              avg =          8
                                                              max =          8

Number of instruments =     39                  Wald chi2(3)      =    3113.63
                                                Prob > chi2       =     0.0000
One-step results
                                     (Std. Err. adjusted for clustering on id)
------------------------------------------------------------------------------
             |               Robust
      income |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      income |
         L1. |   .2008311   .0036375    55.21   0.000     .1937018    .2079604
             |
     married |   1.057667   .1006091    10.51   0.000     .8604764    1.254857
        educ |    .057551   .0045863    12.55   0.000     .0485619      .06654
       _cons |   .2645702   .0805474     3.28   0.001     .1067002    .4224403
------------------------------------------------------------------------------
Instruments for differenced equation
        GMM-type: L(2/.).income
        Standard: D.married D.educ
Instruments for level equation
        Standard: _cons

A couple of elements in the output table are different from what one would expect. The output includes a coefficient for the lagged value of the dependent variable that we did not specify in the command. Why?

In the Arellano–Bond framework, the value of the dependent variable in the previous period is a predictor for the current value of the dependent variable. Stata includes the value of the dependent variable in the previous period for us. Another noteworthy aspect that appears in the table is the mention of 39 instruments in the header. This is followed by a footnote that refers to GMM and standard-type instruments. Here a bit of math will help us understand what is going on.

The relationship of interest is given by

\[\begin{equation*}
y_{it} = x_{it}'\beta_1 + y_{i(t-1)}\beta_2 + \alpha_i + \varepsilon_{it}
\end{equation*}\]

In the equation above, \(y_{it}\) is the outcome of interest for individual \(i\) at time \(t\), \(x_{it}\) are a set of regressors that may include past values, \(y_{i(t-1)}\) is the value of the outcome in the previous period, \(\alpha_i\) is a time-invariant unobservable, and \(\varepsilon_{it}\) is a time-varying unobservable.

As in the fixed-effects framework, we assume the time-invariant unobserved component is related to the regressors. When unobservables and observables are correlated, we have an endogeneity problem that yields inconsistent parameter estimates if we use a conventional linear panel-data estimator. One solution is taking first-differences of the relationship of interest to remove \(\alpha_i\). However, first-differencing alone does not solve the problem. Why?

\[\begin{eqnarray*}
\Delta y_{it} &=& \Delta x_{it}'\beta_1 + \Delta y_{i(t-1)}\beta_2 + \Delta \varepsilon_{it} \\
E\left( \Delta y_{i(t-1)} \Delta \varepsilon_{it} \right) &\neq & 0
\end{eqnarray*}\]

In the first equation above, we got rid of \(\alpha_i\), which is correlated with our regressors, but we generated a new endogeneity problem. The second equation above illustrates that one of our regressors, \(\Delta y_{i(t-1)}\), is related to our unobservables. The solution is instrumental variables. Which instrumental variables? Arellano and Bond suggest using the second lag of the dependent variable and all the feasible lags thereafter. This generates the set of moment conditions defined by

\[\begin{eqnarray*}
E\left( \Delta y_{i(t-2)} \Delta \varepsilon_{it} \right) &=& 0 \\
E\left( \Delta y_{i(t-3)} \Delta \varepsilon_{it} \right) &=& 0 \\
\ldots & & \\
E\left( \Delta y_{i(t-j)} \Delta \varepsilon_{it} \right) &=& 0
\end{eqnarray*}\]

In our example, we have 10 time periods, which yield the following set of instruments:

\[\begin{eqnarray*}
t&=10& \quad y_{t-8}, y_{t-7}, y_{t-6}, y_{t-5}, y_{t-4}, y_{t-3}, y_{t-2}, y_{t-1} \\
t&=9& \quad y_{t-7}, y_{t-6}, y_{t-5}, y_{t-4}, y_{t-3}, y_{t-2}, y_{t-1} \\
t&=8& \quad y_{t-6}, y_{t-5}, y_{t-4}, y_{t-3}, y_{t-2}, y_{t-1} \\
t& = 7& \quad y_{t-5}, y_{t-4}, y_{t-3}, y_{t-2}, y_{t-1} \\
t&= 6& \quad y_{t-4}, y_{t-3}, y_{t-2}, y_{t-1} \\
t&= 5& \quad y_{t-3}, y_{t-2}, y_{t-1} \\
t&= 4& \quad y_{t-2}, y_{t-1} \\
t&=3& \quad y_{t-1}
\end{eqnarray*}\]

This gives us \(8+7+\cdots+1=36\) instruments, which are what the table calls GMM-type instruments. GMM has been explored in the blog post Estimating parameters by maximum likelihood and method of moments using mlexp and gmm, and we will talk about it in a later post. The other three instruments are given by the first differences of the regressors educ and married and by the constant. This is no different from two-stage least squares, where we include the exogenous variables as part of our instrument list.

Testing for serial correlation

The key condition for the Arellano–Bond instrument set to work is that

\[\begin{equation}
E\left( \Delta y_{i(t-j)} \Delta \varepsilon_{it} \right) = 0 \quad j \geq 2
\end{equation}\]

We can test these conditions in Stata by using estat abond. In essence, the differenced time-varying unobserved component \(\Delta \varepsilon_{it}\) should be unrelated to the second lag of the dependent variable and the lags thereafter. If this is not the case, we are back to the initial problem: endogeneity. Again, a bit of math will help us understand what is going on.

All is well if

\[\begin{equation}
\Delta \varepsilon_{it} = \Delta \nu_{it}
\end{equation}\]

Here \(\nu_{it}\) is not serially correlated, so the differenced error \(\Delta \varepsilon_{it}\) is serially correlated of order 1 but not of order 2 or beyond.

But we are in trouble if

\[\begin{equation}
\Delta \varepsilon_{it} = \Delta \nu_{it} + \Delta \nu_{i(t-1)}
\end{equation}\]

In this case, the second lag of the dependent variable is correlated with the differenced time-varying component \(\Delta \varepsilon_{it}\), so it is no longer a valid instrument. Another way of saying this is that the differenced time-varying unobserved component is serially correlated of an order greater than 1.

estat abond provides a test for the serial correlation structure. For the example above,

. estat abond

Arellano-Bond test for zero autocorrelation in first-differenced errors
  +-----------------------+
  |Order |  z     Prob > z|
  |------+----------------|
  |   1  |-22.975  0.0000 |
  |   2  |-.36763  0.7132 |
  +-----------------------+
   H0: no autocorrelation 

We reject the null of no autocorrelation of order 1 and cannot reject the null of no autocorrelation of order 2. This is consistent with the Arellano–Bond model assumptions. If this were not the case, we would have to look for different instruments; essentially, we would have to fit a different dynamic model. This is what the xtdpd command allows us to do, but it is beyond the scope of this post.

Parting words

Dynamic panel-data models provide a useful research framework. In this post, I touched on the interpretation of a couple of estimation and postestimation results from xtabond that will help you understand your output.

Using mlexp to estimate endogenous treatment effects in a probit model

I use features new to Stata 14.1 to estimate an average treatment effect (ATE) for a probit model with an endogenous treatment. In 14.1, we added new prediction statistics after mlexp that margins can use to estimate an ATE.

I am building on a previous post in which I demonstrated how to use mlexp to estimate the parameters of a probit model with sample selection. The results below match those obtained with biprobit; see [R] biprobit for more details. In a future post, I will use these techniques to estimate treatment-effect parameters not yet available from another Stata command.

Probit model with treatment

In this section, I describe the potential-outcome framework used to define an ATE. For each treatment level, there is an outcome that we would observe if a person were to select that treatment level. When the outcome is binary and there are two treatment levels, we can specify how the potential outcomes \(y_{0i}\) and \(y_{1i}\) are generated from the regressors \({\bf x}_i\) and the error terms \(\epsilon_{0i}\) and \(\epsilon_{1i}\):

\[\begin{eqnarray*}
y_{0i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_0 + \epsilon_{0i} > 0) \cr
y_{1i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_1 + \epsilon_{1i} > 0)
\end{eqnarray*}\]

(Assuming that each error is standard normal, this gives us a bivariate probit model.) The indicator function \({\bf 1}(\cdot)\) outputs 1 when its input is true and 0 otherwise.

The probit model for potential outcomes \(y_{0i}\) and \(y_{1i}\) with treatment \(t_i\) assumes that we observe the outcome

\[\begin{equation}
y_i = (1-t_i) y_{0i} + t_i y_{1i}
\nonumber
\end{equation}\]

So we observe \(y_{1i}\) under the treatment (\(t_{i}=1\)) and \(y_{0i}\) when the treatment is withheld (\(t_{i}=0\)).

The treatment \(t_i\) is determined by regressors \({\bf z}_i\) and standard normal error \(u_i\):

\[\begin{equation}
t_i = {\bf 1}({\bf z}_i{\boldsymbol \gamma} + u_i > 0)
\nonumber
\end{equation}\]

Probit model with endogenous treatment

We could estimate the parameters \({\boldsymbol \beta}_0\) and \({\boldsymbol \beta}_1\) using a probit regression on \(y_i\) if \(t_i\) were not related to the unobserved errors \(\epsilon_{0i}\) and \(\epsilon_{1i}\). This may not always be the case. Suppose we modeled whether parents send their children to private school and used private tutoring for the child as a treatment. Unobserved factors that influence private school enrollment may be correlated with the unobserved factors that influence whether private tutoring is given. The treatment would then be correlated with the unobserved errors of the outcome.

We can treat \(t_i\) as endogenous by allowing \(\epsilon_{0i}\) and \(\epsilon_{1i}\) to be correlated with \(u_i\). In this post, we will assume that these correlations are the same. Formally, \(\epsilon_{0i}\), \(\epsilon_{1i}\), and \(u_i\) are trivariate normal with covariance:

\[\begin{equation}
\left[\begin{matrix}
1 & \rho_{01} & \rho_{t} \cr
\rho_{01} & 1 & \rho_{t} \cr
\rho_{t} & \rho_{t} & 1
\end{matrix}\right]
\nonumber
\end{equation}\]

The correlation \(\rho_{01}\) cannot be identified because we never observe both \(y_{0i}\) and \(y_{1i}\). However, identification of \(\rho_{01}\) is not necessary to estimate the other parameters, because we will observe the covariates and outcome in observations from each treatment group.

The log-likelihood for observation \(i\) is

\[\begin{eqnarray*}
\ln L_i = & & {\bf 1}(y_i =1 \mbox{ and } t_i = 1) \ln \Phi_2({\bf x}_i{\boldsymbol \beta}_1, {\bf z}_i{\boldsymbol \gamma},\rho_t) + \cr
& & {\bf 1}(y_i=0 \mbox{ and } t_i=1)\ln \Phi_2(-{\bf x}_i{\boldsymbol \beta}_1, {\bf z}_i{\boldsymbol \gamma},-\rho_t) + \cr
& & {\bf 1}(y_i=1 \mbox{ and } t_i=0) \ln \Phi_2({\bf x}_i{\boldsymbol \beta}_0, -{\bf z}_i{\boldsymbol \gamma},-\rho_t) + \cr
& & {\bf 1}(y_i=0 \mbox{ and } t_i = 0)\ln \Phi_2(-{\bf x}_i{\boldsymbol \beta}_0, -{\bf z}_i{\boldsymbol \gamma},\rho_t)
\end{eqnarray*}\]

where \(\Phi_2\) is the bivariate normal cumulative distribution function.

This model is a variation of the bivariate probit model. For a good introduction to the bivariate probit model, see Pindyck and Rubinfeld (1998).

The data

We will simulate data from a probit model with an endogenous treatment and then estimate the parameters of the model using mlexp. Then, we will use margins to estimate the ATE. We simulate a random sample of 10,000 observations.

. set seed 3211

. set obs 10000
number of observations (_N) was 0, now 10,000

. gen x = rnormal() + 4

. gen b = rpoisson(1)

. gen z = rnormal()

First, we generate the regressors. The variable \(x\) has a normal distribution with a mean of 4 and variance of 1. It is used as a regressor for the outcome and treatment. The variable \(b\) has a Poisson distribution with a mean of 1 and will be used as a treatment regressor. A standard normal variable \(z\) is also used as a treatment regressor.

. matrix cm = (1, .3,.7 \ .3, 1, .7 \ .7, .7, 1)

. drawnorm ey0 ey1 et, corr(cm)

. gen t = .5*x - .1*b + .4*z - 2.4 + et > 0

. gen y0 = .6*x - .8 + ey0 > 0

. gen y1 = .3*x - 1.2 + ey1 > 0

. gen y = (1-t)*y0 + t*y1

Next, we draw the unobserved errors using the drawnorm command. Each potential-outcome error has correlation \(.7\) with the treatment error, and the two potential-outcome errors have correlation \(.3\) with each other, as specified in cm. Finally, the outcome and treatment indicators are created.
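As a quick check that is not part of the original log, we could verify that the simulated errors have approximately the correlation structure specified in cm:

// sanity check: empirical correlations should be close to the entries of cm
correlate ey0 ey1 et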

Estimating the Model Parameters

Now, we will use mlexp to estimate the parameters of the probit model with an endogenous treatment. As in the previous post, we use the cond() function to calculate different values of the likelihood based on the different values of \(y\) and \(t\). We use the factor variable operator ibn on \(t\) in equation y to allow for a different intercept at each level of \(t\). An interaction between \(t\) and \(x\) is also specified in equation y. This allows for a different coefficient on \(x\) at each level of \(t\). We also specify vce(robust) so that we can use vce(unconditional) when we use margins later.

. mlexp (ln(cond(t,cond(y,binormal({y: i.t#c.x ibn.t},            ///
>                                  {t: x b z _cons}, {rho}),      /// 
>                         binormal(-{y:},{t:}, -{rho})),          ///
>                  cond(y,binormal({y:},-{t:},-{rho}),            ///
>                         binormal(-{y:},-{t:},{rho})))))         ///
>         , vce(robust)

initial:       log pseudolikelihood = -13862.944
alternative:   log pseudolikelihood = -15511.071
rescale:       log pseudolikelihood = -13818.369
rescale eq:    log pseudolikelihood = -10510.488
Iteration 0:   log pseudolikelihood = -10510.488  (not concave)
Iteration 1:   log pseudolikelihood = -10004.946  
Iteration 2:   log pseudolikelihood = -9487.4032  
Iteration 3:   log pseudolikelihood = -9286.0118  
Iteration 4:   log pseudolikelihood =  -9183.901  
Iteration 5:   log pseudolikelihood = -9181.9207  
Iteration 6:   log pseudolikelihood = -9172.0256  
Iteration 7:   log pseudolikelihood = -9170.8198  
Iteration 8:   log pseudolikelihood = -9170.7994  
Iteration 9:   log pseudolikelihood = -9170.7994  

Maximum likelihood estimation

Log pseudolikelihood = -9170.7994               Number of obs     =     10,000

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
y            |
       t#c.x |
          0  |   .5829362   .0223326    26.10   0.000     .5391651    .6267073
          1  |   .2745585   .0259477    10.58   0.000     .2237021     .325415
             |
           t |
          0  |  -.7423227   .0788659    -9.41   0.000     -.896897   -.5877483
          1  |  -1.088765   .1488922    -7.31   0.000    -1.380589   -.7969419
-------------+----------------------------------------------------------------
t            |
           x |   .4900691   .0148391    33.03   0.000     .4609851    .5191532
           b |  -.1086717   .0132481    -8.20   0.000    -.1346375   -.0827059
           z |   .4135792   .0150112    27.55   0.000     .3841579    .4430006
       _cons |  -2.354418   .0640056   -36.78   0.000    -2.479867   -2.228969
-------------+----------------------------------------------------------------
        /rho |   .7146737   .0377255    18.94   0.000     .6407331    .7886143
------------------------------------------------------------------------------

Our parameter estimates are close to their true values.

Estimating the ATE

The ATE of \(t\) is the expected value of the difference between \(y_{1i}\) and \(y_{0i}\), the average difference between the potential outcomes. Using the law of iterated expectations, we have

\[\begin{eqnarray*}
E(y_{1i}-y_{0i}) &=& E\{E(y_{1i}-y_{0i}|{\bf x}_i)\} \cr
&=& E\{\Phi({\bf x}_i{\boldsymbol \beta}_1)-
\Phi({\bf x}_i{\boldsymbol \beta}_0)\}
\end{eqnarray*}\]

This can be estimated as a predictive margin.

Now, we estimate the ATE using margins. We specify the normal probability expression in the expression() option. The xb() term refers to the linear prediction of the first equation, which we can now predict in Stata 14.1. We specify r.t so that margins will take the difference of the expression under \(t=1\) and \(t=0\). We specify vce(unconditional) to obtain standard errors for the population ATE rather than the sample ATE. The contrast(nowald) option is specified to omit the Wald test for the difference.

. margins r.t, expression(normal(xb())) vce(unconditional) contrast(nowald)

Contrasts of predictive margins

Expression   : normal(xb())

--------------------------------------------------------------
             |            Unconditional
             |   Contrast   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           t |
   (1 vs 0)  |  -.4112345   .0248909     -.4600197   -.3624493
--------------------------------------------------------------

We estimate that the ATE of \(t\) on \(y\) is \(-.41\). So taking the treatment decreases the probability of a positive outcome by \(.41\) on average over the population.

We will compare this estimate to the sample difference of \(y_{1}\) and \(y_{0}\).

. gen diff = y1 - y0

. sum diff

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
        diff |     10,000      -.4132    .5303715         -1          1

In our sample, the average difference of \(y_{1}\) and \(y_{0}\) is also \(-.41\).

Conclusion

I have demonstrated how to estimate the parameters of a model with a complex likelihood function, the probit model with an endogenous treatment, using mlexp; see [R] mlexp for more details. I have also demonstrated how to use margins to estimate the ATE for this model; see [R] margins for more details.

Reference

Pindyck, R. S., and D. L. Rubinfeld. 1998. Econometric Models and Economic Forecasts. 4th ed. New York: McGraw-Hill.

Fixed Effects or Random Effects: The Mundlak Approach

Today I will discuss Mundlak’s (1978) alternative to the Hausman test. Unlike the latter, the Mundlak approach may be used when the errors are heteroskedastic or have intragroup correlation.

What is going on?

Say I want to fit a linear panel-data model and need to decide whether to use a random-effects or fixed-effects estimator. My decision depends on how time-invariant unobservable variables are related to variables in my model. Here are two examples that may yield different answers:

  1. A panel dataset of individuals endowed with innate ability that does not change over time
  2. A panel dataset of countries where the time-invariant unobservables in our model are sets of country-specific geographic characteristics

In the first case, innate ability can affect observable characteristics such as the amount of schooling someone pursues. In the second case, geographic characteristics are probably not correlated with the variables in our model. Of course, these are conjectures, and we want a test to verify if unobservables are related to the variables in our model.

First, I will tell you how to compute the test; then, I will explain the theory and intuition behind it.


Computing the test

  1. Compute the panel-level average of your time-varying covariates.
  2. Use a random-effects estimator to regress your outcome on your covariates and the panel-level means generated in (1).
  3. Test that the coefficients on the panel-level means generated in (1) are jointly zero.

If you reject the null that the coefficients on the panel-level means are jointly zero, the test suggests that the time-invariant unobservables are correlated with your regressors; that is, the fixed-effects estimator is the appropriate choice. If you cannot reject the null, there is evidence of no correlation between the time-invariant unobservables and your regressors; that is, the random-effects estimator is appropriate.

Below I demonstrate the three-step procedure above using simulated data. The data satisfy the fixed-effects assumptions and have two time-varying covariates and one time-invariant covariate.
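The simulation code itself is not shown in this post. A do-file-style sketch along the following lines could generate data of this kind; the sample sizes, coefficients, and the strength of the correlation with the individual effect are illustrative assumptions, not the values behind the output below.

// hypothetical simulation: 500 individuals observed for 5 periods
clear
set seed 111
set obs 500
generate id = _n
generate alpha = rnormal()                 // time-invariant unobservable
generate x1 = runiform() > .5              // time-invariant covariate
expand 5                                   // 5 time periods per individual
bysort id: generate year = _n
xtset id year
generate x2 = rnormal() + .5*alpha         // time-varying, correlated with alpha
generate x3 = rnormal() + .5*alpha         // time-varying, correlated with alpha
generate y  = 1 + .5*x1 + x2 + x3 + alpha + rnormal()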

STEP 1

. bysort id: egen mean_x2 = mean(x2)

. bysort id: egen mean_x3 = mean(x3)

STEP 2

. quietly xtreg y x1 x2 x3 mean_x2 mean_x3, vce(robust) 

. estimates store mundlak

STEP 3

. test mean_x2 mean_x3

 ( 1)  mean_x2 = 0
 ( 2)  mean_x3 = 0

           chi2(  2) =    8.94
         Prob > chi2 =    0.0114

We reject the null hypothesis. This suggests that time-invariant unobservables are related to our regressors and that the fixed-effects model is appropriate. Note that I used a robust estimator of the variance–covariance matrix. I could not have done this with the classic Hausman test, which requires that one of the estimators be efficient under the null hypothesis.
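Having found evidence in favor of fixed effects, a natural follow-up (a sketch, not part of the original post) is to fit the fixed-effects model with cluster-robust standard errors. Note that the time-invariant covariate x1 is absorbed by the fixed effects and will be reported as omitted.

// follow-up sketch: fixed-effects fit with cluster-robust standard errors
xtreg y x1 x2 x3, fe vce(robust)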

Where all this came from

A linear panel-data model is given by

\[\begin{equation*}
y_{it} = x_{it}\beta + \alpha_i + \varepsilon_{it}
\end{equation*}\]

The index \(i\) denotes the individual and the index \(t\) time. \(y_{it}\) is the outcome of interest, \(x_{it}\) is the set of regressors, \(\varepsilon_{it}\) is the time-varying unobservable, and \(\alpha_i\) is the time-invariant unobservable.

The key to the Mundlak approach is to determine whether \(\alpha_i\) and \(x_{it}\) are correlated. We know how to think about this problem from our regression intuition: we can model the mean of \(\alpha_i\) conditional on the panel-level means of our regressors in the same way that we think of the mean of our outcome conditional on our covariates.

\[\begin{eqnarray*}
\alpha_i &=& \bar{x}_i\theta + \nu_i \\
E\left(\alpha_i|x_i\right) &=& \bar{x}_i\theta
\end{eqnarray*}\]

In the expression above, \(\bar{x}_i\) is the panel-level mean of \(x_{it}\), and \(\nu_i\) is a time-invariant unobservable that is uncorrelated to the regressors.

As in regression, if \(\theta = 0\), we know \(\alpha_i\) and the covariates are uncorrelated. This is what we test. The implied model is given by

\[\begin{eqnarray*}
y_{it} &=& x_{it}\beta + \alpha_i + \varepsilon_{it} \\
y_{it} &=& x_{it}\beta + \bar{x}_i\theta + \nu_i + \varepsilon_{it} \\
E\left(y_{it}|x_{it}\right) &=& x_{it}\beta + \bar{x}_i\theta
\end{eqnarray*}\]

The second equality replaces \(\alpha_i\) by \(\bar{x}_i\theta + \nu_i\). The third equality relies on the fact that the regressors and unobservables are mean independent. The test is given by

\[\begin{equation*}
H_{\text{o}}: \theta = 0
\end{equation*}\]

Reference

Mundlak, Y. 1978. On the pooling of time series and cross section data. Econometrica 46: 69–85.

Probit model with sample selection by mlexp

Overview

In a previous post, David Drukker demonstrated how to use mlexp to estimate the degree of freedom parameter in a chi-squared distribution by maximum likelihood (ML). In this post, I am going to use mlexp to estimate the parameters of a probit model with sample selection. I will illustrate how to specify a more complex likelihood in mlexp and provide intuition for the probit model with sample selection. Our results match the heckprobit command; see [R] heckprobit for more details.

Probit model

For binary outcome \(y_i\) and regressors \({\bf x}_i\), the probit model assumes

\[\begin{equation} \label{eq:outcome} y_i = {\bf 1}({\bf x}_i{\boldsymbol \beta} + \epsilon_i > 0) \tag{1} \end{equation}\]

where the error \(\epsilon_i\) is standard normal. The indicator function \({\bf1}(\cdot)\) outputs 1 when its input is true and outputs 0 otherwise.

The log likelihood of the probit model is

\[\begin{equation}
\ln L = \sum_{i=1}^{N} y_i \ln \Phi({\bf x}_i{\boldsymbol \beta}) + (1-y_i)\ln\{1-\Phi({\bf x}_i{\boldsymbol \beta})\} \nonumber
\end{equation}\]

where \(\Phi\) is the standard normal cumulative distribution function.

The probit model is widely used to model binary outcomes. But there are situations where it is not appropriate. Sometimes we observe a random sample where the outcome is missing on certain observations. If there is a relationship between the unobserved error of the outcome \(\epsilon_i\) and the unobserved error that affects whether the outcome is observed \(\epsilon_{si}\), then estimates made using the probit model will be inconsistent for \({\boldsymbol \beta}\). For instance, this could happen when we model job satisfaction and our sample includes employed and unemployed individuals. The unobserved factors that affect your job satisfaction may be correlated with factors that affect your employment status. Samples like this are said to suffer from “selection on unobservables”.

Probit model with sample selection

Van de Ven and Van Praag (1981) introduced the probit model with sample selection to allow for consistent estimation of \({\boldsymbol \beta}\) in samples that suffer from selection on unobservables. The equation for the outcome (1) remains the same, but we add another equation. The selection process for the outcome is modeled as

\[\begin{equation}
s_i = {\bf 1}({\bf z}_i{\boldsymbol \gamma} + \epsilon_{si} > 0) \nonumber
\end{equation}\]

where \(s_i=1\) if we observed \(y_i\) and \(s_i=0\) otherwise, and \({\bf z}_i\) are regressors that affect the selection process.

The errors \(\epsilon_i\) and \(\epsilon_{si}\) are assumed to be standard normal with

\[\begin{equation}
\mbox{corr}(\epsilon_i,\epsilon_{si}) = \rho \nonumber
\end{equation}\]

Let \(S\) be the set of observations where \(y_i\) is observed. The likelihood for the probit model with sample selection is

\[\begin{eqnarray*}
\ln L &=& \sum_{i\in S}^{} y_i\ln\Phi_2({\bf x}_i{\boldsymbol \beta}, {\bf z}_i{\boldsymbol \gamma},\rho) +
(1-y_i)\ln\Phi_2(-{\bf x}_i{\boldsymbol \beta}, {\bf z}_i{\boldsymbol \gamma},-\rho) + \cr
& & \sum_{i\not\in S}^{} \ln \{1- \Phi({\bf z}_i{\boldsymbol \gamma})\}
\end{eqnarray*}\]

where \(\Phi_2\) is the bivariate normal cumulative distribution function.

The data

We will simulate data from a probit model with sample selection and then estimate the parameters of the model using mlexp. We simulate a random sample of 7,000 observations.

. drop _all

. set seed 441

. set obs 7000
number of observations (_N) was 0, now 7,000

. generate x = .5*rchi2(2)

. generate z = rnormal()

. generate b = rbinomial(2,.5)

First, we generate the regressors. The variable \(x\) is a \(\chi^2\) variable with \(2\) degrees of freedom scaled by \(0.5\) and is used as a regressor for the outcome. A standard normal variable \(z\) is used as a selection regressor. The variable \(b\) has a binomial(\(2,0.5\)) distribution and will also be used as a selection regressor.

. matrix cm = (1,.7 \ .7,1)

. drawnorm ey es, corr(cm)

Next, we draw the unobserved errors. The outcome \(y\) and selection indicator \(s\) will be generated with errors that have correlation \(0.7\). We generate the errors with the drawnorm command.

. generate s = z + 1.3*0.b + 1.b + .5*2.b + es > 0

. generate y = .7*x + ey  + .5 > 0

. replace y = .  if !s
(1,750 real changes made, 1,750 to missing)

Finally, we generate the outcome and selection indicator. We specify the effect of \(b\) on selection by using factor-variable notation. Every value of \(b\) provides a different intercept for \(s\). We set the outcome to missing for observations where \(s\) is \(0\).

Effect of ignoring sample selection

First, we will use mlexp to estimate the probit model, ignoring the sample selection. We use the cond() function to calculate different values of the likelihood based on the value of \(y\). For cond(a,b,c), b is returned if a is true and c is returned otherwise. We use only the observations for which \(y\) is not missing by specifying \(y\) in the variables() option. The variables in the equation y are specified once, the first time the equation parameters are used in the likelihood. When the equation is used again, it is referred to as \(\{{\bf y}:\}\).

. mlexp (ln(cond(y,normal({y: x _cons}),1-normal({y:})))), variables(y)

initial:       log likelihood = -3639.0227
alternative:   log likelihood = -2342.8722
rescale:       log likelihood = -1746.0961
Iteration 0:   log likelihood = -1746.0961  
Iteration 1:   log likelihood = -1503.9519  
Iteration 2:   log likelihood = -1485.2935  
Iteration 3:   log likelihood = -1485.1677  
Iteration 4:   log likelihood = -1485.1677  

Maximum likelihood estimation

Log likelihood = -1485.1677                     Number of obs     =      5,250

------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
           x |    .813723   .0568938    14.30   0.000     .7022132    .9252328
       _cons |   .7623006   .0386929    19.70   0.000     .6864639    .8381372
------------------------------------------------------------------------------

Both parameters are overestimated, and the true values are not in the estimated confidence intervals.

Accounting for sample selection

Now, we use mlexp to estimate the probit model with sample selection. We use the cond() function twice, once for the selection indicator value and once for the outcome value. We no longer need to specify the variables() option because we will use each observation in the data. We use the factor-variable operator ibn in the selection equation so that a separate intercept is used in the equation for each level of \(b\).

. mlexp (ln(cond(s,cond(y,binormal({y: x _cons},{s: z ibn.b}, {rho}), binormal(
> -{y:},{s:}, -{rho})),1-normal({s:}))))

initial:       log likelihood =  -8491.053
alternative:   log likelihood =  -5898.851
rescale:       log likelihood =  -5898.851
rescale eq:    log likelihood = -5654.3504
Iteration 0:   log likelihood = -5654.3504  
Iteration 1:   log likelihood = -5473.5319  (not concave)
Iteration 2:   log likelihood = -4401.6027  (not concave)
Iteration 3:   log likelihood = -4340.7398  (not concave)
Iteration 4:   log likelihood = -4333.6402  (not concave)
Iteration 5:   log likelihood = -4326.1744  (not concave)
Iteration 6:   log likelihood = -4316.4936  (not concave)
Iteration 7:   log likelihood =  -4261.307  
Iteration 8:   log likelihood = -4154.7548  
Iteration 9:   log likelihood = -4142.7991  
Iteration 10:  log likelihood = -4141.7431  
Iteration 11:  log likelihood = -4141.7306  
Iteration 12:  log likelihood = -4141.7305  

Maximum likelihood estimation

Log likelihood = -4141.7305                     Number of obs     =      7,000

------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
y            |
           x |   .7643362   .0532342    14.36   0.000      .659999    .8686734
       _cons |   .5259657   .0406914    12.93   0.000      .446212    .6057195
-------------+----------------------------------------------------------------
s            |
           z |   1.028631   .0260977    39.41   0.000      .977481    1.079782
             |
           b |
          0  |   1.365497   .0440301    31.01   0.000       1.2792    1.451794
          1  |   1.034018   .0297178    34.79   0.000     .9757726    1.092264
          2  |    .530342   .0353022    15.02   0.000      .461151    .5995331
-------------+----------------------------------------------------------------
        /rho |   .6854869   .0417266    16.43   0.000     .6037043    .7672696
------------------------------------------------------------------------------

Our estimates of the coefficient on \(x\) and the constant intercept are closer to the true values. The confidence intervals also include the true values. The correlation \(\rho\) is estimated to be \(0.69\), and the true value of \(0.7\) is in the confidence interval. This model obviously works better.
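As a cross-check that is not shown in the original post, the same model can be fit with heckprobit, and the estimates should agree with the mlexp results up to optimization tolerance. The noconstant suboption reflects the assumption that the ibn.b indicators act as the intercepts of the selection equation; a do-file-style sketch:

// cross-check sketch: the built-in estimator for this model
heckprobit y x, select(s = z ibn.b, noconstant)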

Conclusion

I have demonstrated how to estimate the parameters of a model with a moderately complex likelihood function: the probit model with sample selection using mlexp. I also illustrated how to generate data from this model and how its results differ from the simple probit model.

See [R] mlexp for more details about mlexp. In a future post, we will show how to make predictions after mlexp and how to estimate population average parameters using mlexp and margins.

Reference

Van de Ven, W. P. M. M., and B. M. S. Van Praag. 1981. The demand for deductibles in private health insurance: A probit model with sample selection. Journal of Econometrics 17: 229–252.


Estimating parameters by maximum likelihood and method of moments using mlexp and gmm

\(\newcommand{\epsilonb}{\boldsymbol{\epsilon}}
\newcommand{\ebi}{\boldsymbol{\epsilon}_i}
\newcommand{\Sigmab}{\boldsymbol{\Sigma}}
\newcommand{\Omegab}{\boldsymbol{\Omega}}
\newcommand{\Lambdab}{\boldsymbol{\Lambda}}
\newcommand{\betab}{\boldsymbol{\beta}}
\newcommand{\gammab}{\boldsymbol{\gamma}}
\newcommand{\Gammab}{\boldsymbol{\Gamma}}
\newcommand{\deltab}{\boldsymbol{\delta}}
\newcommand{\xib}{\boldsymbol{\xi}}
\newcommand{\iotab}{\boldsymbol{\iota}}
\newcommand{\xb}{{\bf x}}
\newcommand{\xbit}{{\bf x}_{it}}
\newcommand{\xbi}{{\bf x}_{i}}
\newcommand{\zb}{{\bf z}}
\newcommand{\zbi}{{\bf z}_i}
\newcommand{\wb}{{\bf w}}
\newcommand{\yb}{{\bf y}}
\newcommand{\ub}{{\bf u}}
\newcommand{\Gb}{{\bf G}}
\newcommand{\Hb}{{\bf H}}
\newcommand{\thetab}{\boldsymbol{\theta}}
\newcommand{\XBI}{{\bf x}_{i1},\ldots,{\bf x}_{iT}}
\newcommand{\Sb}{{\bf S}} \newcommand{\Xb}{{\bf X}}
\newcommand{\Xtb}{\tilde{\bf X}}
\newcommand{\Wb}{{\bf W}}
\newcommand{\Ab}{{\bf A}}
\newcommand{\Bb}{{\bf B}}
\newcommand{\Zb}{{\bf Z}}
\newcommand{\Eb}{{\bf E}}\) This post was written jointly with Joerg Luedicke, Senior Social Scientist and Statistician, StataCorp.

Overview

We provide an introduction to parameter estimation by maximum likelihood and method of moments using mlexp and gmm, respectively (see [R] mlexp and [R] gmm). We include some background about these estimation techniques; see Pawitan (2001), Casella and Berger (2002), Cameron and Trivedi (2005), and Wooldridge (2010) for more details.

Maximum likelihood (ML) estimation finds the parameter values that make the observed data most probable. The parameters maximize the log of the likelihood function that specifies the probability of observing a particular set of data given a model.

Method of moments (MM) estimators specify population moment conditions and find the parameters that solve the equivalent sample moment conditions. MM estimators usually place fewer restrictions on the model than ML estimators, which implies that MM estimators are less efficient but more robust than ML estimators.

Using mlexp to estimate probit model parameters

A probit model for the binary dependent variable \(y\) conditional on covariates \(\xb\) with coefficients \(\betab\) is

\[\begin{equation}
y = \begin{cases}
1 & \mbox{ if } \xb\betab’ + \epsilon > 0\\
0 & \mbox{ otherwise }
\end{cases}
\end{equation}\]

where \(\epsilon\) has a standard normal distribution. The log-likelihood function for the probit model is

\[\begin{equation}\label{E:b1}
\ln\{L(\betab;\xb,y)\}= \sum_{i=1}^N y_i \ln\Phi(\xb_{i}\betab’)
+ (1-y_i) \ln\Phi(-\xb_{i}\betab’)
\end{equation}\]

where \(\Phi\) denotes the cumulative standard normal.

We now use mlexp to estimate the coefficients of a probit model. We have data on whether an individual belongs to a union (union), the individual’s age (age), and the highest grade completed (grade).

. webuse union
(NLS Women 14-24 in 1968)

. mlexp ( union*lnnormal({b1}*age + {b2}*grade + {b0})    ///
>         + (1-union)*lnnormal(-({b1}*age + {b2}*grade + {b0})) )

initial:       log likelihood = -18160.456
alternative:   log likelihood = -1524604.4
rescale:       log likelihood = -14097.135
rescale eq:    log likelihood =  -14063.38
Iteration 0:   log likelihood =  -14063.38  
Iteration 1:   log likelihood = -13796.715  
Iteration 2:   log likelihood = -13796.336  
Iteration 3:   log likelihood = -13796.336  

Maximum likelihood estimation

Log likelihood = -13796.336                     Number of obs     =     26,200

------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         /b1 |   .0051821   .0013471     3.85   0.000     .0025418    .0078224
         /b2 |   .0373899   .0035814    10.44   0.000     .0303706    .0444092
         /b0 |  -1.404697   .0587797   -23.90   0.000    -1.519903   -1.289491
------------------------------------------------------------------------------

Defining a linear combination of the covariates makes it easier to specify the model and to read the output:

. mlexp ( union*lnnormal({xb:age grade _cons}) + (1-union)*lnnormal(-{xb:}) )

initial:       log likelihood = -18160.456
alternative:   log likelihood = -14355.672
rescale:       log likelihood = -14220.454
Iteration 0:   log likelihood = -14220.454  
Iteration 1:   log likelihood = -13797.767  
Iteration 2:   log likelihood = -13796.336  
Iteration 3:   log likelihood = -13796.336  

Maximum likelihood estimation

Log likelihood = -13796.336                     Number of obs     =     26,200

------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         age |   .0051821   .0013471     3.85   0.000     .0025418    .0078224
       grade |   .0373899   .0035814    10.44   0.000     .0303706    .0444092
       _cons |  -1.404697   .0587797   -23.90   0.000    -1.519903   -1.289491
------------------------------------------------------------------------------
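Because this is an ordinary probit likelihood, a quick cross-check (not part of the original post) is to compare the mlexp estimates with those from the built-in probit command; the coefficients should match.

// cross-check: probit fits the same model by maximum likelihood
probit union age grade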

Using gmm to estimate parameters by MM

ML specifies a functional form for the distribution of \(y\) conditional on \(\xb\). Specifying \(\Eb[y|\xb]=\Phi(\xb\betab’)\) is less restrictive because it imposes structure only on the first conditional moment instead of on all the conditional moments. Under correct model specification, the ML estimator is more efficient than the MM estimator because it correctly specifies the conditional mean and all other conditional moments.

The model assumption \(\Eb[y|\xb]=\Phi(\xb\betab’)\) implies the moment conditions \(\Eb[\{y-\Phi(\xb\betab’)\}\xb] = {\bf 0}\). The sample moment equivalent is

\[\sum_{i=1}^N [\{y_i-\Phi(\xb_i\betab’)\}\xb_i] = {\bf 0}\]

In the gmm command below, we specify the residuals \(y_i-\Phi(\xb_i\betab’)\) inside the parentheses and the variables that multiply them, known as instruments, in the option instruments().

. gmm ( union - normal({xb:age grade _cons}) ), instruments(age grade) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .07831137  
Iteration 1:   GMM criterion Q(b) =  .00004813  
Iteration 2:   GMM criterion Q(b) =  5.333e-09  
Iteration 3:   GMM criterion Q(b) =  5.789e-17  

note: model is exactly identified

GMM estimation 

Number of parameters =   3
Number of moments    =   3
Initial weight matrix: Unadjusted                 Number of obs   =     26,200

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         age |   .0051436   .0013349     3.85   0.000     .0025272      .00776
       grade |   .0383185   .0038331    10.00   0.000     .0308058    .0458312
       _cons |  -1.415623   .0609043   -23.24   0.000    -1.534994   -1.296253
------------------------------------------------------------------------------
Instruments for equation 1: age grade _cons

The point estimates are similar to the ML estimates because both estimators are consistent.

Using gmm to estimate parameters by ML

When we maximize a log-likelihood function, we find the parameters that set the first derivative to 0. For example, setting the first derivative of the probit log-likelihood function with respect to \(\betab\) to 0 in the sample yields

\[\begin{equation}\label{E:b2}
\frac{\partial \ln\{L(\beta;\xb,y)\}}{\partial \betab} =
\sum_{i=1}^N \left\{y_i \frac{\phi(\xb_{i}\betab’)}{\Phi(\xb_{i}\betab’)}
- (1-y_i) \frac{\phi(-\xb_{i}\betab’)}{\Phi(-\xb_{i}\betab’)}\right\}
\xb_{i} = {\bf 0}
\end{equation}\]

Below, we use gmm to find the parameters that solve these sample moment conditions:

. gmm ( union*normalden({xb:age grade _cons})/normal({xb:})       ///
>         -(1-union)*normalden(-{xb:})/normal(-{xb:}) ),          ///
>         instruments(age grade) onestep

Step 1
Iteration 0:   GMM criterion Q(b) =  .19941827  
Iteration 1:   GMM criterion Q(b) =  .00012506  
Iteration 2:   GMM criterion Q(b) =  2.260e-09  
Iteration 3:   GMM criterion Q(b) =  7.369e-19  

note: model is exactly identified

GMM estimation 

Number of parameters =   3
Number of moments    =   3
Initial weight matrix: Unadjusted                 Number of obs   =     26,200

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         age |   .0051821    .001339     3.87   0.000     .0025577    .0078065
       grade |   .0373899   .0037435     9.99   0.000     .0300528     .044727
       _cons |  -1.404697   .0601135   -23.37   0.000    -1.522517   -1.286876
------------------------------------------------------------------------------
Instruments for equation 1: age grade _cons

The point estimates match those reported by mlexp. The standard errors differ because gmm reports robust standard errors.

Summary

We showed how to easily estimate the probit model parameters by ML and by MM using mlexp and gmm, respectively. We also showed that you can estimate these parameters using restrictions imposed by conditional distributions or using weaker conditional moment restrictions. Finally, we illustrated that the equations imposed by the conditional distributions can be viewed as sample moment restrictions.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and Applications. 1st ed. New York: Cambridge University Press.

Casella, G., and R. L. Berger. 2002. Statistical Inference. 2nd ed. Pacific Grove, CA: Duxbury.

Pawitan, Y. 2001. In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford: Oxford University Press.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, MA: MIT Press.