Programming an estimation command in Stata: An OLS command using Mata

I discuss a command that computes ordinary least-squares (OLS) results in Mata, paying special attention to the structure of Stata programs that use Mata work functions.

This command builds on several previous posts; at a minimum, you should be familiar with Programming an estimation command in Stata: A first ado-command using Mata and Programming an estimation command in Stata: Computing OLS objects in Mata.

An OLS command with Mata computations

The Stata command myregress11 computes the results in Mata. The syntax of the myregress11 command is

myregress11 depvar [indepvars] [if] [in] [, noconstant]

where indepvars can contain factor variables or time-series variables.

In the remainder of this post, I discuss the code for myregress11.ado. I recommend that you click on the file name to download the code. To avoid scrolling, view the code in the do-file editor, or your favorite text editor, to see the line numbers.

I do not discuss the formulas for the OLS objects. See Programming an estimation command in Stata: Computing OLS objects in Mata for the formulas and Mata implementations.

Code block 1: myregress11.ado

*! version 11.0.0  11Jan2016
program define myregress11, eclass sortpreserve
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, noCONStant ]
    marksample touse

    gettoken depvar indepvars : varlist
    _fv_check_depvar `depvar'

    fvexpand `indepvars' 
    local cnames `r(varlist)'

    tempname b V N rank df_r

    mata: mywork("`depvar'", "`cnames'", "`touse'", "`constant'", ///
       "`b'", "`V'", "`N'", "`rank'", "`df_r'") 

    if "`constant'" == "" {
    	local cnames `cnames' _cons
    }

    matrix colnames `b' = `cnames'
    matrix colnames `V' = `cnames'
    matrix rownames `V' = `cnames'

    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `rank'
    ereturn scalar df_r    = `df_r'
    ereturn local  cmd     "myregress11"

    ereturn display

end

mata:

void mywork( string scalar depvar,  string scalar indepvars, 
             string scalar touse,   string scalar constant,  
	     string scalar bname,   string scalar Vname,     
	     string scalar nname,   string scalar rname,     
	     string scalar dfrname) 
{

    real vector y, b, e, e2
    real matrix X, XpXi, V
    real scalar n, k

    y    = st_data(., depvar, touse)
    X    = st_data(., indepvars, touse)
    n    = rows(X)

    if (constant == "") {
        X    = X,J(n,1,1)
    }

    XpXi = quadcross(X, X)
    XpXi = invsym(XpXi)
    b    = XpXi*quadcross(X, y)
    e    = y - X*b
    e2   = e:^2
    k    = cols(X) - diag0cnt(XpXi)
    V    = (quadsum(e2)/(n-k))*XpXi

    st_matrix(bname, b')
    st_matrix(Vname, V)
    st_numscalar(nname, n)
    st_numscalar(rname, k)
    st_numscalar(dfrname, n-k)

}

end

Let’s break this 74-line program into familiar pieces to make it easier to understand. Lines 2–35 define the ado-command, and lines 37–74 define the Mata work function that is used by the ado-command. Although there are more details, I used this structure in mymean8.ado, which I discussed in Programming an estimation command in Stata: A first ado-command using Mata.

The ado-command has four parts.

  1. Lines 5–14 parse what the user typed, identify the sample, and create temporary names for the results returned by our Mata work function.
  2. Lines 16–17 call the Mata work function.
  3. Lines 19–31 post the results returned by the Mata work function to e().
  4. Line 33 displays the results.

The Mata function mywork() also has four parts.

  1. Lines 39–43 parse the arguments.
  2. Lines 46–48 declare vectors, matrices, and scalars that are local to mywork().
  3. Lines 54–64 compute the results.
  4. Lines 66–70 copy the computed results to Stata, using the names that were passed in arguments.

Now let’s discuss the ado-code in some detail. Line 5 uses the syntax command to put the names of the variables specified by the user into the local macro varlist, to parse an optional if restriction, to parse an optional in restriction, and to parse the noconstant option. The variables specified by the user must be numeric, may contain time-series variables, and may contain factor variables. For more detailed discussions of similar syntax usages, see Programming an estimation command in Stata: Allowing for sample restrictions and factor variables, Programming an estimation command in Stata: Allowing for options, and Programming an estimation command in Stata: Using a subroutine to parse a complex option.

Line 6 creates a temporary sample-identification variable and stores its name in the local macro touse. I discussed this usage in detail in the section surrounding code block 1 in Programming an estimation command in Stata: Allowing for sample restrictions and factor variables.

Line 8 uses gettoken to store the name of the dependent variable in the local macro depvar and the names of the independent variables in the local macro indepvars. Line 9 uses _fv_check_depvar to check that the name of the dependent variable is not a factor variable.

Line 11 uses fvexpand to expand the factor variables in indepvars. Line 12 puts the expanded names stored in r(varlist) by fvexpand in the local macro cnames. A single factor variable can imply more than one coefficient. fvexpand finds the canonical names for these coefficients and returns them in r(varlist). I have not used fvexpand until now because the Stata commands that I used to compute the results automatically created the coefficient names. Mata functions are designed for speed, so I must create the coefficient names when I use them.

Example 1 illustrates how one factor variable can imply more than one coefficient.

Example 1: fvexpand

. sysuse auto
(1978 Automobile Data)

. tabulate rep78

     Repair |
Record 1978 |      Freq.     Percent        Cum.
------------+-----------------------------------
          1 |          2        2.90        2.90
          2 |          8       11.59       14.49
          3 |         30       43.48       57.97
          4 |         18       26.09       84.06
          5 |         11       15.94      100.00
------------+-----------------------------------
      Total |         69      100.00

. fvexpand i.rep78

. return list

macros:
              r(fvops) : "true"
            r(varlist) : "1b.rep78 2.rep78 3.rep78 4.rep78 5.rep78"

. summarize 2.rep78

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
     2.rep78 |         69     .115942    .3225009          0          1

The tabulate results show that there are five levels in rep78. fvexpand finds the levels and creates a list of the names of the implied indicator variables 1b.rep78, 2.rep78, 3.rep78, 4.rep78, and 5.rep78. Comparing the results from summarize 2.rep78 and tabulate rep78 illustrates this notation. The b in 1b.rep78 identifies level 1 as the base category to be omitted when there is a constant in the model. Type help fvvarlist for more details.
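The logic that fvexpand applies is simple: each non-base level of a categorical variable becomes its own 0/1 indicator column. A rough sketch of that logic in Python (the function name and the `.x` coefficient naming are made up for illustration; this is not Stata's implementation):

```python
# Expand a categorical variable into indicator (dummy) columns,
# omitting the base level -- a sketch of what i.rep78 implies.
def expand_factor(values, base):
    levels = sorted(set(values))
    names = [f"{lv}.x" for lv in levels if lv != base]
    columns = {f"{lv}.x": [1 if v == lv else 0 for v in values]
               for lv in levels if lv != base}
    return names, columns

rep = [1, 1, 2, 2, 3, 3, 3, 4]
names, cols = expand_factor(rep, base=1)
print(names)                        # ['2.x', '3.x', '4.x']
print(sum(cols["2.x"]) / len(rep))  # share of level 2: 0.25
```

As with summarize 2.rep78 above, the mean of each indicator column is the sample share of that level.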

Line 14 creates the temporary names for the results. For example, it stores a safe, temporary name in the local macro b that can be used for the matrix storing the point estimates. I discussed this usage in the section Using temporary names for global objects in Programming an estimation command in Stata: A first ado-command using Mata.

Lines 16 and 17 call the Mata function mywork(), which uses the information contained in the local macros depvar, cnames, touse, and constant to compute the results that are returned in the Stata objects whose names are stored in the local macros b, V, N, rank, and df_r.

Line 20 appends _cons to the local macro cnames if the user did not specify the noconstant option.

Lines 23–25 put column names on the vector of point estimates and row and column names on the matrix containing the estimated variance-covariance of the estimator (VCE).

Lines 27–31 post the results to e().

Line 33 displays a standard Stata output table, using the results in e(b), e(V), and e(df_r).

Note that the local macro b created on line 14 contains a temporary name that is passed to mywork() on line 17 and that the Stata matrix whose name is contained in the local macro b is used on lines 23 and 27. mywork() puts the vector of point estimates in the Stata matrix whose name is stored in the local macro b. Also note that the local macro V created on line 14 contains a temporary name that is passed to mywork() on line 17 and that the Stata matrix whose name is contained in the local macro V is used on lines 24, 25, and 27. mywork() puts the estimated VCE in the Stata matrix whose name is stored in the local macro V.

To see how this works, let’s discuss the mywork() function in detail. Lines 39–43 declare that mywork() returns nothing (it is void) and that it accepts nine arguments, each of which is a string scalar. The first four arguments are inputs; depvar contains the name of the dependent variable, indepvars contains the names of the independent variables, touse contains the name of the sample-identification variable, and constant contains either noconstant or is empty. The values of these arguments are used on lines 50, 51, and 54 to create the vector y and the matrix X.

The last five arguments contain names used to write the results back to Stata. mywork() writes the results back to Stata using the passed-in temporary names. For example, line 17 shows that the Mata string scalar bname contains the temporary name stored in the local macro b. Line 66 copies the results stored in the transpose of the Mata vector b to a Stata matrix whose name is stored in the Mata string scalar bname. (Line 60 shows that the vector b contains the OLS point estimates.) Lines 23 and 27 then use this Stata vector whose name is stored in the local macro b. Similarly, line 17 shows that the Mata string scalar Vname contains the temporary name stored in the local macro V. Line 67 copies the results stored in the Mata matrix V to a Stata matrix whose name is stored in the Mata string scalar Vname. (Line 64 shows that V contains the estimated VCE.) Lines 24, 25, and 27 then use this Stata matrix whose name is stored in the local macro V. The arguments nname, rname, and dfrname are used analogously to return the results for the number of observations, the rank of the VCE, and the degrees of freedom of the residuals.

Lines 50–64 compute the point estimates and the VCE. Except for line 54, I discussed these computations in Programming an estimation command in Stata: A first ado-command using Mata. Line 54 causes a column of 1s to be joined to the covariate matrix X when the string scalar constant is empty. Lines 5 and 16 imply that the Mata string scalar constant contains noconstant when the user specifies the noconstant option and that it is empty otherwise.

Done and undone

I discussed the code for myregress11.ado, which uses Mata to compute OLS point estimates and a VCE that assumes independent and identically distributed observations. The structure of the code is the same as the one that I used in mymean7.ado and mymean8.ado, discussed in Programming an estimation command in Stata: A first ado-command using Mata, although there are more details in the OLS program.

Key to this structure is that the Mata work function accepts two types of arguments: the names of Stata objects that are inputs and temporary names that are used to write the results back to Stata from Mata.

In the next post, I extend myregress11.ado to allow for robust or cluster-robust estimators of the VCE.

probit or logit: ladies and gentlemen, pick your weapon

We often use probit and logit models to analyze binary outcomes. A case can be made that the logit model is easier to interpret than the probit model, but Stata’s margins command makes any estimator easy to interpret. Ultimately, estimates from both models produce similar results, and using one or the other is a matter of habit or preference.

I show that the estimates from a probit and logit model are similar for the computation of a set of effects that are of interest to researchers. I focus on the effects of changes in the covariates on the probability of a positive outcome for continuous and discrete covariates. I evaluate these effects on average and at the mean value of the covariates. In other words, I study the average marginal effects (AME), the average treatment effects (ATE), the marginal effects at the mean values of the covariates (MEM), and the treatment effects at the mean values of the covariates (TEM).

First, I present the results. Second, I discuss the code used for the simulations.

Results

In Table 1, I present the results of a simulation with 4,000 replications when the true data generating process (DGP) satisfies the assumptions of a probit model. I show the average of the AME and the ATE estimates and the 5% rejection rate of the true null hypothesis that arise after probit and logit estimation. I also provide an approximate true value of the AME and ATE. I obtain the approximate true values by computing the ATE and AME, at the true values of the coefficients, using a sample of 20 million observations. I will provide more details on the simulation in a later section.

Table 1: Average Marginal and Treatment Effects: True DGP Probit

Simulation Results for N = 10,000 and 4,000 Replications

Statistic             Approximate True Value    Probit     Logit
AME of x1                             -.1536    -.1537    -.1537
  5% Rejection Rate                                .050      .052
ATE of x2                              .1418     .1417     .1417
  5% Rejection Rate                                .050      .049

For the MEM and TEM, we have the following:

Table 2: Marginal and Treatment Effects at Mean Values: True DGP Probit

Simulation Results for N = 10,000 and 4,000 Replications

Statistic             Approximate True Value    Probit     Logit
MEM of x1                             -.1672    -.1673    -.1665
  5% Rejection Rate                                .056      .060
TEM of x2                              .1499     .1498     .1471
  5% Rejection Rate                                .053      .058

Both the probit and logit estimates are close to the true values and have rejection rates close to 5%. Fitting the parameters of our model using logit when the true DGP satisfies the assumptions of a probit model does not lead us astray.

If the true DGP satisfies the assumptions of the logit model, the conclusions are the same. I present the results in the next two tables.

Table 3: Average Marginal and Treatment Effects: True DGP Logit

Simulation Results for N = 10,000 and 4,000 Replications

Statistic             Approximate True Value    Probit     Logit
AME of x1                             -.1090    -.1088    -.1089
  5% Rejection Rate                                .052      .052
ATE of x2                              .1046     .1044     .1045
  5% Rejection Rate                                .053      .051

Table 4: Marginal and Treatment Effects at Mean Values: True DGP Logit

Simulation Results for N = 10,000 and 4,000 Replications

Statistic             Approximate True Value    Probit     Logit
MEM of x1                             -.1146    -.1138    -.1146
  5% Rejection Rate                                .050      .051
TEM of x2                              .1086     .1081     .1085
  5% Rejection Rate                                .058      .058

Why?

Maximum likelihood estimators find the parameters that maximize the likelihood of observing our data under the distributional assumptions that we make. The likelihood chosen is an approximation to the true likelihood, and it is a helpful approximation if the true likelihood and our approximating likelihood are close to each other. Viewing likelihood-based models as useful approximations, instead of as models of a true likelihood, is the basis of quasilikelihood theory. For more details, see White (1996) and Wooldridge (2010).

The probit model assumes that the unobservable random variable follows a standard normal distribution, and the logit model assumes that it follows a logistic distribution. The cumulative distribution functions (CDFs) of these two distributions are close to each other, especially around the mean. Therefore, estimators under these two sets of assumptions produce similar results. To illustrate these arguments, we can plot the two CDFs and their difference as follows:

Graph 1: Normal and Logistic CDFs and their Difference

The difference between the CDFs approaches zero as you get closer to the mean, from the right or from the left, and it is always smaller than .15.
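That bound is easy to verify numerically with only standard-library tools; the grid below is an arbitrary choice:

```python
import math

def normal_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

# Scan a grid and record the largest absolute difference.
grid = [i / 100.0 for i in range(-600, 601)]
max_diff = max(abs(normal_cdf(x) - logistic_cdf(x)) for x in grid)
print(round(max_diff, 3))                     # well below .15
print(abs(normal_cdf(0) - logistic_cdf(0)))   # both CDFs are .5 at the mean
```

At x = 0 the two CDFs coincide exactly at .5, and the largest gap, which occurs near |x| of roughly 1.3, stays comfortably below .15.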

Simulation design

Below is the code I used to generate the data for my simulations. In the first part, lines 4 to 12, I generate outcome variables that satisfy the assumptions of the probit model, y1, and the logit model, y2. In the second part, lines 13 to 16, I compute the marginal effects for the logit and probit models. I have a continuous and a discrete covariate. For the discrete covariate, the marginal effect is a treatment effect. In the third part, lines 17 to 25, I compute the marginal effects evaluated at the means. I will use these estimates later to compute approximations to the true values of the effects.

program define mkdata
        syntax, [n(integer 1000)]
        clear
        quietly set obs `n'
        // 1. Generating data from the probit (y1) and logit (y2) models
        generate x1    = rnormal()
        generate x2    = rbeta(2,4)>.5
        generate e1    = rnormal()
        generate u     = runiform()
        generate e2    = ln(u) -ln(1-u)
        generate xb    = .5*(1 -x1 + x2)
        generate y1    =  xb + e1 > 0
        generate y2    =  xb + e2 > 0
        // 2. Computing probit & logit marginal and treatment effects 
        generate m1    = normalden(xb)*(-.5)
        generate m2    = normal(1 -.5*x1 ) - normal(.5 -.5*x1)
        generate m1l   = exp(xb)*(-.5)/(1+exp(xb))^2
        generate m2l   = exp(1 -.5*x1)/(1+ exp(1 -.5*x1 )) - ///
                         exp(.5 -.5*x1)/(1+ exp(.5 -.5*x1 ))
        // 3. Computing probit & logit marginal and treatment effects at means
        quietly mean x1 x2 
        matrix A         = r(table)
        scalar a         = .5 -.5*A[1,1] + .5*A[1,2]
        scalar b1        =  1 -.5*A[1,1]
        scalar b0        = .5 -.5*A[1,1]
        generate mean1   = normalden(a)*(-.5)
        generate mean2   = normal(b1) - normal(b0)
        generate mean1l  = exp(a)*(-.5)/(1+exp(a))^2
        generate mean2l  = exp(b1)/(1+ exp(b1)) - exp(b0)/(1+ exp(b0))
end

I approximate the true marginal effects using a sample of 20 million observations. This is a reasonable strategy in this case. For example, take the average marginal effect for a continuous covariate, \(x_{k}\), in the case of the probit model:

\[\begin{equation*}
\frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k}
\end{equation*}\]

The expression above is an approximation to \(E\left(\phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k}\right)\). To obtain this expected value, we would need to integrate over the distribution of all the covariates. This is not practical and would limit my choice of covariates. Instead, I draw a sample of 20 million observations, compute \(\frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k}\), and take it to be the true value. I follow the same logic for the other marginal effects.
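The logic of that approximation can be sketched with a single standard-normal covariate, for which the expectation happens to have a closed form to compare against. The coefficients below are illustrative, not those used in the simulation:

```python
import math, random

# Approximate E[phi(b0 + b1*x) * b1], the probit AME for a continuous
# covariate, by a large-sample average -- the same logic as the
# 20-million-observation approximation in the text.
def phi(z):                       # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

b0, b1 = 0.5, -0.5                # illustrative coefficients
random.seed(12345)
N = 200_000
approx = sum(phi(b0 + b1 * random.gauss(0.0, 1.0)) * b1
             for _ in range(N)) / N

# With x ~ N(0,1), E[phi(b0 + b1*x)] = phi(b0/s)/s with s = sqrt(1 + b1^2),
# so here the approximation can be compared with an exact value.
s = math.sqrt(1.0 + b1 * b1)
exact = phi(b0 / s) / s * b1
print(approx, exact)              # the two agree closely
```

With 20 million draws, as in the text, the Monte Carlo error of the average is negligible relative to the reported digits.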

Below is the code I use to compute the approximate true marginal effects. I draw the 20 million observations, then I compute the averages that I am going to use in my simulation, and I create locals for each approximate true value.

. mkdata, n(20000000)

. local values "m1 m2 m1l m2l mean1 mean2 mean1l mean2l"

. local means  "mx1 mx2 mx1l mx2l meanx1 meanx2 meanx1l meanx2l"

. local n : word count `values'

. forvalues i= 1/`n' {
  2.         local a: word `i' of `values'
  3.         local b: word `i' of `means'
  4.         sum `a', meanonly
  5.         local `b' = r(mean)
  6. }

Now I am ready to run all the simulations that I used to produce the results in the previous section. The code that I used for the simulations for the ATE and the AME when the true DGP is a probit is given by

. postfile mprobit y1p y1p_r y1l y1l_r y2p y2p_r y2l y2l_r ///
>                 using simsprobit, replace 

. forvalues i=1/4000 {
  2.         quietly {
  3.                 mkdata, n(10000)
  4.                 probit y1 x1 i.x2, vce(robust)
  5.                 margins, dydx(*) post 
  6.                 local y1p = _b[x1]
  7.                 test _b[x1] = `mx1'
  8.                 local y1p_r   = (r(p)<.05) 
  9.                 local y2p = _b[1.x2]
 10.                 test _b[1.x2] = `mx2'
 11.                 local y2p_r   = (r(p)<.05) 
 12.                 logit  y1 x1 i.x2, vce(robust)
 13.                 margins, dydx(*) post 
 14.                 local y1l = _b[x1]
 15.                 test _b[x1] = `mx1'
 16.                 local y1l_r   = (r(p)<.05) 
 17.                 local y2l = _b[1.x2]
 18.                 test _b[1.x2] = `mx2'
 19.                 local y2l_r   = (r(p)<.05) 
 20.                 post mprobit (`y1p') (`y1p_r') (`y1l') (`y1l_r') ///
>                            (`y2p') (`y2p_r') (`y2l') (`y2l_r')
 21.         }
 22. }
. use simsprobit
. summarize

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
         y1p |      4,000   -.1536812    .0038952  -.1697037  -.1396532
       y1p_r |      4,000         .05    .2179722          0          1
         y1l |      4,000   -.1536778    .0039179  -.1692524  -.1396366
       y1l_r |      4,000      .05175    .2215496          0          1
         y2p |      4,000     .141708    .0097155   .1111133   .1800973
-------------+---------------------------------------------------------
       y2p_r |      4,000       .0495    .2169367          0          1
         y2l |      4,000    .1416983    .0097459   .1102069   .1789895
       y2l_r |      4,000        .049     .215895          0          1

For the results in the case of the MEM and the TEM when the true DGP is a probit, I use margins with the option atmeans. The other cases are similar. I use robust standard errors for all computations to account for the fact that my likelihood model is an approximation to the true likelihood, and I use the option vce(unconditional) to account for the fact that I am using two-step M-estimation. See Wooldridge (2010) for more details on two-step M-estimation.

You can download the code used to produce the results by clicking this link: pvsl.do

Concluding Remarks

I provided simulation evidence illustrating that the differences between estimated effects after probit and logit are negligible. The reason lies in the theory of quasilikelihood and, specifically, in the fact that the cumulative distribution functions of the probit and logit models are similar, especially around the mean.

References

White, H. 1996. Estimation, Inference, and Specification Analysis. Cambridge: Cambridge University Press.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Programming an estimation command in Stata: Computing OLS objects in Mata

\(\newcommand{\epsilonb}{\boldsymbol{\epsilon}}
\newcommand{\ebi}{\boldsymbol{\epsilon}_i}
\newcommand{\Sigmab}{\boldsymbol{\Sigma}}
\newcommand{\betab}{\boldsymbol{\beta}}
\newcommand{\eb}{{\bf e}}
\newcommand{\xb}{{\bf x}}
\newcommand{\xbit}{{\bf x}_{it}}
\newcommand{\xbi}{{\bf x}_{i}}
\newcommand{\zb}{{\bf z}}
\newcommand{\zbi}{{\bf z}_i}
\newcommand{\wb}{{\bf w}}
\newcommand{\yb}{{\bf y}}
\newcommand{\ub}{{\bf u}}
\newcommand{\Xb}{{\bf X}}
\newcommand{\Mb}{{\bf M}}
\newcommand{\Xtb}{\tilde{\bf X}}
\newcommand{\Wb}{{\bf W}}
\newcommand{\Vb}{{\bf V}}\)

I present the formulas for computing the ordinary least-squares (OLS) estimator and show how to compute them in Mata. This post is a Mata version of Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects. I discuss the formulas and the computation of independence-based standard errors, robust standard errors, and cluster-robust standard errors.

OLS formulas

Recall that the OLS point estimates are given by

\[
\widehat{\betab} =
\left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}
\left(
\sum_{i=1}^N \xb_i' y_i
\right)
\]

where \(\xb_i\) is the \(1\times k\) vector of independent variables, \(y_i\) is the dependent variable for each of the \(N\) sample observations, and the model for \(y_i\) is

\[
y_i = \xb_i\betab + \epsilon_i
\]

If the \(\epsilon_i\) are independently and identically distributed (IID), we estimate the variance-covariance matrix of the estimator (VCE) by

\[
\widehat{\Vb} = \widehat{s}
\left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}
\]

where \(\widehat{s} = 1/(N-k)\sum_{i=1}^N e_i^2\) and \(e_i=y_i-\xb_i\widehat{\betab}\). See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2015) for introductions to OLS.
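These two formulas can be checked on a dataset small enough to verify by hand. A pure-Python sketch with made-up data (one regressor plus a constant):

```python
# Compute the OLS point estimates and the IID VCE from the formulas
# above on a tiny made-up dataset: y regressed on x and a constant.
x = [1.0, 2.0, 3.0]
y = [1.0, 2.0, 2.0]
X = [[xi, 1.0] for xi in x]          # regressor matrix with constant
n, k = len(X), 2

# X'X and X'y
XpX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
Xpy = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]

# Invert the 2x2 matrix and form b = (X'X)^{-1} X'y
det = XpX[0][0] * XpX[1][1] - XpX[0][1] * XpX[1][0]
XpXi = [[XpX[1][1] / det, -XpX[0][1] / det],
        [-XpX[1][0] / det, XpX[0][0] / det]]
b = [sum(XpXi[i][j] * Xpy[j] for j in range(k)) for i in range(k)]

# Residuals and the IID VCE: s * (X'X)^{-1} with s = sum(e^2)/(n-k)
e = [yi - (r[0] * b[0] + r[1] * b[1]) for r, yi in zip(X, y)]
s = sum(ei * ei for ei in e) / (n - k)
V = [[s * XpXi[i][j] for j in range(k)] for i in range(k)]
print(b)        # slope 0.5, intercept 2/3
print(V[0][0])  # variance of the slope estimate: 1/12
```

Working through the same numbers by hand with the formulas above gives a slope of .5, an intercept of 2/3, and a slope variance of 1/12.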

Mata implementation

I compute the OLS point estimates in Mata in example 1.

Example 1: Computing OLS point estimates in Mata

. sysuse auto
(1978 Automobile Data)

. mata:
------------------------------------------------- mata (type end to exit) ------
: y    = st_data(., "price")

: X    = st_data(., "mpg trunk")

: n    = rows(X)

: X    = X,J(n,1,1)

: XpX  = quadcross(X, X)

: XpXi = invsym(XpX)

: b    = XpXi*quadcross(X, y)

: end
--------------------------------------------------------------------------------

I used st_data() to put a copy of the observations on price into the Mata vector y and to put a copy of the observations on mpg and trunk into the Mata matrix X. I used rows(X) to put the number of observations into n. After adding a column of ones onto X for the constant term, I used quadcross() to calculate \(\Xb'\Xb\) in quad precision. After using invsym() to calculate the inverse of the symmetric matrix XpX, which I stored in XpXi, I calculated the point estimates from the OLS formula.

In example 1, I computed the OLS point estimates after forming the cross products. As discussed in Lange (2010, chapter 7), I could compute more accurate estimates using a QR decomposition; type help mf_qrd for details about computing QR decompositions in Mata. By computing the cross products in quad precision, I obtained point estimates that are almost as accurate as those obtainable from a QR decomposition in double precision, but that is a topic for another post.
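For intuition, the QR route factors X = QR and solves R b = Q'y by back-substitution instead of forming X'X. A minimal classical Gram-Schmidt sketch in Python (illustrative only; production routines use more stable Householder reflections):

```python
import math

# Solve least squares via a thin QR factorization (classical
# Gram-Schmidt, fine for a tiny well-conditioned example).
def qr_solve(X, y):
    n, k = len(X), len(X[0])
    Q = [[0.0] * k for _ in range(n)]
    R = [[0.0] * k for _ in range(k)]
    for j in range(k):
        v = [X[i][j] for i in range(n)]
        for p in range(j):                 # orthogonalize against earlier columns
            R[p][j] = sum(Q[i][p] * X[i][j] for i in range(n))
            v = [v[i] - R[p][j] * Q[i][p] for i in range(n)]
        R[j][j] = math.sqrt(sum(vi * vi for vi in v))
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    # b solves R b = Q'y by back-substitution
    qty = [sum(Q[i][j] * y[i] for i in range(n)) for j in range(k)]
    b = [0.0] * k
    for j in reversed(range(k)):
        b[j] = (qty[j] - sum(R[j][p] * b[p] for p in range(j + 1, k))) / R[j][j]
    return b

print(qr_solve([[1.0, 1.0], [2.0, 1.0], [3.0, 1.0]], [1.0, 2.0, 2.0]))
# slope 0.5, intercept 2/3 -- matching the normal-equations answer
```

The payoff of QR is numerical: it avoids squaring the condition number of X, which forming X'X does.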

Here are the point estimates I computed in Mata and comparable results from regress.

Example 2: Results from Mata and regress

. mata: b'
                  1              2              3
    +----------------------------------------------+
  1 |  -220.1648801    43.55851009    10254.94983  |
    +----------------------------------------------+

. regress price mpg trunk

      Source |       SS           df       MS      Number of obs   =        74
-------------+----------------------------------   F(2, 71)        =     10.14
       Model |   141126459         2  70563229.4   Prob > F        =    0.0001
    Residual |   493938937        71  6956886.44   R-squared       =    0.2222
-------------+----------------------------------   Adj R-squared   =    0.2003
       Total |   635065396        73  8699525.97   Root MSE        =    2637.6

------------------------------------------------------------------------------
       price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         mpg |  -220.1649   65.59262    -3.36   0.001    -350.9529    -89.3769
       trunk |   43.55851   88.71884     0.49   0.625    -133.3418    220.4589
       _cons |   10254.95   2349.084     4.37   0.000      5571.01    14938.89
------------------------------------------------------------------------------

Given the OLS point estimates, I can now compute the IID estimator of the VCE.

Example 3: Computing the IID VCE

. mata:
------------------------------------------------- mata (type end to exit) ------
: e    = y - X*b

: e2   = e:^2

: k    = cols(X)

: V    = (quadsum(e2)/(n-k))*XpXi

: sqrt(diagonal(V))'
                 1             2             3
    +-------------------------------------------+
  1 |  65.59262431   88.71884015    2349.08381  |
    +-------------------------------------------+

: end
--------------------------------------------------------------------------------

I put the residuals into the Mata vector e and then squared them element-wise to form e2. I used cols(X) to put the number of covariates into k. I used quadsum() to compute the sum of the squared residuals in quad precision when computing V, an IID estimator for the VCE. The standard errors displayed by sqrt(diagonal(V))' are the same as the ones displayed by regress in example 2.

Robust standard errors

The frequently used robust estimator of the VCE is given by

\[
\widehat{V}_{robust}=\frac{N}{N-k}
\left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}
\Mb
\left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}
\]

where
\[\Mb=\sum_{i=1}^N \widehat{e}_i^2\xb_i'\xb_i\]

See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2015) for derivations and discussions.

Example 4 implements this estimator in Mata.

Example 4: A robust VCE

. mata:
------------------------------------------------- mata (type end to exit) ------
: M    = quadcross(X, e2, X)

: V    = (n/(n-k))*XpXi*M*XpXi

: sqrt(diagonal(V))'
                 1             2             3
    +-------------------------------------------+
  1 |  72.45387946   71.45370224   2430.640607  |
    +-------------------------------------------+

: end
--------------------------------------------------------------------------------

Using quadcross(X, e2, X) to compute M is more accurate and faster than looping over the observations. The accuracy comes from the quad precision offered by quadcross(). The speed comes from performing the loops in compiled C code instead of compiled Mata code. Mata is fast but C is faster, because C imposes much more structure and because C is compiled using much more platform-specific information than Mata.

quadcross() is also faster because it has been parallelized, like many Mata functions. For example, a call to quadcross() from Stata/MP with 2 processors will run about twice as fast as a call to quadcross() from Stata/SE when there are many rows in X. A detailed discussion of the performance increases offered by Mata in Stata/MP is a subject for another post.
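The identity that quadcross(X, e2, X) exploits — the weighted cross product X' diag(e²) X equals the observation-by-observation sum \(\sum_i \widehat{e}_i^2\xb_i'\xb_i\) — can be checked in a few lines of Python (numbers made up):

```python
# Check that the weighted cross product X' diag(e^2) X equals the
# observation-level sum of e_i^2 * x_i' x_i, the M in the robust VCE.
X = [[1.0, 2.0], [3.0, 1.0], [2.0, 2.0]]
e2 = [0.25, 1.0, 0.5]
k = len(X[0])

# loop form: sum over observations of e_i^2 * outer(x_i, x_i)
M_loop = [[sum(w * r[i] * r[j] for w, r in zip(e2, X))
           for j in range(k)] for i in range(k)]

# matrix form: X' W X with W = diag(e2)
WX = [[w * r[j] for j in range(k)] for w, r in zip(e2, X)]
M_cross = [[sum(r[i] * wr[j] for r, wr in zip(X, WX))
            for j in range(k)] for i in range(k)]

print(M_loop == M_cross)   # True
```

The two forms are algebraically identical; the matrix form simply lets a compiled routine do the looping.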

I now verify that my computations match those reported by regress.

Example 5: Comparing computations of robust VCE

. regress price mpg trunk, vce(robust)

Linear regression                               Number of obs     =         74
                                                F(2, 71)          =      11.59
                                                Prob > F          =     0.0000
                                                R-squared         =     0.2222
                                                Root MSE          =     2637.6

------------------------------------------------------------------------------
             |               Robust
       price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         mpg |  -220.1649   72.45388    -3.04   0.003    -364.6338   -75.69595
       trunk |   43.55851    71.4537     0.61   0.544    -98.91613    186.0331
       _cons |   10254.95   2430.641     4.22   0.000      5408.39    15101.51
------------------------------------------------------------------------------

Cluster-robust standard errors

The cluster-robust estimator of the VCE is frequently used when the data have a group structure, also known as a panel structure or as a longitudinal structure. This VCE accounts for the within-group correlation of the errors, and it is given by

\[
\widehat{V}_{cluster}=\frac{N-1}{N-k}\frac{g}{g-1}
\left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}
\Mb_c
\left( \sum_{i=1}^N \xb_i'\xb_i \right)^{-1}
\]

where
\[
\Mb_c=\sum_{j=1}^g
\Xb_j'
(\widehat{\eb}_j \widehat{\eb}_j')
\Xb_j
\]

\(\Xb_j\) is the \(n_j\times k\) matrix of observations on \(\xb_i\) in group \(j\), \(\widehat{\eb}_j\) is the \(n_j\times 1\) vector of residuals in group \(j\), and \(g\) is the number of groups. See Cameron and Trivedi (2005), Wooldridge (2010), and [R] regress for derivations and discussions.
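Because \(\widehat{\eb}_j \widehat{\eb}_j'\) is an outer product, each term of \(\Mb_c\) collapses to \(s_j s_j'\), where \(s_j = \Xb_j'\widehat{\eb}_j\) is the within-group score sum; summing residual-weighted rows within each group before taking the outer product is the cheaper computation. A small Python check of this identity with made-up clusters:

```python
# Verify that sum_j X_j'(e_j e_j')X_j equals sum_j s_j s_j'
# with s_j = X_j' e_j, using two made-up clusters.
groups = [
    ([[1.0, 1.0], [2.0, 1.0]],              [0.5, -1.0]),   # (X_j, e_j)
    ([[3.0, 1.0], [1.0, 1.0], [2.0, 1.0]],  [1.0, 0.5, -0.5]),
]
k = 2
Mc_outer = [[0.0] * k for _ in range(k)]
Mc_score = [[0.0] * k for _ in range(k)]

for Xj, ej in groups:
    # within-group score sum s_j = X_j' e_j
    s = [sum(e * row[i] for e, row in zip(ej, Xj)) for i in range(k)]
    for i in range(k):
        for j in range(k):
            # X_j'(e_j e_j')X_j, element by element
            Mc_outer[i][j] += sum(ej[a] * ej[b] * Xj[a][i] * Xj[b][j]
                                  for a in range(len(ej))
                                  for b in range(len(ej)))
            Mc_score[i][j] += s[i] * s[j]

print(Mc_outer == Mc_score)   # the two forms agree
```

The score-sum form needs only one pass over each group's rows, which is why the group boundaries delivered by panelsetup() below are all that is required.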

Computing \(\mathbf{M}_c\) requires sorting the data by group. I use rep78, with its missing values replaced by 6, as the group variable in my example. In example 6, I sort the dataset in Stata, put a copy of the observations on the modified rep78 into the column vector id, and recompute the OLS objects that I need. I could have sorted the dataset in Mata, but I usually sort in Stata, so that is what I illustrate. Type help mf_sort for sorting in Mata. In a real program, I would not need to recompute everything; I do so here because I did not want to discuss the group variable or sorting the dataset until I discussed cluster-robust standard errors.
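As an aside, the sort could instead be done inside Mata with sort(); a minimal sketch, assuming the group variable occupies the first column of the matrix being sorted:

```stata
: /* put the group variable in column 1, then order all rows by it */
: Z = st_data(., "rep78 price mpg trunk")
: Z = sort(Z, 1)                  // rows of Z are now sorted by rep78
```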

Example 6: Setup for computing M

. replace rep78=6 if missing(rep78)
(5 real changes made)

. sort rep78

. mata:
------------------------------------------------- mata (type end to exit) ------
: id   = st_data(., "rep78")

: y    = st_data(., "price")

: X    = st_data(., "mpg trunk")

: n    = rows(X)

: X    = X,J(n,1,1)

: k    = cols(X)

: XpX  = quadcross(X, X)

: XpXi = invsym(XpX)

: b    = XpXi*quadcross(X, y)

: e    = y - X*b

: end
--------------------------------------------------------------------------------

The Mata function panelsetup(Q,p) returns a matrix describing the group structure of the data when Q is sorted by the group variable in column p. I illustrate this function in example 7.

Example 7: panelsetup()

. list rep78 if rep78<3, sepby(rep78)

     +-------+
     | rep78 |
     |-------|
  1. |     1 |
  2. |     1 |
     |-------|
  3. |     2 |
  4. |     2 |
  5. |     2 |
  6. |     2 |
  7. |     2 |
  8. |     2 |
  9. |     2 |
 10. |     2 |
     +-------+

. mata:
------------------------------------------------- mata (type end to exit) ------
: info = panelsetup(id, 1)

: info
        1    2
    +-----------+
  1 |   1    2  |
  2 |   3   10  |
  3 |  11   40  |
  4 |  41   58  |
  5 |  59   69  |
  6 |  70   74  |
    +-----------+

: end
--------------------------------------------------------------------------------

I begin by listing the group variable, rep78, in Stata for the first two groups. I then use panelsetup() to create info, which has one row for each group, with the first column containing the first row of that group and the second column containing the last row of that group. I display info to illustrate what it contains. The first row of info specifies that the first group starts in row 1 and ends in row 2, which matches the results produced by list. The second row of info specifies that the second group starts in row 3 and ends in row 10, which also matches the results produced by list.
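With info in hand, the observations for a given group can be extracted with Mata's range subscripts, which is essentially what panelsubmatrix() does; a sketch for group 2:

```stata
: i  = 2
: Xi = X[|info[i,1], 1 \ info[i,2], .|]   // rows 3 through 10 of X, all columns
: rows(Xi)                                // 8, the size of group 2
```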

Having created info, I can use it and panelsubmatrix() to compute \(\mathbf{M}_c\).

Example 8: A cluster-robust VCE

. mata:
------------------------------------------------- mata (type end to exit) ------
: nc   = rows(info)

: M    = J(k, k, 0)

: for(i=1; i<=nc; i++) {
>     xi = panelsubmatrix(X,i,info)
>     ei = panelsubmatrix(e,i,info)
>     M  = M + xi'*(ei*ei')*xi
> }

: V    = ((n-1)/(n-k))*(nc/(nc-1))*XpXi*M*XpXi

: sqrt(diagonal(V))'
                 1             2             3
    +-------------------------------------------+
  1 |  93.28127184   58.89644366   2448.547376  |
    +-------------------------------------------+

: end
--------------------------------------------------------------------------------

After storing the number of groups in nc, I create an initial M as a k \(\times\) k matrix of zeros. For each group, I use panelsubmatrix() to extract the covariates for that group from X and the residuals for that group from e, and I add that group's contribution to M. After looping over the groups, I compute V and display the standard errors.

I now verify that my computations match those reported by regress.

Example 9: Comparing computations of cluster-robust VCE

. regress price mpg trunk, vce(cluster rep78)

Linear regression                               Number of obs     =         74
                                                F(2, 5)           =       9.54
                                                Prob > F          =     0.0196
                                                R-squared         =     0.2222
                                                Root MSE          =     2637.6

                                  (Std. Err. adjusted for 6 clusters in rep78)
------------------------------------------------------------------------------
             |               Robust
       price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         mpg |  -220.1649   93.28127    -2.36   0.065     -459.952    19.62226
       trunk |   43.55851   58.89644     0.74   0.493    -107.8396    194.9566
       _cons |   10254.95   2448.547     4.19   0.009     3960.758    16549.14
------------------------------------------------------------------------------

Done and undone

I reviewed the formulas that underlie the OLS estimator and showed how to compute them in Mata. In the next two posts, I write an ado-command that implements these formulas.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and Applications. Cambridge: Cambridge University Press.

Lange, K. 2010. Numerical Analysis for Statisticians. 2nd ed. New York: Springer.

Stock, J. H., and M. W. Watson. 2010. Introduction to Econometrics. 3rd ed. Boston, MA: Addison-Wesley.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Wooldridge, J. M. 2015. Introductory Econometrics: A Modern Approach. 6th ed. Cincinnati, Ohio: South-Western.

Programming an estimation command in Stata: A first ado-command using Mata

I discuss a sequence of ado-commands that use Mata to estimate the mean of a variable. The commands illustrate a general structure for Stata/Mata programs. This post builds on Programming an estimation command in Stata: Mata 101, Programming an estimation command in Stata: Mata functions, and Programming an estimation command in Stata: A first ado command.

Using Mata in ado-programs

I begin by reviewing the structure in mymean5.ado, which I discussed in Programming an estimation command in Stata: A first ado command.

Code block 1: mymean5.ado

*! version 5.0.0 20Oct2015
program define mymean5, eclass
	version 14

	syntax varlist(max=1)

	tempvar e2 
	tempname b V
	quietly summarize `varlist'
	local sum            = r(sum)
	local N              = r(N)
	matrix `b'           = (1/`N')*`sum'
	generate double `e2' = (`varlist' - `b'[1,1])^2
	quietly summarize `e2'
	matrix `V'           = (1/((`N')*(`N'-1)))*r(sum)
	matrix colnames `b'  = `varlist'
	matrix colnames `V'  = `varlist'
	matrix rownames `V'  = `varlist'
	ereturn post `b' `V'
	ereturn display
end

The syntax command on line 5 stores the name of the variable for which the command estimates the mean. The tempvar and tempname commands on lines 7 and 8 put temporary names into the local macros e2, b, and V. Lines 9–15 compute the point estimates and the estimated variance-covariance of the estimator (VCE), using the temporary names for objects so as not to overwrite user-created objects. Lines 16–18 put the column name on the point estimate and the row and column names on the estimated VCE. Line 19 posts the point estimate and the estimated VCE to e(b) and e(V), respectively. Line 20 produces a standard output table from the information stored in e(b) and e(V).

By the end of this post, I will have a command that replaces the Stata computations on lines 9–15 with Mata computations. To illustrate the structure of Stata-Mata programming, I start off only computing the point estimate in mymean6.

Code block 2: mymean6.ado

*! version 6.0.0 05Dec2015
program define mymean6, eclass
	version 14

	syntax varlist(max=1)

	mata: x  = st_data(., "`varlist'")
	mata: w  = mean(x)
	mata: st_matrix("Q", w)

	display "The point estimates are in Q"
	matrix list Q

end

Line 7 executes a one-line call to Mata; in this construction, Stata drops down to Mata, executes the Mata expression, and pops back up to Stata. Dropping down to Mata and popping back up to Stata takes almost no time, but I would still like to avoid doing it three times. (Lines 8 and 9 are also one-line calls to Mata.)

Line 7 puts a copy of all the observations on the variable for which the command estimates the mean in the Mata column vector named x, which is stored in global Mata memory. Line 8 stores the mean of the column vector in the 1\(\times\)1 matrix named w, which is also stored in global Mata memory. Line 9 copies the Mata matrix w to the Stata matrix named Q. Lines 11 and 12 display the results stored in Stata.

I illustrate what mymean6 produces in example 1.

Example 1: mymean6 uses global Mata memory

. sysuse auto
(1978 Automobile Data)

. mymean6 price
The point estimates are in Q

symmetric Q[1,1]
           c1
r1  6165.2568

. matrix dir
            Q[1,1]

. mata: mata describe

      # bytes   type                        name and extent
-------------------------------------------------------------------------------
            8   real scalar                 w
          592   real colvector              x[74]
-------------------------------------------------------------------------------

I use matrix dir to illustrate that Q is a Stata matrix, and I use mata describe to illustrate that x and w are objects in global Mata memory. Avoid using fixed names for objects in Stata memory or in global Mata memory, because you can overwrite users’ data.
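To see the danger concretely, suppose the user already has a Mata vector named x; running mymean6 silently replaces it (a sketch, assuming mymean6 is defined as above):

```stata
. mata: x = (1\2\3)       // the user’s own 3-row vector named x
. mymean6 price           // mymean6 writes the price data into global x
. mata: rows(x)           // 74: the user’s vector has been overwritten
```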

mymean7 does not put anything in global Mata memory; all computations are done using objects that are local to the Mata function mymean_work(). mymean7 uses temporary names for objects stored in Stata memory.

Code block 3: mymean7.ado

*! version 7.0.0 07Dec2015
program define mymean7, eclass
	version 14

	syntax varlist(max=1)

	tempname b
	mata: mymean_work("`varlist'", "`b'")

	display "b is "
	matrix list `b'
end

mata:
void mymean_work(string scalar vname, string scalar mname)
{
	real vector    x
	real scalar    w
	
	x  = st_data(., vname)
	w  = mean(x)
	st_matrix(mname, w)
}
end

There are two parts to mymean7.ado: an ado-program and a Mata function. The ado-program is defined on lines 2–12. The Mata function mymean_work(), defined on lines 14–24, is local to the ado-program mymean7.

Line 8 uses a one-line call to Mata to execute mymean_work(). mymean_work() returns nothing to global Mata memory; instead, we pass in two arguments. The first argument is a string scalar containing the name of the variable for which the function should compute the estimate. The second argument is a string scalar containing the temporary name stored in the local macro b; this temporary name will be the name of the Stata matrix that stores the point estimate computed in mymean_work().

Line 15 declares the function mymean_work(). Function declarations specify what the function returns, the name of the function, and the arguments that the function accepts; see Programming an estimation command in Stata: Mata functions for a quick introduction.

The word void on line 15 specifies that the function does not return anything. What precedes the ( is the function name; thus, mymean_work() is the name of the function. The words string scalar vname specify that the first argument of mymean_work() is a string scalar that is known as vname inside mymean_work(). The comma separates the first argument from the second argument. The words string scalar mname specify that the second argument of mymean_work() is a string scalar that is known as mname inside mymean_work(). The closing ) ends the function declaration.

Lines 17–22 define the body of mymean_work(); they are enclosed between the curly braces on lines 16 and 23. Lines 17 and 18 declare the real vector x and the real scalar w, which are local to mymean_work(). Lines 20 and 21 compute the estimate. Line 22 copies the estimate stored in w to the Stata matrix whose name is stored in the string scalar mname; that name is the temporary name, held in the local macro b, that was passed to the function on line 8.

The structure used in mymean7 ensures three important features.

  1. It does not use global Mata memory.
  2. It uses temporary names for global Stata objects.
  3. It leaves nothing behind in Mata memory or Stata memory.

Examples 2 and 3 combine to illustrate feature (3); example 2 clears Stata and Mata memory, and example 3 shows that mymean7 leaves nothing in the previously cleared memory.

Example 2: Removing objects from Stata and Mata memory

. clear all

. matrix dir

. mata: mata describe

      # bytes   type                        name and extent
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------

In detail, I use clear all to drop all objects from Stata and Mata memory, use matrix dir to illustrate that no matrices were left in Stata memory, and use mata describe to illustrate that no objects were left in Mata memory.

Example 3: mymean7 leaves nothing in Stata or Mata memory

. sysuse auto
(1978 Automobile Data)

. mymean7 price
b is 

symmetric __000000[1,1]
           c1
r1  6165.2568

. matrix dir

. mata: mata describe

      # bytes   type                        name and extent
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------

In example 3, I use mymean7 to estimate the mean, and use matrix dir and mata describe to illustrate that mymean7 did not leave Stata matrices or Mata objects in memory. The output also illustrates that the temporary name __000000 was used for the Stata matrix that held the result before the ado-program terminated.

While it is good that mymean7 leaves nothing in global Stata or Mata memory, it is bad that mymean7 does not leave the estimate behind somewhere, like in e().

mymean8 stores the results in e() and has the features of mymean5, but computes its results in Mata.

Code block 4: mymean8.ado

*! version 8.0.0 07Dec2015
program define mymean8, eclass
	version 14

	syntax varlist(max=1)

	tempname b V
	mata: mymean_work("`varlist'", "`b'", "`V'")
	matrix colnames `b'  = `varlist'
	matrix colnames `V'  = `varlist'
	matrix rownames `V'  = `varlist'
	ereturn post `b' `V'
	ereturn display
end

mata:
void mymean_work(                  ///
          string scalar vname,     ///
	  string scalar mname,     ///
	  string scalar vcename)
{
	real vector    x, e2
	real scalar    w, n, v
	
	x  = st_data(., vname)
	n  = rows(x)
	w  = mean(x)
	e2 = (x :- w):^2
	v   = (1/(n*(n-1)))*sum(e2)
	st_matrix(mname,   w)
	st_matrix(vcename, v)
}
end

Line 8 is a one-line call to mymean_work(), which now has three arguments: the name of the variable whose mean is to be estimated, a temporary name for the Stata matrix that will hold the point estimate, and a temporary name for the Stata matrix that will hold the estimated VCE. The declaration of mymean_work() on lines 17–20 has been adjusted accordingly; each of the three arguments is a string scalar. Lines 22 and 23 declare objects local to mymean_work(). Lines 25–29 compute the mean and the estimated VCE. Lines 30 and 31 copy these results to Stata matrices under the temporary names in the second and third arguments.

There is a logic to the order of the arguments in mymean_work(): the first argument is the name of an input, and the second and third arguments are temporary names for the outputs.

Returning to the ado-code, we see that lines 9–11 put row or column names on the point estimate or the estimated VCE. Line 12 posts the results to e(), which are displayed by line 13.

Example 4 illustrates that mymean8 produces the same point estimate and standard error as produced by mean.

Example 4: Comparing mymean8 and mean

. mymean8 price
------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       price |   6165.257   342.8719    17.98   0.000      5493.24    6837.273
------------------------------------------------------------------------------

. mean price

Mean estimation                   Number of obs   =         74

--------------------------------------------------------------
             |       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
       price |   6165.257   342.8719      5481.914      6848.6
--------------------------------------------------------------

The confidence intervals produced by mymean8 differ from those produced by mean because mymean8 uses a normal distribution while mean uses a \(t\) distribution. The mymean_work() in mymean9 uses a fourth argument to remove this difference.
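The difference comes only from the critical value; a quick check against the reported lower bounds, using the displayed standard error (values are approximate because of rounding):

```stata
. display 6165.257 - invnormal(.975)*342.8719    // about 5493.24, the z-based bound
. display 6165.257 - invttail(73, .025)*342.8719 // about 5481.91, the t(73)-based bound
```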

Code block 5: mymean9.ado

*! version 9.0.0 07Dec2015
program define mymean9, eclass
	version 14

	syntax varlist(max=1)

	tempname b V dfr
	mata: mymean_work("`varlist'", "`b'", "`V'", "`dfr'")
	matrix colnames `b'  = `varlist'
	matrix colnames `V'  = `varlist'
	matrix rownames `V'  = `varlist'
	ereturn post `b' `V'
	ereturn scalar df_r  = `dfr'
	ereturn display
end

mata:
void mymean_work(                  ///
          string scalar vname,     ///
	  string scalar mname,     ///
	  string scalar vcename,   ///
	  string scalar dfrname)
{
	real vector    x, e2
	real scalar    w, n, v
	
	x  = st_data(., vname)
	n  = rows(x)
	w  = mean(x)
	e2 = (x :- w):^2
	v   = (1/(n*(n-1)))*sum(e2)
	st_matrix(mname,   w)
	st_matrix(vcename, v)
	st_numscalar(dfrname, n-1)
}
end

On line 8, mymean_work() accepts four arguments. The fourth argument is new; it contains the temporary name that is used for the Stata scalar that holds the residual degrees of freedom. Line 34 copies the value of the expression n-1 to the Stata numeric scalar whose name is stored in the string scalar dfrname; this Stata scalar now contains the residual degrees of freedom. Line 13 stores the residual degrees of freedom in e(df_r), which causes ereturn display to use a \(t\) distribution instead of a normal distribution.

Example 5: mymean9 uses a \(t\) distribution

. mymean9 price
------------------------------------------------------------------------------
             |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       price |   6165.257   342.8719    17.98   0.000     5481.914      6848.6
------------------------------------------------------------------------------

mymean9 has five basic parts.

  1. It parses the user input.
  2. It uses a one-line call to a Mata work routine to compute results and to store these results in Stata matrices whose temporary names are passed to the Mata work routine.
  3. It puts on the column and row names and stores the results in e().
  4. It displays the results.
  5. It defines the Mata work routine after the end that terminates the definition of the ado-program.

This structure can accommodate any estimator whose results we can store in e(). The details of each part become increasingly complicated, but the structure remains the same. In future posts, I discuss Stata/Mata programs with this structure that implement the ordinary least-squares (OLS) estimator and the Poisson quasi-maximum-likelihood estimator.

Done and undone

I discussed a sequence of ado-commands that use Mata to estimate the mean of a variable. The commands illustrated a general structure for Stata/Mata programs.

In the next post, I show some Mata computations that produce the point estimates, an IID VCE, a robust VCE, and a cluster-robust VCE for the OLS estimator.

Programming an estimation command in Stata: Mata functions

I show how to write a function in Mata, the matrix programming language that is part of Stata. This post uses concepts introduced in Programming an estimation command in Stata: Mata 101.

Mata functions

Commands do work in Stata. Functions do work in Mata. Commands operate on Stata objects, like variables, and users specify options to alter the behavior. Mata functions accept arguments, operate on the arguments, and may return a result or alter the value of an argument to contain a result.

Consider myadd() defined below.

Code block 1: myadd()

mata:
function myadd(X, Y)
{
    A = X + Y
    return(A)
}
end

myadd() accepts two arguments, X and Y, puts the sum of X and Y into A, and returns A. For example,

Example 1: Defining and using a function

. mata:
------------------------------------------------- mata (type end to exit) ------
: function myadd(X, Y)
> {
>     A = X + Y
>     return(A)
> }

: C = J(3, 3, 4)

: D = I(3)

: W = myadd(C,D)

: W
[symmetric]
       1   2   3
    +-------------+
  1 |  5          |
  2 |  4   5      |
  3 |  4   4   5  |
    +-------------+

: end
--------------------------------------------------------------------------------

After defining myadd(), I create the matrices C and D, and I pass C and D into myadd(), which returns their sum. Mata assigns the returned sum to W, which I display. Note that inside the function myadd(), C and D are respectively known as X and Y.

The A created in myadd() can only be accessed inside myadd(), and it does not conflict with an A defined outside myadd(); that is, A is local to the function myadd(). I now illustrate that A is local to myadd().

Example 2: A is local to myadd()

. mata:
------------------------------------------------- mata (type end to exit) ------
: A = J(3, 3, 4)

: A
[symmetric]
       1   2   3
    +-------------+
  1 |  4          |
  2 |  4   4      |
  3 |  4   4   4  |
    +-------------+

: W = myadd(A,D)

: A
[symmetric]
       1   2   3
    +-------------+
  1 |  4          |
  2 |  4   4      |
  3 |  4   4   4  |
    +-------------+

: end
--------------------------------------------------------------------------------

Having illustrated that the A defined inside myadd() is local to myadd(), I should point out that the C and D matrices I defined in example 1 are in global Mata memory. As in ado-programs, we do not want our programs to use fixed names in global Mata memory, so that we do not destroy the users’ data. Fortunately, this problem is easily solved by writing Mata functions that write their results out to Stata and do not return results. I provide detailed discussions of this solution in the commands that I develop in subsequent posts.

When a Mata function changes the value of an argument inside the function, that change is visible outside the function; in other words, arguments are passed by address. Mata functions can therefore produce more than one result by storing results in arguments. For example, sumproduct() computes both the sum and the element-wise product of two matrices and returns them in its arguments.

Code block 2: sumproduct()

function sumproduct(X, Y, S, P)
{
	S = X +  Y
	P = X :* Y
	return
}

sumproduct() returns the sum of the arguments X and Y in the argument S and the element-wise product in P.

Example 3: Returning results in arguments

. mata:
------------------------------------------------- mata (type end to exit) ------
: function sumproduct(X, Y, S, P)
> {
>         S = X +  Y
>         P = X :* Y
>         return
> }

: A = I(3)

: B = rowshape(1::9, 3)

: A
[symmetric]
       1   2   3
    +-------------+
  1 |  1          |
  2 |  0   1      |
  3 |  0   0   1  |
    +-------------+

: B
       1   2   3
    +-------------+
  1 |  1   2   3  |
  2 |  4   5   6  |
  3 |  7   8   9  |
    +-------------+

: W=.

: W
  .

: Z=.

: Z
  .

: sumproduct(A, B, W, Z)

: W
        1    2    3
    +----------------+
  1 |   2    2    3  |
  2 |   4    6    6  |
  3 |   7    8   10  |
    +----------------+

: Z
[symmetric]
       1   2   3
    +-------------+
  1 |  1          |
  2 |  0   5      |
  3 |  0   0   9  |
    +-------------+

: end
--------------------------------------------------------------------------------

After defining sumproduct(), I use I() to create A and use rowshape() to create B. I then create W and Z; each is a scalar missing value. I must create W and Z before I pass them as arguments; otherwise, I would be referencing arguments that do not exist. After calling sumproduct(), I display W and Z to illustrate that they now contain the sum and the element-wise product of A and B.

In myadd() and sumproduct(), I did not specify what type of thing each argument must be, nor did I specify what type of thing each function would return. In other words, I used implicit declarations. Implicit declarations are easier to type than explicit declarations, but they make error messages and code less informative. I highly recommend explicitly declaring returns, arguments, and local variables to make your code and error messages more readable.

myadd2() is a version of myadd() that explicitly declares the type of thing returned, the type of thing that each argument must be, and the type that each local-to-the-function thing must be.

Code block 3: myadd2(): Explicit declarations

mata:
numeric matrix myadd2(numeric matrix X, numeric matrix Y)
{
    numeric matrix A

    A = X + Y
    return(A)
}
end

myadd2() returns a numeric matrix that it constructs by adding the numeric matrix X to the numeric matrix Y. The local-to-the-function object A is also a numeric matrix. A numeric matrix is either a real matrix or a complex matrix; it cannot be a string matrix.

Explicitly declaring returns, arguments, and local variables makes the code more informative. I immediately see that myadd2() does not work with string matrices, but this property is buried in the code for myadd().

I cannot sufficiently stress the importance of writing easy-to-read code. Reading other people’s code is an essential part of programming. It is instructional, and it allows you to adopt the solutions that others have found or implemented. If you are new to programming, you may not yet realize that after a few months, reading your own code is like reading someone else’s code. Even if you never give your code to anyone else, it is essential that you write easy-to-read code so that you can read it at a later date.

Explicit declarations also make some error messages easier to track down. In examples 4 and 5, I pass a string matrix to myadd() and to myadd2(), respectively.

Example 4: Passing a string matrix to myadd()

. mata:
------------------------------------------------- mata (type end to exit) ------
: B = I(3)

: C = J(3,3,"hello")

: myadd(B,C)
                 myadd():  3250  type mismatch
                 :     -  function returned error
(0 lines skipped)
--------------------------------------------------------------------------------
r(3250);

Example 5: Passing a string matrix to myadd2()

. mata:
------------------------------------------------- mata (type end to exit) ------
: B = I(3)

: C = J(3,3,"hello")

: numeric matrix myadd2(numeric matrix X, numeric matrix Y)
> {
>     numeric matrix A
> 
>     A = X + Y
>     return(A)
> }

: myadd2(B,C)
                myadd2():  3251  C[3,3] found where real or complex required
                 :     -  function returned error
(0 lines skipped)
--------------------------------------------------------------------------------
r(3251);

end of do-file

The error message in example 4 indicates that somewhere in myadd(), an operator or a function could not perform something on two objects because their types were not compatible. Do not be deluded by the simplicity of myadd(). Tracking down a type mismatch in real code can be difficult.

In contrast, the error message in example 5 says that the matrix C we passed to myadd2() is neither a real nor a complex matrix, as the arguments of myadd2() require. Looking at the code and the error message immediately informs me that the problem is that I passed a string matrix to a function that requires a numeric matrix.

Explicit declarations are so highly recommended that Mata has a setting to require them, as illustrated below.

Example 6: Turning on matastrict

. mata: mata set matastrict on

Setting matastrict to on causes the Mata compiler to require that return and local variables be explicitly declared. By default, matastrict is set to off, in which case return and local variables may be implicitly declared.
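For example, once matastrict is on, compiling a function with an undeclared local variable fails; a minimal sketch (adding the declaration real scalar y on its own line would fix the error):

```stata
. mata:
: mata set matastrict on

: void f(real scalar x)
> {
>         y = x + 1        // compile-time error: y is never declared
> }
: end
```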

When matastrict is set to on, arguments are not required to be explicitly declared because some arguments hold results whose input and output types could differ. Consider makereal() defined and used in example 7.

Example 7: Changing an argument's type

. mata:
------------------------------------------------- mata (type end to exit) ------
: void makereal(A)
> {
>         A = substr(A, 11,1) 
>         A = strtoreal(A)
> }

: A = J(2,2, "Number is 4")

: A
[symmetric]
                 1             2
    +-----------------------------+
  1 |  Number is 4                |
  2 |  Number is 4   Number is 4  |
    +-----------------------------+

: makereal(A)

: A + I(2)
[symmetric]
       1   2
    +---------+
  1 |  5      |
  2 |  4   5  |
    +---------+

: end
--------------------------------------------------------------------------------

The declaration of makereal() specifies that makereal() returns nothing because void comes before the name of the function. Even though matastrict is set to on, I did not declare what type of thing A must be. The two executable lines of makereal() clarify that A must be a string on input and that A will be real on output, which I subsequently illustrate.

I use the feature that arguments may be implicitly declared to make my code easier to read. Many of the Mata functions that I write replace arguments with results. I explicitly declare arguments that are inputs, and I implicitly declare arguments that contain outputs. Consider sumproduct2().

Code block 4: sumproduct2(): Explicit declarations of inputs but not outputs

void sumproduct2(real matrix X, real matrix Y, S, P)
{
	S = X +  Y
	P = X :* Y
	return
}

sumproduct2() returns nothing because void comes before the function name. The first argument X is a real matrix, the second argument Y is a real matrix, the third argument S is implicitly declared, and the fourth argument P is implicitly declared. My coding convention that inputs are explicitly declared and outputs are implicitly declared immediately informs me that X and Y are inputs and that S and P are outputs, as example 8 illustrates.

Example 8: Explicitly declaring inputs but not outputs

. mata:
------------------------------------------------- mata (type end to exit) ------
: void sumproduct2(real matrix X, real matrix Y, S, P)
> {
>         S = X +  Y
>         P = X :* Y
>         return
> }

: A = I(2)

: B = rowshape(1::4, 2)

: C = .

: D = .

: sumproduct2(A, B, C, D)

: C
       1   2
    +---------+
  1 |  2   2  |
  2 |  3   5  |
    +---------+

: D
[symmetric]
       1   2
    +---------+
  1 |  1      |
  2 |  0   4  |
    +---------+

: end
--------------------------------------------------------------------------------

Done and undone

I showed how to write a function in Mata and discussed declarations in some detail. Type help m2_declarations for many more details.

In my next post, I use Mata functions to perform the computations for a simple estimation command.

A Tour of Datetime in Stata

Converting a string date

Stata has a wide array of tools to work with dates. You can have dates in years, months, or even milliseconds. In this post, I will provide a brief tour of working with dates that will help you get started using all of Stata’s tools.

When you load a dataset, you will notice that every variable has a display format. For date variables, the display format is %td for daily dates, %tm for monthly dates, etc. Let’s load the wpi1 dataset as an example.

. webuse wpi1

. describe

Contains data from http://www.stata-press.com/data/r14/wpi1.dta
  obs:           124                          
 vars:             3                          28 Nov 2014 10:31
 size:         1,240                          
-------------------------------------------------------------------------------
              storage   display    value
variable name   type    format     label      variable label
-------------------------------------------------------------------------------
wpi             float   %9.0g                 U.S. Wholesale Price Index
t               int     %tq                   quarterly date
ln_wpi          float   %9.0g                 
-------------------------------------------------------------------------------
Sorted by: t

The display format for the variable t is %tq, which indicates a quarterly date. At this point, we can use tsset to declare the data as time series and continue our statistical endeavors. In raw data, however, dates are often stored in string or numeric formats that Stata cannot interpret directly as dates. Below, we use functions for translating dates stored in string and numeric formats into a Stata date format. For details about converting dates from other software, see Using dates and times from other software.

The two most widely used functions for translating string dates are date() and clock(). date() converts a daily date into the number of days elapsed since 01 Jan 1960. Similarly, clock() converts a date with a time stamp into the number of milliseconds elapsed since 01 Jan 1960 00:00:00.000. Dates before 01 Jan 1960 are stored as negative numbers; for example, 31 Dec 1959 is stored as -1, one day before 01 Jan 1960.
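As a cross-check on the elapsed-day encoding, the same arithmetic is easy to reproduce with Python’s standard library (an illustration only; the helper name is my own):

```python
from datetime import date

STATA_EPOCH = date(1960, 1, 1)  # Stata's daily-date origin

def days_since_1960(d):
    # the number Stata's date() would store for this daily date
    return (d - STATA_EPOCH).days

print(days_since_1960(date(2015, 10, 1)))   # 20362
print(days_since_1960(date(1959, 12, 31)))  # -1
```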

All date functions take two arguments: the name of the string variable and a string, called a mask, specifying the order of the date and time components. Let’s look at an example.

. clear

. input str18 mydate

                 mydate
  1. "20151001"
  2. "15-10-01"
  3. "102015"
  4. "01Oct2015 20:10"
  5. "14:10:05"
  6. end

I created a string variable mydate with five different types of dates as observations. The first observation, “20151001”, is ordered as year, month, and day, and the corresponding mask is “YMD”. I can translate this string date using date(mydate,"YMD") and store the translated date in the variable newdate.

. quietly generate double newdate = date(mydate,"YMD") in 1

I recommend using the storage type double for newdate to minimize loss of precision. This is particularly important for clock time dates, as we will see later. Let’s list what Stata stored in newdate.

. list mydate newdate in 1

     +--------------------+
     |   mydate   newdate |
     |--------------------|
  1. | 20151001     20362 |
     +--------------------+

The numeric value of 01 October 2015 is equivalent to 20,362 days elapsed since 01 January 1960. At this point, we have successfully translated a string date into a numeric date. We can resume our analysis with newdate as our date variable. In a later section, I will change the display format using format to make it readable.

The next observation, “15-10-01”, is ordered the same way as the first except for the missing century and the presence of hyphens. To translate dates with a two-digit year, we need to inform Stata what century the year component refers to. In this case, we can enter the century by specifying “20YMD” as the mask. We can obtain the corresponding numeric date by typing

. quietly replace newdate = date(mydate,"20YMD") in 2

Note that we did not acknowledge the hyphens while converting the date above because Stata ignores any punctuation that exists in string dates. The third observation, “102015”, refers to a month and year. Because the day component does not exist, we only specify “MY” as the mask.

. quietly replace newdate = date(mydate,"MY") in 3

. list mydate newdate in 3

     +------------------+
     | mydate   newdate |
     |------------------|
  3. | 102015     20362 |
     +------------------+

Although we did not specify the day component, the date() function with the “MY” mask still converted the string date to the number of days elapsed. This is because Stata assumes a default value of 1 for the day component, so we inadvertently translated 01 Oct 2015 when in fact the date existed only as Oct 2015 in our data.

The fourth observation, “01Oct2015 20:10”, has an hour and a minute time stamp. In this case, we use the clock() function instead of date(). The corresponding mask is “DMYhm”.

. quietly replace newdate = clock(mydate,"DMYhm") in 4

. list mydate newdate in 4

     +-----------------------------+
     |          mydate     newdate |
     |-----------------------------|
  4. | 01Oct2015 20:10   1.759e+12 |
     +-----------------------------+

The numeric value in newdate is on the order of \(10^{12}\): the number of milliseconds elapsed since 01 Jan 1960 00:00:00.000.
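We can verify that magnitude outside Stata; the following standard-library Python sketch (helper name my own) reproduces the elapsed-millisecond arithmetic:

```python
from datetime import datetime, timedelta

EPOCH = datetime(1960, 1, 1)  # Stata's datetime origin

def ms_since_1960(dt):
    # milliseconds elapsed since 01 Jan 1960 00:00:00.000
    return round((dt - EPOCH) / timedelta(milliseconds=1))

print(ms_since_1960(datetime(2015, 10, 1, 20, 10)))  # 1759349400000, about 1.759e+12
```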

We can also ignore certain components in the string by using the # symbol for those components. For example, we can specify “DMYh#” as the mask to ignore the minute component. However, if we omit the # symbol and just use “DMYh” instead, we will obtain a missing value. Stata will not assume a default value of 00 for the minute component as it did earlier for the date. The reason is that Stata always expects a mask for the full string in the observation.

Finally, the last observation, “14:10:05”, is in hour, minute, and second form. As you may have guessed, the corresponding mask is simply “hms”. We can convert it to a numeric value using clock(mydate,"hms").

I have only used five variations of string dates in this post. There are many more Stata functions that fit almost any need. Also, there are numerous translation functions such as weekly(), monthly(), etc., that I have not mentioned.

Displaying dates

We prefer working with readable dates instead of elapsed dates and times. Once we have converted to a numeric date, we can simply use format to change the display format. Let’s look at an example.

. clear

. input str18 mydate

                 mydate
  1. "01Jan2015"
  2. "02Jan2015"
  3. "03Jan2015"
  4. "04Jan2015"
  5. "05Jan2015"
  6. end

The variable mydate contains daily dates with day, month, and year components. As you may have guessed, we use the date() function with a mask “DMY” to translate mydate into a numeric date.

. generate double newdate = date(mydate,"DMY")

. list

     +---------------------+
     |    mydate   newdate |
     |---------------------|
  1. | 01Jan2015     20089 |
  2. | 02Jan2015     20090 |
  3. | 03Jan2015     20091 |
  4. | 04Jan2015     20092 |
  5. | 05Jan2015     20093 |
     +---------------------+

I stored the numeric dates in the variable newdate. The observations are stored as the number of elapsed days. To change the display format, we use the format command.

. format %td newdate

. list

     +-----------------------+
     |    mydate     newdate |
     |-----------------------|
  1. | 01Jan2015   01jan2015 |
  2. | 02Jan2015   02jan2015 |
  3. | 03Jan2015   03jan2015 |
  4. | 04Jan2015   04jan2015 |
  5. | 05Jan2015   05jan2015 |
     +-----------------------+

%td refers to the format for displaying daily dates. We can now tsset our data for time-series or panel-data analysis.
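Under the hood, %td display is just epoch arithmetic: add the elapsed days back to 01jan1960 and print the result. Here is a small Python sketch of that idea (names are my own; the month table is hard-coded to avoid locale issues):

```python
from datetime import date, timedelta

MONTHS = ["jan", "feb", "mar", "apr", "may", "jun",
          "jul", "aug", "sep", "oct", "nov", "dec"]

def td_display(elapsed_days):
    # mimic the default %td display, e.g., 01jan2015
    d = date(1960, 1, 1) + timedelta(days=elapsed_days)
    return f"{d.day:02d}{MONTHS[d.month - 1]}{d.year}"

print(td_display(20089))  # 01jan2015
```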

Building dates from numeric components

Sometimes, dates exist as individual numeric components. In Stata, we can combine these individual components to obtain the desired date. Consider the following dataset as an example:

. clear

. input month day year hour minute

         month        day       year       hour     minute
  1. 01 10 2015 10 05
  2. 02 10 2015 05 10
  3. 03 15 2015 20 15
  4. 04 20 2015 12 20
  5. 05 05 2015 02 25
  6. end

I created five variables, namely month, day, year, hour, and minute. Let’s begin by combining the individual month, day, and year components into the variable newdate. Just as it provides functions for translating string dates, Stata provides a set of functions for combining individual numeric date components. For example, we use the function mdy() to combine the month, day, and year components.

. generate double newdate = mdy(month,day,year)

. format %td newdate

. list newdate

     +-----------+
     |   newdate |
     |-----------|
  1. | 10jan2015 |
  2. | 10feb2015 |
  3. | 15mar2015 |
  4. | 20apr2015 |
  5. | 05may2015 |
     +-----------+

The arguments for mdy() are the individual month, day, and year components. Suppose we want to add the hours and minutes to the existing date variable newdate. We use the function dhms(newdate,hour,minute,1), which takes date, hour, minute, and second components as arguments. Because seconds do not exist in the data, we simply pass 1 for the seconds component of dhms().

. quietly replace newdate = dhms(newdate,hour,minute,1)

. format %tc newdate

. list newdate

     +--------------------+
     |            newdate |
     |--------------------|
  1. | 10jan2015 10:05:01 |
  2. | 10feb2015 05:10:01 |
  3. | 15mar2015 20:15:01 |
  4. | 20apr2015 12:20:01 |
  5. | 05may2015 02:25:01 |
     +--------------------+

We used %tc because we want to display a datetime format instead of a date format. As a final example, I will create a monthly date variable by combining the year and month using the ym() function and use %tm as a display format for monthly dates.

. quietly replace newdate = ym(year,month)

. format %tm newdate

. list newdate

     +---------+
     | newdate |
     |---------|
  1. |  2015m1 |
  2. |  2015m2 |
  3. |  2015m3 |
  4. |  2015m4 |
  5. |  2015m5 |
     +---------+

There are other useful functions for combining individual components that I have not discussed here; you can read about them in the Stata manuals. But the basic idea remains the same: once you know the individual date and time components, you can pick any of the datetime functions to combine them.
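To make the combining logic concrete, here are minimal Python sketches of what mdy() and ym() compute (illustrations only; in Stata you would use the built-in functions):

```python
from datetime import date

def mdy(month, day, year):
    # daily date: days elapsed since 01 Jan 1960
    return (date(year, month, day) - date(1960, 1, 1)).days

def ym(year, month):
    # monthly date: months elapsed since January 1960
    return (year - 1960) * 12 + (month - 1)

print(mdy(1, 10, 2015))  # 20098, displayed as 10jan2015 under %td
print(ym(2015, 1))       # 660, displayed as 2015m1 under %tm
```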

Summary

We began our tour by noting that date variables have a specific display format such as %td for daily dates. However, date variables in raw data are often stored as strings. We converted such string dates to numeric dates using the date() and clock() functions. Stata stores numeric dates as the number of days elapsed since 01 Jan 1960 for date() and the number of milliseconds elapsed since 01 Jan 1960 00:00:00.000 for clock(). We obtained readable dates by using the format command with %td for daily dates, %tc for datetimes, and %tm for monthly dates. Finally, we combined individual numeric date and time components to form the desired date variable.

What’s new from Stata Press

Reflecting on the year, Stata has a lot to be thankful for—we released Stata 14, celebrated 30 years of Stata, and had the pleasure of meeting and working with many great people, including our Stata Press authors.

Are you interested in writing a book about Stata or just a book on statistics? We’d love to work with you too. Stata Press offers books with clear, step-by-step examples that make learning and teaching easier. Read more about our submission guidelines, or contact us to get started.

If you’re searching for a good book to read during the holidays, check out our full list of books or our most recent ones below. If you’d like to be notified when new books are released, sign up for Stata Press email alerts.

I hope you all have a great New Year!


Stata for the Behavioral Sciences

Michael N. Mitchell’s Stata for the Behavioral Sciences is an ideal reference for researchers using Stata to fit ANOVA models and other models commonly applied to behavioral science data.

Drawing on his education in psychology and his experience in consulting, Mitchell uses terminology and examples familiar to the reader as he demonstrates how to fit a variety of models, how to interpret results, how to understand simple and interaction effects, and how to explore results graphically.

Highlights:

  • Demonstrates fitting models to data from a variety of designs—one-factor, two-factor, three-factor, repeated measures, longitudinal, complex survey, and more
  • Shows how to use Stata to easily understand the effects of categorical factors, continuous covariates, and interactions
  • Introduces power analysis for ANOVA, ANCOVA, and regression models
  • Provides Stata code and datasets so that the reader can reproduce all examples
  • Gives an overview of Stata for new users and converts commonly used SPSS commands into Stata syntax

Read more »

An Introduction to Stata Programming

An Introduction to Stata Programming, Second Edition

Christopher F. Baum’s An Introduction to Stata Programming, Second Edition, is a great reference for anyone who wants to learn Stata programming. For those just starting out, Baum assumes familiarity with Stata and gradually introduces more advanced programming tools. For the more advanced Stata programmer, the book introduces Stata’s Mata programming language and optimization routines.

Baum introduces concepts by providing the background and importance for the topic, presents common uses and examples, and then concludes with larger, more applied examples he refers to as “cookbook recipes”. Many of the examples in the book are of particular interest because they arose from frequently asked questions from Stata users.

Highlights:

  • Updated syntax and output
  • Examples and discussions incorporating factor variables and operators
  • Examples and discussions about computation of marginal effects, marginal means, and predictive margins using margins
  • Examples using gmm to implement generalized method of moments estimation
  • Examples using suest for seemingly unrelated estimation
  • Mata-based likelihood function evaluators
  • Commonly used Mata associative arrays

Read more »

Meta-Analysis in Stata

Meta-Analysis in Stata: An Updated Collection from the Stata Journal, Second Edition

Stata has some of the best statistical tools available for doing meta-analysis, and Meta-Analysis in Stata is a complete introduction and reference for performing meta-analyses in Stata.

In this new edition, editors Tom Palmer and Jonathan Sterne have substantially expanded the scope of the collection to cover many contemporary advances that will help keep the reader up to date. Topics range from standard and cumulative meta-analysis and forest plots to contour-enhanced funnel plots and nonparametric analysis of publication bias. In their articles, the authors present conceptual overviews of the techniques, thorough explanations, and detailed descriptions and syntax of new commands. They also provide examples using real-world data.

Highlights:

  • Shows how to use Stata to conduct and interpret basic meta-analyses
  • Illustrates the use of cutting-edge meta-analysis techniques such as multivariate or multiple outcomes meta-analysis, individual participant data (IPD) meta-analysis, and network meta-analysis
  • Includes information about using Stata for other fundamentals such as meta-regression and graphical analysis of bias using funnel plots
  • Explores a full range of user-written Stata meta-analysis commands
  • Provides information about using official Stata commands such as sem and gsem to conduct certain types of meta-analysis

Read more »

Thirty Years with Stata

Thirty Years with Stata: A Retrospective

This volume is a sometimes serious and sometimes whimsical retrospective of Stata, its development, and its use over the last 30 years—complete with inside and outside points of view. Mostly, we simply wanted to celebrate the relationship between Stata users and Stata software. This volume holds something interesting for everyone.

Highlights:

  • Shows how Stata has helped researchers in 13 different disciplines, from the perspective of leading scholars in their fields
  • Illustrates the guiding principles that shaped Stata since its conception
  • Explores how research and statistical analysis has evolved in the last 30 years and how Stata has been an integral part of this process

Read more »


Programming an estimation command in Stata: Mata 101

I introduce Mata, the matrix programming language that is part of Stata.

Meeting Mata

Mata is a matrix programming language that is part of Stata. Mata code is fast because it is compiled to object code that runs on a virtual machine; type help m1_how for details.

The easiest way to learn Mata is to use it. I begin with an interactive session. (You might find it useful to type along.)

Example 1: A first interactive Mata session

. mata:
------------------------------------------------- mata (type end to exit) ------
: X = J(3, 4, 5)

: X
       1   2   3   4
    +-----------------+
  1 |  5   5   5   5  |
  2 |  5   5   5   5  |
  3 |  5   5   5   5  |
    +-----------------+

: w = (1::4)

: w
       1
    +-----+
  1 |  1  |
  2 |  2  |
  3 |  3  |
  4 |  4  |
    +-----+

: v = X*w

: v
        1
    +------+
  1 |  50  |
  2 |  50  |
  3 |  50  |
    +------+

: v'
        1    2    3
    +----------------+
  1 |  50   50   50  |
    +----------------+

: end
--------------------------------------------------------------------------------

Typing mata: causes Stata to drop down to a Mata session. Typing end ends the Mata session, thereby popping back up to Stata. The dot prompt . is Stata asking for something to do. After you type mata:, the colon prompt : is the Mata compiler asking for something to do.

Typing X = J(3, 4, 5) at the colon prompt causes Mata to compile and execute this code. J(r, c, v) is the Mata function that creates an r\(\times\)c matrix, each of whose elements is v. The expression on the right-hand side of the assignment operator = is assigned to the symbol on the left-hand side.

Typing X by itself causes Mata to display what X contains, which is a 3\(\times\)4 matrix of 5s. Unassigned expressions display their results. Type help m2_exp for details about expressions.

Typing w = (1::4) causes Mata to use the column range operator to create the 4\(\times\)1 column vector that was assigned to w and displayed when I typed w by itself. Type help m2_op_range for details and a discussion of the row range operator.

Typing v = X*w causes Mata to assign the matrix product of X times w to v, which I subsequently display. I then illustrate that ' is the transpose operator. Type help m2_exp, marker(remarks7) for a list of operators.

Again, typing end ends the Mata session.
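For readers who know NumPy, the session above has a direct parallel (an illustration outside Stata; np.full plays the role of J() and @ is the matrix product):

```python
import numpy as np

X = np.full((3, 4), 5)  # J(3, 4, 5): a 3 x 4 matrix of 5s
w = np.arange(1, 5)     # 1::4: the column vector (1, 2, 3, 4)'
v = X @ w               # X*w: matrix product
print(v)                # [50 50 50]
```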

In almost all the work I do, I extract submatrices from a matrix.

Example 2: Extracting submatrices from a matrix

. mata:
------------------------------------------------- mata (type end to exit) ------
: rseed(1234)

: W = runiform(4,4)

: W
                 1             2             3             4
    +---------------------------------------------------------+
  1 |  .9472316166   .0522233748   .9743182755   .9457483679  |
  2 |  .1856478315   .9487333737   .8825376215   .9440776079  |
  3 |  .0894258515   .7505444902   .9484983174   .1121626508  |
  4 |  .4809064012   .9763447517   .1254975307   .7655025515  |
    +---------------------------------------------------------+

: v = (2, 4)

: u = (1\ 3)

: v
       1   2
    +---------+
  1 |  2   4  |
    +---------+

: u
       1
    +-----+
  1 |  1  |
  2 |  3  |
    +-----+

: W[u, v]
                 1             2
    +-----------------------------+
  1 |  .0522233748   .9457483679  |
  2 |  .7505444902   .1121626508  |
    +-----------------------------+

: W[| 1,1 \ 3,3 |]
                 1             2             3
    +-------------------------------------------+
  1 |  .9472316166   .0522233748   .9743182755  |
  2 |  .1856478315   .9487333737   .8825376215  |
  3 |  .0894258515   .7505444902   .9484983174  |
    +-------------------------------------------+

: end
--------------------------------------------------------------------------------

I use rseed() to set the seed for the random-number generator and then use runiform(r,c) to create a 4\(\times\)4 matrix of uniform deviates, which I subsequently display.

Next, I use the row-join operator , to create the row vector v and I use the column-join operator \ to create the column vector u. Type help m2_op_join for details.

Typing W[u,v] extracts from W the rows specified in the vector u and the columns specified in the vector v.

I frequently extract rectangular blocks defined by a top-left element and a bottom-right element. I illustrate this syntax by typing

W[| 1,1 \ 3,3 |]

In detail, [| opens a range-subscript extraction, 1,1 is the address of the top-left element, \ separates the top-left element from the bottom-right element, 3,3 is the address of the bottom-right element, and |] closes a range-subscript extraction. Type help m2_subscripts for details.
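Both styles of subscripting have NumPy analogues, which may help readers coming from Python (an illustration; note that NumPy indices are zero-based):

```python
import numpy as np

W = np.arange(16).reshape(4, 4)

# W[u, v] with u = (1\ 3) and v = (2, 4): rows 1 and 3, columns 2 and 4
sub = W[np.ix_([0, 2], [1, 3])]

# W[| 1,1 \ 3,3 |]: the block from the (1,1) element to the (3,3) element
block = W[0:3, 0:3]

print(sub.shape)    # (2, 2)
print(block.shape)  # (3, 3)
```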

Ironically, when I am doing matrix programming, I frequently want the element-by-element operator instead of the matrix operator. Preface any matrix operator in Mata with a colon (:) to obtain the element-by-element equivalent.

Example 3: Element-wise operators

. mata:
------------------------------------------------- mata (type end to exit) ------
: W = W[| 2,1 \ 4,4 |]

: W
                 1             2             3             4
    +---------------------------------------------------------+
  1 |  .1856478315   .9487333737   .8825376215   .9440776079  |
  2 |  .0894258515   .7505444902   .9484983174   .1121626508  |
  3 |  .4809064012   .9763447517   .1254975307   .7655025515  |
    +---------------------------------------------------------+

: v = .1*(4::6)

: v
        1
    +------+
  1 |  .4  |
  2 |  .5  |
  3 |  .6  |
    +------+

: v:*W
                 1             2             3             4
    +---------------------------------------------------------+
  1 |  .0742591326   .3794933495   .3530150486   .3776310432  |
  2 |  .0447129257   .3752722451   .4742491587   .0560813254  |
  3 |  .2885438407    .585806851   .0752985184   .4593015309  |
    +---------------------------------------------------------+

: v'*W
                 1             2             3             4
    +---------------------------------------------------------+
  1 |  .4075158991   1.340572446   .9025627257   .8930138994  |
    +---------------------------------------------------------+

: end
--------------------------------------------------------------------------------

I extract the bottom three rows of W, store this matrix in W, and display this new W. I then create a row-wise conformable vector v, perform element-wise multiplication of v across the columns of W, and display the result. I cannot type v*W because the 3\(\times\)1 v is not conformable with the 3\(\times\)4 W. But I can, and do, type v'*W because the 1\(\times\)3 v' is conformable with the 3\(\times\)4 W.
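NumPy expresses the same distinction through broadcasting (an illustration outside Stata; the arrays are my own, chosen to match the shapes above):

```python
import numpy as np

W = np.arange(1.0, 13.0).reshape(3, 4)  # a 3 x 4 matrix
v = np.array([0.4, 0.5, 0.6])           # a 3-vector

elementwise = v[:, None] * W  # v :* W: v broadcast across the columns
matrix_prod = v @ W           # v'*W: an ordinary 1 x 4 matrix product
print(matrix_prod)
```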

Example 4 uses an element-wise logical operator.

Example 4: Element-wise logical operator

. mata:
------------------------------------------------- mata (type end to exit) ------
: W :< v
       1   2   3   4
    +-----------------+
  1 |  1   0   0   0  |
  2 |  1   0   0   1  |
  3 |  1   0   1   0  |
    +-----------------+

: end
--------------------------------------------------------------------------------

I display the result of comparing the element-wise conformable v with W. Type help m2_op_colon for details.

Stata data in Mata

The Mata function st_data() creates a Mata matrix containing a copy of the data from the Stata dataset in memory. The Mata function st_view() creates a Mata view of the data in the Stata dataset in memory. Views act like matrices, but there is a speed-space tradeoff. Copies are fast at the cost of using twice as much memory. Views are slower, but they use little extra memory.

Copying the data from Stata into Mata doubles the memory used, but the values are stored in Mata memory, so every time a Mata function asks for a value from the matrix, it finds it immediately. In contrast, a view of the data barely increases the memory used, but the values remain in Stata memory; every time a Mata function asks for a value from the view, it must follow a reference that tells it where in Stata to find that value.
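NumPy draws the same copy-versus-view distinction, which may make the tradeoff concrete for Python users (an illustration; the array values echo rep78 and turn from the auto data):

```python
import numpy as np

data = np.array([[3.0, 40.0], [3.0, 40.0], [3.0, 35.0]])

copy = data.copy()  # like st_data(): independent storage
view = data[:, :]   # like st_view(): a window onto the same storage

view[2, 0] = 7      # changing the view changes the underlying data
print(data[2, 0])   # 7.0
print(copy[2, 0])   # 3.0: the copy is unaffected
```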

Example 5: Data from Stata into Mata

. sysuse auto
(1978 Automobile Data)

. list mpg headroom trunk rep78 turn foreign in 1/3 , nolabel

     +-------------------------------------------------+
     | mpg   headroom   trunk   rep78   turn   foreign |
     |-------------------------------------------------|
  1. |  22        2.5      11       3     40         0 |
  2. |  17        3.0      11       3     40         0 |
  3. |  22        3.0      12       .     35         0 |
     +-------------------------------------------------+

. mata:
------------------------------------------------- mata (type end to exit) ------
: Y = st_data(., "mpg headroom trunk")

: st_view(X=., ., "rep78 turn foreign")

: V = Y,X

: V[| 1,1 \ 3,6 |]
         1     2     3     4     5     6
    +-------------------------------------+
  1 |   22   2.5    11     3    40     0  |
  2 |   17     3    11     3    40     0  |
  3 |   22     3    12     .    35     0  |
    +-------------------------------------+

: X[3,1] = 7

: X[| 1,1 \ 3,3 |]
        1    2    3
    +----------------+
  1 |   3   40    0  |
  2 |   3   40    0  |
  3 |   7   35    0  |
    +----------------+

: end
--------------------------------------------------------------------------------

. list rep78 turn foreign in 1/3 , nolabel

     +------------------------+
     | rep78   turn   foreign |
     |------------------------|
  1. |     3     40         0 |
  2. |     3     40         0 |
  3. |     7     35         0 |
     +------------------------+

After I list out the first three observations on six variables in the auto dataset, I drop down to Mata, use st_data() to put a copy of all the observations on mpg, headroom, and trunk into the Mata matrix Y, and use st_view() to create the Mata view X onto all the observations on rep78, turn, and foreign.

After row-joining Y and X to create V, I display the first 3 rows of V. Note that the third observation on rep78 is missing and that Mata matrices and views can contain missing values.

Changing the value of an element in a view changes the data in Stata. I illustrate this point by replacing the (3,1) element of the view X with 7, displaying the first three rows of the view, and listing out the first three observations on rep78, turn, and foreign.

Copying matrices between Mata and Stata

The Mata function st_matrix() puts a copy of a Stata matrix into a Mata matrix, or it puts a copy of a Mata matrix into a Stata matrix. In example 6, V = st_matrix("B") puts a copy of the Stata matrix B into the Mata matrix V.

Example 6: Creating a copy of a Stata matrix in a Mata matrix

. matrix B = (1, 2\ 3, 4)

. matrix list B

B[2,2]
    c1  c2
r1   1   2
r2   3   4

. mata:
------------------------------------------------- mata (type end to exit) ------
: V = st_matrix("B")

: V
       1   2
    +---------+
  1 |  1   2  |
  2 |  3   4  |
    +---------+

: end
--------------------------------------------------------------------------------

In example 7, st_matrix("Z", W) puts a copy of the Mata matrix W into the Stata matrix Z.

Example 7: Creating a copy of a Mata matrix in a Stata matrix

. mata:
------------------------------------------------- mata (type end to exit) ------
: W = (4..6\7..9)

: W
       1   2   3
    +-------------+
  1 |  4   5   6  |
  2 |  7   8   9  |
    +-------------+

: st_matrix("Z", W)

: end
--------------------------------------------------------------------------------

. matrix list Z

Z[2,3]
    c1  c2  c3
r1   4   5   6
r2   7   8   9

Strings

Mata matrices can be string matrices.

In my work, I frequently have a list of variables in a string scalar that is easier to work with as a string vector.

Turning a string scalar list into a string vector

. mata:
------------------------------------------------- mata (type end to exit) ------
: s1 = "price mpg trunk"

: s1
  price mpg trunk

: s2 = tokens(s1)

: s2
           1       2       3
    +-------------------------+
  1 |  price     mpg   trunk  |
    +-------------------------+

: end
--------------------------------------------------------------------------------

I use tokens() to create the string vector s2 from the string scalar s1.
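In Python terms, tokens() plays roughly the role of str.split() (an illustration only; tokens() has more features, such as custom parsing characters):

```python
s1 = "price mpg trunk"  # a list of variables in one string scalar
s2 = s1.split()         # split on whitespace, like tokens(s1)
print(s2)               # ['price', 'mpg', 'trunk']
```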

Flow of control

Mata has constructs that loop over a block of code enclosed in curly braces or execute it only if an expression is true.

I frequently use the for() construction to loop over a block of code.

Code block 1: for()

mata:
for(i=1; i<=3; i=i+1) {
	i
}
end

In this example, I set i to the initial value of 1. The loop will continue as long as i is less than or equal to 3. Each time through the loop, the block of code enclosed between the curly braces is executed, and 1 is added to the current value of i. The code block displays the value of i. Example 9 illustrates these points.

Example 9: A for loop

. mata:
------------------------------------------------- mata (type end to exit) ------
: for(i=1; i<=3; i=i+1) {
>         i
> }
  1
  2
  3

: end
--------------------------------------------------------------------------------

Sometimes, I want to execute a block of code as long as a condition is true, in which case I use a while loop, as in code block 2 and example 10.

Code block 2: while

i = 7
while (i>5) {
    i
    i = i - 1
}

I set i to 7 and repeat the block of code between the curly braces while i is greater than 5. The block of code displays the current value of i, then subtracts 1 from i.

Example 10: A while loop

. mata:
------------------------------------------------- mata (type end to exit) ------
: i = 7

: while (i>5) {
>     i
>     i = i - 1
> }
  7
  6

: end
--------------------------------------------------------------------------------

The if construct only executes a code block if an expression is true. I usually use the if-else construct that executes one code block if an expression is true and another code block if the expression is false.

Example 11: An if-else construct

. mata:
-------------------------------------------- mata (type end to exit) ---
: for(i=2; i<10; i=i+5) {
>         i
>         if (i<3) {
>                 "i is less than 3"
>         }
>         else {
>                 "i is not less than 3"
>         }
> }
  2
  i is less than 3
  7
  i is not less than 3

: end
-------------------------------------------------------------------------

One-line calls to Mata

I frequently make one-line calls to Mata from Stata. A one-line call to Mata causes Stata to drop to Mata, compile and execute the line of Mata code, and pop back up to Stata.

Example 12: One-line calls to Mata

. mata: st_matrix("Q", I(3))

. matrix list Q

symmetric Q[3,3]
    c1  c2  c3
r1   1
r2   0   1
r3   0   0   1

In example 12, I use the one-line call mata: st_matrix("Q", I(3)) to put a copy of the matrix returned by the Mata expression I(3) into the Stata matrix Q. After the one-line call to Mata, I am back in Stata, so I use matrix list Q to show that the Stata matrix Q is a copy of the 3\(\times\)3 identity matrix.

Done and undone

I used an interactive session to introduce Mata, the matrix programming language that is part of Stata.

In the next post, I show how to define Mata functions.

Using mlexp to estimate endogenous treatment effects in a heteroskedastic probit model

I use features new to Stata 14.1 to estimate an average treatment effect (ATE) for a heteroskedastic probit model with an endogenous treatment. In 14.1, we added new prediction statistics after mlexp that margins can use to estimate an ATE.

I am building on a previous post, Using mlexp to estimate endogenous treatment effects in a probit model, in which I demonstrated how to use mlexp to estimate the parameters of a probit model with an endogenous treatment and used margins to estimate the ATE for that model. Currently, no official commands estimate the heteroskedastic probit model with an endogenous treatment, so in this post I show how mlexp can be used to extend the models estimated by Stata.

Heteroskedastic probit model

For binary outcome \(y_i\) and regressors \({\bf x}_i\), the probit model assumes

\[\begin{equation}
y_i = {\bf 1}({\bf x}_i{\boldsymbol \beta} + \epsilon_i > 0)
\end{equation}\]

The indicator function \({\bf 1}(\cdot)\) outputs 1 when its input is true and outputs 0 otherwise. The error \(\epsilon_i\) is standard normal.

Assuming that the error has constant variance may not always be wise. Suppose we are studying a certain business decision. Large firms, because they have the resources to take chances, may exhibit more variation in the factors that affect their decision than small firms.

In the heteroskedastic probit model, regressors \({\bf w}_i\) determine the variance of \(\epsilon_i\). Following Harvey (1976), we have

\[\begin{equation} \mbox{Var}\left(\epsilon_i\right) = \left\{\exp\left({\bf
w}_i{\boldsymbol \gamma}\right)\right\}^2 \nonumber \end{equation}\]

Heteroskedastic probit model with treatment

In this section, I review the potential-outcome framework used to define an ATE and extend it for the heteroskedastic probit model. For each treatment level, there is an outcome that we would observe if a person were to select that treatment level. When the outcome is binary and there are two treatment levels, we can specify how the potential outcomes \(y_{0i}\) and \(y_{1i}\) are generated from the regressors \({\bf x}_i\) and the error terms \(\epsilon_{0i}\) and \(\epsilon_{1i}\):

\[\begin{eqnarray*}
y_{0i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_0 + \epsilon_{0i} > 0) \cr
y_{1i} &=& {\bf 1}({\bf x}_i{\boldsymbol \beta}_1 + \epsilon_{1i} > 0)
\end{eqnarray*}\]

We assume a heteroskedastic probit model for the potential outcomes. The errors are normal with mean \(0\) and conditional variance generated by regressors \({\bf w}_i\). In this post, we assume equal variance of the potential outcome errors.

\[\begin{equation}
\mbox{Var}\left(\epsilon_{0i}\right) = \mbox{Var}\left(\epsilon_{1i}\right) =
\left\{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)\right\}^2 \nonumber
\end{equation}\]

The heteroskedastic probit model for potential outcomes \(y_{0i}\) and \(y_{1i}\) with treatment \(t_i\) assumes that we observe the outcome

\[\begin{equation}
y_i = (1-t_i) y_{0i} + t_i y_{1i}
\nonumber
\end{equation}\]

So we observe \(y_{1i}\) under the treatment (\(t_{i}=1\)) and \(y_{0i}\) when the treatment is withheld (\(t_{i}=0\)).

The treatment \(t_i\) is determined by regressors \({\bf z}_i\) and error \(u_i\):

\[\begin{equation}
t_i = {\bf 1}({\bf z}_i{\boldsymbol \psi} + u_i > 0)
\nonumber
\end{equation}\]

The treatment error \(u_i\) is normal with mean zero, and we allow its variance to be determined by another set of regressors \({\bf v}_i\):

\[\begin{equation}
\mbox{Var}\left(u_i\right) =
\left\{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)\right\}^2 \nonumber
\end{equation}\]

Heteroskedastic probit model with endogenous treatment

In the previous post, I described how to model endogeneity for the treatment \(t_i\) by correlating the outcome errors \(\epsilon_{0i}\) and \(\epsilon_{1i}\) with the treatment error \(u_i\). We use the same framework for modeling endogeneity here. The variance of the errors may change depending on the heteroskedasticity regressors \({\bf w}_i\) and \({\bf v}_i\), but their correlation remains constant. The errors \(\epsilon_{0i}\), \(\epsilon_{1i}\), and \(u_i\) are trivariate normal with correlation

\[\begin{equation}
\left[\begin{matrix}
1 & \rho_{01} & \rho_{t} \cr
\rho_{01} & 1 & \rho_{t} \cr
\rho_{t} & \rho_{t} & 1
\end{matrix}\right]
\nonumber
\end{equation}\]

Now we have all the pieces we need to write the log likelihood of the heteroskedastic probit model with an endogenous treatment. The form of the likelihood is similar to what was given in the previous post. Now the inputs to the bivariate normal cumulative distribution function, \(\Phi_2\), are standardized by dividing by the conditional standard deviations of the errors.

The log likelihood for observation \(i\) is

\[\begin{eqnarray*}
\ln L_i = & & {\bf 1}(y_i =1 \mbox{ and } t_i = 1) \ln \Phi_2\left\{\frac{{\bf x}_i{\boldsymbol \beta}_1}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},\rho_t\right\} + \cr
& & {\bf 1}(y_i=0 \mbox{ and } t_i=1)\ln \Phi_2\left\{\frac{-{\bf x}_i{\boldsymbol \beta}_1}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},-\rho_t\right\} + \cr
& & {\bf 1}(y_i=1 \mbox{ and } t_i=0) \ln \Phi_2\left\{\frac{{\bf x}_i{\boldsymbol \beta}_0}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{-{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},-\rho_t\right\} + \cr
& & {\bf 1}(y_i=0 \mbox{ and } t_i = 0)\ln \Phi_2\left\{\frac{-{\bf x}_i{\boldsymbol \beta}_0}{\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}, \frac{-{\bf z}_i{\boldsymbol \psi}}{\exp\left({\bf v}_i{\boldsymbol \alpha}\right)},\rho_t\right\}
\end{eqnarray*}\]
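The four branches of this log likelihood can be sketched in Python. This is an illustrative translation, not the mlexp code; the helper binormal plays the role of \(\Phi_2\) via scipy's bivariate normal CDF, and all argument names are my own.

```python
# Sketch of the observation-level log likelihood above (illustrative only).
import numpy as np
from scipy.stats import multivariate_normal

def binormal(a, b, rho):
    """Bivariate standard normal CDF Phi_2(a, b; rho)."""
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal.cdf([a, b], mean=[0.0, 0.0], cov=cov)

def loglik_i(y, t, xb0, xb1, wg, zpsi, va, rho_t):
    """ln L_i, where xb0 = x_i beta_0, xb1 = x_i beta_1, wg = w_i gamma,
    zpsi = z_i psi, and va = v_i alpha."""
    s_y = np.exp(wg)   # conditional std. dev. of the outcome errors
    s_t = np.exp(va)   # conditional std. dev. of the treatment error
    if y == 1 and t == 1:
        p = binormal(xb1 / s_y, zpsi / s_t, rho_t)
    elif y == 0 and t == 1:
        p = binormal(-xb1 / s_y, zpsi / s_t, -rho_t)
    elif y == 1 and t == 0:
        p = binormal(xb0 / s_y, -zpsi / s_t, -rho_t)
    else:  # y == 0 and t == 0
        p = binormal(-xb0 / s_y, -zpsi / s_t, rho_t)
    return np.log(p)
```

A useful sanity check is that the four cell probabilities sum to one for any parameter values, because the two \(y\) cells within each treatment level sum to the marginal probability of that treatment level.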

The data

We will simulate data from a heteroskedastic probit model with an endogenous treatment and estimate the parameters of the model with mlexp. We will then use margins to estimate the ATE.

. set seed 323

. set obs 10000
number of observations (_N) was 0, now 10,000

. generate x = .8*rnormal() + 4

. generate b = rpoisson(1)

. generate z = rnormal()

. matrix cm = (1, .3,.7 \ .3, 1, .7 \ .7, .7, 1)

. drawnorm ey0 ey1 et, corr(cm)

We simulate a random sample of 10,000 observations. The treatment and outcome regressors are generated as they were in the previous post. As in that post, we use drawnorm to generate the errors with correlation \(0.7\).

. generate g = runiform()

. generate h = rnormal()

. quietly replace ey0 = ey0*exp(.5*g)

. quietly replace ey1 = ey1*exp(.5*g)

. quietly replace et = et*exp(.1*h)

. generate t = .5*x - .1*b + .5*z - 2.4 + et > 0

. generate y0 = .6*x - .8 + ey0 > 0

. generate y1 = .3*x - 1.3 + ey1 > 0

. generate y = (1-t)*y0 + t*y1

The uniform variable g is generated as a regressor for the outcome error variance, while h is a regressor for the treatment error variance. We scale the errors by using the variance regressors so that they are heteroskedastic, and then we generate the treatment and outcome indicators.
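For readers following along outside Stata, the same data-generating process can be sketched in numpy. The variable names mirror the Stata code above, but the seed and random-number generator differ, so the draws will not match Stata's.

```python
# numpy sketch of the simulation above (seeds differ from Stata's stream).
import numpy as np

rng = np.random.default_rng(323)
n = 10_000
x = 0.8 * rng.standard_normal(n) + 4
b = rng.poisson(1, n)
z = rng.standard_normal(n)

# drawnorm's analog: correlated errors via a Cholesky factor of cm
cm = np.array([[1.0, 0.3, 0.7],
               [0.3, 1.0, 0.7],
               [0.7, 0.7, 1.0]])
e = rng.standard_normal((n, 3)) @ np.linalg.cholesky(cm).T
ey0, ey1, et = e[:, 0], e[:, 1], e[:, 2]

# heteroskedasticity regressors and scaled errors
g = rng.uniform(size=n)
h = rng.standard_normal(n)
ey0 = ey0 * np.exp(0.5 * g)
ey1 = ey1 * np.exp(0.5 * g)
et = et * np.exp(0.1 * h)

# treatment, potential outcomes, and observed outcome
t = (0.5 * x - 0.1 * b + 0.5 * z - 2.4 + et > 0).astype(int)
y0 = (0.6 * x - 0.8 + ey0 > 0).astype(int)
y1 = (0.3 * x - 1.3 + ey1 > 0).astype(int)
y = (1 - t) * y0 + t * y1
```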

Estimating the model parameters

Now, we will use mlexp to estimate the parameters of the heteroskedastic probit model with an endogenous treatment. As in the previous post, we use the cond() function to calculate different values of the likelihood based on the different values of \(y\) and \(t\). We use the factor-variable operator ibn on \(t\) in equation y to allow for a different intercept at each level of \(t\). An interaction between \(t\) and \(x\) is also specified in equation y. This allows for a different coefficient on \(x\) at each level of \(t\).

. mlexp (ln(cond(t, ///                                          
>         cond(y,binormal({y: i.t#c.x ibn.t}/exp({g:g}), ///
>             {t: x b z _cons}/exp({h:h}),{rho}), /// 
>                 binormal(-{y:}/exp({g:}),{t:}/exp({h:}),-{rho})), ///
>         cond(y,binormal({y:}/exp({g:}),-{t:}/exp({h:}),-{rho}), ///
>                 binormal(-{y:}/exp({g:}),-{t:}/exp({h:}),{rho}) ///
>         )))), vce(robust)

initial:       log pseudolikelihood = -13862.944
alternative:   log pseudolikelihood = -16501.619
rescale:       log pseudolikelihood = -13858.877
rescale eq:    log pseudolikelihood = -11224.877
Iteration 0:   log pseudolikelihood = -11224.877  (not concave)
Iteration 1:   log pseudolikelihood = -10644.625  
Iteration 2:   log pseudolikelihood = -10074.998  
Iteration 3:   log pseudolikelihood = -9976.6027  
Iteration 4:   log pseudolikelihood = -9973.0988  
Iteration 5:   log pseudolikelihood = -9973.0913  
Iteration 6:   log pseudolikelihood = -9973.0913  

Maximum likelihood estimation

Log pseudolikelihood = -9973.0913               Number of obs     =     10,000

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
y            |
       t#c.x |
          0  |   .6178115   .0334521    18.47   0.000     .5522467    .6833764
          1  |   .2732094   .0365742     7.47   0.000     .2015253    .3448936
             |
           t |
          0  |  -.8403294   .1130197    -7.44   0.000    -1.061844   -.6188149
          1  |  -1.215177   .1837483    -6.61   0.000    -1.575317   -.8550371
-------------+----------------------------------------------------------------
g            |
           g |   .4993187   .0513297     9.73   0.000     .3987143    .5999232
-------------+----------------------------------------------------------------
t            |
           x |   .4985802   .0183033    27.24   0.000     .4627065    .5344539
           b |  -.1140255   .0132988    -8.57   0.000    -.1400908   -.0879603
           z |   .4993995   .0150844    33.11   0.000     .4698347    .5289643
       _cons |  -2.402772   .0780275   -30.79   0.000    -2.555703   -2.249841
-------------+----------------------------------------------------------------
h            |
           h |   .1011185   .0199762     5.06   0.000     .0619658    .1402713
-------------+----------------------------------------------------------------
        /rho |   .7036964   .0326734    21.54   0.000     .6396577    .7677351
------------------------------------------------------------------------------

Our parameter estimates are close to their true values.

Estimating the ATE

The ATE of \(t\) is the expected value of the difference between \(y_{1i}\) and \(y_{0i}\), the average difference between the potential outcomes. Using the law of iterated expectations, we have

\[\begin{eqnarray*}
E(y_{1i}-y_{0i})&=& E\left\{ E\left(y_{1i}-y_{0i}|{\bf x}_i,{\bf w}_i\right)\right\} \cr
&=& E\left\lbrack\Phi\left\{\frac{{\bf x}_i{\boldsymbol \beta}_1}{
\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}\right\}-
\Phi\left\{\frac{{\bf x}_i{\boldsymbol \beta}_0}{
\exp\left({\bf w}_i{\boldsymbol \gamma}\right)}\right\}\right\rbrack \cr
\end{eqnarray*}\]

This can be estimated as a mean of predictions.
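As a check on the formula, a plug-in sketch in Python, using the true coefficient values from the simulation (not estimates), averages the difference of the two conditional probabilities over draws of \(x\) and \(g\); the result should land near the \(-0.42\) implied by the design.

```python
# Plug-in sketch of the ATE formula, using the true simulation coefficients.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(12345)
n = 200_000
x = 0.8 * rng.standard_normal(n) + 4   # outcome regressor
g = rng.uniform(size=n)                # variance regressor

s = np.exp(0.5 * g)                    # conditional std. dev. exp(w_i gamma)
p1 = norm.cdf((0.3 * x - 1.3) / s)     # P(y1 = 1 | x, g)
p0 = norm.cdf((0.6 * x - 0.8) / s)     # P(y0 = 1 | x, g)
ate = np.mean(p1 - p0)
```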

Now, we estimate the ATE by using margins. We specify the normal probability expression in the expression() option. We use the expression function xb() to get the linear predictions for the outcome equation and the outcome error variance equation. We can now predict these linear forms after mlexp in Stata 14.1. We specify r.t so that margins will take the difference of the expression under t=1 and t=0. We specify vce(unconditional) to obtain standard errors for the population ATE rather than the sample ATE; we specified vce(robust) for mlexp so that we could specify vce(unconditional) for margins. The contrast(nowald) option is specified to omit the Wald test for the difference.

. margins r.t, expression(normal(xb(y)/exp(xb(g)))) ///
>     vce(unconditional) contrast(nowald)

Contrasts of predictive margins

Expression   : normal(xb(y)/exp(xb(g)))

--------------------------------------------------------------
             |            Unconditional
             |   Contrast   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           t |
   (1 vs 0)  |  -.4183043   .0202635     -.4580202   -.3785885
--------------------------------------------------------------

We estimate that the ATE of \(t\) on \(y\) is \(-0.42\). So taking the treatment decreases the probability of a positive outcome by \(0.42\) on average over the population.

We will compare this estimate to the average difference of \(y_{1}\) and \(y_{0}\) in the sample. We can do this because we simulated the data. In practice, only one potential outcome is observed for every observation, and this average difference cannot be computed.

. generate diff = y1 - y0

. sum diff

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
        diff |     10,000      -.4164    .5506736         -1          1

In our sample, the average difference of \(y_{1}\) and \(y_{0}\) is also \(-0.42\).

Conclusion

I have demonstrated how to use mlexp to estimate the parameters of a model that no official command estimates: the heteroskedastic probit model with an endogenous treatment. See [R] mlexp for more details about mlexp. I have also demonstrated how to use margins to estimate the ATE for this model. See [R] margins for more details about margins.

Reference

Harvey, A. C. 1976. Estimating regression models with multiplicative heteroscedasticity. Econometrica 44: 461-465.

Programming an estimation command in Stata: Using a subroutine to parse a complex option

I make two improvements to the command that implements the ordinary least-squares (OLS) estimator that I discussed in Programming an estimation command in Stata: Allowing for options. First, I add an option for a cluster-robust estimator of the variance-covariance of the estimator (VCE). Second, I make the command accept the modern syntax for either a robust or a cluster-robust estimator of the VCE. In the process, I use subroutines in my ado-program to facilitate the parsing, and I discuss some advanced parsing tricks.

Allowing for a robust or a cluster-robust VCE

The syntax of myregress9, which I discussed in Programming an estimation command in Stata: Allowing for options, is

myregress9 depvar [indepvars] [if] [in] [, robust noconstant]

The syntax of myregress10, which I discuss here, is

myregress10 depvar [indepvars] [if] [in] [, vce(robust | cluster clustervar) noconstant]

By default, myregress10 estimates the VCE assuming that the errors are independently and identically distributed (IID). If the option vce(robust) is specified, myregress10 uses the robust estimator of the VCE. If the option vce(cluster clustervar) is specified, myregress10 uses the cluster-robust estimator of the VCE. See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2010, 2015) for introductions to OLS; see Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects for the formulas and Stata matrix implementations.

I recommend that you click on the file name to download the code for myregress10.ado. To avoid scrolling, view the code in the do-file editor, or your favorite text editor, to see the line numbers.

Code block 1: myregress10.ado

*! version 10.0.0  02Dec2015
program define myregress10, eclass sortpreserve
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, vce(string) noCONStant ]
    marksample touse

    gettoken depvar indeps : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2 

    if `"`vce'"' != "" {
        my_vce_parse , vce(`vce') 
        local vcetype     "robust"
        local clustervar  "`r(clustervar)'"
        if "`clustervar'" != "" {
            markout `touse' `clustervar'
            sort `clustervar'
        }
    }

    quietly matrix accum `zpz' = `varlist' if `touse' , `constant'
    local N                    = r(N)
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    matrix `b'                 = (`xpxi'*`xpy')'
    local k                    = `p' - diag0cnt(`xpxi') - 1
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'

    if "`vcetype'" == "robust" {
        if "`clustervar'" == "" {
            tempname M
            quietly matrix accum `M' = `indeps'         ///
                [iweight=`res2'] if `touse' , `constant'
            local fac                = (`N'/(`N'-`k'))
            local df_r               = (`N'-`k')
        }
        else  {
            tempvar idvar
            tempname M
            quietly egen `idvar' = group(`clustervar') if `touse'
            quietly summarize `idvar' if `touse', meanonly
            local Nc   = r(max)
            local fac  = ((`N'-1)/(`N'-`k')*(`Nc'/(`Nc'-1)))
            local df_r = (`Nc'-1)
            matrix opaccum `M' = `indeps' if `touse'     ///
                , group(`clustervar') opvar(`res')
        }
        matrix `V' = (`fac')*`xpxi'*`M'*`xpxi'
        local vce                   "robust"          
        local vcetype               "Robust"          
    }
    else {                            // IID Case
        quietly summarize `res2' if `touse' , meanonly
        local sum           = r(sum)
        local s2            = `sum'/(`N'-`k')
        local df_r          = (`N'-`k')
        matrix `V'          = `s2'*`xpxi'
    }

    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `k'
    ereturn scalar df_r    = `df_r'
    ereturn local  vce     "`vce'"
    ereturn local  vcetype "`vcetype'"
    ereturn local  clustvar "`clustervar'"
    ereturn local  cmd     "myregress10"
    ereturn display
end

program define my_vce_parse, rclass
    syntax  [, vce(string) ]

    local case : word count `vce'
    
    if `case' > 2 {
        my_vce_error , typed(`vce')
    }

    local 0 `", `vce'"' 
    syntax  [, Robust CLuster * ]

    if `case' == 2 {
        if "`robust'" == "robust" | "`cluster'" == "" {
            my_vce_error , typed(`vce')
        }

        capture confirm numeric variable `options'
        if _rc {
            my_vce_error , typed(`vce')
        }

        local clustervar "`options'" 
    }
    else {    // case = 1
        if "`robust'" == "" {
            my_vce_error , typed(`vce')
        }

    }

    return clear    
    return local clustervar "`clustervar'" 
end

program define my_vce_error
    syntax , typed(string)

    display `"{red}{bf:vce(`typed')} invalid"'
    error 498
end

The syntax command on line 5 puts whatever the user encloses in vce() into a local macro called vce. For example, if the user types

. myregress10 price mpg trunk , vce(hello there)

the local macro vce will contain “hello there”. If the user does not specify something in the vce() option, the local macro vce will be empty. Line 14 uses this condition to execute lines 15–21 only if the user has specified something in option vce().

When the user specifies something in the vce() option, line 15 calls the ado subroutine my_vce_parse to parse what is in the local macro vce. my_vce_parse stores the name of the cluster variable in r(clustervar) and deals with error conditions, as I discuss below. Line 16 stores “robust” into the local macro vcetype, and line 17 stores the contents of the local macro r(clustervar) created by my_vce_parse into the local macro clustervar.

If the user does not specify something in vce(), the local macro vcetype will be empty and line 36 ensures that myregress10 will compute an IID estimator of the VCE.

Lines 19 and 20 are only executed if the local macro clustervar is not empty. Line 19 updates the touse variable, whose name is stored in the local macro touse, to account for missing values in the cluster variable, whose name is stored in clustervar. Line 20 sorts the dataset in the ascending order of the cluster variable. Users do not want estimation commands resorting their datasets. On line 2, I specified the sortpreserve option on program define to keep the dataset in the order it was in when myregress10 was executed by the user.

Lines 36–65 compute the requested estimator for the VCE. Recall that the local macro vcetype is empty or it contains “robust” and that the local macro clustervar is empty or it contains the name of the cluster variable. The if and else statements use the values stored in vcetype and clustervar to execute one of three blocks of code.

  1. Lines 38–42 compute a robust estimator of the VCE when vcetype contains “robust” and clustervar is empty.
  2. Lines 45–53 compute a cluster-robust estimator of the VCE when vcetype contains “robust” and clustervar contains the name of the cluster variable.
  3. Lines 60–64 compute an IID estimator of the VCE when vcetype does not contain “robust”.
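The three computations can be sketched in numpy. This is a simplified translation of the formulas myregress10 implements, assuming a full-rank regressor matrix and ignoring the constant-handling details of matrix accum; the function name and signature are my own.

```python
# Sketch of the IID, robust, and cluster-robust VCE computations.
import numpy as np

def ols_vce(X, y, cluster=None):
    """Return (b, V_iid, V_robust_or_cluster) for OLS of y on X."""
    n, k = X.shape
    XpXi = np.linalg.inv(X.T @ X)
    b = XpXi @ X.T @ y
    e = y - X @ b

    # IID: s^2 (X'X)^{-1}
    s2 = e @ e / (n - k)
    V_iid = s2 * XpXi

    if cluster is None:
        # Robust: [n/(n-k)] (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}
        M = (X * (e ** 2)[:, None]).T @ X
        fac = n / (n - k)
    else:
        # Cluster-robust: M = sum over clusters of (X_g' e_g)(X_g' e_g)'
        ids = np.unique(cluster)
        nc = len(ids)
        M = np.zeros((k, k))
        for c in ids:
            score = X[cluster == c].T @ e[cluster == c]
            M += np.outer(score, score)
        fac = (n - 1) / (n - k) * nc / (nc - 1)
    V_rob = fac * XpXi @ M @ XpXi
    return b, V_iid, V_rob
```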

Line 73 stores the name of the cluster variable in e(clustvar), if the local macro clustervar is not empty.

Lines 78–111 define the rclass ado-subroutine my_vce_parse, which performs two tasks. First, it stores the name of the cluster variable in the local macro r(clustervar) when the user specifies vce(cluster clustervar). Second, it finds cases in which the user specified a syntax error in vce() and returns an error in such cases.

Putting these parsing details into a subroutine makes the main command much easier to follow. I recommend that you encapsulate details in subroutines.

The ado-subroutine my_vce_parse is local to the ado-command myregress10; the name my_vce_parse is in a namespace local to myregress10, and my_vce_parse can only be executed from within myregress10.

Line 79 uses syntax to store whatever the user specified in the option vce() in the local macro vce. Line 81 puts the number of words in vce into the local macro case. Line 83 causes the ado-subroutine my_vce_error to display an error message and return error code 498 when there are more than two words in vce. (Recall that vce should contain either robust or cluster clustervar.)

Having ruled out the cases with more than two words, line 87 stores what the local macro vce contains in the local macro 0. Line 88 uses syntax to parse what is in the local macro 0. If the user specified vce(robust), or a valid abbreviation thereof, syntax stores “robust” in the local macro robust; otherwise, the local macro robust is empty. If the user specified vce(cluster something), or a valid abbreviation of cluster, syntax stores “cluster” in the local macro cluster; otherwise, the local macro cluster is empty. The option * causes syntax to put any remaining options into the local macro options. In this case, syntax will store the something in the local macro options.

Remember the trick used in lines 87 and 88. Option parsing is frequently made much easier by storing what a local macro contains in the local macro 0 and using syntax to parse it.

When there are two words in the local macro vce, lines 91–100 ensure that the first word is “cluster” and that the second word, stored in the local macro options, is the name of a numeric variable. When all is well, line 100 stores the name of this numeric variable in the local macro clustervar. Lines 95–98 use a subtle construction to display a custom error message. Rather than let confirm display an error message, lines 95–98 use capture and an if condition to display our custom error message. In detail, line 95 uses confirm to confirm that the local macro options contains the name of a numeric variable. capture puts the return code produced by confirm in the scalar _rc. When options contains the name of a numeric variable, confirm produces the return code 0 and capture stores 0 in _rc; otherwise, confirm produces a positive return code, and capture stores this positive return code in _rc.
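The validation rules that my_vce_parse enforces can be sketched in Python. This is an analogy only: Stata's syntax also handles abbreviations, which this sketch ignores, and numeric_vars stands in for what confirm numeric variable checks against the dataset.

```python
# Python analogy of my_vce_parse's validation logic (illustrative only).
def parse_vce(vce, numeric_vars):
    """Return the cluster variable name, or None for vce(robust).
    Raises ValueError for anything my_vce_parse would reject."""
    words = vce.split()
    if len(words) > 2:
        raise ValueError(f"vce({vce}) invalid")
    if len(words) == 2:
        first, varname = words
        # first word must be cluster; second must name a numeric variable
        if first != "cluster" or varname not in numeric_vars:
            raise ValueError(f"vce({vce}) invalid")
        return varname
    if words != ["robust"]:
        raise ValueError(f"vce({vce}) invalid")
    return None
```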

When all is well, line 109 clears whatever was in r(), and line 110 stores the name of the cluster variable in r(clustervar).

Lines 113–118 define the ado-subroutine my_vce_error, which displays a custom error message. Like my_vce_parse, my_vce_error is local to myregress10.ado.

Done and undone

I added an option for a cluster-robust estimator of the VCE, and I made myregress10 accept the modern syntax for either a robust or a cluster-robust estimator of the VCE. In the process, I used subroutines in myregress10.ado to facilitate the parsing, and I discussed some advanced parsing tricks.

myregress10.ado would have been more difficult to read if I had not used subroutines to simplify the main routine.

Although it may seem that I have covered every possible nuance, I have only dealt with a few. Type help syntax for more details about parsing options using the syntax command.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.

Stock, J. H., and M. W. Watson. 2010. Introduction to Econometrics. 3rd ed. Boston, MA: Addison-Wesley.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.

Wooldridge, J. M. 2015. Introductory Econometrics: A Modern Approach. 6th ed. Cincinnati, Ohio: South-Western.