5 things to do at the 2015 Stata Conference in Columbus

The Stata Conference connects you with the best and the brightest of the Stata community, offering a variety of presentations from Stata users and StataCorp experts. This year’s conference will be held July 30-31, 2015, in Columbus, Ohio, and is open to all Stata users wishing to attend.


With the conference just around the corner, we want to suggest a few things to do that will help maximize your experience.

  1. Come early and network.

    Between 8:00 and 8:50 a.m., the smell of fresh coffee will be in the air: a continental breakfast will be served just outside the meeting room. Take this time to grab a bite to eat and get acquainted with the other guests.

    Don’t forget to swing by our registration table and say hello to long-time StataCorp employees Chris Farrar, Gretchen Farrar, and Nathan Bishop. They will hand you a conference packet that includes information on the schedule, abstracts, and more.

  2. Browse our display of Stata Press books.

    Discover which books you want to add to your collection by flipping through the pages of our best-selling books on Stata. Stop by, and learn how Stata Conference attendees receive a 20% discount for all online purchases through October 2, 2015.

  3. Connect with the Stata community.

    The Stata community is full of users from all disciplines, including people you may have met online but would like to meet in person — people such as Stata expert Nick Cox from Statalist and the Stata Journal or StataCorp’s own enthusiastic Director of Econometrics, David Drukker, and Head of Development, Bill Gould.

    Want to start socializing now? Follow @Stata on Twitter and join the conversation. Throughout the conference, we will be live tweeting using the conference hashtag #stata2015. Post tidbits of the presentations you find interesting, and share any pictures you take. If you aren’t on Twitter, look for us on Facebook or LinkedIn.

    Many attendees are well known in their field, and even more have been using Stata for over 10 years. Take a moment to talk to the people around you, and share your story and how you use Stata.

  4. Join us for dinner.

    An optional dinner will be held at Due Amici on Thursday, July 30, at 6:30 p.m. The dinner is a perfect opportunity to interact with presenters and fellow Stata users. Seating is limited, so please register in advance.

  5. Stay for the “Wishes and grumbles” session.

    The conference program concludes with the user-favorite “Wishes and grumbles” session, where users have a chance to share their comments and suggestions directly with developers from StataCorp. Attend this session to hear which new features other users would like to see, or give us some ideas of your own. This session is sure to be lively, especially with feedback regarding our most recent release — Stata 14.

If you haven’t registered yet, head over to our website now for more details.

We look forward to seeing you there!


Bayesian modeling: Beyond Stata’s built-in models

This post was written jointly with Nikolay Balov, Senior Statistician and Software Developer, StataCorp.

A question on Statalist motivated us to write this blog entry.

A user asked if the churdle command (http://www.stata.com/stata14/hurdle-models/) for fitting hurdle models, new in Stata 14, can be combined with the bayesmh command (http://www.stata.com/stata14/bayesian-analysis/) for fitting Bayesian models, also new in Stata 14.


Our initial reaction to this question was ‘No’ or, more precisely, ‘Not easily’ — hurdle models are not among the likelihood models supported by bayesmh. One can write a program to compute the log likelihood of the double hurdle model and use this program with bayesmh (in the spirit of http://www.stata.com/stata14/bayesian-evaluators/), but this may seem like a daunting task if you are not familiar with Stata programming.

And then we realized, why not simply call churdle from the evaluator to compute the log likelihood? All we need is for churdle to evaluate the log likelihood at specific values of model parameters without performing iterations. This can be achieved by specifying churdle's options from() and iterate(0).

Let’s look at an example. We consider a simple hurdle model using a subset of the fitness dataset from [R] churdle:

. webuse fitness
. set seed 17653
. sample 10
. churdle linear hours age, select(commute) ll(0)

Iteration 0:   log likelihood = -2783.3352
Iteration 1:   log likelihood =  -2759.029
Iteration 2:   log likelihood = -2758.9992
Iteration 3:   log likelihood = -2758.9992

Cragg hurdle regression                         Number of obs     =      1,983
                                                LR chi2(1)        =       3.42
                                                Prob > chi2       =     0.0645
Log likelihood = -2758.9992                     Pseudo R2         =     0.0006

       hours |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
hours        |
         age |   .0051263   .0028423     1.80   0.071    -.0004446    .0106971
       _cons |   1.170932   .1238682     9.45   0.000     .9281548    1.413709
selection_ll |
     commute |  -.0655171   .1561046    -0.42   0.675    -.3714765    .2404423
       _cons |   .1421166   .0882658     1.61   0.107    -.0308813    .3151144
lnsigma      |
       _cons |   .1280215     .03453     3.71   0.000      .060344     .195699
      /sigma |   1.136577    .039246                      1.062202    1.216161

Let’s assume for a moment that we already have an evaluator, mychurdle1, that returns the corresponding log-likelihood value. We can fit a Bayesian hurdle model using bayesmh as follows:

. gen byte hours0 = (hours==0) //dependent variable for the selection equation
. set seed 123
. bayesmh (hours age) (hours0 commute),                      ///
        llevaluator(mychurdle1, parameters({lnsig}))         ///
        prior({hours:} {hours0:} {lnsig}, flat)              ///
        saving(sim, replace) dots

(output omitted)

We use a two-equation specification to fit this model. The main regression is specified first, and the selection regression is specified next. The additional parameter, log of the standard deviation associated with the main regression, is specified in llevaluator()'s suboption parameters(). All parameters are assigned flat priors to obtain results similar to churdle. MCMC results are saved in sim.dta.

Here is the actual output from bayesmh:

. bayesmh (hours age) (hours0 commute), llevaluator(mychurdle1, parameters({lns
> ig})) prior({hours:} {hours0:} {lnsig}, flat) saving(sim, replace) dots

Burn-in 2500 aaaaaaaaa1000aaaaaa...2000..... done
Simulation 10000 .........1000.........2000.........3000.........4000.........5
> 000.........6000.........7000.........8000.........9000.........10000 done

Model summary
  hours hours0 ~ mychurdle1(xb_hours,xb_hours0,{lnsig})

       {hours:age _cons} ~ 1 (flat)                                        (1)
  {hours0:commute _cons} ~ 1 (flat)                                        (2)
                 {lnsig} ~ 1 (flat)
(1) Parameters are elements of the linear form xb_hours.
(2) Parameters are elements of the linear form xb_hours0.

Bayesian regression                              MCMC iterations  =     12,500
Random-walk Metropolis-Hastings sampling         Burn-in          =      2,500
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      1,983
                                                 Acceptance rate  =      .2889
                                                 Efficiency:  min =     .05538
                                                              avg =     .06266
Log marginal likelihood = -2772.3953                          max =     .06945

             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
hours        |
         age |  .0050916   .0027972   .000106   .0049923  -.0000372   .0107231
       _cons |  1.167265    .124755   .004889   1.175293   .9125105   1.392021
hours0       |
     commute | -.0621282   .1549908   .006585  -.0613511  -.3623891   .2379805
       _cons |  .1425693   .0863626   .003313   .1430396  -.0254507   .3097677
       lnsig |  .1321532   .0346446   .001472   .1326704   .0663646   .2015249

file sim.dta saved

The results are similar to those produced by churdle, as one would expect with noninformative priors.

If desired, we can use bayesstats summary to obtain the estimate of the standard deviation:

. bayesstats summary (sigma: exp({lnsig}))

Posterior summary statistics                      MCMC sample size =    10,000

       sigma : exp({lnsig})

             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
       sigma |  1.141969   .0396264   .001685   1.141874   1.068616   1.223267

Let’s now talk in more detail about a log-likelihood evaluator. We will consider two evaluators: one using churdle and one directly implementing the log likelihood of the considered hurdle model.


Log-likelihood evaluator using churdle


Here we demonstrate how to write a log-likelihood evaluator that calls an existing Stata estimation command, churdle in our example, to compute the log likelihood.

program mychurdle1
        version 14.0
        args llf
        tempname b
        mat `b' = ($MH_b, $MH_p)
        capture churdle linear $MH_y1 $MH_y1x1 if $MH_touse, ///
                    select($MH_y2x1) ll(0) from(`b') iterate(0)
        if _rc {
                if (_rc==1) {   // handle break key
                        exit _rc
                }
                scalar `llf' = .
        }
        else {
                scalar `llf' = e(ll)
        }
end

The mychurdle1 program returns the log-likelihood value computed by churdle at the current values of model parameters. This program accepts one argument — a temporary scalar to contain the log-likelihood value llf. We stored current values of model parameters (regression coefficients from two equations stored in vector MH_b and the extra parameter, log standard-deviation, stored in vector MH_p) in a temporary matrix b. We specified churdle's options from() and iterate(0) to evaluate the log likelihood at the current parameter values. Finally, we stored the resulting log-likelihood value in llf (or a missing value if the command failed to evaluate the log likelihood).


Log-likelihood evaluator directly computing log likelihood


Here we demonstrate how to write a log-likelihood evaluator that computes the likelihood of the fitted hurdle model directly rather than calling churdle.

program mychurdle2
        version 14.0
        args lnf xb xg lnsig
        tempname sig
        scalar `sig' = exp(`lnsig')
        tempvar lnfj
        qui gen double `lnfj' = normal(`xg')  if $MH_touse
        qui replace `lnfj'    = log(1 - `lnfj') if $MH_y1 <= 0 & $MH_touse
        qui replace `lnfj'    = log(`lnfj') - log(normal(`xb'/`sig'))   ///
                              + log(normalden($MH_y1,`xb',`sig'))       ///
                                if $MH_y1 > 0 & $MH_touse
        summarize `lnfj' if $MH_touse, meanonly
        if r(N) < $MH_n {
            scalar `lnf' = .
        }
        else {
            scalar `lnf' = r(sum)
        }
end

The mychurdle2 program accepts four arguments: a temporary scalar to contain the log-likelihood value lnf, temporary variables xb and xg containing linear predictors from the corresponding main and selection equations evaluated at the current values of model parameters, and temporary scalar lnsig containing the current value of the log standard-deviation parameter. We compute and store the observation-level log likelihood in a temporary variable lnfj. Global MH_y1 contains the name of the dependent variable from the first (main) equation, and global MH_touse marks the estimation sample. If all observation-specific log-likelihood contributions are nonmissing, we store the overall log-likelihood value in lnf; otherwise, we store a missing value.

We fit our model using the same syntax as earlier, except we use mychurdle2 as the program evaluator.

. set seed 123
. bayesmh (hours age) (hours0 commute), llevaluator(mychurdle2, parameters({lns
> ig})) prior({hours:} {hours0:} {lnsig}, flat) saving(sim, replace) dots

Burn-in 2500 aaaaaaaaa1000aaaaaa...2000..... done
Simulation 10000 .........1000.........2000.........3000.........4000.........5
> 000.........6000.........7000.........8000.........9000.........10000 done

Model summary
  hours hours0 ~ mychurdle2(xb_hours,xb_hours0,{lnsig})

       {hours:age _cons} ~ 1 (flat)                                        (1)
  {hours0:commute _cons} ~ 1 (flat)                                        (2)
                 {lnsig} ~ 1 (flat)
(1) Parameters are elements of the linear form xb_hours.
(2) Parameters are elements of the linear form xb_hours0.

Bayesian regression                              MCMC iterations  =     12,500
Random-walk Metropolis-Hastings sampling         Burn-in          =      2,500
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      1,983
                                                 Acceptance rate  =      .2889
                                                 Efficiency:  min =     .05538
                                                              avg =     .06266
Log marginal likelihood = -2772.3953                          max =     .06945

             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
hours        |
         age |  .0050916   .0027972   .000106   .0049923  -.0000372   .0107231
       _cons |  1.167265    .124755   .004889   1.175293   .9125105   1.392021
hours0       |
     commute | -.0621282   .1549908   .006585  -.0613511  -.3623891   .2379805
       _cons |  .1425693   .0863626   .003313   .1430396  -.0254507   .3097677
       lnsig |  .1321532   .0346446   .001472   .1326704   .0663646   .2015249

file sim.dta not found; file saved

We obtain the same results as those obtained using approach 1, and we obtain them much faster.


Final remarks


Approach 1 is very straightforward. It can be applied to any Stata command that returns the log likelihood and allows you to specify parameter values at which this log likelihood must be evaluated. Without too much programming effort, you can use almost any existing Stata maximum likelihood estimation command with bayesmh. A disadvantage of approach 1 is slower execution compared with programming the likelihood directly, as in approach 2. For example, the command using the mychurdle1 evaluator from approach 1 took about 25 minutes to run, whereas the command using the mychurdle2 evaluator from approach 2 took only 20 seconds.
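The same wrapper pattern carries over to other commands. As a rough sketch (not from the original example; the use of poisson and the variable names y, x1, and x2 are purely illustrative), an evaluator wrapping a command whose only model parameters are the regression coefficients could look like this:

program mylleval
        version 14.0
        args llf
        tempname b
        mat `b' = $MH_b                     // current regression coefficients
        // evaluate, but do not maximize, the log likelihood at `b'
        capture poisson $MH_y1 $MH_y1x1 if $MH_touse, from(`b') iterate(0)
        if _rc {
                if (_rc==1) exit _rc        // honor the Break key
                scalar `llf' = .
        }
        else {
                scalar `llf' = e(ll)
        }
end

. set seed 123
. bayesmh y x1 x2, llevaluator(mylleval) prior({y:}, flat)

Because this hypothetical model has no extra parameters, there is no $MH_p to append to the coefficient vector, and llevaluator() needs no parameters() suboption.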

Stata 14 announced, ships

We’ve just announced the release of Stata 14. Stata 14 ships and downloads starting now.

I just posted on Statalist about it. Here’s a copy of what I wrote.

Stata 14 is now available. You heard it here first.

There’s a long tradition that Statalisters hear about Stata’s new releases first. The new forum is celebrating its first birthday, but it is a continuation of the old Statalist, so the tradition continues, but updated for the modern world, where everything happens more quickly. You are hearing about Stata 14 roughly a microsecond before the rest of the world. Traditions are important.

Here’s yet another example of everything happening faster in the modern world. Rather than the announcement preceding shipping by a few weeks as in previous releases, Stata 14 ships and downloads starting now. Or rather, a microsecond from now.

Some things from the past are worth preserving, however, and one is that I get to write about the new release in my own idiosyncratic way. So let me get the marketing stuff out of the way and then I can tell you about a few things that especially interest me and might interest you.


Here’s a partial list of what’s new, a.k.a. the highlights:

  • Unicode
  • More than 2 billion observations (Stata/MP)
  • Bayesian analysis
  • IRT (Item Response Theory)
  • Panel-data survival models
  • Treatment effects
    • Treatment effects for survival models
    • Endogenous treatments
    • Probability weights
    • Balance analysis
  • Multilevel mixed-effects survival models
  • Small-sample inference for multilevel models
  • SEM (structural equation modeling)
    • Survival models
    • Satorra-Bentler scaled chi-squared test
    • Survey data
    • Multilevel weights
  • Power and sample size
    • Survival models
    • Contingency (epidemiological) tables
  • Markov-switching regression models
  • Tests for structural breaks in time-series
  • Fractional outcome regression models
  • Hurdle models
  • Censored Poisson regression
  • Survey support & multilevel weights for multilevel models
  • New random-number generators
  • Estimated marginal means and marginal effects
    • Tables for multiple outcomes and levels
    • Integration over unobserved and latent variables
  • ICD-10
  • Stata in Spanish and in Japanese

The above list is not complete; it lists about 30% of what’s new.

For all the details about Stata 14, including purchase and update information, and links to distributors outside of the US, visit stata.com/stata14.

If you are outside of the US, you can order from your authorized Stata distributor. They will supply codes so that you can access and download from stata.com.


I want to write about three of the new features ‒ Unicode, more than 2 billion observations, and Bayesian analysis.

Unicode is the modern way that computers encode characters such as the letters in what you are now reading. Unicode encodes all the world’s characters, meaning I can write Hello, Здравствуйте, こんにちは, and lots more besides. Well, the forum software is modern and I always could write those words here. Now I can write them in Stata, too.

For those who care, Stata uses Unicode’s UTF-8 encoding.

Anyway, you can use Unicode characters in your data, of course; in your variable labels, of course; and in your value labels, of course. What you might not expect is that you can use Unicode in your variable names, macro names, and everywhere else Stata wants a name or identifier.

Here’s the auto data in Japanese:

Your use of Unicode may not be as extreme as the above. It might be enough just to make tables and graphs labeled in languages other than English. If so, just set the variable labels and value labels. It doesn’t matter whether the variables are named übersetzung and kofferraum or gear_ratio and trunkspace or 変速比 and トランク.

I want to remind English speakers that Unicode includes mathematical symbols. You can use them in titles, axis labels, and the like.

Few good things come without cost. If you have been using Extended ASCII to circumvent Stata’s plain ASCII limitations, those files need to be translated to Unicode if the strings in them are to display correctly in Stata 14. This includes .dta files, do-files, ado-files, help files, and the like. It’s easier to do than you might expect. A new unicode analyze command will tell you whether you have files that need fixing and, if so, the new unicode translate command will fix them for you. It’s almost as easy as typing

. unicode translate *

This command translates your files and that has got to concern you. What if it mistranslates them? What if the power fails? Relax. unicode translate makes backups of the originals, and it keeps the backups until you delete them, which you have to do by typing

. unicode erasebackups, badidea

Yes, the option really is named badidea and it is not optional. Another unicode command can restore the backups.

The difficult part of translating your existing files is not performing the translation, it’s determining which Extended ASCII encoding your files used so that the translation can be performed. We have advice on that in the help files but, even so, some of you will only be able to narrow down the encoding to a few choices. The good news is that it is easy to try each one. You just type

. unicode retranslate *

It won’t take long to figure out which encoding works best.
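Putting the pieces together, a typical translation session might look something like the following sketch; the latin1 encoding is only an assumed example, so substitute whatever encoding your files actually used.

. unicode analyze *                // report which files need translating
. unicode encoding set latin1      // declare the encoding of the old files
. unicode translate *              // translate the files, keeping backups
. unicode erasebackups, badidea    // only once you are certain all is well

If the first guess at the encoding turns out to be wrong, set a different encoding and run unicode retranslate * before erasing the backups.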

Stata/MP now allows you to process datasets containing more than 2.1 billion observations. This sounds exciting, but I suspect it will interest only a few of you. How many of us have datasets with more than 2.1 billion observations? And even if you do, you will need a computer with lots of memory. This feature is useful if you have access to a 512-gigabyte, 1-terabyte, or 1.5-terabyte computer. With smaller computers, you are unlikely to have room for 2.1 billion observations. It’s exciting that such computers are available.

We increased the limit on only Stata/MP because, to exploit the higher limit, you need multiple processors. It’s easy to misjudge how much larger a 2-billion-observation dataset is than a 2-million-observation one. On my everyday 16-gigabyte computer ‒ which is nothing special ‒ I just fit a linear regression with six RHS variables on 2 million observations. It ran in 1.2 seconds. I used Stata/SE, and the 1.2 seconds felt fast. So, if my computer had more memory, how long would it take to fit a model on 2 billion observations? 1,200 seconds, which is to say, 20 minutes! You need Stata/MP. Stata/MP4 will reduce that to 5 minutes. Stata/MP32 will reduce that to 37.5 seconds.

By the way, if you intend to use more than 2 billion observations, be sure to click on help obs_advice that appears in the start-up notes after Stata launches. You will get better performance if you set min_memory and segmentsize to larger values. We tell you what values to set.

There’s quite a good discussion about dealing with more than 2-billion observations at stata.com/stata14/huge-datasets.

After that, it’s statistics, statistics, statistics.

Which new statistics will interest you obviously depends on your field. We’ve gone deeper into a number of fields. Treatment effects for survival models is just one example. Multilevel survival models is another. Markov-switching models is yet another. Well, you can read the list above.

Two of the new statistical features are worth mentioning, however, because they simply weren’t there previously. They are Bayesian analysis and IRT models, which are admittedly two very different things.

IRT is a highlight of the release and, for some of you, it will be the highlight, so I mention it; I’ll just tell you to see stata.com/stata14/irt for more information.

Bayesian analysis is the other highlight as far as I’m concerned, and it will interest a lot of you because it cuts across fields. Many of you are already knowledgeable about this and I can just hear you asking, “Does Stata include …?” So here’s the high-speed summary:

Stata fits continuous-, binary-, ordinal-, and count-outcome models. And linear and nonlinear models. And generalized nonlinear models. Univariate, multivariate, and multiple-equation. It provides 10 likelihood models and 18 prior distributions. It also allows for user-defined likelihoods combined with built-in priors, built-in likelihoods combined with user-defined priors, and a roll-your-own programming approach to calculate the posterior density directly. MCMC methods are provided, including Adaptive Metropolis-Hastings (MH), Adaptive MH with Gibbs updates, and full Gibbs sampling for certain likelihoods and priors.

It’s also easy to use and that’s saying something.
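To give you a flavor of the syntax, here is a generic sketch (using the auto data, not an example from the News) of a Bayesian linear regression of mpg on weight with normal priors on the coefficients and an inverse-gamma prior on the variance:

. sysuse auto
. bayesmh mpg weight, likelihood(normal({var}))      ///
        prior({mpg:}, normal(0, 100))                ///
        prior({var},  igamma(0.01, 0.01))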

There’s a great example of the new Bayes features in The Stata News. I mention this because including the example there is nearly a proof of ease of use. The example looks at the number of disasters in the British coal mining industry. There was a fairly abrupt decrease in the rate sometime between 1887 and 1895, which you can see if you eyeball a graph. In the example, we model the number of disasters before the change point as one Poisson process; the number after, as another Poisson process; and then we fit a model of the two Poisson parameters and the date of change. For the change point, the example uses a uniform prior on [1851, 1962] ‒ the range of the data ‒ and obtains a posterior mean estimate of 1890.4 and a 95% credible interval of [1886, 1896], which agrees with our visual assessment.
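In other words, writing \(D_t\) for the number of disasters in year \(t\) and \(k\) for the change point, the model is

\[D_t \sim \textrm{Poisson}(\mu_1) \textrm{ if } t < k, \qquad D_t \sim \textrm{Poisson}(\mu_2) \textrm{ if } t \geq k, \qquad k \sim \textrm{Uniform}(1851, 1962)\]

with priors also placed on the two Poisson means \(\mu_1\) and \(\mu_2\).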

I hope something I’ve written above interests you. Visit stata.com/stata14 for more information.

‒ Bill

Using gmm to solve two-step estimation problems

Two-step estimation problems can be solved using the gmm command.

Two-step estimators use first-step estimates to estimate the parameters of interest in a second step. The two-step-estimation problem arises because the second step ignores the estimation error in the first step: the point estimates are consistent, but the standard errors reported for the second step are not. For instance, inverse-probability-weighted (IPW) estimators are a weighted average in which the weights are estimated in the first step.

One solution is to convert the two-step estimator into a one-step estimator. My favorite way to do this conversion is to stack the equations solved by each of the two estimators and solve them jointly. This one-step approach produces consistent point estimates and consistent standard errors. There is no two-step problem because all the computations are performed jointly. Newey (1984) derives and justifies this approach.

I’m going to illustrate this approach with the IPW example, but it can be used with any two-step problem as long as each step is continuous.

IPW estimators are frequently used to estimate the mean that would be observed if everyone in a population received a specified treatment, a quantity known as a potential-outcome mean (POM). A difference of POMs is called the average treatment effect (ATE). Aside from all that, it is the mechanics of the two-step IPW estimator that interest me here. IPW estimators are weighted averages of the outcome, and the weights are estimated in a first step. The weights used in the second step are the inverse of the estimated probability of treatment.
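To preview where the one-step version is heading, write \(t_i\) for the treatment indicator, \(y_i\) for the outcome, \(\mathbf{x}_i\) for the covariates of the treatment model, \(\boldsymbol\gamma\) for the probit coefficients, and \(\mu\) for the POM of interest. Stacking the probit score equations with the weighted-mean condition gives, roughly, the joint moment conditions

\[E\left[\frac{\phi(\mathbf{x}_i'\boldsymbol\gamma)\,\{t_i - \Phi(\mathbf{x}_i'\boldsymbol\gamma)\}}{\Phi(\mathbf{x}_i'\boldsymbol\gamma)\,\{1 - \Phi(\mathbf{x}_i'\boldsymbol\gamma)\}}\,\mathbf{x}_i\right] = \mathbf{0} \qquad \textrm{and} \qquad E\left[\frac{t_i\,(y_i - \mu)}{\Phi(\mathbf{x}_i'\boldsymbol\gamma)}\right] = 0,\]

which gmm can solve jointly, yielding standard errors that account for both steps.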

Let’s imagine we are analyzing an extract of the birthweight data used by Cattaneo (2010). In this dataset, bweight is the baby’s weight at birth, mbsmoke is 1 if the mother smoked while pregnant (and 0 otherwise), mmarried is 1 if the mother is married, and prenatal1 is 1 if the mother had a prenatal visit in the first trimester.

Let’s imagine we want to estimate the mean when all pregnant women smoked, which is to say, the POM for smoking. If we were doing substantive research, we would also estimate the POM when no pregnant women smoked. The difference between these estimated POMs would then estimate the ATE of smoking.

In the IPW estimator, we begin by estimating the probability weights for smoking. We fit a probit model of mbsmoke as a function of mmarried and prenatal1.

. use cattaneo2
(Excerpt from Cattaneo (2010) Journal of Econometrics 155: 138-154)

. probit mbsmoke mmarried prenatal1, vce(robust)

Iteration 0:   log pseudolikelihood = -2230.7484
Iteration 1:   log pseudolikelihood = -2102.6994
Iteration 2:   log pseudolikelihood = -2102.1437
Iteration 3:   log pseudolikelihood = -2102.1436

Probit regression                                 Number of obs   =       4642
                                                  Wald chi2(2)    =     259.42
                                                  Prob > chi2     =     0.0000
Log pseudolikelihood = -2102.1436                 Pseudo R2       =     0.0577

             |               Robust
     mbsmoke |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    mmarried |  -.6365472   .0478037   -13.32   0.000    -.7302407   -.5428537
   prenatal1 |  -.2144569   .0547583    -3.92   0.000    -.3217811   -.1071327
       _cons |  -.3226297   .0471906    -6.84   0.000    -.4151215   -.2301379

The results indicate that both mmarried and prenatal1 significantly predict whether the mother smoked while pregnant.

We want to calculate the inverse probabilities. We begin by getting the probabilities:

. predict double pr, pr

Now, we can obtain the inverse probabilities by typing

. generate double ipw = (mbsmoke==1)/pr

We can now perform the second step: calculate the mean for smokers by using the IPWs.

. mean bweight [pw=ipw]

Mean estimation                     Number of obs    =     864

             |       Mean   Std. Err.     [95% Conf. Interval]
     bweight |   3162.868   21.71397      3120.249    3205.486

The point estimate reported by mean is consistent; the reported standard error Read more…

Putting the Stata Manuals on your iPad

You can install the Stata manuals on your iPad. Here’s how: install GoodReader and copy the manuals from your computer to your iPad. It takes a few minutes and will cost you about $7 to purchase the app.

Once installed, launch GoodReader, press the bookmark icon at the bottom of the screen, and GoodReader shows you the list of the manuals.


Well, that’s only a partial list. We’d have to scroll to see them all.

If you tap on a manual, it opens,


You can swipe to go forward,

All the links are live. If you tap on graph intro, the reader jumps to the manual entry,


Here are some formulas:


To illustrate formulas, I jumped to mi estimate in the [MI] manual. I can jump anywhere because I have all 21 manuals—all 11,000-plus pages—installed on my iPad.

You can have them installed on your iPad, too.

Here’s how.

Step 1. Install GoodReader on your iPad

You must purchase GoodReader 4 from the App Store. No other PDF reader will do. What makes GoodReader a good reader for the Stata manuals is that it can handle links across manuals. As of Read more…

Using gsem to combine estimation results

gsem is a very flexible command that allows us to fit very sophisticated models. However, it is also useful in situations that involve simple models.

For example, when we want to compare parameters across two or more models, we usually use suest, which combines the estimation results under one parameter vector and creates a simultaneous covariance matrix of the robust type. This covariance estimate is described in the Methods and formulas of [R] suest as the robust variance from a “stacked model”. Actually, gsem can estimate these kinds of “stacked models”, even when the estimation samples are not the same and possibly overlap. By using the option vce(robust), we can replicate the results from suest whenever the models can be fit with gsem. In addition, gsem allows us to combine results from some estimation commands that are not supported by suest, such as models that include random effects.


Example: Comparing parameters from two models


Let’s use the childweight dataset, described in [ME] mixed, and consider the following models, where the weights of boys and girls are modeled using age and age squared:

. webuse childweight, clear
(Weight data on Asian children)

. regress  weight age c.age#c.age if girl == 0, noheader
      weight |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
         age |   7.985022   .6343855    12.59   0.000     6.725942    9.244101
 c.age#c.age |   -1.74346   .2374504    -7.34   0.000    -2.214733   -1.272187
       _cons |   3.684363   .3217223    11.45   0.000     3.045833    4.322893

. regress  weight age c.age#c.age if girl == 1, noheader
      weight |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
         age |   7.008066   .5164687    13.57   0.000     5.982746    8.033386
 c.age#c.age |  -1.450582   .1930318    -7.51   0.000    -1.833798   -1.067365
       _cons |   3.480933   .2616616    13.30   0.000     2.961469    4.000397

To test whether birthweights are the same for the two groups, we need to test whether the intercepts in the two regressions are the same. Using suest, we would proceed as follows:

. quietly regress weight age c.age#c.age if girl == 0, noheader

. estimates store boys

. quietly regress weight age c.age#c.age if girl == 1, noheader

. estimates store girls

. suest boys girls

Simultaneous results for boys, girls

                                                  Number of obs   =        198

             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
boys_mean    |
         age |   7.985022   .4678417    17.07   0.000     7.068069    8.901975
 c.age#c.age |   -1.74346   .2034352    -8.57   0.000    -2.142186   -1.344734
       _cons |   3.684363   .1719028    21.43   0.000      3.34744    4.021286
boys_lnvar   |
       _cons |   .4770289   .1870822     2.55   0.011     .1103546    .8437032
girls_mean   |
         age |   7.008066   .4166916    16.82   0.000     6.191365    7.824766
 c.age#c.age |  -1.450582   .1695722    -8.55   0.000    -1.782937   -1.118226
       _cons |   3.480933   .1556014    22.37   0.000      3.17596    3.785906
girls_lnvar  |
       _cons |   .0097127   .1351769     0.07   0.943    -.2552292    .2746545

Invoking an estimation command with the option coeflegend will give us a legend we can use to refer to the parameters when we use postestimation commands like test.

. suest, coeflegend

Simultaneous results for boys, girls

                                                  Number of obs   =        198

             |      Coef.  Legend
boys_mean    |
         age |   7.985022  _b[boys_mean:age]
 c.age#c.age |   -1.74346  _b[boys_mean:c.age#c.age]
       _cons |   3.684363  _b[boys_mean:_cons]
boys_lnvar   |
       _cons |   .4770289  _b[boys_lnvar:_cons]
girls_mean   |
         age |   7.008066  _b[girls_mean:age]
 c.age#c.age |  -1.450582  _b[girls_mean:c.age#c.age]
       _cons |   3.480933  _b[girls_mean:_cons]
girls_lnvar  |
       _cons |   .0097127  _b[girls_lnvar:_cons]

. test  _b[boys_mean:_cons] = _b[girls_mean:_cons]

 ( 1)  [boys_mean]_cons - [girls_mean]_cons = 0

           chi2(  1) =    0.77
         Prob > chi2 =    0.3803

We find no evidence that the intercepts are different.

Now, let’s replicate those results Read more…
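For reference, here is a sketch of one way the stacked model can be set up in gsem; it relies on creating girl-specific copies of the dependent variable so that each equation is fit on its own subsample, and the commands below are an outline rather than output from this post.

. generate weight0 = weight if girl == 0
. generate weight1 = weight if girl == 1

. gsem (weight0 <- age c.age#c.age) (weight1 <- age c.age#c.age), vce(robust)

. test _b[weight0:_cons] = _b[weight1:_cons]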

How to simulate multilevel/longitudinal data

I was recently talking with my friend Rebecca about simulating multilevel data, and she asked me if I would show her some examples. It occurred to me that many of you might also like to see some examples, so I decided to post them to the Stata Blog.




We simulate data all the time at StataCorp and for a variety of reasons.

One reason is that real datasets that include the features we would like are often difficult to find. We prefer to use real datasets in the manual examples, but sometimes that isn’t feasible and so we create simulated datasets.

We also simulate data to check the coverage probabilities of new estimators in Stata. Sometimes the formulae published in books and papers contain typographical errors. Sometimes the asymptotic properties of estimators don’t hold under certain conditions. And every once in a while, we make coding mistakes. We run simulations during development to verify that a 95% confidence interval really is a 95% confidence interval.

Simulated data can also come in handy for presentations, teaching purposes, and calculating statistical power using simulations for complex study designs.

And, simulating data is just plain fun once you get the hang of it.

Some of you will recall Vince Wiggins’s blog entry from 2011 entitled “Multilevel random effects in xtmixed and sem — the long and wide of it” in which he simulated a three-level dataset. I’m going to elaborate on how Vince simulated multilevel data, and then I’ll show you some useful variations. Specifically, I’m going to talk about:

  1. How to simulate single-level data
  2. How to simulate two- and three-level data
  3. How to simulate three-level data with covariates
  4. How to simulate longitudinal data with random slopes
  5. How to simulate longitudinal data with structured errors


How to simulate single-level data


Let’s begin by simulating a trivially simple, single-level dataset that has the form

\[y_i = 70 + e_i\]

We will assume that e is normally distributed with mean zero and variance \(\sigma^2\).

We want to simulate 500 observations, so let’s begin by clearing Stata’s memory and setting the number of observations to 500.

. clear 
. set obs 500

Next, let’s create a variable named e that contains pseudorandom normally distributed data with mean zero and standard deviation 5:

. generate e = rnormal(0,5)

The variable e is our error term, so we can create an outcome variable y by typing

. generate y = 70 + e

. list y e in 1/5

     |        y           e |
  1. | 78.83927     8.83927 |
  2. | 69.97774   -.0222647 |
  3. | 69.80065   -.1993514 |
  4. | 68.11398    -1.88602 |
  5. | 63.08952   -6.910483 |

We can fit a linear regression for the variable y to determine whether our parameter estimates are reasonably close to the parameters we specified when we simulated our dataset:

. regress y

      Source |       SS       df       MS              Number of obs =     500
-------------+------------------------------           F(  0,   499) =    0.00
       Model |           0     0           .           Prob > F      =       .
    Residual |  12188.8118   499  24.4264766           R-squared     =  0.0000
-------------+------------------------------           Adj R-squared =  0.0000
       Total |  12188.8118   499  24.4264766           Root MSE      =  4.9423

           y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
       _cons |   69.89768    .221027   316.24   0.000     69.46342    70.33194

The estimate of _cons is 69.9, which is very close to 70, and the Root MSE of 4.9 is equally close to the error’s standard deviation of 5. The parameter estimates will not be exactly equal to the underlying parameters we specified when we created the data because we introduced randomness with the rnormal() function.

This simple example is just to get us started before we work with multilevel data. For familiarity, let’s fit the same model with the mixed command that we will be using later:

. mixed y, stddev

Mixed-effects ML regression                     Number of obs      =       500

                                                Wald chi2(0)       =         .
Log likelihood = -1507.8857                     Prob > chi2        =         .

           y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
       _cons |   69.89768   .2208059   316.56   0.000     69.46491    70.33045

  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
                sd(Residual) |    4.93737   .1561334      4.640645    5.253068

The output is organized with the parameter estimates for the fixed part in the top table and the estimated standard deviations for the random effects in the bottom table. Just as previously, the estimate of _cons is 69.9, and the estimate of the standard deviation of the residuals is 4.9.

Okay. That really was trivial, wasn’t it? Simulating two- and three-level data is almost as easy.


How to simulate two- and three-level data


I posted a blog entry last year titled “Multilevel linear models in Stata, part 1: Components of variance“. In that posting, I showed a diagram for a residual of a three-level model.

The equation for the variance-components model I fit had the form

\[y_{ijk} = \mu + u_{i..} + u_{ij.} + e_{ijk}\]

This model had three residuals, whereas the one-level model we just fit above had only one.

This time, let’s start with a two-level model. Let’s simulate Read more…
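A two-level dataset follows the same pattern as the single-level example, with one extra random term. Here is a minimal sketch with made-up values (10 groups, 20 observations per group, a group-level standard deviation of 3, and the same residual standard deviation of 5 as before):

. clear
. set seed 12345
. set obs 10                         // 10 level-two groups
. generate group = _n
. generate u_i = rnormal(0, 3)       // group-level random intercept
. expand 20                          // 20 observations per group
. generate e_ij = rnormal(0, 5)      // observation-level error
. generate y = 70 + u_i + e_ij

. mixed y || group:, stddev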

Using resampling methods to detect influential points

As stated in the documentation for jackknife, an often forgotten utility for this command is the detection of overly influential observations.

Some commands, like logit or stcox, come with their own set of prediction tools to detect influential points. However, these kinds of predictions can be computed for virtually any regression command. In particular, we will see that dfbeta statistics can be easily computed for any command that accepts the jackknife prefix. dfbeta statistics allow us to visualize how influential some observations are compared with the rest, with respect to a specific parameter.

We will also compute Cook’s likelihood displacement, which is an overall measure of influence, and it can also be compared with a specific threshold.
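For a model with maximized log likelihood \(\ell(\hat\beta)\), the likelihood displacement for the \(i\)th observation can be written as

\[LD_i = 2\left\{\ell(\hat\beta) - \ell(\hat\beta_{(i)})\right\}\]

where \(\hat\beta_{(i)}\) is the estimate obtained with the \(i\)th observation suppressed.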


Using jackknife to compute dfbeta


The main task of jackknife is to fit the model while suppressing one observation at a time, which allows us to see how much results change when each observation is suppressed; in other words, it allows us to see how much each observation influences the results. A very intuitive measure of influence is dfbeta, which is the amount that a particular parameter changes when an observation is suppressed. There will be one dfbeta variable for each parameter. If \(\hat\beta\) is the estimate for parameter \(\beta\) obtained from the full data and \( \hat\beta_{(i)} \) is the corresponding estimate obtained when the \(i\)th observation is suppressed, then the \(i\)th element of variable dfbeta is obtained as

\[dfbeta = \hat\beta - \hat\beta_{(i)}\]

Parameters \(\hat\beta\) are saved by the estimation commands in matrix e(b) and also can be obtained using the _b notation, as we will show below. The leave-one-out values \(\hat\beta_{(i)}\) can be saved in a new file by using the option saving() with jackknife. With these two elements, we can compute the dfbeta values for each variable.

Let’s see an example with the probit command.

. sysuse auto, clear
(1978 Automobile Data)

. *preserve original dataset
. preserve

. *generate a variable with the original observation number
. gen obs =_n

. probit foreign mpg weight

Iteration 0:   log likelihood =  -45.03321
Iteration 1:   log likelihood = -27.914626
Iteration 2:   log likelihood = -26.858074
Iteration 3:   log likelihood = -26.844197
Iteration 4:   log likelihood = -26.844189
Iteration 5:   log likelihood = -26.844189

Probit regression                                 Number of obs   =         74
                                                  LR chi2(2)      =      36.38
                                                  Prob > chi2     =     0.0000
Log likelihood = -26.844189                       Pseudo R2       =     0.4039

     foreign |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
         mpg |  -.1039503   .0515689    -2.02   0.044    -.2050235   -.0028772
      weight |  -.0023355   .0005661    -4.13   0.000     -.003445   -.0012261
       _cons |   8.275464   2.554142     3.24   0.001     3.269437    13.28149

. *keep the estimation sample so each observation will be matched
. *with the corresponding replication
. keep if e(sample)
(0 observations deleted)

. *use jackknife to generate the replications, and save the values in
. *file b_replic
. jackknife, saving(b_replic, replace):  probit foreign mpg weight
(running probit on estimation sample)

Jackknife replications (74)
----+--- 1 ---+--- 2 ---+--- 3 ---+--- 4 ---+--- 5
..................................................    50

Probit regression                               Number of obs      =        74
                                                Replications       =        74
                                                F(   2,     73)    =     10.36
                                                Prob > F           =    0.0001
Log likelihood = -26.844189                     Pseudo R2          =    0.4039

             |              Jackknife
     foreign |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
         mpg |  -.1039503   .0831194    -1.25   0.215     -.269607    .0617063
      weight |  -.0023355   .0006619    -3.53   0.001    -.0036547   -.0010164
       _cons |   8.275464   3.506085     2.36   0.021     1.287847    15.26308

. *verify that all the replications were successful
. assert e(N_misreps) ==0

. merge 1:1 _n using b_replic

    Result                           # of obs.
    not matched                             0
    matched                                74  (_merge==3)

. *see how values from replications are stored
. describe, fullnames

Contains data from .../auto.dta
  obs:            74                          1978 Automobile Data
 vars:            17                          13 Apr 2013 17:45
 size:         4,440                          (_dta has notes)
              storage   display    value
variable name   type    format     label      variable label
make            str18   %-18s                 Make and Model
price           int     %8.0gc                Price
mpg             int     %8.0g                 Mileage (mpg)
rep78           int     %8.0g                 Repair Record 1978
headroom        float   %6.1f                 Headroom (in.)
trunk           int     %8.0g                 Trunk space (cu. ft.)
weight          int     %8.0gc                Weight (lbs.)
length          int     %8.0g                 Length (in.)
turn            int     %8.0g                 Turn Circle (ft.)
displacement    int     %8.0g                 Displacement (cu. in.)
gear_ratio      float   %6.2f                 Gear Ratio
foreign         byte    %8.0g      origin     Car type
obs             float   %9.0g
foreign_b_mpg   float   %9.0g                 [foreign]_b[mpg]
foreign_b_weight
                float   %9.0g                 [foreign]_b[weight]
foreign_b_cons  float   %9.0g                 [foreign]_b[_cons]
_merge          byte    %23.0g     _merge
Sorted by:
     Note:  dataset has changed since last saved

. *compute the dfbeta for each covariate
. foreach var in mpg weight {
  2.  gen dfbeta_`var' = (_b[`var'] -foreign_b_`var')
  3. }

. gen dfbeta_cons = (_b[_cons] - foreign_b_cons)

. label var obs "observation number"
. label var dfbeta_mpg "dfbeta for mpg"
. label var dfbeta_weight "dfbeta for weight"
. label var dfbeta_cons "dfbeta for the constant"

. *plot dfbeta values for variable mpg
. scatter dfbeta_mpg obs, mlabel(obs) title("dfbeta values for variable mpg")

. *restore original dataset
. restore


Based on the impact on the Read more…

How to create animated graphics using Stata


Today I want to show you how to create animated graphics using Stata. It’s easier than you might expect and you can use animated graphics to illustrate concepts that would be challenging to illustrate with static graphs. In addition to Stata, you will need a video editing program but don’t be concerned if you don’t have one. At the 2012 UK Stata User Group Meeting Robert Grant demonstrated how to create animated graphics from within Stata using a free software program called FFmpeg. I will show you how I create my animated graphs using Camtasia and how Robert creates his using FFmpeg.

I recently recorded a video for the Stata YouTube channel called “Power and sample size calculations in Stata: A conceptual introduction”. I wanted to illustrate two concepts: (1) that statistical power increases as sample size increases and (2) that it increases as effect size increases. Both of these concepts can be illustrated with a static graph along with the explanation “imagine that …”. Creating animated graphs allowed me to skip the explanation and just show what I meant.



Creating the graphs

Videos are illusions. All videos — from Charles-Émile Reynaud’s 1877 praxinoscope to modern Blu-ray movies — are created by displaying a series of ordered still images for a fraction of a second each. Our brains perceive this series of still images as motion.

To create the illusion of motion with graphs, Read more…
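One way to produce such a series in Stata (a sketch only; the normal-density frames and file names here are invented, not taken from the original video) is a loop that redraws the graph and exports one numbered PNG per frame:

forvalues i = 1/30 {
    local mu = `i'/10
    twoway (function y = normalden(x, 0, 1),    range(-4 4))     ///
           (function y = normalden(x, `mu', 1), range(-4 4)),    ///
           legend(off) title("Frame `i'")
    quietly graph export frame`i'.png, width(640) replace
}

FFmpeg, or a video editor such as Camtasia, can then stitch the frames together, for example with something like ffmpeg -i frame%d.png movie.mp4.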


Retaining an Excel cell’s format when using putexcel

In a previous blog entry, I talked about the new Stata 13 command putexcel and how we could use putexcel with a Stata command’s stored results to create tables in an Excel file.

After the entry was posted, a few users pointed out two features they wanted added to putexcel:

  1. Retain a cell’s format after writing numeric data to it.
  2. Allow putexcel to format a cell.

In Stata 13.1, we added the new option keepcellformat to putexcel. This option retains a cell’s format after writing numeric data to it. keepcellformat is useful for people who want to automate the updating of a report or paper.

To review, the basic syntax of putexcel is as follows:

putexcel excel_cell=(expression) … using filename[, options]

If you are working with matrices, the syntax is

putexcel excel_cell=matrix(expression) … using filename[, options]

In the previous blog post, we exported a simple table created by the correlate command by using the commands below.

. sysuse auto
(1978 Automobile Data)

. correlate foreign mpg

             |  foreign      mpg
     foreign |   1.0000
         mpg |   0.3934   1.0000

. putexcel A1=matrix(r(C), names) using corr

These commands created the file corr.xlsx, which contained the table below in the first worksheet.


As you can see, this table is not formatted. So, I formatted the table by hand in Excel so that the correlations Read more…
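Once the cells have been formatted, the point of keepcellformat is that updated numbers can be written into the same cells without destroying that formatting. A sketch of the re-export (assuming the hand-formatted corr.xlsx from above) might look like:

. correlate foreign mpg

. putexcel A1=matrix(r(C), names) using corr, modify keepcellformat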
