Multilevel linear models in Stata, part 2: Longitudinal data

In my last posting, I introduced you to the concepts of hierarchical or “multilevel” data. In today’s post, I’d like to show you how to use multilevel modeling techniques to analyze longitudinal data with Stata’s xtmixed command.

Last time, we noticed that our data had two features. First, the means within each level of the hierarchy differed from each other, and we incorporated that into our analysis by fitting a “variance component” model using Stata’s xtmixed command.

The second feature we noticed is that the repeated measurements of GSP showed an upward trend. We’ll pick up where we left off last time and again stick to the concepts; you can refer to the references at the end to learn more about the details.

The videos

Stata has a very friendly dialog box that can assist you in building multilevel models. If you would like a brief introduction using the GUI, you can watch a demonstration on Stata’s YouTube Channel:

Introduction to multilevel linear models in Stata, part 2: Longitudinal data

Longitudinal data

I’m often asked by beginning data analysts, “What’s the difference between longitudinal data and time-series data? Aren’t they the same thing?”

The confusion is understandable — both types of data involve some measurement of time. But the answer is no, they are not the same thing.

Univariate time series data typically arise from the collection of many data points over time from a single source, such as from a person, country, financial instrument, etc.

Longitudinal data typically arise from collecting a few observations over time from many sources, such as a few blood pressure measurements from many people.

There are some multivariate time series that blur this distinction, but a rule of thumb for distinguishing between the two is that time series have more repeated observations than subjects, while longitudinal data have more subjects than repeated observations.

Because our GSP data from last time involve 17 measurements from 48 states (more sources than measurements), we will treat them as longitudinal data.

GSP Data: http://www.stata-press.com/data/r12/productivity.dta
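If you would like to confirm that structure for yourself, you can load the data and describe the panel layout. (xtmixed does not require the data to be xtset; this is just a quick check, and xtdescribe should report 48 states, each observed in all 17 years.)

. use http://www.stata-press.com/data/r12/productivity.dta, clear
. xtset state year
. xtdescribe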

Random intercept models

As I mentioned last time, repeated observations on a group of individuals can be conceptualized as multilevel data and modeled just as any other multilevel data. We left off last time with a variance component model for GSP (Gross State Product, logged) and noted that our model assumed a constant GSP over time while the data showed a clear upward trend.

Graph3

If we consider a single observation and think about our model, nothing in the fixed or random part of the model is a function of time.

Slide15

Let’s begin by adding the variable year to the fixed part of our model.

Slide16

As we expected, our grand mean has become a linear regression which more accurately reflects the change over time in GSP. What might be unexpected is that each state’s and region’s mean has changed as well and now has the same slope as the regression line. This is because none of the random components of our model are a function of time. Let’s fit this model with the xtmixed command:

. xtmixed gsp year, || region: || state:

------------------------------------------------------------------------------
         gsp |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        year |   .0274903   .0005247    52.39   0.000     .0264618    .0285188
       _cons |  -43.71617   1.067718   -40.94   0.000    -45.80886   -41.62348
------------------------------------------------------------------------------

------------------------------------------------------------------------------
  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
region: Identity             |
                   sd(_cons) |   .6615238   .2038949      .3615664    1.210327
-----------------------------+------------------------------------------------
state: Identity              |
                   sd(_cons) |   .7805107   .0885788      .6248525    .9749452
-----------------------------+------------------------------------------------
                sd(Residual) |   .0734343   .0018737      .0698522    .0772001
------------------------------------------------------------------------------

The fixed part of our model now displays an estimate of the intercept (_cons = -43.7) and the slope (year = 0.027). Let’s graph the model for Region 7 and see if it fits the data better than the variance component model.

predict GrandMean, xb
label var GrandMean "GrandMean"
predict RegionEffect, reffects level(region)
predict StateEffect, reffects level(state)
gen RegionMean = GrandMean + RegionEffect
gen StateMean = GrandMean + RegionEffect + StateEffect

twoway  (line GrandMean year, lcolor(black) lwidth(thick))      ///
        (line RegionMean year, lcolor(blue) lwidth(medthick))   ///
        (line StateMean year, lcolor(green) connect(ascending)) ///
        (scatter gsp year, mcolor(red) msize(medsmall))         ///
        if region ==7,                                          ///
        ytitle(log(Gross State Product), margin(medsmall))      ///
        legend(cols(4) size(small))                             ///
        title("Multilevel Model of GSP for Region 7", size(medsmall))

Graph4

That looks like a much better fit than our variance-components model from last time. Perhaps I should leave well enough alone, but I can’t help noticing that the slopes of the green lines for each state don’t fit as well as they could. The top green line fits nicely, but the second from the top looks like it slopes upward more than is necessary. That’s the best fit we can achieve if the regression lines are forced to be parallel to each other. But what if the lines were not forced to be parallel? What if we could fit a “mini-regression model” for each state within the context of our overall multilevel model? Well, good news — we can!

Random slope models

By introducing the variable year to the fixed part of the model, we turned our grand mean into a regression line. Next I’d like to incorporate the variable year into the random part of the model. By introducing a fourth random component that is a function of time, I am effectively estimating a separate regression line within each state.

Slide19

Notice that the size of the new, brown deviation u1ij. is a function of time. If the observation were one year to the left, u1ij. would be smaller, and if the observation were one year to the right, u1ij. would be larger.

It is common to “center” the time variable before fitting these kinds of models. Explaining why is for another day. The quick answer is that, at some point during the fitting of the model, Stata will have to compute the equivalent of the inverse of the square of year. For the year 1986, this turns out to be 2.535e-07. That’s a fairly small number, and if we multiply it by another small number…well, you get the idea. By centering year (e.g., cyear = year - 1978), we get a more reasonable number for 1986 (about 0.016). (Hint: If you have problems with your model converging and you have large values for time, try centering them. It won’t always help, but it might.)
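If you would like to see the magnitudes involved, you can compute them directly from the Command window (a quick arithmetic check, not part of the model fitting):

. display 1/(1986^2)           // roughly 2.5e-07
. display 1/((1986-1978)^2)    // 0.015625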

So let’s center our year variable by subtracting 1978 and fit a model that includes a random slope.

gen cyear = year - 1978
xtmixed gsp cyear, || region: || state: cyear, cov(indep)

Slide21

I’ve color-coded the output so that we can match each part of the output back to the model and the graph. The fixed part of the model appears in the top table, and it looks like any other simple linear regression model. The random part of the model is definitely more complicated. If you get lost, look back at the graphic of the deviations and remind yourself that we have simply partitioned the deviation of each observation into four components. If we did this for every observation, the standard deviations in our output would simply summarize the variability of those deviations.
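One small tip before we move on: if you find variances easier to think about than standard deviations, you can redisplay the random-effects parameters as variances by replaying the results with the variance option (no refitting required):

. xtmixed, variance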

Let’s look at a graph of our new “random slope” model for Region 7 and see how well it fits our data.

predict GrandMean, xb
label var GrandMean "GrandMean"
predict RegionEffect, reffects level(region)
predict StateEffect_year StateEffect_cons, reffects level(state)

gen RegionMean = GrandMean + RegionEffect
gen StateMean_cons = GrandMean + RegionEffect + StateEffect_cons
gen StateMean_year = GrandMean + RegionEffect + StateEffect_cons + ///
                     (cyear*StateEffect_year)

twoway  (line GrandMean cyear, lcolor(black) lwidth(thick))             ///
        (line RegionMean cyear, lcolor(blue) lwidth(medthick))          ///
        (line StateMean_cons cyear, lcolor(green) connect(ascending))   ///
        (line StateMean_year cyear, lcolor(brown) connect(ascending))   ///
        (scatter gsp cyear, mcolor(red) msize(medsmall))                ///
        if region ==7,                                                  ///
        ytitle(log(Gross State Product), margin(medsmall))              ///
        legend(cols(3) size(small))                                     ///
        title("Multilevel Model of GSP for Region 7", size(medsmall))

Graph6

The top brown line fits the data slightly better, but the brown line below it (second from the top) is a much better fit. Mission accomplished!

Where do we go from here?

I hope I have been able to convince you that multilevel modeling is easy using Stata’s xtmixed command and that this is a tool that you will want to add to your kit. I would love to say something like “And that’s all there is to it. Go forth and build models!”, but I would be remiss if I didn’t point out that I have glossed over many critical topics.

In our GSP example, we would still like to consider the impact of other independent variables. I haven’t mentioned the choice of estimation method (ML or REML in the case of xtmixed). I’ve assessed the fit of our models by looking at graphs, an approach that is useful but incomplete. We haven’t thought about hypothesis testing. Oh, and all the usual residual diagnostics for linear regression, such as checking for outliers, influential observations, heteroskedasticity, and normality, still apply…times four! But now that you understand the concepts and some of the mechanics, it shouldn’t be difficult to fill in the details. If you’d like to learn more, check out the links below.
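To give you a flavor of just two of those topics, here is a sketch of how you might refit the random-slope model by REML and use a likelihood-ratio test to ask whether the random slope on cyear is needed. (This is only a sketch; keep in mind that the LR test of a variance component is conservative because the null value sits on a boundary of the parameter space.)

. xtmixed gsp cyear, || region: || state: cyear, cov(indep) reml    // random slope, fit by REML
. estimates store slope
. xtmixed gsp cyear, || region: || state:, reml                     // random intercepts only
. estimates store noslope
. lrtest slope noslope                                              // is the random slope needed?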

I hope this was helpful…thanks for stopping by.

For more information

If you’d like to learn more about modeling multilevel and longitudinal data, check out

Multilevel and Longitudinal Modeling Using Stata, Third Edition
Volume I: Continuous Responses
Volume II: Categorical Responses, Counts, and Survival
by Sophia Rabe-Hesketh and Anders Skrondal

or sign up for our popular public training course Multilevel/Mixed Models Using Stata.

Multilevel linear models in Stata, part 1: Components of variance

In the last 15-20 years multilevel modeling has evolved from a specialty area of statistical research into a standard analytical tool used by many applied researchers.

Stata has a lot of multilevel modeling capabilities.

I want to show you how easy it is to fit multilevel models in Stata. Along the way, we’ll unavoidably introduce some of the jargon of multilevel modeling.

I’m going to focus on concepts and ignore many of the details that would be part of a formal data analysis. I’ll give you some suggestions for learning more at the end of the post.

The videos

Stata has a friendly dialog box that can assist you in building multilevel models. If you would like a brief introduction using the GUI, you can watch a demonstration on Stata’s YouTube Channel:

Introduction to multilevel linear models in Stata, part 1: The xtmixed command

Multilevel data

Multilevel data are characterized by a hierarchical structure. A classic example is children nested within classrooms and classrooms nested within schools. The test scores of students within the same classroom may be correlated due to exposure to the same teacher or textbook. Likewise, the average test scores of classes might be correlated within a school due to the similar socioeconomic level of the students.

You may have run across datasets with these kinds of structures in your own work. For our example, I would like to use a dataset that has both longitudinal and classical hierarchical features. You can access this dataset from within Stata by typing the following command:

use http://www.stata-press.com/data/r12/productivity.dta

We are going to build a model of gross state product for 48 states in the USA measured annually from 1970 to 1986. The states have been grouped into nine regions based on their economic similarity. For distributional reasons, we will be modeling the logarithm of annual Gross State Product (GSP) but in the interest of readability, I will simply refer to the dependent variable as GSP.

. describe gsp year state region

              storage  display     value
variable name   type   format      label      variable label
-----------------------------------------------------------------------------
gsp             float  %9.0g                  log(gross state product)
year            int    %9.0g                  years 1970-1986
state           byte   %9.0g                  states 1-48
region          byte   %9.0g                  regions 1-9

Let’s look at a graph of these data to see what we’re working with.

twoway (line gsp year, connect(ascending)), ///
        by(region, title("log(Gross State Product) by Region", size(medsmall)))

graph1

Each line represents the trajectory of a state’s (log) GSP over the years 1970 to 1986. The first thing I notice is that the groups of lines are different in each of the nine regions. Some groups of lines seem higher and some groups seem lower. The second thing that I notice is that the slopes of the lines are not the same. I’d like to incorporate those attributes of the data into my model.

Components of variance

Let’s tackle the vertical differences in the groups of lines first. If we think about the hierarchical structure of these data, we have repeated observations nested within states, which are in turn nested within regions. I used color to keep track of the data hierarchy.

slide2

We could compute the mean GSP within each state and note that the observations within each state vary about their state mean.

slide3

Likewise, we could compute the mean GSP within each region and note that the state means vary about their regional mean.

slide4

We could also compute a grand mean and note that the regional means vary about the grand mean.

slide5

Next, let’s introduce some notation to help us keep track of our multilevel structure. In the jargon of multilevel modeling, the repeated measurements of GSP are described as “level 1”, the states are referred to as “level 2”, and the regions are “level 3”. I can add a three-part subscript to each observation to keep track of its place in the hierarchy.

slide7

Now let’s think about our model. The simplest regression model is the intercept-only model which is equivalent to the sample mean. The sample mean is the “fixed” part of the model and the difference between the observation and the mean is the residual or “random” part of the model. Econometricians often prefer the term “disturbance”. I’m going to use the symbol μ to denote the fixed part of the model. μ could represent something as simple as the sample mean or it could represent a collection of independent variables and their parameters.

slide8

Each observation can then be described in terms of its deviation from the fixed part of the model.

slide9

If we computed this deviation of each observation, we could estimate the variability of those deviations. Let’s try that for our data using Stata’s xtmixed command to fit the model:

. xtmixed gsp

Mixed-effects ML regression                     Number of obs      =       816

                                                Wald chi2(0)       =         .
Log likelihood = -1174.4175                     Prob > chi2        =         .

------------------------------------------------------------------------------
         gsp |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _cons |   10.50885   .0357249   294.16   0.000     10.43883    10.57887
------------------------------------------------------------------------------

------------------------------------------------------------------------------
  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
                sd(Residual) |   1.020506   .0252613      .9721766    1.071238
------------------------------------------------------------------------------

The top table in the output shows the fixed part of the model which looks like any other regression output from Stata, and the bottom table displays the random part of the model. Let’s look at a graph of our model along with the raw data and interpret our results.

predict GrandMean, xb
label var GrandMean "GrandMean"
twoway  (line GrandMean year, lcolor(black) lwidth(thick))              ///
        (scatter gsp year, mcolor(red) msize(tiny)),                    ///
        ytitle(log(Gross State Product), margin(medsmall))              ///
        legend(cols(4) size(small))                                     ///
        title("GSP for 1970-1986 by Region", size(medsmall))

graph1b

The thick black line in the center of the graph is the estimate of _cons, which is an estimate of the fixed part of the model for GSP. In this simple model, _cons is the sample mean, which is equal to 10.51. In the “Random-effects Parameters” section of the output, sd(Residual) is the average vertical distance between each observation (the red dots) and the fixed part of the model (the black line). In this model, sd(Residual) is the estimate of the sample standard deviation, which equals 1.02.

At this point you may be thinking to yourself – “That’s not very interesting – I could have done that with Stata’s summarize command”. And you would be correct.

. summ gsp

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
         gsp |       816    10.50885    1.021132    8.37885   13.04882

But here’s where it does become interesting. Let’s make the random part of the model more complex to account for the hierarchical structure of the data. Consider a single observation, yijk, and take another look at its residual.

slide11

The observation deviates from its state mean by an amount that we will denote eijk. The state mean deviates from its regional mean by an amount that we will denote uij., and the regional mean deviates from the fixed part of the model, μ, by an amount that we will denote ui... We have partitioned the observation’s residual into three parts, or “components”, that describe its magnitude relative to the state, regional, and grand means. If we calculated this set of residuals for each observation, we could estimate the variability of those residuals and make distributional assumptions about them.

slide12

These kinds of models are often called “variance component” models because they estimate the variability accounted for by each level of the hierarchy. We can estimate a variance component model for GSP using Stata’s xtmixed command:

xtmixed gsp, || region: || state:

------------------------------------------------------------------------------
         gsp |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _cons |   10.65961   .2503806    42.57   0.000     10.16887    11.15035
------------------------------------------------------------------------------

------------------------------------------------------------------------------
  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
region: Identity             |   
                   sd(_cons) |   .6615227   .2038944       .361566    1.210325
-----------------------------+------------------------------------------------
state: Identity              |   
                   sd(_cons) |   .7797837   .0886614      .6240114    .9744415
-----------------------------+------------------------------------------------
                sd(Residual) |   .1570457   .0040071       .149385    .1650992
------------------------------------------------------------------------------

The fixed part of the model, _cons, is still the sample mean. But now there are three parameter estimates in the bottom table labeled “Random-effects Parameters”. Each quantifies the average deviation at each level of the hierarchy.
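If you want a rough sense of how the total variation splits across the levels, you can square those standard deviations and compute each level’s share by hand. For example, using the estimates above, the share of the total variance that lies between regions works out to about 41% (an informal calculation that ignores the uncertainty in the estimates):

. display .6615227^2 / (.6615227^2 + .7797837^2 + .1570457^2)    // about .41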

Let’s graph the predictions from our model and see how well they fit the data.

predict GrandMean, xb
label var GrandMean "GrandMean"
predict RegionEffect, reffects level(region)
predict StateEffect, reffects level(state)
gen RegionMean = GrandMean + RegionEffect
gen StateMean = GrandMean + RegionEffect + StateEffect

twoway  (line GrandMean year, lcolor(black) lwidth(thick))      ///
        (line RegionMean year, lcolor(blue) lwidth(medthick))   ///
        (line StateMean year, lcolor(green) connect(ascending)) ///
        (scatter gsp year, mcolor(red) msize(tiny)),            ///
        ytitle(log(Gross State Product), margin(medsmall))      ///
        legend(cols(4) size(small))                             ///
        by(region, title("Multilevel Model of GSP by Region", size(medsmall)))

graph2

Wow – that’s a nice graph if I do say so myself. It would be impressive for a report or publication, but it’s a little tough to read with all nine regions displayed at once. Let’s take a closer look at Region 7 instead.

twoway  (line GrandMean year, lcolor(black) lwidth(thick))      ///
        (line RegionMean year, lcolor(blue) lwidth(medthick))   ///
        (line StateMean year, lcolor(green) connect(ascending)) ///
        (scatter gsp year, mcolor(red) msize(medsmall))         ///
        if region ==7,                                          ///
        ytitle(log(Gross State Product), margin(medsmall))      ///
        legend(cols(4) size(small))                             ///
        title("Multilevel Model of GSP for Region 7", size(medsmall))

graph3

The red dots are the observations of GSP for each state within Region 7. The green lines are the estimated mean GSP within each state, and the blue line is the estimated mean GSP within Region 7. The thick black line in the center is the overall grand mean for all nine regions. The model appears to fit the data fairly well, but I can’t help noticing that the red dots seem to have an upward slant to them. Our model predicts that GSP is constant within each state and region from 1970 to 1986, when clearly the data show an upward trend.

So we’ve tackled the first feature of our data. We’ve successfully incorporated the basic hierarchical structure into our model by fitting a variance component model using Stata’s xtmixed command. But our graph tells us that we aren’t finished yet.

Next time we’ll tackle the second feature of our data — the longitudinal nature of the observations.

For more information

If you’d like to learn more about modeling multilevel and longitudinal data, check out

Multilevel and Longitudinal Modeling Using Stata, Third Edition
Volume I: Continuous Responses
Volume II: Categorical Responses, Counts, and Survival
by Sophia Rabe-Hesketh and Anders Skrondal

or sign up for our popular public training course “Multilevel/Mixed Models Using Stata”.

There’s a course coming up in Washington, DC on February 7-8, 2013.

Using Stata’s random-number generators, part 4, details

24 October 2012

For those interested in how pseudo-random-number generators work, I just wrote something on Statalist, which you can see in the Statalist archives by clicking the link even if you do not subscribe:

http://www.stata.com/statalist/archive/2012-10/msg01129.html

To remind you, I’ve been writing about how to use random-number generators in parts 1, 2, and 3, and I still have one more posting I want to write on the subject. What I just wrote on Statalist, however, is about how random-number generators work, and I think you will find it interesting.

To find out more about Statalist, see

Statalist

How to successfully ask a question on Statalist

Using Stata’s SEM features to model the Beck Depression Inventory

I just got back from the 2012 Stata Conference in San Diego, where I gave a talk on Psychometric Analysis Using Stata, and from the 2012 American Psychological Association Meeting in Orlando. Stata’s structural equation modeling (SEM) builder was popular at both meetings, and I wanted to show you how easy it is to use. If you are not familiar with the basics of SEM, please refer to the references at the end of the post. My goal is simply to show you how to use the SEM builder assuming that you already know something about SEM. If you would like to view a video demonstration of the SEM builder, please see the YouTube link at the end of this post.

The data used here and for the silly examples in my talk were simulated to resemble one of the most commonly used measures of depression: the Beck Depression Inventory (BDI). If you find these data too silly or not relevant to your own research, you could instead imagine them as a set of questions to measure mathematical ability, the ability to use a statistical package, or whatever you want.

The Beck Depression Inventory

Originally published by Aaron Beck and colleagues in 1961, the BDI marked an important change in the conceptualization of depression from a psychoanalytic perspective to a cognitive/behavioral perspective. It was also a landmark in the measurement of depression, shifting from lengthy, expensive interviews with a psychiatrist to a brief, inexpensive questionnaire that could be scored and quantified. The original inventory consisted of 21 questions, each allowing ordinal responses of increasing symptom severity from 0 to 3. The sum of the responses could then be used to classify a respondent’s depressive symptoms as none, mild, moderate, or severe. Many studies have demonstrated that the BDI has good psychometric properties, such as high test-retest reliability, and that its scores correlate well with the assessments of psychiatrists and psychologists. The 21 questions can also be grouped into two subscales. The affective scale includes questions like “I feel sad” and “I feel like a failure” that quantify emotional symptoms of depression. The somatic or physical scale includes questions like “I have lost my appetite” and “I have trouble sleeping” that quantify physical symptoms of depression. Since its original publication, the BDI has undergone two revisions in response to the American Psychiatric Association’s (APA) Diagnostic and Statistical Manuals (DSM), and the BDI-II remains very popular.

The Stata Depression Inventory

Since the BDI is a copyrighted psychometric instrument, I created a fictitious instrument called the “Stata Depression Inventory”. It consists of 20 questions each beginning with the phrase “My statistical software makes me…”. The individual questions are listed in the variable labels below.

. describe qu1-qu20

variable  storage  display    value
 name       type   format     label      variable label
------------------------------------------------------------------------------
qu1         byte   %16.0g     response   ...feel sad
qu2         byte   %16.0g     response   ...feel pessimistic about the future
qu3         byte   %16.0g     response   ...feel like a failure
qu4         byte   %16.0g     response   ...feel dissatisfied
qu5         byte   %16.0g     response   ...feel guilty or unworthy
qu6         byte   %16.0g     response   ...feel that I am being punished
qu7         byte   %16.0g     response   ...feel disappointed in myself
qu8         byte   %16.0g     response   ...feel am very critical of myself
qu9         byte   %16.0g     response   ...feel like harming myself
qu10        byte   %16.0g     response   ...feel like crying more than usual
qu11        byte   %16.0g     response   ...become annoyed or irritated easily
qu12        byte   %16.0g     response   ...have lost interest in other people
qu13        byte   %16.0g     qu13_t1    ...have trouble making decisions
qu14        byte   %16.0g     qu14_t1    ...feel unattractive
qu15        byte   %16.0g     qu15_t1    ...feel like not working
qu16        byte   %16.0g     qu16_t1    ...have trouble sleeping
qu17        byte   %16.0g     qu17_t1    ...feel tired or fatigued
qu18        byte   %16.0g     qu18_t1    ...makes my appetite lower than usual
qu19        byte   %16.0g     qu19_t1    ...concerned about my health
qu20        byte   %16.0g     qu20_t1    ...experience decreased libido

The responses consist of a 5-point Likert scale ranging from 1 (Strongly Disagree) to 5 (Strongly Agree). Questions 1-10 form the affective scale of the inventory and questions 11-20 form the physical scale. Data were simulated for 1000 imaginary people and included demographic variables such as age, sex and race. The responses can be summarized succinctly in a matrix of bar graphs:

Classical statistical analysis

The beginning of a classical statistical analysis of these data might consist of summing the responses for questions 1-10 and referring to them as the “Affective Depression Score” and summing questions 11-20 and referring to them as the “Physical Depression Score”.

egen Affective = rowtotal(qu1-qu10)
label var Affective "Affective Depression Score"
egen physical = rowtotal(qu11-qu20)
label var physical "Physical Depression Score"

We could be more sophisticated and use principal components to create the affective and physical depression score:

pca qu1-qu20, components(2)
predict Affective Physical
label var Affective "Affective Depression Score"
label var Physical "Physical Depression Score"

We could then ask questions such as “Are there differences in affective and physical depression scores by sex?” and test these hypotheses using multivariate statistics such as Hotelling’s T-squared statistic. The problem with this analysis strategy is that it treats the depression scores as though they were measured without error, which can lead to inaccurate p-values for our test statistics.
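For the record, that classical test is a one-liner. Here is what it might look like with these simulated data, assuming the demographic variable is named sex (as it is in the SEM model below):

hotelling Affective Physical, by(sex)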

Structural equation modeling

Structural equation modeling (SEM) is an ideal way to analyze data where the outcome of interest is a scale or scales derived from a set of measured variables. The affective and physical scores are treated as latent variables in the model, resulting in accurate p-values, and, best of all, these models are very easy to fit using Stata! We begin by selecting the SEM builder from the Statistics menu:

In the SEM builder, we can select the “Add Measurement Component” icon:

which will open the following dialog box:

In the box labeled “Latent Variable Name” we can type “Affective” (red arrow below) and we can select the variables qu1-qu10 in the “Measured variables” box (blue arrow below).

When we click “OK”, the affective measurement component appears in the builder:

We can repeat this process to create a measurement component for our physical depression scale (images not shown). We can also allow for covariance/correlation between our affective and physical depression scales using the “Add Covariance” icon on the toolbar (red arrow below).

I’ll omit the intermediate steps to build the full model shown below but it’s easy to use the “Add Observed Variable” and “Add Path” icons to create the full model:

Now we’re ready to estimate the parameters for our model. To do this, we click the “Estimate” icon on the toolbar (duh!):

And the following dialog box appears:

Let’s ignore the estimation options for now and use the default settings. Click “OK” and the parameter estimates will appear in the diagram:

Some of the parameter estimates are difficult to read in this form but it is easy to rearrange the placement and formatting of the estimates to make them easier to read.

If you look at Stata’s output window and scroll up, you’ll notice that the SEM Builder automatically generated the command for our model:

sem (Affective -> qu1) (Affective -> qu2) (Affective -> qu3)
    (Affective -> qu4) (Affective -> qu5) (Affective -> qu6)
    (Affective -> qu7) (Affective -> qu8) (Affective -> qu9)
    (Affective -> qu10) (Physical -> qu11) (Physical -> qu12)
    (Physical -> qu13) (Physical -> qu14) (Physical -> qu15)
    (Physical -> qu16) (Physical -> qu17) (Physical -> qu18)
    (Physical -> qu19) (Physical -> qu20) (sex -> Affective)
    (sex -> Physical), latent(Affective Physical) cov(e.Physical*e.Affective)

We can gather terms and abbreviate some things to make the command much easier to read:

sem (Affective -> qu1-qu10) ///
    (Physical -> qu11-qu20) /// 
    (sex -> Affective Physical) ///
    , latent(Affective Physical ) ///
    cov( e.Physical*e.Affective)

We could then calculate a Wald statistic to test the null hypothesis that there is no association between sex and our affective and physical depression scales.

test sex

 ( 1)  [Affective]sex = 0
 ( 2)  [Physical]sex = 0

           chi2(  2) =    2.51
         Prob > chi2 =    0.2854

Final thoughts
This is an admittedly oversimplified example – we haven’t assessed the fit of the model or considered any alternative models. We have only included one dichotomous independent variable. We might prefer to use a likelihood-ratio test or a score test. Those are all very important issues and should not be ignored in a proper data analysis. But my goal was to demonstrate how easy it is to use Stata’s SEM builder to model data such as those arising from the Beck Depression Inventory. Incidentally, if these data were collected using a complex survey design, it would not be difficult to incorporate the sampling structure and sample weights into the analysis. Missing data can be handled easily as well using Full Information Maximum Likelihood (FIML), but those are topics for another day.
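For instance, one way to carry out a likelihood-ratio test of the two sex paths would be to refit the model with those coefficients constrained to zero and compare the two fits. This is only a sketch of the idea, not something from the talk:

estimates store full                      // the full model fit above is still the active estimation
sem (Affective -> qu1-qu10)            ///
    (Physical -> qu11-qu20)            ///
    (sex -> Affective@0)               ///
    (sex -> Physical@0)                ///
    , latent(Affective Physical)       ///
    cov(e.Physical*e.Affective)
estimates store constrained
lrtest full constrained                   // LR test of the two sex coefficients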

If you would like to view the slides from my talk, download the data used in this example, or view a video demonstration of Stata’s SEM builder using these data, please use the links below. For the dataset, you can also type use followed by the URL for the data to load it directly into Stata.

Slides:
http://stata.com/meeting/sandiego12/materials/sd12_huber.pdf

Data:
http://stata.com/meeting/sandiego12/materials/Huber_2012SanDiego.dta

YouTube video demonstration:
http://www.youtube.com/watch?v=Xj0gBlqwYHI

References

Beck AT, Ward CH, Mendelson M, Mock J, Erbaugh J (1961). An inventory for measuring depression. Archives of General Psychiatry 4(6): 561–571.

Beck AT, Steer RA, Ball R, Ranieri W (1996). Comparison of Beck Depression Inventories-IA and -II in psychiatric outpatients. Journal of Personality Assessment 67(3): 588–597.

Bollen KA (1989). Structural Equations with Latent Variables. New York, NY: John Wiley and Sons.

Kline RB (2011). Principles and Practice of Structural Equation Modeling. New York, NY: Guilford Press.

Raykov T, Marcoulides GA (2006). A First Course in Structural Equation Modeling. Mahwah, NJ: Lawrence Erlbaum.

Schumacker RE, Lomax RG (2012). A Beginner’s Guide to Structural Equation Modeling, 3rd ed. New York, NY: Routledge.


Stata YouTube channel announced!

StataCorp now provides free tutorial videos on StataCorp’s YouTube channel,

http://www.youtube.com/user/statacorp

There are 24 videos providing 1 hour 51 minutes of instructional entertainment:

Stata Quick Tour (5:47)
Stata Quick Help (2:47)
Stata PDF Documentation (6:37)

Stata One-sample t-test (3:43)
Stata t-test for Two Independent Samples (5:09)
Stata t-test for Two Paired Samples (4:42)

Stata Simple Linear Regression (5:33)

Stata SEM Builder (8:09)
Stata One-way ANOVA (5:15)
Stata Two-way ANOVA (5:57)

Stata Pearson’s Correlation Coefficient (3:29)
Stata Pearson’s Chi2 and Fisher’s Exact Test (3:16)

Stata Box Plots (4:04)
Stata Basic Scatterplots (5:19)
Stata Bar Graphs (4:15)
Stata Histograms (4:50)
Stata Pie Charts (5:32)

Stata Descriptive Statistics (5:49)

Stata Tables and Crosstabulations (7:20)
Stata Combining Crosstabs and Descriptives (5:58)

Stata Converting Data to Stata with Stat/Transfer (2:47)
Stata Import Excel Data (1:33)
Stata Excel Copy/Paste (1:16)
Stata Example Data Included with Stata (2:14)

And more are forthcoming.

 

The inside story

Alright, that’s the official announcement.

Last Friday, 21 September 2012, was an exciting day here at StataCorp. After a couple of years of “wouldn’t it be cool if”, and a couple of months of “we’re almost there”, Stata’s YouTube channel was finally ready for prime time.

Stata’s YouTube Channel was the brainchild of Karen Strope, StataCorp’s Director of Marketing, but I had something to do with it, too. Well, maybe more than something, but I’m a modest guy. Anyway, I thought it sounded like fun and recorded a few prototype videos. Annette Fett, StataCorp’s Graphic Designer, added the cool splash-screen and after a few experiments, we soon had 24 Blu-ray resolution videos. We’ve kicked off with videos covering topics such as a tour of Stata’s interface, how to create basic graphs, how to conduct many common statistical analyses, and more.

My personal favorite is the video entitled Combining Crosstabs and Descriptives because it’s relevant to nearly all Stata users and works well as a video demonstration.

Videos about Stata – isn’t that like dancing about architecture?

Stata has over 9,000 pages of documentation included in PDF format, a built-in Help system, a collection of books on general and special topics published by Stata Press, and an extensive collection of dialog boxes that make even the most complex graphs and analyses easy to perform.

So aren’t the videos, ahh, unnecessary?

The problem is, it’s cumbersome to describe how to use all of Stata’s features, especially dialog boxes, in a manual, even when you have 9,000 pages, and 9,000 pages tries even the most dedicated user’s patience.

In a 3-7 minute video, we can show you how to create complicated graphs or a sophisticated structural equation model.

We have three audiences in mind.

  1. Videos for non-Stata users, whom we call future Stata users; videos intended to provide a loosely guided tour of Stata’s features.
  2. Videos for new Stata users, such as the person who might simply want to know “How do I calculate a two-way ANOVA in Stata?” or “How do I create a pie chart?” These videos will get them up and running quickly and painlessly.
  3. Videos for experienced Stata users who want to learn new tips and tricks.

There’s actually a fourth group that’s of interest, too: experienced Stata users who teach statistics or data analysis classes and don’t want to spend valuable class time showing their students how to use Stata. They can refer their students to the relevant videos as homework and thus free class time for the teaching of statistics.

Request for comments

One of the fun things about working at StataCorp is that management doesn’t much use the word “no”. New ideas are more often met with the phrase, “well, let’s try it and see what happens”. So I’m trying this. My plan is to add a couple of videos to the channel every week or two as time permits. I have a list of topics I’d like to cover including things like multiple imputation, survey analysis, mixed models, Stata’s “immediate” commands (tabi, ttesti, csi, cci, etc…), and more examples using the SEM Builder.

However, I will take requests. If you have a suggested topic for a future video, leave a comment.

I’d like to keep the videos brief, between 3-7 minutes, so please don’t request feature-length films like “How to do survival analysis in Stata”. Similarly, topics that are only interesting to you and your two post-docs such as “Please describe the difference between the Laplacian Approximation and Adaptive Gauss-Hermite Quadrature in the xtmepoisson command” are not likely to see the light of day. But I am very interested in your ideas for small, bite-sized topics that will work in a video format.


Using Stata’s random-number generators, part 3, drawing with replacement

The topic for today is drawing random samples with replacement. If you haven’t read part 1 and part 2 of this series on random numbers, do so. In the series, we’ve discussed that

  1. Stata’s runiform() function produces random numbers over the range [0,1). To produce such random numbers, type
    . generate double u = runiform()
    

     

  2. To produce continuous random numbers over [a,b), type
    . generate double u = (b-a)*runiform() + a
    

     

  3. To produce integer random numbers over [a,b], type
    . generate ui = floor((b-a+1)*runiform() + a)
    

    If b > 16,777,216, type

    . generate long ui = floor((b-a+1)*runiform() + a)
    

     

  4. To place observations in random order — to shuffle observations — type
    . set seed #
    . generate double u = runiform()
    . sort u
    

     

  5. To draw without replacement a random sample of n observations from a dataset of N observations, type
    . set seed #
    . sort variables_that_put_dta_in_unique_order
    . generate double u = runiform()
    . sort u
    . keep in 1/n
    

    If N>1,000, generate two random variables u1 and u2 in place of u, and substitute sort u1 u2 for sort u.

     

  6. To draw without replacement a P-percent random sample, type
    . set seed #
    . keep if runiform() <= P/100
    

I’ve glossed over details, but the above is the gist of it.

Today I’m going to tell you

  1. To draw a random sample of size n with replacement from a dataset of size N, type
    . set seed #
    . drop _all
    . set obs n
    . generate long obsno = floor(N*runiform()+1)
    . sort obsno
    . save obsnos_to_draw
    
    . use your_dataset, clear
    . generate long obsno = _n
    . merge 1:m obsno using obsnos_to_draw, keep(match) nogen
    

     

  2. You need to set the random-number seed only if you care about reproducibility. I’ll also mention that if N ≤ 16,777,216, it is not necessary to specify that new variable obsno be stored as long; the default float will be sufficient.

     
    The above solution works whether n<N, n=N, or n>N.

 

Drawing samples with replacement

The solution to sampling with replacement n observations from a dataset of size N is

  1. Draw n observation numbers 1, …, N with replacement. For instance, if N=4 and n=3, we might draw observation numbers 1, 3, and 3.

  2. Select those observations from the dataset of interest. For instance, select observations 1, 3, and 3.

As previously discussed in part 1, to generate random integers drawn with replacement over the range [a, b], use the formula

generate varname = floor((b-a+1)*runiform() + a)

In this case, we want a=1 and b=N, and the formula reduces to,

generate varname = floor(N*runiform() + 1)

So the first half of our solution could read

. drop _all
. set obs n
. generate obsno = floor(N*runiform() + 1)

Now we are merely left with the problem of selecting those observations from our dataset, which we can do using merge by typing

. sort obsno
. save obsnos_to_draw
. use dataset_of_interest, clear
. generate obsno = _n
. merge 1:m obsno using obsnos_to_draw, keep(match) nogen

Let’s do an example. In part 2 of this series, I had a dataset with observations corresponding to playing cards:

. use cards

. list in 1/5

     +-------------+
     | rank   suit |
     |-------------|
  1. |  Ace   Club |
  2. |    2   Club |
  3. |    3   Club |
  4. |    4   Club |
  5. |    5   Club |
     +-------------+

There are 52 observations in the dataset; I’m showing you just the first five. Let’s draw 10 cards from the deck, but with replacement.

The first step is to draw the observation numbers. We have N=52 cards in the deck, and we want to draw n=10, so we generate 10 random integers from the integers [1, 52]:

. drop _all

. set obs 10                            // we want n=10
obs was 0, now 10

. gen obsno = floor(52*runiform()+1)    // we draw from N=52


. list obsno                            // let's see what we have

     +-------+
     | obsno |
     |-------|
  1. |    42 |
  2. |    52 |
  3. |    16 |
  4. |     9 |
  5. |    40 |
     |-------|
  6. |    11 |
  7. |    34 |
  8. |    20 |
  9. |    49 |
 10. |    42 |
     +-------+

If you look carefully at the list, you will see that observation number 42 repeats. It will be easier to see the duplicate if we sort the list,

. sort obsno
. list

     +-------+
     | obsno |
     |-------|
  1. |     9 |
  2. |    11 |
  3. |    16 |
  4. |    20 |
  5. |    34 |
     |-------|
  6. |    40 |
  7. |    42 |     <- Obs. 42 repeats
  8. |    42 |     <- See?
  9. |    49 |
 10. |    52 |
     +-------+

An observation didn’t have to repeat, but it’s not surprising that one did because in drawing n=10 from N=52, we would expect one or more repeated cards about 60% of the time.
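If you want to check that 60% figure, the probability that all 10 draws are distinct is (52/52)*(51/52)*…*(43/52), and one minus that is the probability of at least one repeat. You can compute it in Stata (just arithmetic, nothing random about it):

. display 1 - exp(lnfactorial(52) - lnfactorial(42) - 10*ln(52))    // about .60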

Anyway, we now know which cards we want, namely cards 9, 11, 16, 20, 34, 40, 42, 42 (again), 49, and 52.

The final step is to select those observations from cards.dta. The way to do that is to perform a one-to-many merge of cards.dta with the list above and keep the matches. Before we can do that, however, we must (1) save the list of observation numbers as a dataset, (2) load cards.dta, and (3) add a variable called obsno to it. Then we will be able to perform the merge. So let’s get that out of the way,

. save obsnos_to_draw                // 1. save the list above
file obsnos_to_draw.dta saved

. use cards                          // 2. load cards.dta

. gen obsno = _n                     // 3.  Add variable obsno to it

Now we can perform the merge:

. merge 1:m obsno using obsnos_to_draw, keep(match) nogen

    Result                           # of obs.
    -----------------------------------------
    not matched                             0
    matched                                10
    -----------------------------------------

I’ll list the result, but let me first briefly explain the command

merge 1:m obsno using obsnos_to_draw, keep(match) nogen

merge …, we are performing the merge command,

1:m …, the merge is one-to-many,

using obsnos_to_draw …, we merge data in memory with obsnos_to_draw.dta,

, keep(match) …, we keep observations that appear in both datasets,

nogen, do not add variable _merge to the resulting dataset; _merge reports the source of the resulting observations; we said keep(match) so we know each came from both sources.

And here is the result:

. list

     +-------------------------+
     |  rank      suit   obsno |
     |-------------------------|
  1. |     8      Club       9 |
  2. |  Jack      Club      11 |
  3. |   Ace     Spade      16 |
  4. |     2   Diamond      20 |
  5. |     6     Spade      34 |
     |-------------------------|
  6. |     8     Spade      40 |
  7. |     9     Heart      42 |   <- Obs. 42 is here ...
  8. | Queen     Spade      49 |
  9. |  King     Spade      52 |
 10. |     9     Heart      42 |   <- and here
     +-------------------------+

We drew 10 cards — those are the observation numbers on the left. Variable obsno in our dataset records the original observation (card) number and really, we no longer need the variable. Anyway, obsno==42 appears twice, in real observations 7 and 10, and thus we drew the 9 of Hearts twice.

 

What could go wrong?

Not much can go wrong, it turns out. At this point, our generic solution is

. drop _all
. set obs n
. generate obsno = floor(N*runiform()+1)
. sort obsno
. save obsnos_to_draw

. use our_dataset
. gen obsno = _n
. merge 1:m obsno using obsnos_to_draw, keep(match) nogen

If you study this code, there are two lines that might cause problems,

. generate obsno = floor(N*runiform()+1)

and

. generate obsno = _n

When you are looking for problems and see a generate or replace, think about rounding.

Let’s look at the right-hand side first. Both calculations produce integers over the range [1, N]. generate performs all calculations in double and the largest integer that can be stored without rounding is 9,007,199,254,740,992 (see previous blog post on precision). Stata allows datasets up to 2,147,483,646, so we can be sure that N is less than the maximum precise-integer double. There are no rounding issues on the right-hand side.

Next let’s look at the left-hand side. Variable obsno is being stored as a float because we did not instruct otherwise. The largest integer value that can be stored without rounding as a float (also covered in previous blog post on precision) is 16,777,216, and that is less than Stata’s 2,147,483,646 maximum observations. When N exceeds 16,777,216, the solution is to store obsno as a long. We could remember to use long on the rare occasion when dealing with such large datasets, but I’m going to change the generic solution to use longs in all cases, even when it’s unnecessary.
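If you would like to see the float problem for yourself, compare the two integers on either side of that boundary (a two-line demonstration; display does its arithmetic in double unless you force float):

. display 16777216 == 16777217                     // 0: as doubles, the integers are distinct
. display float(16777216) == float(16777217)       // 1: as a float, 16,777,217 rounds to 16,777,216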

What else could go wrong? Well, we tried an example with n<N and that seemed to work. We should now try examples with n=N and n>N to verify there’s no hidden bug or assumption in our code. I’ve tried examples of both and the code works fine.

 

We’re done for today

That’s it. Drawing samples with replacement turns out to be easy, and that shouldn’t surprise us because we have a random-number generator that draws with replacement.

We could complicate the discussion and consider solutions that would run a bit more efficiently when n=N, which is of special interest in statistics because it is a key ingredient in bootstrapping, but we will not. The above solution works fine in the n=N case, and I always advise researchers to favor simple-even-if-slower solutions because they will probably save you time. Writing complicated code takes longer than writing simple code, and testing complicated code takes even longer. I know because that’s what we do at StataCorp.

Using Stata’s random-number generators, part 2, drawing without replacement

Last time I told you that Stata’s runiform() function generates rectangularly (uniformly) distributed random numbers over [0, 1), from 0 to nearly 1, and to be precise, over [0, 0.999999999767169356]. And I gave you two formulas,

  1. To generate continuous random numbers between a and b, use

    generate double u = (b-a)*runiform() + a

    The random numbers will not actually be between a and b: they will be between a and nearly b, but the top will be so close to b, namely a + 0.999999999767169356*(b-a), that it will not matter.

  2. To generate integer random numbers between a and b, use

    generate ui = floor((b-a+1)*runiform() + a)

I also mentioned that runiform() can solve a variety of problems, including

  • shuffling data (putting observations in random order),
  • drawing random samples without replacement (there’s a minor detail we’ll have to discuss because runiform() itself produces values drawn with replacement),
  • drawing random samples with replacement (which is easier to do than most people realize),
  • drawing stratified random samples (with or without replacement),
  • manufacturing fictional data (something teachers, textbook authors, manual writers, and blog writers often need to do).

Today we will cover shuffling and drawing random samples without replacement — the first two topics on the list — and we will leave drawing random samples with replacement for next time. I’m going to tell you

  1. To place observations in random order — to shuffle the observations — type
    . generate double u = runiform()
    . sort u
    
  2. To draw without replacement a random sample of n observations from a dataset of N observations, type
    . set seed #
    . generate double u = runiform()
    . sort u
    . keep in 1/n
    

    I will tell you that there are good statistical reasons for setting the random-number seed even if you don’t care about reproducibility.

    If you do care about reproducibility, I will mention (1) that you need to use sort to put the original data in a known, reproducible order before you generate the random variate u, and I will explain (2) a subtle issue that leads us to use different code for N≤1,000 and N>1,000. The code for N≤1,000 is

    . set seed #
    . sort variables_that_put_data_in_unique_order
    . generate double u = runiform()
    . sort u
    . keep in 1/n
    

    and the code for N>1,000 is

    . set seed #
    . sort variables_that_put_data_in_unique_order
    . generate double u1 = runiform()
    . generate double u2 = runiform()
    . sort u1 u2
    . keep in 1/n
    

    You can use the N>1,000 code for the N≤1,000 case.

  3. To draw without replacement a P-percent random sample, type
    . set seed #
    . keep if runiform() <= P/100
    

    There’s no issue in this case when N is large.

As I mentioned, we’ll discuss drawing random samples with replacement next time. Today, the topic is random samples without replacement. Let’s start.

 
Shuffling data

I have a deck of 52 cards, in order, the first four of which are

. list in 1/4

     +-------------+
     | rank   suit |
     |-------------|
  1. |  Ace   Club |
  2. |    2   Club |
  3. |    3   Club |
  4. |    4   Club |
     +-------------+

Well, actually I just have a Stata dataset with observations corresponding to playing cards. To shuffle the deck — to place the observations in random order — type

. generate double u = runiform()

. sort u

Having done that, here’s your hand,

. list in 1/5

     +----------------------------+
     |  rank      suit          u |
     |----------------------------|
  1. | Queen      Club   .0445188 |
  2. |     5   Diamond   .0580662 |
  3. |     7      Club   .0610638 |
  4. |  King     Heart   .0907986 |
  5. |     6     Spade   .0981878 |
     +----------------------------+

and here’s mine:

. list in 6/10

     +---------------------------+
     | rank      suit          u |
     |---------------------------|
  6. |    8   Diamond   .1024369 |
  7. |    5      Club   .1086679 |
  8. |    8     Spade   .1091783 |
  9. |    2     Spade   .1180158 |
 10. |  Ace      Club   .1369841 |
     +---------------------------+

All I did was generate random numbers — one per observation (card) — and then place the observations in ascending order of the random values. Doing that is equivalent to shuffling the deck. I used runiform() random numbers, meaning rectangularly distributed random numbers over [0, 1), but since I’m only exploiting the random-numbers’ ordinal properties, I could have used random numbers from any continuous distribution.
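For instance, shuffling on normal deviates would order the deck just as randomly, because only the ordering of the values matters. (I use a new variable name here because u already exists in the dataset.)

. generate double v = rnormal()
. sort v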

This simple, elegant, and obvious solution to shuffling data will play an important part in the solution to drawing observations without replacement. I have already more than hinted at the solution when I showed you your hand and mine.

 
Drawing n observations without replacement

Drawing without replacement is exactly the same problem as dealing cards. The solution to the physical card problem is to shuffle the cards and then draw the top cards. The solution to randomly selecting n from N observations is to put the N observations in random order and keep the first n of them.

. use cards, clear

. generate double u = runiform()

. sort u

. keep in 1/5
(47 observations deleted)

. list

     +---------------------------+
     | rank      suit          u |
     |---------------------------|
  1. |  Ace   Diamond   .0064866 |
  2. |    6     Heart   .0087578 |
  3. | King     Spade    .014819 |
  4. |    3     Spade   .0955155 |
  5. | King   Diamond   .1007262 |
     +---------------------------+

. drop u

 
Reproducibility

You might later want to reproduce the analysis, meaning you do not want to draw another random sample, but you want to draw the same random sample. Perhaps you informally distributed some preliminary results and, of course, then discovered a mistake. You want to redistribute updated results and show that your mistake didn’t change results by much, and to drive home the point, you want to use the same samples as you used previously.

Part of the solution is to set the random-number seed. You might type

. set seed 49983882

. use cards, clear

. generate double u = runiform()

. sort u

. keep in 1/5

See help set seed in Stata. As a quick review, when you set the random-number seed, you set Stata’s random-number generator into a fixed, reproducible state, which is to say, the sequence of random numbers that runiform() produces is a function of the seed. Set the seed today to the same value as yesterday, and runiform() will produce the same sequence of random numbers today as it did yesterday. Thus, after setting the seed, if you repeat today exactly what you did yesterday, you will obtain the same results.

So imagine that you set the random number seed today to the value you set it to yesterday and you repeat the above commands. Even so, you might not get the same results! You will not get the same results if the observations in cards.dta are in a different order yesterday and today. Setting the seed merely ensures that if yesterday the smallest value of u was in observation 23, it will again be in observation 23 today (and it will be the same value). If yesterday, however, observation 23 was the 6 of Clubs, and today it’s the 7 of Hearts, then today you will select the 7 of Hearts in place of the 6 of Clubs.

So make sure the data are in the same order. One way to do that is to put the dataset in a known order before generating the random values on which you will sort. For instance,

. set seed 49983882

. use cards, clear

. sort rank suit

. generate double u = runiform()

. sort u

. keep in 1/5

An even better solution would add the line

. by rank suit: assert _N==1

just before the generate. That line would check whether sorting on variables rank and suit uniquely orders the observations.

With cards.dta, you can argue that the assert is unnecessary, but not because you know each rank-suit combination occurs once. You have only my assurances about that. I recommend you never trust anyone’s assurances about data. In this case, however, you can argue that the assert is unnecessary because we sorted on all the variables in the dataset and thus uniqueness is not required. Pretend there are two Ace of Clubs in the deck. Would it matter that the first card was Ace of Clubs followed by Ace of Clubs as opposed to being the other way around? Of course it would not; the two states are indistinguishable.

So let’s assume there is another variable in the dataset, say whether there was a grease spot on the back of the card. Yesterday, after sorting, the ordering might have been,

     +---------------------------------+
     | rank   suit   grease          u |
     |---------------------------------|
  1. |  Ace   Club      yes   .6012949 |
  2. |  Ace   Club       no   .1859054 |
     +---------------------------------+

and today,

     +---------------------------------+
     | rank   suit   grease          u |
     |---------------------------------|
  1. |  Ace   Club       no   .6012949 |
  2. |  Ace   Club      yes   .1859054 |
     +---------------------------------+

If yesterday you selected the Ace of Clubs without grease, today you would select the Ace of Clubs with grease.

My recommendation is (1) sort on whatever variables put the data into a unique order, and then verify that, or (2) sort on all the variables in the dataset and then don’t worry whether the order is unique.
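
Put together, recommendation (1) might look like the sketch below; idvar and n stand in for your own identifier variable(s) and sample size, and the isid command is just one convenient way of verifying that the identifier really is unique.

. use dataset, clear

. isid idvar

. sort idvar

. set seed 49983882

. generate double u = runiform()

. sort u

. keep in 1/n

isid stops with an error if idvar does not uniquely identify the observations, so it plays the same role as the assert shown above.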

 
Ensuring a random ordering

Included in our reproducible solution but omitted from our base solution was setting the random-number seed,

. set seed 49983882

Setting the seed is important even if you don’t care about reproducibility. Each time you launch Stata, Stata sets the same random-number seed, namely 123456789, and that means that runiform() generates the same sequence of random numbers, and that means that if you generated all your random samples right after launching Stata, you would always select the same observations, at least holding N constant.

So set the seed, but don’t set it too often. You set the seed once per problem. If I wanted to draw 10,000 random samples from the same data, I could code:

  use dataset, clear
  set seed 1702213
  sort variables_that_put_data_in_unique_order
  preserve
  forvalues i=1(1)10000 {
          generate double u = runiform()
          sort u
          keep in 1/n
          drop u
          save sample`i', replace
          restore, preserve
  }

In the example I save each sample in a file. In real life, I seldom (never) save the samples; I perform whatever analysis on the samples I need and save the results, which I usually append into a single dataset. I don’t need to save the individual samples because I can recreate them.
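
If you do take the save-the-results route, Stata’s postfile facility is one convenient way to collect one result per sample into a single dataset. Here is a sketch along those lines; x, dataset, n, and the filename means are placeholders, and the mean of x stands in for whatever analysis you actually perform.

  use dataset, clear
  set seed 1702213
  sort variables_that_put_data_in_unique_order
  tempname results
  postfile `results' double mean_x using means, replace
  preserve
  forvalues i=1(1)10000 {
          generate double u = runiform()
          sort u
          keep in 1/n
          summarize x, meanonly
          post `results' (r(mean))
          restore, preserve
  }
  restore
  postclose `results'

When it finishes, means.dta contains 10,000 observations, one estimated mean per sample, and none of the individual samples ever needs to be saved.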

 
And the result still might not be reproducible …

runiform() draws random numbers with replacement. It is thus possible that two or more observations could have the same random value associated with them. Well yes, you’re thinking, I see that it’s possible, but surely it’s so unlikely that it just doesn’t happen. But it does happen:

. clear all

. set obs 100000
obs was 0, now 100000

. generate double u = runiform() 

. by u, sort: assert _N==1
1 contradiction in 99999 by-groups
assertion is false      
r(9); 

In the 100,000-observation dataset I just created, I got a duplicate! By the way, I didn’t have to look hard for such an example; I got it the first time I tried.

I have three things I want to tell you:

  1. Duplicates happen more often than you might guess.

  2. Do not panic about the duplicates. Because of how Stata is written, duplicates do not lower the quality of the sample selected. I’ll explain.

  3. Duplicates do interfere with reproducibility, however, and there is an easy way around that problem.

Let’s start with the chances of observing duplicates. I mentioned in passing last time that runiform() is a 32-bit random-number generator. That means runiform() can return any of 2^32 values. Their values are, in order,

             0  =  0
        1/2^32  =  2.32830643654e-10
        2/2^32  =  4.65661287308e-10
        3/2^32  =  6.98491930962e-10
               .
               .
               .
 (2^32-2)/2^32  =  0.9999999995343387
 (2^32-1)/2^32  =  0.9999999997671694

So what are the chances that in N draws with replacement from an urn containing these 2^32 values, all the values are distinct? The probability p that all N values are distinct is

      2^32 * (2^32-1) * ... * (2^32-N+1)
p  =  ----------------------------------
                  (2^32)^N

Here are some values for various values of N. p is the probability that all values are unique, and 1-p is the probability of observing one or more repeated values.

------------------------------------
      N         p            1-p
------------------------------------
     50   0.999999715    0.000000285
    500   0.999970955    0.000029045
  1,000   0.999883707    0.000116293
  5,000   0.997094436    0.002905564
 10,000   0.988427154    0.011572846
 50,000   0.747490440    0.252509560
100,000   0.312187988    0.687812012
200,000   0.009498117    0.990501883
300,000   0.000028161    0.999971839
400,000   0.000000008    0.999999992
500,000   0.000000000    1.000000000
------------------------------------

In shuffling cards we generated N=52 random values, and the probability of a repeated value is infinitesimal. In datasets of N=10,000, I expect to see repeated values about 1% of the time. In datasets of N=50,000, I expect to see repeated values about 25% of the time. By N=100,000, I expect to see repeated values more often than not. By N=500,000, I expect to see repeated values in virtually every sequence.
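
If you would like to verify these figures, or compute p for an N not in the table, the calculation is just the product implied by the formula above. Here is a minimal Mata sketch; the function name pdistinct is mine.

mata:
    real scalar pdistinct(real scalar N)
    {
        // chance that N draws from 2^32 equally likely values are all distinct
        return(exp(sum(ln(1 :- (0::(N-1))/2^32))))
    }

    pdistinct(10000)     // about 0.9884, matching the table
end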

Even so, I promised you that this problem does not affect the randomness of the ordering. It does not because of how Stata’s sort command is written. Remember the basic solution,

 
. use dataset, clear

. generate double u = runiform()

. sort u

. keep in 1/n

Did you know sort has its own, private random-number generator built into it? It does, and sort uses that generator to determine the order of tied observations. In the manuals, we at StataCorp are fond of writing, “the ties will be ordered randomly”, and a few sophisticated users probably took that to mean, “the ties will be ordered in a way that we at StataCorp do not know, and even though they might be ordered in a way that will cause a bias in the subsequent analysis, because we don’t know, we’ll ignore the possibility.” But we meant it when we wrote that the ties will be ordered randomly; we know that because we put a random-number generator into sort to ensure the result. And that is why I can now write that repeated values from runiform() cause a reproducibility issue, but not a statistical issue.

The solution to the reproducibility issue is to draw two random numbers and use the random-number pair to order the observations:

 
. use dataset, clear

. sort varnames

. set seed #

. generate double u1 = runiform()

. generate double u2 = runiform()

. sort u1 u2

. keep in 1/n

You might wonder if we would ever need three random numbers. It is very unlikely. p, the probability of no problem, equals 1 to at least 5 digits for N=500,000. Of course, the chances of duplication are always nonzero. If you are concerned about this problem, you could add an assert to the code to verify that the two random numbers together do uniquely identify the observations:

. use dataset, clear

. sort varnames

. set seed #

. generate double u1 = runiform()

. generate double u2 = runiform()

. sort u1 u2

. by u1 u2: assert _N==1            // added line

. keep in 1/n

I do not believe that doing that is necessary.

 
Is using doubles necessary?

In all of the random-number generation above, note that I am storing the results as doubles. For reproducibility, that is important. As I mentioned in part 1, the 32-bit random numbers that runiform() produces will be rounded if forced into 23-bit floats.

Above I gave you a table of probabilities p that, in creating

. generate double u = runiform()

the values of u would be distinct. Here is what would happen if you instead stored u as a float:

                               u stored as
          ---------------------------------------------------------
          -------- double ----------     ----------float ----------
      N         p            1-p               p           1-p
-------------------------------------------------------------------
     50   0.999999715    0.000000285     0.999853979    0.000146021
    500   0.999970955    0.000029045     0.985238383    0.014761617
  1,000   0.999883707    0.000116293     0.942190868    0.057809132
  5,000   0.997094436    0.002905564     0.225346930    0.774653070
 10,000   0.988427154    0.011572846     0.002574145    0.997425855
 50,000   0.747490440    0.252509560     0.000000000    1.000000000
100,000   0.312187988    0.687812012     0.000000000    1.000000000
200,000   0.009498117    0.990501883     0.000000000    1.000000000
300,000   0.000028161    0.999971839     0.000000000    1.000000000
400,000   0.000000008    0.999999992     0.000000000    1.000000000
500,000   0.000000000    1.000000000     0.000000000    1.000000000
-------------------------------------------------------------------

 
Drawing without replacement P-percent random samples

We have discussed drawing without replacement n observations from N observations. The number of observations selected has been fixed. Say instead we wanted to draw a 10% random sample, meaning that we independently allow each observation to have a 10% chance of appearing in our sample. In that case, the final number of observations is expected to be 0.1*N, but it may (and probably will) vary from that. The basic solution for drawing a 10% random sample is

. keep if runiform() <= 0.10

and the basic solution for drawing a P% random sample is

. keep if runiform() <= P/100

It is unlikely to matter whether you code <= or < in the comparison. As you now know, runiform() produces values drawn from 2^32 possible values, and thus the chance of equality is 2^-32, or roughly 0.000000000232830644. If you want a P% sample, however, theory says you should code <=.

If you care about reproducibility, you should expand the basic solution to read,

. set seed #

. use data, clear 

. sort variables_that_put_data_in_unique_order

. keep if runiform() <= P/100

Below I draw a 10% sample from cards.dta:

. set seed 838   

. use cards, clear

. sort rank suit

. keep if runiform() <= 10/100
(46 observations deleted)

. list

     +-----------------+
     |  rank      suit |
     |-----------------|
  1. |     2   Diamond |
  2. |     2     Heart |
  3. |     3      Club |
  4. |     5     Heart |
  5. |  Jack   Diamond |
     |-----------------|
  6. | Queen     Spade |
     +-----------------+

 
We’re not done, but we’re done for today

In part 3 of this series I will discuss drawing random samples with replacement.

Using Stata’s random-number generators, part 1

I want to start a series on using Stata's random-number functions. Stata in fact has ten random-number functions:

  1. runiform() generates rectangularly (uniformly) distributed random numbers over [0,1).
  2. rbeta(a, b) generates beta-distribution beta(a, b) random numbers.
  3. rbinomial(n, p) generates binomial(n, p) random numbers, where n is the number of trials and p the probability of a success.
  4. rchi2(df) generates χ² random numbers with df degrees of freedom.
  5. rgamma(a, b) generates Γ(a, b) random numbers, where a is the shape parameter and b, the scale parameter.
  6. rhypergeometric(N, K, n) generates hypergeometric random numbers, where N is the population size, K is the number in the population having the attribute of interest, and n is the sample size.
  7. rnbinomial(n, p) generates negative binomial -- the number of failures before the nth success -- random numbers, where p is the probability of a success. (n can also be noninteger.)
  8. rnormal(μ, σ) generates Gaussian normal random numbers.
  9. rpoisson(m) generates Poisson(m) random numbers.
  10. rt(df) generates Student's t(df) random numbers.

You already know that these random-number generators do not really produce random numbers; they produce pseudo-random numbers. This series is not about that, so we'll be relaxed about calling them random-number generators.

You should already know that you can set the random-number seed before using the generators. That is not required but it is recommended. You set the seed not to obtain better random numbers, but to obtain reproducible random numbers. In fact, setting the seed too often can actually reduce the quality of the random numbers! If you don't know that, then read help set seed in Stata. I should probably pull out the part about setting the seed too often, expand it, and turn it into a blog entry. Anyway, this series is not about that either.

This series is about the use of random-number generators to solve problems, just as most users usually use them. The series will provide practical advice. I'll stay away from describing how they work internally, although long-time readers know that I won't keep the promise. At least I'll try to make sure that any technical details are things you really need to know. As a result, I probably won't even get to write once that if this is the kind of thing that interests you, StataCorp would be delighted to have you join our development staff.

 

runiform(), generating uniformly distributed random numbers

Mostly I'm going to write about runiform() because runiform() can solve such a variety of problems. runiform() can be used to solve,

  • shuffling data (putting observations in random order),
  • drawing random samples without replacement (there's a minor detail we'll have to discuss because runiform() itself produces values drawn with replacement),
  • drawing random samples with replacement (which is easier to do than most people realize),
  • drawing stratified random samples (with or without replacement),
  • manufacturing fictional data (something teachers, textbook authors, manual writers, and blog writers often need to do).

runiform() generates uniformly, a.k.a. rectangularly distributed, random numbers over the interval, I quote from the manual, "0 to nearly 1".

Nearly 1? "Why not all the way to 1?" you should be asking. "And what exactly do you mean by nearly 1?"

The answer is that the generator is more useful if it omits 1 from the interval, and so we shaved just a little off. runiform() produces random numbers over [0, 0.999999999767169356].

Here are two useful formulas you should commit to memory.

  1. If you want to generate continuous random numbers between a and b, use

    generate double u = (b-a)*runiform() + a

    The random numbers will not actually be between a and b; they will be between a and nearly b, but the top, a + 0.999999999767169356*(b-a), will be so close to b that it will not matter.

    Remember to store continuous random values as doubles.

  2. If you want to generate integer random numbers between a and b, use

    generate ui = floor((b-a+1)*runiform() + a)

    In particular, do not even consider using the formula for continuous values rounded to integers, which is to say, round(u) = round((b-a)*runiform() + a). If you use that formula, and if b-a>1, then a and b will each be underrepresented by 50% in the samples you generate!

    I stored ui as a default float, so I am assuming that -16,777,216 ≤ a < b ≤ 16,777,216. If you have integers outside of that range, however, store as a long or double.

I’m going to spend the rest of this blog entry explaining the above.
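
Before the explanations, here is what the two formulas look like in use; the endpoints (10 and 20 for the continuous case, 1 and 6 for the integer case) are arbitrary choices for illustration.

. clear

. set obs 10

. generate double x = (20-10)*runiform() + 10      // continuous, between 10 and nearly 20

. generate die = floor((6-1+1)*runiform() + 1)     // integers 1, 2, ..., 6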

First, I want to show you how I got the two formulas and why you must use the second formula for generating integer uniform deviates.

Then I want to explain why we shaved a little from the top of runiform(), namely that (1) while it wouldn’t matter for formula 1, it made formula 2 a little easier, (2) the code runs more quickly, (3) we could more easily prove that we had implemented the random-number generator correctly, and (4) anyone digging deeper into our random numbers would not be misled into thinking they had more than 32 bits of resolution. That last point will be important in a future blog entry.

 

Continuous uniforms over [a, b)

runiform() produces random numbers over [0, 1). It therefore obviously follows that (b-a)*runiform()+a produces numbers over [a, b). Substitute 0 for runiform() and the lower limit is obtained. Substitute 1 for runiform() and the upper limit is obtained.

I can tell you that in fact, runiform() produces random numbers over [0, (2^32-1)/2^32].

Thus (b-a)*runiform()+a produces random numbers over [a, a + (b-a)*(2^32-1)/2^32].

(2^32-1)/2^32 approximately equals 0.999999999767169356 and exactly equals 1.fffffffeX-01 if you will allow me to use %21x format, which Stata understands and which you can understand if you see my previous blog posting on precision.

Thus, if you are concerned about results being in the interval [a, b) rather than [a, b], you can rescale runiform() itself and use the formula

generate double u = (b-a)*(runiform()/1.fffffffeX-01) + a

There are seven f’s followed by e in the hexadecimal constant. Alternatively, you could type

generate double u = (b-a)*(runiform()*2^32/(2^32-1)) + a

but the first is less typing, so I’d type that. Actually, I wouldn’t type either one; the small difference between values lying in [a, b) or [a, b] is unimportant.

 

Integer uniforms over [a, b]

Whether we produce real, continuous random numbers over [a, b) or [a, b] may be unimportant, but if we want to draw random integers, the distinction is important.

runiform() produces continuous results over [0, 1).

(b-a)*runiform()+a produces continuous results over [a, b).

To produce integer results, we might round continuous results over segments of the number line:

           a    a+.5  a+1  a+1.5  a+2  a+2.5       b-1.5  b-1  b-.5    b
real line  +-----+-----+-----+-----+-----+-----------+-----+-----+-----+
int  line  |<-a->|<---a+1--->|<---a+2--->|           |<---b-1--->|<-b->|

In the diagram above, think of the numbers being produced by the continuous formula u=(b-a)*runiform()+a as being arrayed along the real line. Then imagine rounding those values, say by using Stata's round(u) function. If you rounded in that way, then

  • Values of u between a and a+0.5 will be rounded to a.
  • Values of u between a+0.5 and a+1.5 will be rounded to a+1.
  • Values of u between a+1.5 and a+2.5 will be rounded to a+2.
  • ...
  • Values of u between b-1.5 and b-0.5 will be rounded to b-1.
  • Values of u between b-0.5 and b will be rounded to b.

Note that the width of the first and last intervals is half that of the other intervals. Given that u follows the rectangular distribution, we thus expect half as many values rounded to a and to b as to a+1 or a+2 or ... or b-1.

And indeed, that is exactly what we would see:

. set obs 100000
obs was 0, now 100000

. gen double u = (5-1)*runiform() + 1

. gen i = round(u)

. summarize u i 

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
           u |    100000    3.005933    1.156486   1.000012   4.999983
           i |    100000     3.00489    1.225757          1          5

. tabulate i

          i |      Freq.     Percent        Cum.
------------+-----------------------------------
          1 |     12,525       12.53       12.53
          2 |     24,785       24.79       37.31
          3 |     24,886       24.89       62.20
          4 |     25,284       25.28       87.48
          5 |     12,520       12.52      100.00
------------+-----------------------------------
      Total |    100,000      100.00

To avoid the problem we need to make the widths of all the intervals equal, and that is what the formula floor((b-a+1)*runiform() + a) does.

           a          a+1         a+2                     b-1          b          b+1
real line  +-----+-----+-----+-----+-----------------------+-----+-----+-----+-----+
int  line  |<--- a --->|<-- a+1 -->|                       |<-- b-1 -->|<--- b --->)

Our intervals are of equal width and thus we expect to see roughly the same number of observations in each:

. gen better = floor((5-1+1)*runiform() + 1)

. tabulate better

     better |      Freq.     Percent        Cum.
------------+-----------------------------------
          1 |     19,808       19.81       19.81
          2 |     20,025       20.02       39.83
          3 |     19,963       19.96       59.80
          4 |     20,051       20.05       79.85
          5 |     20,153       20.15      100.00
------------+-----------------------------------
      Total |    100,000      100.00

So now you know why we shaved a little off the top when we implemented runiform(); it made the formula

floor((b-a+1)*runiform() + a)

easier. Our integer [a, b] formula did not have to concern itself that runiform() would sometimes — rarely — return 1. If runiform() did return the occasional 1, the simple formula above would produce the (correspondingly occasional) b+1.

 

How Stata calculates continuous random numbers

I’ve said that we shaved a little off the top, but the fact was that it was easier for us to do the shaving than not.

runiform() is based on the KISS random-number generator. KISS produces 32-bit integers, meaning integers in the range [0, 2^32-1], or [0, 4,294,967,295]. You might wonder how we converted that range to being continuous over [0, 1).

Start by thinking of the number KISS produces in its binary form:

b31b30b29b28b27b26b25b24b23b22b21b20b19b18b17b16b15b14b13b12b11b10b9b8b7b6b5b4b3b2b1b0

The corresponding integer is b31*2^31 + b30*2^30 + ... + b0*2^0. All we did was insert a binary point out front:

. b31b30b29b28b27b26b25b24b23b22b21b20b19b18b17b16b15b14b13b12b11b10b9b8b7b6b5b4b3b2b1b0

making the real value b31*2^-1 + b30*2^-2 + ... + b0*2^-32. Doing that is equivalent to dividing by 2^32, except that inserting the binary point is faster. Nonetheless, if we had wanted runiform() to produce numbers over [0, 1], we could have divided by 2^32-1.

Anyway, if the KISS random number generator produced 3190625931, which in binary is

10111110001011010001011010001011

we converted that to

0.10111110001011010001011010001011

which equals 0.74287549 in base 10.
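
If you want to check that conversion, the arithmetic is just a division by 2^32:

. display %10.8f 3190625931/2^32
0.74287549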

The largest number the KISS random number generator can produce is, of course,

11111111111111111111111111111111

and 0.11111111111111111111111111111111 equals 0.999999999767169356 in base 10. Thus, the runiform() implementation of KISS generates random numbers in the range [0, 0.999999999767169356].

I could have presented all of this mathematically in base 10: KISS produces integers in the range [0, 2^32-1], and in runiform() we divide by 2^32 to produce continuous numbers over the range [0, (2^32-1)/2^32]. I could have said that, but it loses the flavor and intuition of my longer explanation, and it would gloss over the fact that we just inserted the binary point. If I asked you, a base-10 user, to divide 232 by 10, you wouldn’t actually divide in the same way that you would divide by, say, 9. Dividing by 9 is work. Dividing by 10 merely requires shifting the decimal point. 232 divided by 10 is obviously 23.2. You may not have realized that modern digital computers, when programmed by “advanced” programmers, follow similar procedures.

Oh gosh, I do get to say it! If this sort of thing interests you, consider a career at StataCorp. We’d love to have you.

 

Is it important that runiform() values be stored as doubles?

Sometimes it is important. It’s obviously not important when you are generating random integers using floor((b-a+1)*runiform() + a) and -16,777,216 ≤ a < b ≤ 16,777,216. Integers in that range fit into a float without rounding.

When creating continuous values, remember that runiform() produces 32 bits. floats store 23 bits and doubles store 52, so if you store the result of runiform() as a float, it will be rounded. Sometimes the rounding matters, and sometimes it does not. Next time, we will discuss drawing random samples without replacement; in that case, the rounding matters. In most other cases, including drawing random samples with replacement — something else for later — the rounding does not matter. Rather than thinking hard about the issue, I store all my non-integer random values as doubles.

 

Tune in for the next episode

Yes, please do tune in for the next episode of everything you need to know about using random-number generators. As I already mentioned, we’ll discuss drawing random samples without replacement. In the third installment, I’m pretty sure we’ll discuss random samples with replacement. After that, I’m a little unsure about the ordering, but I want to discuss oversampling of some groups relative to others and, separately, discuss the manufacturing of fictional data.

Am I forgetting something?

Using import excel with real world data

Stata 12’s new import excel command can help you easily import real-world Excel files into Stata. Excel files often contain header and footer information in the first few and last few rows of a sheet, and you may not want that information loaded. Also, the column labels used in the sheet may not be valid Stata variable names and therefore cannot be used as variable names directly. Both of these issues can be easily solved using import excel.

Let’s start by looking at an Excel spreadsheet, metro_gdp.xls, downloaded from the Bureau of Economic Analysis website.

Microsoft Excel screenshot

 

As you can see, the first five rows of the Excel file contain a description of the data, and rows 374 through 381 contain footer notes. We don’t want to load these rows into Stata. import excel has a cellrange() option that can help us avoid unwanted information being loaded.

With cellrange(), you specify the upper left cell and the lower right cell (using standard Excel notation) of the area of data you want loaded. In the file metro_gdp.xls, we want all the data from column A row 6 (upper left cell) to column L row 373 (lower right cell) loaded into Stata. To do this, we type

. import excel metro_gdp.xls, cellrange(A6:L373) clear

In Stata, we open the Data Editor to inspect the loaded data.

Stata Data Editor

 

The first row of the data we loaded contained column labels. Because of these labels, import excel loaded all the data as strings. import excel again has an easy fix. We need to specify the firstrow option to tell import excel that the first row of data contains the variable names.

. import excel metro_gdp.xls, cellrange(A6:L373) firstrow clear

We again open the Data Editor to inspect the data.

Stata Data Editor

 

The data are now in the correct format, but we are missing the year column labels. Stata does not accept numeric variable names, so import excel has to use the Excel column name (C, D, …) for the variable names instead of 2001, 2002, …. The simple solution is to rename the column headers in Excel to something like y2001, y2002, etc., before loading. You can also use Stata to rename the column headers. import excel saves the values in the first row of data as variable labels so that the information is not lost. If we describe the data, we will see all the column labels from the Excel file saved as variable labels.

. describe

Contains data
  obs:           367
 vars:            12
 size:        37,067
-------------------------------------------------------------------------------
              storage  display     value
variable name   type   format      label      variable label
-------------------------------------------------------------------------------
Fips            str5   %9s                    Fips
Area            str56  %56s                   Area
C               long   %10.0g                 2001
D               long   %10.0g                 2002
E               long   %10.0g                 2003
F               long   %10.0g                 2004
G               long   %10.0g                 2005
H               long   %10.0g                 2006
I               long   %10.0g                 2007
J               long   %10.0g                 2008
K               long   %10.0g                 2009
L               long   %10.0g                 2010
-------------------------------------------------------------------------------
Sorted by:
     Note:  dataset has changed since last saved

We want to grab the variable label for each variable by using the extended macro function :variable label varname, create a valid lowercase variable name from that label by using the strtoname() and lower() functions, and rename the variable to the new name by using rename. We can do this with a foreach loop.

foreach var of varlist _all {
        local label : variable label `var'
        local new_name = lower(strtoname("`label'"))
        rename `var' `new_name'
}

Now when we describe our data, they look like this:

. describe

Contains data
  obs:           367
 vars:            12
 size:        37,067                          
-------------------------------------------------------------------------------
              storage  display     value      
variable name   type   format      label      variable label
-------------------------------------------------------------------------------
fips            str5   %9s                    Fips
area            str56  %56s                   Area
_2001           long   %10.0g                 2001
_2002           long   %10.0g                 2002
_2003           long   %10.0g                 2003
_2004           long   %10.0g                 2004
_2005           long   %10.0g                 2005
_2006           long   %10.0g                 2006
_2007           long   %10.0g                 2007
_2008           long   %10.0g                 2008
_2009           long   %10.0g                 2009
_2010           long   %10.0g                 2010
-------------------------------------------------------------------------------
Sorted by:  
     Note:  dataset has changed since last saved

One last thing we might want to do is to rename the year variables from _20## to y20##, which we can easily accomplish with rename:

. rename (_*) (y*)

. describe

Contains data
  obs:           367
 vars:            12
 size:        37,067                          
-------------------------------------------------------------------------------
              storage  display     value      
variable name   type   format      label      variable label
-------------------------------------------------------------------------------
fips            str5   %9s                    Fips
area            str56  %56s                   Area
y2001           long   %10.0g                 2001
y2002           long   %10.0g                 2002
y2003           long   %10.0g                 2003
y2004           long   %10.0g                 2004
y2005           long   %10.0g                 2005
y2006           long   %10.0g                 2006
y2007           long   %10.0g                 2007
y2008           long   %10.0g                 2008
y2009           long   %10.0g                 2009
y2010           long   %10.0g                 2010
-------------------------------------------------------------------------------
Sorted by:  
     Note:  dataset has changed since last saved

The Penultimate Guide to Precision

There have recently been occasional questions on precision and storage types on Statalist despite all that I have written on the subject, much of it posted in this blog. I take that as evidence that I have yet to produce a useful, readable piece that addresses all the questions researchers have.

So I want to try again. This time I’ll try to write the ultimate piece on the subject, making it as short and snappy as possible, and addressing every popular question of which I am aware — including some I haven’t addressed before — and doing all that without making you wade with me into all the messy details, which I know I have a tendency to do.

I am hopeful that from now on, every question that appears on Statalist that even remotely touches on the subject will be answered with a link back to this page. If I succeed, I will place this in the Stata manuals and get it indexed online in Stata so that users can find it the instant they have questions.

What follows is intended to provide everything scientific researchers need to know to judge the effect of storage precision on their work, to know what can go wrong, and to prevent that. I don’t want to raise expectations too much, however, so I will entitle it …

THE PENULTIMATE GUIDE TO PRECISION

  Contents

    1. Numeric types
    2. Floating-point types
    3. Integer types
    4. Integer precision
    5. Floating-point precision
    6. Advice concerning 0.1, 0.2, …
    7. Advice concerning exact data, such as currency data
    8. Advice for programmers
    9. How to interpret %21x format (if you care)
    10. Also see

  1. Numeric types

    1.1 Stata provides five numeric types for storing variables, three of them integer types and two of them floating point.

    1.2 The floating-point types are float and double.

    1.3 The integer types are byte, int, and long.

    1.4 Stata uses these five types for the storage of data.

    1.5 Stata makes all calculations in double precision (and sometimes quad precision) regardless of the type used to store the data.

  2. Floating-point types

    2.1 Stata provides two IEEE 754-2008 floating-point types: float and double.

    2.2 float variables are stored in 4 bytes.

    2.3 double variables are stored in 8 bytes.

    2.4 The ranges of float and double variables are

         Storage
         type             minimum                maximum
         -----------------------------------------------------
         float     -3.40282346639e+ 38      1.70141173319e+ 38
         double    -1.79769313486e+308      8.98846567431e+307
         -----------------------------------------------------
         In addition, float and double can record missing values 
         ., .a, .b, ..., .z.

    The above values are approximations. For those familiar with %21x floating-point hexadecimal format, the exact values are

         Storage
         type                   minimum                maximum
         ------------------------------------------------------- 
         float   -1.fffffe0000000X+07f     +1.fffffe0000000X+07e 
         double  -1.fffffffffffffX+3ff     +1.fffffffffffffX+3fe
         -------------------------------------------------------

    Said differently, and less precisely, float values are in the open interval (-2^128, 2^127), and double values are in the open interval (-2^1024, 2^1023). This is less precise because the intervals shown in the tables are closed intervals.

  3. Integer types

    3.1 Stata provides three integer storage formats: byte, int, and long. They are 1 byte, 2 bytes, and 4 bytes, respectively.

    3.2 Integers may also be stored in Stata’s IEEE 754-2008 floating-point storage formats float and double.

    3.3 Integer values may be stored precisely over the ranges

         storage
         type                   minimum                 maximum
         ------------------------------------------------------
         byte                      -127                     100
         int                    -32,767                  32,740
         long            -2,147,483,647           2,147,483,620
         ------------------------------------------------------
         float              -16,777,216              16,777,216
         double  -9,007,199,254,740,992   9,007,199,254,740,992
         ------------------------------------------------------
         In addition, all storage types can record missing values
         ., .a, .b, ..., .z.

    The overall ranges of float and double were shown in (2.4) and are wider than the ranges for them shown here. The ranges shown here are the subsets of the overall ranges over which no rounding of integer values occurs.

  4. Integer precision

    4.1 (Automatic promotion.) For the integer storage types — for byte, int, and long — numbers outside the ranges listed in (3.3) would be stored as missing (.) except that storage types are promoted automatically. As necessary, Stata promotes bytes to ints, ints to longs, and longs to doubles. Even if a variable is a byte, the effective range is still [-9,007,199,254,740,992, 9,007,199,254,740,992] in the sense that you could change a value of a byte variable to a large value and that value would be stored correctly; the variable that was a byte would, as if by magic, change its type to int, long, or double if that were necessary.
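
    If you want to see automatic promotion for yourself, a quick experiment along these lines should do it; the variable and values are arbitrary.

    . clear

    . set obs 1

    . generate byte x = 100

    . replace x = 5000000          // far outside the byte (and int) ranges

    . describe x                   // x is now stored in a larger type (long)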

    4.2 (Data input.) Automatic promotion (4.1) applies after the data are input/read/imported/copied into Stata. When first reading, importing, copying, or creating data, it is your responsibility to choose appropriate storage types. Be aware that Stata’s default storage type is float, so if you have large integers, it is usually necessary to specify explicitly the types you wish to use.

    If you are unsure of the type to specify for your integer variables, specify double. After reading the data, you can use compress to demote storage types. compress never results in a loss of precision.

    4.3 Note that you can use the floating-point types float and double to store integer data.

    4.3.1 Integers outside the range [-2,147,483,647, 2,147,483,620] must be stored as doubles if they are to be precisely recorded.

    4.3.2 Integers can be stored as float, but avoid doing that unless you are certain they will be inside the range [-16,777,216, 16,777,216] not just when you initially read, import, or copy them into Stata, but subsequently as you make transformations.

    4.3.3 If you read your integer data as floats, and assuming they are within the allowed range, we recommend that you change them to an integer type. You can do that simply by typing compress. We make that recommendation so that your integer variables will benefit from the automatic promotion described in (4.1).

    4.4 Let us show what can go wrong if you do not follow our advice in (4.3). For the floating-point types — for float and double — integer values outside the ranges listed in (3.3) are rounded.

    Consider a float variable, and remember that the integer range for floats is [-16,777,216, 16,777,216]. If you tried to store a value outside that range in the variable — say, 16,777,221 — and if you checked afterward, you would discover that the value actually stored was 16,777,220! Here are some other examples of rounding:

         desired value                            stored (rounded)
         to store            true value             float value 
         ------------------------------------------------------
         maximum             16,777,216              16,777,216 
         maximum+1           16,777,217              16,777,216
         ------------------------------------------------------
         maximum+2           16,777,218              16,777,218
         ------------------------------------------------------
         maximum+3           16,777,219              16,777,220
         maximum+4           16,777,220              16,777,220
         maximum+5           16,777,221              16,777,220
         ------------------------------------------------------
         maximum+6           16,777,222              16,777,222
         ------------------------------------------------------
         maximum+7           16,777,223              16,777,224
         maximum+8           16,777,224              16,777,224
         maximum+9           16,777,225              16,777,224
         ------------------------------------------------------
         maximum+10          16,777,226              16,777,226
         ------------------------------------------------------

    When you store large integers in float variables, values will be rounded and no mention will be made of that fact.
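
    You can reproduce the maximum+1 row of the table above at the command line; the float() function rounds its argument to float precision:

    . display float(16777217)
    16777216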

    And that is why we say that if you have integer data that must be recorded precisely and if the values might be large — outside the range ±16,777,216 — do not use float. Use long or use double; or just use the compress command and let automatic promotion handle the problem for you.

    4.5 Unlike byte, int, and long, float and double variables are not promoted to preserve integer precision.

    Float values are not promoted because, well, they are not. Actually, there is a deep reason, but it has to do with the use of float variables for their real purpose, which is to store non-integer values.

    Double values are not promoted because there is nothing to promote them to. Double is Stata’s most precise storage type. The largest integer value Stata can store precisely is 9,007,199,254,740,992 and the smallest is -9,007,199,254,740,992.

    Integer values outside the range for doubles round in the same way that float values round, except at absolutely larger values.

  5. Floating-point precision

    5.1 The smallest, nonzero value that can be stored in float and double is

         Storage
         type      value          value in %21x         value in base 10
         -----------------------------------------------------------------
         float     ±2^-127    ±1.0000000000000X-07f   ±5.877471754111e-039
         double    ±2^-1022   ±1.0000000000000X-3fe   ±2.225073858507e-308
         -----------------------------------------------------------------

    We include the value shown in the third column, the value in %21x, for those who know how to read it. It is described in (9), but it is unimportant. We are merely emphasizing that these are the smallest values for properly normalized numbers.

    5.2 The smallest value of epsilon such that 1+epsilon ≠ 1 is

         Storage
         type      epsilon       epsilon in %21x        epsilon in base 10
         -----------------------------------------------------------------
         float      ±2^-23     ±1.0000000000000X-017    ±1.19209289551e-07
         double     ±2^-52     ±1.0000000000000X-034    ±2.22044604925e-16
         -----------------------------------------------------------------

    Epsilon is the distance from 1 to the next number on the floating-point number line. The corresponding unit roundoff error is u = ±epsilon/2. The unit roundoff error is the maximum relative roundoff error that is introduced by the floating-point number storage scheme.

    The smallest value of epsilon such that x+epsilon ≠ x is approximately |x|*epsilon, and the corresponding unit roundoff error is ±|x|*epsilon/2.
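
    You can watch the double-precision epsilon at work directly; the expressions below are evaluated in double precision, as all Stata calculations are:

    . display 1 + 2^-53 == 1
    1

    . display 1 + 2^-52 == 1
    0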

    5.3 The precision of the floating-point types is, depending on how you want to measure it,

         Measurement                           float              double
         ----------------------------------------------------------------
         # of binary digits                       23                  52
         # of base 10 digits (approximate)         7                  16 
    
         Relative precision                   ±2^-24              ±2^-53
         ... in base 10 (approximate)      ±5.96e-08           ±1.11e-16
         ----------------------------------------------------------------

    Relative precision is defined as

                           |x - x_as_stored|
                  ± max   ------------------    
                     x            x

    performed using infinite precision arithmetic, x chosen from the subset of reals between the minimum and maximum values that can be stored. It is worth appreciating that relative precision is a worst-case relative error over all possible numbers that can be stored. Relative precision is identical to roundoff error, but perhaps this definition is easier to appreciate.

    5.4 Stata never makes calculations in float precision, even if the data are stored as float.

    Stata makes double-precision calculations regardless of how the numeric data are stored. In some cases, Stata internally uses quad precision, which provides approximately 32 decimal digits of precision. If the result of the calculation is being stored back into a variable in the dataset, then the double (or quad) result is rounded as necessary to be stored.

    5.5 (False precision.) Double precision is 536,870,912 times more accurate than float precision. You may worry that float precision is inadequate to accurately record your data.

    Little in this world is measured to a relative accuracy of ±2^-24, the accuracy provided by float precision.

    Ms. Smith, it is reported, made $112,293 this year. Do you believe that is recorded to an accuracy of ±2^-24*112,293, or approximately ±0.7 cents?

    David was born on 21jan1952, so on 27mar2012 he was 21,981 days old, or 60.18 years old. Recorded in float precision, the precision is ±60.18*2^-24, or roughly ±1.89 minutes.

    Joe reported that he drives 12,234 miles per year. Do you believe that Joe’s report is accurate to ±12,234*2^-24, equivalent to ±3.85 feet?

    A sample of 102,400 people reported that they drove, in total, 1,252,761,600 miles last year. Is that accurate to ±74.7 miles (float precision)? If it is, each of them is reporting with an accuracy of roughly ±3.85 feet.

    The distance from the Earth to the moon is often reported as 384,401 kilometers. Recorded as a float, the precision is ±384,401*2^-24, or ±23 meters, or ±0.023 kilometers. Because the number was not reported as 384,401.000, one would assume float precision would be accurate to record that result. In fact, float precision is more than sufficiently accurate to record the distance because the distance from the Earth to the moon varies from 356,400 to 406,700 kilometers, some 50,300 kilometers. The distance would have been better reported as 384,401 ±25,150 kilometers. At best, the measurement 384,401 has relative accuracy of ±0.033 (it is accurate to roughly two digits).

    Nonetheless, a few things have been measured with more than float accuracy, and they stand out as crowning accomplishments of mankind. Use double as required.

  6. Advice concerning 0.1, 0.2, …

    6.1 Stata uses base 2, binary. Popular numbers such as 0.1, 0.2, 100.21, and so on, have no exact binary representation in a finite number of binary digits. There are a few exceptions, such as 0.5 and 0.25, but not many.

    6.2 If you create a float variable containing 1.1 and list it, it will list as 1.1 but that is only because Stata’s default display format is %9.0g. If you changed that format to %16.0g, the result would appear as 1.1000000238419.

    This scares some users. If this scares you, go back and read (5.5) False Precision. The relative error is still a modest ±2^-24. The number 1.1000000238419 is likely a perfectly acceptable approximation to 1.1 because the 1.1 was never measured to an accuracy of less than ±2^-24 anyway.

    6.3 One reason perfectly acceptable approximations to 1.1 such as 1.1000000238419 may bother you is that you cannot select observations containing 1.1 by typing if x==1.1 if x is a float variable. You cannot because the 1.1 on the right is interpreted as double precision 1.1. To select the observations, you have to type if x==float(1.1).
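
    To see the point of float() concretely, try a small experiment such as the following; the first count reports 0 and the second reports 1.

    . clear

    . set obs 1

    . generate x = 1.1                  // default storage type is float

    . count if x==1.1

    . count if x==float(1.1)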

    6.4 If this bothers you, record the data as doubles. It is best to do this at the point when you read the original data or when you make the original calculation. The number will then appear to be 1.1. It will not really be 1.1, but it will have less relative error, namely, ±2^-53.

    6.5 If you originally read the data and stored them as floats, it is still sometimes possible to recover the double-precision accuracy just as if you had originally read the data into doubles. You can do this if you know how many decimal digits were recorded after the decimal point and if the values are within a certain range.

    If there was one digit after the decimal point and if the data are in the range [-1,048,576, 1,048,576], which means the values could be -1,048,576, -1,048,575.9, …, -1, 0, 1, …, 1,048,575.9, 1,048,576, then typing

    . gen double y = round(x*10)/10

    will recover the full double-precision result. Stored in y will be the number in double precision just as if you had originally read it that way.

    It is not possible, however, to recover the original result if x is outside the range ±1,048,576 because the float variable contains too little information.

    You can do something similar when there are two, three, or more decimal digits:

         # digits to
         right of 
         decimal pt.   range     command
         -----------------------------------------------------------------
             1      ±1,048,576   gen double y = round(x*10)/10
             2      ±  131,072   gen double y = round(x*100)/100
             3      ±   16,384   gen double y = round(x*1000)/1000
             4      ±    1,024   gen double y = round(x*10000)/10000
             5      ±      128   gen double y = round(x*100000)/100000
             6      ±       16   gen double y = round(x*1000000)/1000000
             7      ±        1   gen double y = round(x*10000000)/10000000
         -----------------------------------------------------------------

    Range is the range of x over which command will produce correct results. For instance, range = ±16 in the next-to-the-last line means that the values recorded in x must be -16 ≤ x ≤ 16.
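
    As a quick check of the one-digit case, consider a made-up value read as a float:

    . clear

    . set obs 1

    . generate x = 123456.7               // float; the value actually stored is 123456.703125

    . generate double y = round(x*10)/10  // recovers 123456.7 in double precision

    . format x y %16.0g

    . list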

  7. Advice concerning exact data, such as currency data

    7.1 Yes, there are exact data in this world. Such data are usually counts of something or are currency data, which you can think of as counts of pennies ($0.01) or the smallest unit in whatever currency you are using.

    7.2 Just because the data are exact does not mean you need exact answers. It may still be that calculated answers are adequate if the data are recorded to a relative accuracy of ±2^-24 (float). For most analyses — even of currency data — this is often adequate. The U.S. deficit in 2011 was $1.5 trillion. Stored as a float, this amount has a (maximum) error of ±2^-24*1.5e+12 = ±$89,406.97. It would be difficult to imagine that ±$89,406.97 would affect any government decision maker dealing with the full $1.5 trillion.

    7.3 That said, you sometimes do need to make exact calculations. Banks tracking their accounts need exact amounts. It is not enough to say to account holders that we have your money within a few pennies, dollars, or hundreds of dollars.

    In that case, the currency data should be converted to integers (pennies) and stored as integers, and then processed as described in (4). Assuming the dollar-and-cent amounts were read into doubles, you can convert them into pennies by typing

    . replace x = round(x*100)

    The round() matters because amounts such as 0.29 have no exact binary representation (see 6.1); without it, x*100 could be stored as 28.999999999999996 rather than 29.

    7.4 If you mistakenly read the currency data as a float, you do not have to re-read the data if the dollar amounts are between ±$131,072. You can type

    . gen double x_in_pennies = round(x*100)

    This works only if x is between ±131,072.

  8. Advice for programmers

    8.1 Stata does all calculations in double (and sometimes quad) precision.

    Float precision may be adequate for recording most data, but float precision is inadequate for performing calculations. That is why Stata does all calculations in double precision. Float precision is also inadequate for storing the results of intermediate calculations.

    There is only one situation in which you need to exercise caution — if you create variables in the data containing intermediate results. Be sure to create all such variables as doubles.

    8.2 The same quad-precision routines StataCorp uses are available to you in Mata; see the manual entries [M-5] mean, [M-5] sum, [M-5] runningsum, and [M-5] quadcross. Use them as you judge necessary.
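
    As a contrived illustration of what those quad-precision routines buy you, consider summing a vector whose exact total is 1 (just a sketch):

    mata:
        x = (1e16, 1, -1e16)
        sum(x)          // double-precision sum; the 1 is lost to rounding, so this returns 0
        quadsum(x)      // quad precision carries the 1 along and returns the exact answer, 1
    end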

  9. How to interpret %21x format (if you care)

    9.1 Stata has a display format that will display IEEE 754-2008 floating-point numbers in their full binary glory but in a readable way. You probably do not care; if so, skip this section.

    9.2 IEEE 754-2008 floating-point numbers are stored as a pair of numbers (a, b) that are given the interpretation

    z = a * 2^b

    where -2 < a < 2. In double precision, a is recorded with 52 binary digits. In float precision, a is recorded with 23 binary digits. For example, the number 2 is recorded in double precision as

    a = +1.0000000000000000000000000000000000000000000000000000
    b = +1

    The value of pi is recorded as

    a = +1.1001001000011111101101010100010001000010110100011000
    b = +1

    9.3 %21x presents a and b in base 16. The double-precision value of 2 is shown in %21x format as

    +1.0000000000000X+001

    and the value of pi is shown as

    +1.921fb54442d18X+001

    In the case of pi, the interpretation is

    a = +1.921fb54442d18 (base 16)
    b = +001             (base 16)

    Reading this requires practice. It helps to remember that one-half corresponds to 0.8 (base 16). Thus, we can see that a is slightly larger than 1.5 (base 10) and b = 1 (base 10), so _pi is something over 1.5*2^1 = 3.

    The number 100,000 in %21x is

    +1.86a0000000000X+010

    which is to say

    a = +1.86a0000000000 (base 16)
    b = +010             (base 16)

    We see that a is slightly over 1.5 (base 10), and b is 16 (base 10), so 100,000 is something over 1.5*2^16 = 98,304.

    9.4 %21x faithfully presents how the computer thinks of the number. For instance, we can easily see that the nice number 1.1 (base 10) is, in binary, a number with many digits to the right of the binary point:

    . display %21x 1.1
    +1.199999999999aX+000

    We can also see why 1.1 stored as a float is different from 1.1 stored as a double:

    . display %21x float(1.1)
    +1.19999a0000000X+000

    Float precision assigns fewer digits to the mantissa than does double precision, and 1.1 (base 10) in base 16 is a repeating hexadecimal.

    9.5 %21x can be used as an input format as well as an output format. For instance, Stata understands

    . gen x = 1.86ax+10

    Stored in x will be 100,000 (base 10).

    9.6 StataCorp has seen too many competent scientific programmers who, needing a perturbance for later use in their program, code something like

    epsilon = 1e-8

    It is worth examining that number:

    . display %21x 1e-8
    +1.5798ee2308c3aX-01b

    That is an ugly number that can only lead to the introduction of roundoff error in their program. A far better number would be

    epsilon = 1.0x-1b

    Stata and Mata understand the above statement because %21x may be used as input as well as output. Naturally, 1.0x-1b looks just like what it is,

    . display %21x 1.0x-1b
    +1.0000000000000X-01b

    and all those pretty zeros will reduce numerical roundoff error.

    In base 10, the pretty 1.0x-1b looks like

    . display %20.0g 1.0x-1b
    7.4505805969238e-09

    and that number may not look pretty to you, but you are not a base-2 digital computer.

    Perhaps the programmer feels that epsilon really needs to be closer to 1e-8. In %21x, we see that 1e-8 is +1.5798ee2308c3aX-01b, so if we want to get closer, perhaps we use

    epsilon = 1.6x-1b

    9.7 %21x was invented by StataCorp.

  10. Also see

    If you wish to learn more, see

    How to read the %21x format

    How to read the %21x format, part 2

    Precision (yet again), Part I

    Precision (yet again), Part II