Update on the Stata YouTube Channel

What is it about round numbers that compels us to pause and reflect? We celebrate 20-year school reunions, 25-year wedding anniversaries, 50th birthdays, and other similar milestones. I don’t know the answer, but the Stata YouTube Channel recently passed several milestones of its own: more than 1,500 subscribers, over 50,000 video views, and six months since its launch. We felt the need for a small celebration to mark the occasion, and I thought I would give you a brief update.

I could tell you about re-recording the original 24 videos with a larger font to make them easier to read. I could tell you about the hardware and software that we use to record them, including our experiments with various condenser and dynamic microphones. I could share quotes from some of the nice messages we’ve received. But I think it would be more fun to talk about…you!

YouTube collects data about the number of views each video receives as well as summary data about who, what, when, where, and how you are watching them. There is no need to be concerned about your privacy; there are no personal identifiers of any kind associated with these data. But the summary data are interesting, and I thought it might be fun to share some of the data with you.

Who’s watching?

[Figure 1: Age distribution of Stata YouTube Channel viewers]

Figure 1 shows the age distribution of Stata YouTube Channel viewers. If you have ever attended a Stata Conference, you will not be surprised by this graph…until you notice the age group at the bottom. I would not have guessed that 13-17 year olds are watching our videos. Perhaps they saw Stata in the movie “Moneyball” with Brad Pitt and wanted to learn more. Or maybe they were influenced by the latest fashion craze sweeping the youth of the world.

What are you watching?

[Figure 2: Total views for the ten most popular videos]

We have posted more than 50 videos over a wide range of topics. Figure 2 shows the total number of views for the ten most popular videos. The more popular of the ten are about broad topics. These broader videos are mostly older and have thus had time to accumulate more views.

Even so, these videos receive more views per day currently than do the special topic videos that have been posted more recently. This supports my belief that Stata YouTube Channel viewers tend to be relatively new Stata users who want to learn about general topics, and that means more generic videos in the future. So you and your two post-docs will just have to read the manual if you want to learn how to fit asymmetric power ARCH models with outer-product gradient standard errors.

When are you watching?

[Figure 3: Average views per day, by day of the week]

We usually post new videos on Tuesday mornings which might lead you to believe that the peak viewing day would also be Tuesday. Figure 3, however, shows us that the average number of views per day (vpd) is higher on Wednesdays at 420 vpd and in fact peaks on Thursdays at 430 vpd before declining Friday through Sunday.

[Figure 4: Views per day since the channel launched]

Figure 4 also shows us that late September may not have been the best time to launch the Stata YouTube Channel. Our early momentum in September and October slowed during the November and December holiday seasons. We were, however, pleased to see that 49 of you spent New Year’s Eve watching our videos. Perhaps next year we’ll prepare something more festive just for you!

Where are you watching?

What do the Czech Republic, Pakistan, Uganda, Madagascar, the United Kingdom, the Bahamas, the United States, Montenegro, and Italy have in common? Correct! They are all countries in which you are watching our videos. They are also locations depicted in one of my favorite action films but I’ll leave that to the trivia buffs. I think the most exciting information that we found in our data is that the Stata YouTube Channel is being viewed in 164 countries!

[Figure 5: Top 20 countries ranked by view rate]

You might not be surprised to learn that roughly half of the people watching the videos live in the United States, the United Kingdom, or Canada. The results are more unexpected when we consider the “view rate”, defined as the number of views per 100,000 residents. Figure 5 shows the top 20 countries ranked by view rate among countries with at least four million residents. Denmark had the highest view rate, nearly twice that of second-place Norway and more than three times the rates in the US and the UK.
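Out of curiosity, the view rate is just a rescaled ratio. If you had the counts in a Stata dataset, it would be a one-line calculation (a hypothetical sketch; the variable names views and population are made up for illustration):

generate view_rate = 100000 * views / population    // views per 100,000 residents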

How are you watching?

You might think that I wouldn’t have anything to report about “how” you are watching the videos, but it turns out that 5.2% of you are watching on mobile devices. Perhaps this explains the 13-17 year old demographic or the 49 people watching on New Year’s Eve. Or maybe we are helping you pass the time in the dentist’s office waiting room.

Final thoughts

Six months isn’t much of a milestone, but we Stata folk will use any excuse to break out the cake and ice cream. Even so, the Stata YouTube Channel began as an experiment, and experiments often do not work out as we would like. This experiment has exceeded our expectations and, as a result, we have started taking requests for videos on our Facebook page and we’ll be adding more videos every week. So thanks for watching and stay tuned!

Now if you will excuse me, I’m going to get some cake and ice cream.


Multilevel linear models in Stata, part 2: Longitudinal data

In my last posting, I introduced you to the concepts of hierarchical or “multilevel” data. In today’s post, I’d like to show you how to use multilevel modeling techniques to analyze longitudinal data with Stata’s xtmixed command.

Last time, we noticed that our data had two features. First, the means within each level of the hierarchy differed from each other, and we incorporated that into our data analysis by fitting a “variance component” model using Stata’s xtmixed command.

The second feature was that the repeated measurements of GSP showed an upward trend. We’ll pick up where we left off last time and again stick to the concepts; you can refer to the references at the end to learn more about the details.

The videos

Stata has a very friendly dialog box that can assist you in building multilevel models. If you would like a brief introduction using the GUI, you can watch a demonstration on Stata’s YouTube Channel:

Introduction to multilevel linear models in Stata, part 2: Longitudinal data

Longitudinal data

I’m often asked by beginning data analysts – “What’s the difference between longitudinal data and time-series data? Aren’t they the same thing?”.

The confusion is understandable — both types of data involve some measurement of time. But the answer is no, they are not the same thing.

Univariate time series data typically arise from the collection of many data points over time from a single source, such as from a person, country, financial instrument, etc.

Longitudinal data typically arise from collecting a few observations over time from many sources, such as a few blood pressure measurements from many people.

There are some multivariate time series that blur this distinction but a rule of thumb for distinguishing between the two is that time series have more repeated observations than subjects while longitudinal data have more subjects than repeated observations.

Because our GSP data from last time involve 17 measurements from 48 states (more sources than measurements), we will treat them as longitudinal data.

GSP Data: http://www.stata-press.com/data/r12/productivity.dta
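If you would like to confirm that structure yourself, xtset and xtdescribe will report the panel layout (a quick check using the dataset linked above):

use http://www.stata-press.com/data/r12/productivity.dta, clear
xtset state year    // declare state as the panel identifier and year as the time variable
xtdescribe          // reports 48 panels, each observed annually from 1970 to 1986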

Random intercept models

As I mentioned last time, repeated observations on a group of individuals can be conceptualized as multilevel data and modeled just as any other multilevel data. We left off last time with a variance component model for GSP (Gross State Product, logged) and noted that our model assumed a constant GSP over time while the data showed a clear upward trend.

Graph3

If we consider a single observation and think about our model, nothing in the fixed or random part of the model is a function of time.

Slide15

Let’s begin by adding the variable year to the fixed part of our model.

Slide16

As we expected, our grand mean has become a linear regression which more accurately reflects the change over time in GSP. What might be unexpected is that each state’s and region’s mean has changed as well and now has the same slope as the regression line. This is because none of the random components of our model are a function of time. Let’s fit this model with the xtmixed command:

. xtmixed gsp year, || region: || state:

------------------------------------------------------------------------------
         gsp |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        year |   .0274903   .0005247    52.39   0.000     .0264618    .0285188
       _cons |  -43.71617   1.067718   -40.94   0.000    -45.80886   -41.62348
------------------------------------------------------------------------------

------------------------------------------------------------------------------
  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
region: Identity             |
                   sd(_cons) |   .6615238   .2038949      .3615664    1.210327
-----------------------------+------------------------------------------------
state: Identity              |
                   sd(_cons) |   .7805107   .0885788      .6248525    .9749452
-----------------------------+------------------------------------------------
                sd(Residual) |   .0734343   .0018737      .0698522    .0772001
------------------------------------------------------------------------------

The fixed part of our model now displays an estimate of the intercept (_cons = -43.7) and the slope (year = 0.027). Let’s graph the model for Region 7 and see if it fits the data better than the variance component model.

predict GrandMean, xb
label var GrandMean "GrandMean"
predict RegionEffect, reffects level(region)
predict StateEffect, reffects level(state)
gen RegionMean = GrandMean + RegionEffect
gen StateMean = GrandMean + RegionEffect + StateEffect

twoway  (line GrandMean year, lcolor(black) lwidth(thick))      ///
        (line RegionMean year, lcolor(blue) lwidth(medthick))   ///
        (line StateMean year, lcolor(green) connect(ascending)) ///
        (scatter gsp year, mcolor(red) msize(medsmall))         ///
        if region ==7,                                          ///
        ytitle(log(Gross State Product), margin(medsmall))      ///
        legend(cols(4) size(small))                             ///
        title("Multilevel Model of GSP for Region 7", size(medsmall))

Graph4

That looks like a much better fit than our variance-components model from last time. Perhaps I should leave well enough alone, but I can’t help noticing that the slopes of the green lines for each state don’t fit as well as they could. The top green line fits nicely, but the second from the top looks like it slopes upward more than is necessary. That’s the best fit we can achieve if the regression lines are forced to be parallel to each other. But what if the lines were not forced to be parallel? What if we could fit a “mini-regression model” for each state within the context of our overall multilevel model? Well, good news — we can!

Random slope models

By introducing the variable year to the fixed part of the model, we turned our grand mean into a regression line. Next I’d like to incorporate the variable year into the random part of the model. By introducing a fourth random component that is a function of time, I am effectively estimating a separate regression line within each state.

Slide19

Notice that the size of the new, brown deviation u1ij. is a function of time. If the observation were one year to the left, u1ij. would be smaller, and if the observation were one year to the right, u1ij. would be larger.

It is common to “center” the time variable before fitting these kinds of models. Explaining why is for another day. The quick answer is that, at some point during the fitting of the model, Stata will have to compute the equivalent of the inverse of the square of year. For the year 1986 this turns out to be 2.535e-07. That’s a fairly small number, and if we multiply it by another small number…well, you get the idea. By centering year (e.g., cyear = year - 1978), we get a more reasonable number for 1986 (about 0.016). (Hint: If you have problems with your model converging and you have large values for time, try centering them. It won’t always help, but it might.)
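To get a feel for the scale involved, you can compute those two quantities directly (a rough illustration only; the calculations Stata actually performs during estimation are more involved):

display 1/(1986^2)             // about 2.535e-07, uncentered
display 1/((1986 - 1978)^2)    // about 0.016 after centering at 1978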

So let’s center our year variable by subtracting 1978 and fit a model that includes a random slope.

gen cyear = year - 1978
xtmixed gsp cyear, || region: || state: cyear, cov(indep)

Slide21

I’ve color-coded the output so that we can match each part of the output back to the model and the graph. The fixed part of the model appears in the top table, and it looks like any other simple linear regression model. The random part of the model is definitely more complicated. If you get lost, look back at the graphic of the deviations and remind yourself that we have simply partitioned the deviation of each observation into four components. If we did this for every observation, the standard deviations in our output would simply summarize the typical size of those deviations.

Let’s look at a graph of our new “random slope” model for Region 7 and see how well it fits our data.

predict GrandMean, xb
label var GrandMean "GrandMean"
predict RegionEffect, reffects level(region)
predict StateEffect_year StateEffect_cons, reffects level(state)

gen RegionMean = GrandMean + RegionEffect
gen StateMean_cons = GrandMean + RegionEffect + StateEffect_cons
gen StateMean_year = GrandMean + RegionEffect + StateEffect_cons + ///
                     (cyear*StateEffect_year)

twoway  (line GrandMean cyear, lcolor(black) lwidth(thick))             ///
        (line RegionMean cyear, lcolor(blue) lwidth(medthick))          ///
        (line StateMean_cons cyear, lcolor(green) connect(ascending))   ///
        (line StateMean_year cyear, lcolor(brown) connect(ascending))   ///
        (scatter gsp cyear, mcolor(red) msize(medsmall))                ///
        if region ==7,                                                  ///
        ytitle(log(Gross State Product), margin(medsmall))              ///
        legend(cols(3) size(small))                                     ///
        title("Multilevel Model of GSP for Region 7", size(medsmall))

Graph6

The top brown line fits the data slightly better, but the brown line below it (second from the top) is a much better fit. Mission accomplished!

Where do we go from here?

I hope I have been able to convince you that multilevel modeling is easy using Stata’s xtmixed command and that this is a tool that you will want to add to your kit. I would love to say something like “And that’s all there is to it. Go forth and build models!”, but I would be remiss if I didn’t point out that I have glossed over many critical topics.

In our GSP example, we would still like to consider the impact of other independent variables. I haven’t mentioned the choice of estimation method (ML or REML in the case of xtmixed). I’ve assessed the fit of our models by looking at graphs, an approach that is important but incomplete. We haven’t thought about hypothesis testing. And all the usual residual diagnostics for linear regression, such as checking for outliers, influential observations, heteroskedasticity, and normality, still apply…times four! But now that you understand the concepts and some of the mechanics, it shouldn’t be difficult to fill in the details. If you’d like to learn more, check out the links below.
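As a taste of two of those topics, here is a hedged sketch of how you might refit the random-slope model with REML and compare the random-intercept and random-slope models with a likelihood-ratio test. The commands and options are standard Stata, but treat this as an illustration rather than a complete analysis:

* Refit the random-slope model by REML instead of the default ML
xtmixed gsp cyear || region: || state: cyear, cov(independent) reml

* Likelihood-ratio test of the random slope (both models fit by ML so the test is valid;
* because a variance is tested at its boundary, the reported p-value is conservative)
quietly xtmixed gsp cyear || region: || state:
estimates store RandomIntercept
quietly xtmixed gsp cyear || region: || state: cyear, cov(independent)
estimates store RandomSlope
lrtest RandomSlope RandomIntercept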

I hope this was helpful…thanks for stopping by.

For more information

If you’d like to learn more about modeling multilevel and longitudinal data, check out

Multilevel and Longitudinal Modeling Using Stata, Third Edition
Volume I: Continuous Responses
Volume II: Categorical Responses, Counts, and Survival
by Sophia Rabe-Hesketh and Anders Skrondal

or sign up for our popular public training course Multilevel/Mixed Models Using Stata.

Multilevel linear models in Stata, part 1: Components of variance

In the last 15-20 years multilevel modeling has evolved from a specialty area of statistical research into a standard analytical tool used by many applied researchers.

Stata has a lot of multilevel modeling capabilities.

I want to show you how easy it is to fit multilevel models in Stata. Along the way, we’ll unavoidably introduce some of the jargon of multilevel modeling.

I’m going to focus on concepts and ignore many of the details that would be part of a formal data analysis. I’ll give you some suggestions for learning more at the end of the post.

The videos

Stata has a friendly dialog box that can assist you in building multilevel models. If you would like a brief introduction using the GUI, you can watch a demonstration on Stata’s YouTube Channel:

Introduction to multilevel linear models in Stata, part 1: The xtmixed command

Multilevel data

Multilevel data are characterized by a hierarchical structure. A classic example is children nested within classrooms and classrooms nested within schools. The test scores of students within the same classroom may be correlated due to exposure to the same teacher or textbook. Likewise, the average test scores of classes might be correlated within a school due to the similar socioeconomic level of the students.

You may have run across datasets with these kinds of structures in your own work. For our example, I would like to use a dataset that has both longitudinal and classical hierarchical features. You can access this dataset from within Stata by typing the following command:

use http://www.stata-press.com/data/r12/productivity.dta

We are going to build a model of gross state product for 48 states in the USA measured annually from 1970 to 1986. The states have been grouped into nine regions based on their economic similarity. For distributional reasons, we will be modeling the logarithm of annual Gross State Product (GSP) but in the interest of readability, I will simply refer to the dependent variable as GSP.

. describe gsp year state region

              storage  display     value
variable name   type   format      label      variable label
-----------------------------------------------------------------------------
gsp             float  %9.0g                  log(gross state product)
year            int    %9.0g                  years 1970-1986
state           byte   %9.0g                  states 1-48
region          byte   %9.0g                  regions 1-9

Let’s look at a graph of these data to see what we’re working with.

twoway (line gsp year, connect(ascending)), ///
        by(region, title("log(Gross State Product) by Region", size(medsmall)))

graph1

Each line represents the trajectory of a state’s (log) GSP over the years 1970 to 1986. The first thing I notice is that the groups of lines are different in each of the nine regions. Some groups of lines seem higher and some groups seem lower. The second thing that I notice is that the slopes of the lines are not the same. I’d like to incorporate those attributes of the data into my model.

Components of variance

Let’s tackle the vertical differences in the groups of lines first. If we think about the hierarchical structure of these data, I have repeated observations nested within states which are in turn nested within regions. I used color to keep track of the data hierarchy.

slide2

We could compute the mean GSP within each state and note that the observations within each state vary about their state mean.

slide3

Likewise, we could compute the mean GSP within each region and note that the state means vary about their regional mean.

slide4

We could also compute a grand mean and note that the regional means vary about the grand mean.

slide5
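If you want to see these means concretely, they are easy to compute with egen (a small sketch; the variable names StateMean0, RegionMean0, and GrandMean0 are illustrative only and are not used elsewhere in this post):

egen StateMean0  = mean(gsp), by(state)     // mean GSP within each state
egen RegionMean0 = mean(gsp), by(region)    // mean GSP within each region
egen GrandMean0  = mean(gsp)                // overall grand mean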

Next, let’s introduce some notation to help us keep track of our multilevel structure. In the jargon of multilevel modeling, the repeated measurements of GSP are described as “level 1”, the states are referred to as “level 2”, and the regions are “level 3”. I can add a three-part subscript to each observation to keep track of its place in the hierarchy.

slide7

Now let’s think about our model. The simplest regression model is the intercept-only model which is equivalent to the sample mean. The sample mean is the “fixed” part of the model and the difference between the observation and the mean is the residual or “random” part of the model. Econometricians often prefer the term “disturbance”. I’m going to use the symbol μ to denote the fixed part of the model. μ could represent something as simple as the sample mean or it could represent a collection of independent variables and their parameters.

slide8

Each observation can then be described in terms of its deviation from the fixed part of the model.

slide9

If we computed this deviation of each observation, we could estimate the variability of those deviations. Let’s try that for our data using Stata’s xtmixed command to fit the model:

. xtmixed gsp

Mixed-effects ML regression                     Number of obs      =       816

                                                Wald chi2(0)       =         .
Log likelihood = -1174.4175                     Prob > chi2        =         .

------------------------------------------------------------------------------
         gsp |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _cons |   10.50885   .0357249   294.16   0.000     10.43883    10.57887
------------------------------------------------------------------------------

------------------------------------------------------------------------------
  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
                sd(Residual) |   1.020506   .0252613      .9721766    1.071238
------------------------------------------------------------------------------

The top table in the output shows the fixed part of the model which looks like any other regression output from Stata, and the bottom table displays the random part of the model. Let’s look at a graph of our model along with the raw data and interpret our results.

predict GrandMean, xb
label var GrandMean "GrandMean"
twoway  (line GrandMean year, lcolor(black) lwidth(thick))              ///
        (scatter gsp year, mcolor(red) msize(tiny)),                    ///
        ytitle(log(Gross State Product), margin(medsmall))              ///
        legend(cols(4) size(small))                                     ///
        title("GSP for 1970-1986 by Region", size(medsmall))

graph1b

The thick black line in the center of the graph is the estimate of _cons, which is an estimate of the fixed part of the model for GSP. In this simple model, _cons is the sample mean, which is equal to 10.51. In the “Random-effects Parameters” section of the output, sd(Residual) is the average vertical distance between each observation (the red dots) and the fixed part of the model (the black line). In this model, sd(Residual) is the estimate of the sample standard deviation, which equals 1.02.

At this point you may be thinking to yourself – “That’s not very interesting – I could have done that with Stata’s summarize command”. And you would be correct.

. summ gsp

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
         gsp |       816    10.50885    1.021132    8.37885   13.04882

But here’s where it does become interesting. Let’s make the random part of the model more complex to account for the hierarchical structure of the data. Consider a single observation, yijk, and take another look at its residual.

slide11

The observation deviates from its state mean by an amount that we will denote eijk. The state mean deviates from the regional mean by an amount that we will denote uij., and the regional mean deviates from the fixed part of the model, μ, by an amount that we will denote ui... We have partitioned the observation’s residual into three parts, aka “components”, that describe its magnitude relative to the state, region, and grand means. If we calculated this set of residuals for each observation, we could estimate the variability of those residuals and make distributional assumptions about them.

slide12

These kinds of models are often called “variance component” models because they estimate the variability accounted for by each level of the hierarchy. We can estimate a variance component model for GSP using Stata’s xtmixed command:

xtmixed gsp, || region: || state:

------------------------------------------------------------------------------
         gsp |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _cons |   10.65961   .2503806    42.57   0.000     10.16887    11.15035
------------------------------------------------------------------------------

------------------------------------------------------------------------------
  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
region: Identity             |   
                   sd(_cons) |   .6615227   .2038944       .361566    1.210325
-----------------------------+------------------------------------------------
state: Identity              |   
                   sd(_cons) |   .7797837   .0886614      .6240114    .9744415
-----------------------------+------------------------------------------------
                sd(Residual) |   .1570457   .0040071       .149385    .1650992
------------------------------------------------------------------------------

The fixed part of the model, _cons, is still the sample mean. But now there are three parameter estimates in the bottom table labeled “Random-effects Parameters”. Each quantifies the average deviation at each level of the hierarchy.
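A natural follow-up question is how much of the total variation each level accounts for. You can plug the three standard deviations from the output above into a quick back-of-the-envelope calculation (simple display arithmetic, not formal inference):

display "region share   = " %5.3f .6615227^2/(.6615227^2 + .7797837^2 + .1570457^2)
display "state share    = " %5.3f .7797837^2/(.6615227^2 + .7797837^2 + .1570457^2)
display "residual share = " %5.3f .1570457^2/(.6615227^2 + .7797837^2 + .1570457^2)

Roughly 41% of the variation lies between regions, 57% between states within regions, and only about 2% within states.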

Let’s graph the predictions from our model and see how well they fit the data.

predict GrandMean, xb
label var GrandMean "GrandMean"
predict RegionEffect, reffects level(region)
predict StateEffect, reffects level(state)
gen RegionMean = GrandMean + RegionEffect
gen StateMean = GrandMean + RegionEffect + StateEffect

twoway  (line GrandMean year, lcolor(black) lwidth(thick))      ///
        (line RegionMean year, lcolor(blue) lwidth(medthick))   ///
        (line StateMean year, lcolor(green) connect(ascending)) ///
        (scatter gsp year, mcolor(red) msize(tiny)),            ///
        ytitle(log(Gross State Product), margin(medsmall))      ///
        legend(cols(4) size(small))                             ///
        by(region, title("Multilevel Model of GSP by Region", size(medsmall)))

graph2

Wow – that’s a nice graph if I do say so myself. It would be impressive for a report or publication, but it’s a little tough to read with all nine regions displayed at once. Let’s take a closer look at Region 7 instead.

twoway  (line GrandMean year, lcolor(black) lwidth(thick))      ///
        (line RegionMean year, lcolor(blue) lwidth(medthick))   ///
        (line StateMean year, lcolor(green) connect(ascending)) ///
        (scatter gsp year, mcolor(red) msize(medsmall))         ///
        if region ==7,                                          ///
        ytitle(log(Gross State Product), margin(medsmall))      ///
        legend(cols(4) size(small))                             ///
        title("Multilevel Model of GSP for Region 7", size(medsmall))

graph3

The red dots are the observations of GSP for each state within Region 7. The green lines are the estimated mean GSP within each State and the blue line is the estimated mean GSP within Region 7. The thick black line in the center is the overall grand mean for all nine regions. The model appears to fit the data fairly well but I can’t help noticing that the red dots seem to have an upward slant to them. Our model predicts that GSP is constant within each state and region from 1970 to 1986 when clearly the data show an upward trend.

So we’ve tackled the first feature of our data. We’ve successfully incorporated the basic hierarchical structure into our model by fitting a variance component model using Stata’s xtmixed command. But our graph tells us that we aren’t finished yet.

Next time we’ll tackle the second feature of our data — the longitudinal nature of the observations.

For more information

If you’d like to learn more about modeling multilevel and longitudinal data, check out

Multilevel and Longitudinal Modeling Using Stata, Third Edition
Volume I: Continuous Responses
Volume II: Categorical Responses, Counts, and Survival
by Sophia Rabe-Hesketh and Anders Skrondal

or sign up for our popular public training course Multilevel/Mixed Models Using Stata.

There’s a course coming up in Washington, DC on February 7-8, 2013.

Using Stata’s random-number generators, part 4, details

24 October 2012

For those interested in how pseudo random number generators work, I just wrote something on Statalist which you can see in the Statalist archives by clicking the link even if you do not subscribe:

http://www.stata.com/statalist/archive/2012-10/msg01129.html

To remind you, I’ve been writing about how to use random-number generators in parts 1, 2, and 3, and I still have one more posting I want to write on the subject. What I just wrote on Statalist, however, is about how random-number generators work, and I think you will find it interesting.

To find out more about Statalist, see

Statalist

How to successfully ask a question on Statalist

Using Stata’s SEM features to model the Beck Depression Inventory

I just got back from the 2012 Stata Conference in San Diego where I gave a talk on Psychometric Analysis Using Stata and from the 2012 American Psychological Association Meeting in Orlando. Stata’s structural equation modeling (SEM) builder was popular at both meetings and I wanted to show you how easy it is to use. If you are not familiar with the basics of SEM, please refer to the references at the end of the post. My goal is simply to show you how to use the SEM builder assuming that you already know something about SEM. If you would like to view a video demonstration of the SEM builder, please click the play button below:

The data used here and for the silly examples in my talk were simulated to resemble one of the most commonly used measures of depression: the Beck Depression Inventory (BDI). If you find these data too silly or not relevant to your own research, you could instead imagine it being a set of questions to measure mathematical ability, the ability to use a statistical package, or whatever you wanted.

The Beck Depression Inventory

Originally published by Aaron Beck and colleagues in 1961, the BDI marked an important change in the conceptualization of depression from a psychoanalytic perspective to a cognitive/behavioral perspective. It was also a landmark in the measurement of depression shifting from lengthy, expensive interviews with a psychiatrist to a brief, inexpensive questionnaire that could be scored and quantified. The original inventory consisted of 21 questions each allowing ordinal responses of increasing symptom severity from 0-3. The sum of the responses could then be used to classify a respondent’s depressive symptoms as none, mild, moderate or severe. Many studies have demonstrated that the BDI has good psychometric properties such as high test-retest reliability and the scores correlate well with the assessments of psychiatrists and psychologists. The 21 questions can also be grouped into two subscales. The affective scale includes questions like “I feel sad” and “I feel like a failure” that quantify emotional symptoms of depression. The somatic or physical scale includes questions like “I have lost my appetite” and “I have trouble sleeping” that quantify physical symptoms of depression. Since its original publication, the BDI has undergone two revisions in response to the American Psychiatric Association’s (APA) Diagnostic and Statistical Manuals (DSM) and the BDI-II remains very popular.

The Stata Depression Inventory

Since the BDI is a copyrighted psychometric instrument, I created a fictitious instrument called the “Stata Depression Inventory”. It consists of 20 questions each beginning with the phrase “My statistical software makes me…”. The individual questions are listed in the variable labels below.

. describe qu1-qu20

variable  storage  display    value
 name       type   format     label      variable label
------------------------------------------------------------------------------
qu1         byte   %16.0g     response   ...feel sad
qu2         byte   %16.0g     response   ...feel pessimistic about the future
qu3         byte   %16.0g     response   ...feel like a failure
qu4         byte   %16.0g     response   ...feel dissatisfied
qu5         byte   %16.0g     response   ...feel guilty or unworthy
qu6         byte   %16.0g     response   ...feel that I am being punished
qu7         byte   %16.0g     response   ...feel disappointed in myself
qu8         byte   %16.0g     response   ...feel am very critical of myself
qu9         byte   %16.0g     response   ...feel like harming myself
qu10        byte   %16.0g     response   ...feel like crying more than usual
qu11        byte   %16.0g     response   ...become annoyed or irritated easily
qu12        byte   %16.0g     response   ...have lost interest in other people
qu13        byte   %16.0g     qu13_t1    ...have trouble making decisions
qu14        byte   %16.0g     qu14_t1    ...feel unattractive
qu15        byte   %16.0g     qu15_t1    ...feel like not working
qu16        byte   %16.0g     qu16_t1    ...have trouble sleeping
qu17        byte   %16.0g     qu17_t1    ...feel tired or fatigued
qu18        byte   %16.0g     qu18_t1    ...makes my appetite lower than usual
qu19        byte   %16.0g     qu19_t1    ...concerned about my health
qu20        byte   %16.0g     qu20_t1    ...experience decreased libido

The responses consist of a 5-point Likert scale ranging from 1 (Strongly Disagree) to 5 (Strongly Agree). Questions 1-10 form the affective scale of the inventory and questions 11-20 form the physical scale. Data were simulated for 1000 imaginary people and included demographic variables such as age, sex and race. The responses can be summarized succinctly in a matrix of bar graphs:
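The bar-graph matrix itself is not reproduced here, but if you wanted to build a similar display, one possibility is to draw a small histogram for each item and combine them (a sketch that assumes the items are named qu1-qu20 as above; it is not necessarily how the original figure was produced):

local graphs ""
foreach v of varlist qu1-qu20 {
    histogram `v', discrete frequency nodraw name(g_`v', replace) ///
        title("`v'", size(small))
    local graphs "`graphs' g_`v'"
}
graph combine `graphs', cols(5) title("Stata Depression Inventory responses")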

Classical statistical analysis

The beginning of a classical statistical analysis of these data might consist of summing the responses for questions 1-10 and referring to them as the “Affective Depression Score” and summing questions 11-20 and referring to them as the “Physical Depression Score”.

egen Affective = rowtotal(qu1-qu10)
label var Affective "Affective Depression Score"
egen physical = rowtotal(qu11-qu20)
label var physical "Physical Depression Score"

We could be more sophisticated and use principal components to create the affective and physical depression score:

pca qu1-qu20, components(2)
predict Affective Physical
label var Affective "Affective Depression Score"
label var Physical "Physical Depression Score"

We could then ask questions such as “Are there differences in affective and physical depression scores by sex?” and test these hypotheses using multivariate statistics such as Hotelling’s T-squared statistic. The problem with this analysis strategy is that it treats the depression scores as though they were measured without error and can lead to inaccurate p-values for our test statistics.
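For example, Hotelling’s T-squared test of sex differences in the two scores could be run like this (a brief sketch assuming the Affective and Physical scores created above and a variable sex in the data):

hotelling Affective Physical, by(sex)    // joint test of mean differences between the sexes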

Structural equation modeling

Structural equation modeling (SEM) is an ideal way to analyze data where the outcome of interest is a scale or scales derived from a set of measured variables. The affective and physical scores are treated as latent variables in the model resulting in accurate p-values and, best of all….these models are very easy to fit using Stata! We begin by selecting the SEM builder from the Statistics menu:

In the SEM builder, we can select the “Add Measurement Component” icon:

which will open the following dialog box:

In the box labeled “Latent Variable Name” we can type “Affective” (red arrow below) and we can select the variables qu1-qu10 in the “Measured variables” box (blue arrow below).

When we click “OK”, the affective measurement component appears in the builder:

We can repeat this process to create a measurement component for our physical depression scale (images not shown). We can also allow for covariance/correlation between our affective and physical depression scales using the “Add Covariance” icon on the toolbar (red arrow below).

I’ll omit the intermediate steps to build the full model shown below but it’s easy to use the “Add Observed Variable” and “Add Path” icons to create the full model:

Now we’re ready to estimate the parameters for our model. To do this, we click the “Estimate” icon on the toolbar (duh!):

And the following dialog box appears:

Let’s ignore the estimation options for now and use the default settings. Click “OK” and the parameter estimates will appear in the diagram:

Some of the parameter estimates are difficult to read in this form but it is easy to rearrange the placement and formatting of the estimates to make them easier to read.

If you look at Stata’s output window and scroll up, you’ll notice that the SEM Builder automatically generated the command for our model:

sem (Affective -> qu1) (Affective -> qu2) (Affective -> qu3)
    (Affective -> qu4) (Affective -> qu5) (Affective -> qu6)
    (Affective -> qu7) (Affective -> qu8) (Affective -> qu9)
    (Affective -> qu10) (Physical -> qu11) (Physical -> qu12)
    (Physical -> qu13) (Physical -> qu14) (Physical -> qu15)
    (Physical -> qu16) (Physical -> qu17) (Physical -> qu18)
    (Physical -> qu19) (Physical -> qu20) (sex -> Affective)
    (sex -> Physical), latent(Affective Physical) cov(e.Physical*e.Affective)

We can gather terms and abbreviate some things to make the command much easier to read:

sem (Affective -> qu1-qu10) ///
    (Physical -> qu11-qu20) /// 
    (sex -> Affective Physical) ///
    , latent(Affective Physical ) ///
    cov( e.Physical*e.Affective)

We could then calculate a Wald statistic to test the null hypothesis that there is no association between sex and our affective and physical depression scales.

test sex

 ( 1)  [Affective]sex = 0
 ( 2)  [Physical]sex = 0

           chi2(  2) =    2.51
         Prob > chi2 =    0.2854

Final thoughts

This is an admittedly oversimplified example – we haven’t considered the fit of the model or any alternative models, and we have included only one dichotomous independent variable. We might prefer to use a likelihood ratio test or a score test. Those are all very important issues and should not be ignored in a proper data analysis. But my goal was to demonstrate how easy it is to use Stata’s SEM builder to model data such as those arising from the Beck Depression Inventory. Incidentally, if these data were collected using a complex survey design, it would not be difficult to incorporate the sampling structure and sample weights into the analysis. Missing data can be handled easily as well using Full Information Maximum Likelihood (FIML), but those are topics for another day.
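To give a flavor of those last two points, here is a hedged sketch of how the same model could accommodate missing item responses via FIML and, separately, a complex survey design. The method(mlmv) option and the svy prefix are real sem features, but the design variables shown in svyset are purely hypothetical:

* FIML for missing values: add method(mlmv) to the same sem command
sem (Affective -> qu1-qu10) (Physical -> qu11-qu20) (sex -> Affective Physical), ///
    latent(Affective Physical) cov(e.Physical*e.Affective) method(mlmv)

* Complex survey design: declare the design, then prefix the model with svy:
* (psu_id, strata_id, and sampwt are hypothetical design variables)
svyset psu_id [pweight=sampwt], strata(strata_id)
svy: sem (Affective -> qu1-qu10) (Physical -> qu11-qu20) (sex -> Affective Physical), ///
    latent(Affective Physical) cov(e.Physical*e.Affective)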

If you would like to view the slides from my talk, download the data used in this example, or view a video demonstration of Stata’s SEM builder using these data, please use the links below. For the dataset, you can also type use followed by the URL for the data to load it directly into Stata.

Slides:
http://stata.com/meeting/sandiego12/materials/sd12_huber.pdf

Data:
http://stata.com/meeting/sandiego12/materials/Huber_2012SanDiego.dta

YouTube video demonstration:
http://www.youtube.com/watch?v=Xj0gBlqwYHI

References

Beck AT, Ward CH, Mendelson M, Mock J, Erbaugh J (June 1961). An inventory for measuring depression. Arch. Gen. Psychiatry 4 (6): 561–71.

Beck AT, Ward C, Mendelson M (1961). Beck Depression Inventory (BDI). Arch Gen Psychiatry 4 (6): 561–571

Beck AT, Steer RA, Ball R, Ranieri W (December 1996). Comparison of Beck Depression Inventories -IA and -II in psychiatric outpatients. Journal of Personality Assessment 67 (3): 588–97

Bollen, KA (1989). Structural Equations With Latent Variables. New York, NY: John Wiley and Sons

Kline, RB (2011). Principles and Practice of Structural Equation Modeling. New York, NY: Guilford Press

Raykov, T & Marcoulides, GA (2006). A First Course in Structural Equation Modeling. Mahwah, NJ: Lawrence Erlbaum

Schumacker, RE & Lomax, RG (2012) A Beginner’s Guide to Structural Equation Modeling, 3rd Ed. New York, NY: Routledge


Stata YouTube channel announced!

StataCorp now provides free tutorial videos on StataCorp’s YouTube channel,

http://www.youtube.com/user/statacorp

There are 24 videos providing 1 hour 51 minutes of instructional entertainment:

Stata Quick Tour (5:47)
Stata Quick Help (2:47)
Stata PDF Documentation (6:37)

Stata One-sample t-test (3:43)
Stata t-test for Two Independent Samples (5:09)
Stata t-test for Two Paired Samples (4:42)

Stata Simple Linear Regression (5:33)

Stata SEM Builder (8:09)
Stata One-way ANOVA (5:15)
Stata Two-way ANOVA (5:57)

Stata Pearson’s Correlation Coefficient (3:29)
Stata Pearson’s Chi2 and Fisher’s Exact Test (3:16)

Stata Box Plots (4:04)
Stata Basic Scatterplots (5:19)
Stata Bar Graphs (4:15)
Stata Histograms (4:50)
Stata Pie Charts (5:32)

Stata Descriptive Statistics (5:49)

Stata Tables and Crosstabulations (7:20)
Stata Combining Crosstabs and Descriptives (5:58)

Stata Converting Data to Stata with Stat/Transfer (2:47)
Stata Import Excel Data (1:33)
Stata Excel Copy/Paste (1:16)
Stata Example Data Included with Stata (2:14)

And more are forthcoming.

 

The inside story

Alright, that’s the official announcement.

Last Friday, 21 September 2012, was an exciting day here at StataCorp. After a couple of years of “wouldn’t it be cool if”, and a couple of months of “we’re almost there”, Stata’s YouTube channel was finally ready for prime time.

Stata’s YouTube Channel was the brainchild of Karen Strope, StataCorp’s Director of Marketing, but I had something to do with it, too. Well, maybe more than something, but I’m a modest guy. Anyway, I thought it sounded like fun and recorded a few prototype videos. Annette Fett, StataCorp’s Graphic Designer, added the cool splash-screen and after a few experiments, we soon had 24 Blu-ray resolution videos. We’ve kicked off with videos covering topics such as a tour of Stata’s interface, how to create basic graphs, how to conduct many common statistical analyses, and more.

My personal favorite is the video entitled Combining Crosstabs and Descriptives because it’s relevant to nearly all Stata users and works well as a video demonstration.

Videos about Stata – isn’t that like dancing about architecture?

Stata has over 9,000 pages of documentation included in PDF format, a built-in Help system, a collection of books on general and special topics published by Stata Press, and an extensive collection of dialog boxes that make even the most complex graphs and analyses easy to perform.

So aren’t the videos, ahh, unnecessary?

The problem is that it’s cumbersome to describe how to use all of Stata’s features, especially dialog boxes, in a manual, even one with 9,000 pages; and 9,000 pages tries even the most dedicated user’s patience.

In a 3-7 minute video, we can show you how to create a complicated graph or a sophisticated structural equation model.

We have three audiences in mind.

  1. Videos for non-Stata users, whom we call future Stata users; videos intended to provide a loosely guided tour of Stata’s features.
  2. Videos for new Stata users, such as the person who might simply want to know “How do I calculate a two-way ANOVA in Stata?” or “How do I create a pie chart?”. These videos will get them up and running quickly and painlessly.
  3. Videos for experienced Stata users who want to learn new tips and tricks.

There’s actually a fourth group that’s of interest, too: experienced Stata users teaching statistics or data analysis classes who don’t want to spend valuable class time showing their students how to use Stata. They can refer their students to the relevant videos as homework and thus free class time for the teaching of statistics.

Request for comments

One of the fun things about working at StataCorp is that management doesn’t much use the word “no”. New ideas are more often met with the phrase, “well, let’s try it and see what happens”. So I’m trying this. My plan is to add a couple of videos to the channel every week or two as time permits. I have a list of topics I’d like to cover including things like multiple imputation, survey analysis, mixed models, Stata’s “immediate” commands (tabi, ttesti, csi, cci, etc…), and more examples using the SEM Builder.

However, I will take requests. If you have a suggested topic for a future video, leave a comment.

I’d like to keep the videos brief, between 3-7 minutes, so please don’t request feature-length films like “How to do survival analysis in Stata”. Similarly, topics that are only interesting to you and your two post-docs such as “Please describe the difference between the Laplacian Approximation and Adaptive Gauss-Hermite Quadrature in the xtmepoisson command” are not likely to see the light of day. But I am very interested in your ideas for small, bite-sized topics that will work in a video format.


Using Stata’s random-number generators, part 3, drawing with replacement

The topic for today is drawing random samples with replacement. If you haven’t read part 1 and part 2 of this series on random numbers, do so. In the series, we’ve discussed that

  1. Stata’s runiform() function produces random numbers over the range [0,1). To produce such random numbers, type
    . generate double u = runiform()
    

     

  2. To produce continuous random numbers over [a,b), type
    . generate double u = (b-a)*runiform() + a
    

     

  3. To produce integer random numbers over [a,b], type
    . generate ui = floor((b-a+1)*runiform() + a)
    

    If b > 16,777,216, type

    . generate long ui = floor((b-a+1)*runiform() + a)
    

     

  4. To place observations in random order — to shuffle observations — type
    . set seed #
    . generate double u = runiform()
    . sort u
    

     

  5. To draw without replacement a random sample of n observations from a dataset of N observations, type
    . set seed #
    . sort variables_that_put_dta_in_unique_order
    . generate double u = runiform()
    . sort u
    . keep in 1/n
    

    If N>1,000, generate two random variables u1 and u2 in place of u, and substitute sort u1 u2 for sort u.

     

  6. To draw without replacement a P-percent random sample, type
    . set seed #
    . keep if runiform() <= P/100
    

I’ve glossed over details, but the above is the gist of it.

Today I’m going to tell you

  1. To draw a random sample of size n with replacement from a dataset of size N, type
    . set seed #
    . drop _all
    . set obs n
    . generate long obsno = floor(N*runiform()+1)
    . sort obsno
    . save obsnos_to_draw
    
    . use your_dataset, clear
    . generate long obsno = _n
    . merge 1:m obsno using obsnos_to_draw, keep(match) nogen
    

     

  2. You need to set the random-number seed only if you care about reproducibility. I’ll also mention that if N ≤ 16,777,216, it is not necessary to specify that new variable obsno be stored as long; the default float will be sufficient.

     
    The above solution works whether n<N, n=N, or n>N.

 

Drawing samples with replacement

The solution to sampling with replacement n observations from a dataset of size N is

  1. Draw n observation numbers 1, …, N with replacement. For instance, if N=4 and n=3, we might draw observation numbers 1, 3, and 3.

  2. Select those observations from the dataset of interest. For instance, select observations 1, 3, and 3.

As previously discussed in part 1, to generate random integers drawn with replacement over the range [a, b], use the formula

generate varname = floor((b-a+1)*runiform() + a)

In this case, we want a=1 and b=N, and the formula reduces to,

generate varname = floor(N*runiform() + 1)

So the first half of our solution could read

. drop _all
. set obs n
. generate obsno = floor(N*runiform() + 1)

Now we are merely left with the problem of selecting those observations from our dataset, which we can do using merge by typing

. sort obsno
. save obsnos_to_draw
. use dataset_of_interest, clear
. generate obsno = _n
. merge 1:m obsno using obsnos_to_draw, keep(match) nogen

Let’s do an example. In part 2 of this series, I had a dataset with observations corresponding to playing cards:

. use cards

. list in 1/5

     +-------------+
     | rank   suit |
     |-------------|
  1. |  Ace   Club |
  2. |    2   Club |
  3. |    3   Club |
  4. |    4   Club |
  5. |    5   Club |
     +-------------+

There are 52 observations in the dataset; I’m showing you just the first five. Let’s draw 10 cards from the deck, but with replacement.

The first step is to draw the observation numbers. We have N=52 cards in the deck, and we want to draw n=10, so we generate 10 random integers from the integers [1, 52]:

. drop _all

. set obs 10                            // we want n=10
obs was 0, now 10

. gen obsno = floor(52*runiform()+1)    // we draw from N=52


. list obsno                            // let's see what we have

     +-------+
     | obsno |
     |-------|
  1. |    42 |
  2. |    52 |
  3. |    16 |
  4. |     9 |
  5. |    40 |
     |-------|
  6. |    11 |
  7. |    34 |
  8. |    20 |
  9. |    49 |
 10. |    42 |
     +-------+

If you look carefully at the list, you will see that observation number 42 repeats. It will be easier to see the duplicate if we sort the list,

. sort obsno
. list

     +-------+
     | obsno |
     |-------|
  1. |     9 |
  2. |    11 |
  3. |    16 |
  4. |    20 |
  5. |    34 |
     |-------|
  6. |    40 |
  7. |    42 |     <- Obs. 42 repeats
  8. |    42 |     <- See?
  9. |    49 |
 10. |    52 |
     +-------+

An observation didn’t have to repeat, but it’s not surprising that one did because in drawing n=10 from N=52, we would expect one or more repeated cards about 60% of the time.
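If you would like to verify that figure, the chance of at least one repeat is one minus the chance that all 10 draws are distinct, which is a one-line calculation using Stata’s lnfactorial() function:

display 1 - exp(lnfactorial(52) - lnfactorial(42) - 10*ln(52))    // about .60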

Anyway, we now know which cards we want, namely cards 9, 11, 16, 20, 34, 40, 42, 42 (again), 49, and 52.

The final step is to select those observations from cards.dta. The way to do that is to perform a one-to-many merge of cards.dta with the list above and keep the matches. Before we can do that, however, we must (1) save the list of observation numbers as a dataset, (2) load cards.dta, and (3) add a variable called obsno to it. Then we will be able to perform the merge. So let’s get that out of the way,

. save obsnos_to_draw                // 1. save the list above
file obsnos_to_draw.dta saved

. use cards                          // 2. load cards.dta

. gen obsno = _n                     // 3.  Add variable obsno to it

Now we can perform the merge:

. merge 1:m obsno using obsnos_to_draw, keep(matched) nogen

    Result                           # of obs.
    -----------------------------------------
    not matched                             0
    matched                                10
    -----------------------------------------

I’ll list the result, but let me first briefly explain the command

merge 1:m obsno using obsnos_to_draw, keep(matched) nogen

merge …, we are performing the merge command,

1:m …, the merge is one-to-many,

using obsnos_to_draw …, we merge data in memory with obsnos_to_draw.dta,

, keep(matched) …, we keep observations that appear in both datasets,

nogen, do not add variable _merge to the resulting dataset; _merge reports the source of the resulting observations; we said keep(matched) so we know each came from both sources.

And here is the result:

. list

     +-------------------------+
     |  rank      suit   obsno |
     |-------------------------|
  1. |     8      Club       9 |
  2. |  Jack      Club      11 |
  3. |   Ace     Spade      16 |
  4. |     2   Diamond      20 |
  5. |     6     Spade      34 |
     |-------------------------|
  6. |     8     Spade      40 |
  7. |     9     Heart      42 |   <- Obs. 42 is here ...
  8. | Queen     Spade      49 |
  9. |  King     Spade      52 |
 10. |     9     Heart      42 |   <- and here
     +-------------------------+

We drew 10 cards — those are the observation numbers on the left. Variable obsno in our dataset records the original observation (card) number and really, we no longer need the variable. Anyway, obsno==42 appears twice, in real observations 7 and 10, and thus we drew the 9 of Hearts twice.

 

What could go wrong?

Not much can go wrong, it turns out. At this point, our generic solution is

. drop _all
. set obs n
. generate obsno = floor(N*runiform()+1)
. sort obsno
. save obsnos_to_draw

. use our_dataset
. gen obsno = _n
. merge 1:m obsno using obsnos_to_draw, keep(matched) nogen

If you study this code, there are two lines that might cause problems,

. generate obsno = floor(N*runiform()+1)

and

. generate obsno = _n

When you are looking for problems and see a generate or replace, think about rounding.

Let’s look at the right-hand side first. Both calculations produce integers over the range [1, N]. generate performs all calculations in double and the largest integer that can be stored without rounding is 9,007,199,254,740,992 (see previous blog post on precision). Stata allows datasets up to 2,147,483,646, so we can be sure that N is less than the maximum precise-integer double. There are no rounding issues on the right-hand side.

Next let’s look at the left-hand side. Variable obsno is being stored as a float because we did not instruct otherwise. The largest integer value that can be stored without rounding as a float (also covered in previous blog post on precision) is 16,777,216, and that is less than Stata’s 2,147,483,646 maximum observations. When N exceeds 16,777,216, the solution is to store obsno as a long. We could remember to use long on the rare occasion when dealing with such large datasets, but I’m going to change the generic solution to use longs in all cases, even when it’s unnecessary.
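You can see the float rounding for yourself with a couple of display statements (the %20.0f format is just there to show all the digits):

display %20.0f float(16777216)    // 2^24 is stored exactly as a float
display %20.0f float(16777217)    // 2^24 + 1 is not; it rounds back to 16777216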

What else could go wrong? Well, we tried an example with n<N and that seemed to work. We should now try examples with n=N and n>N to verify there’s no hidden bug or assumption in our code. I’ve tried examples of both and the code works fine.

 

We’re done for today

That’s it. Drawing samples with replacement turns out to be easy, and that shouldn’t surprise us because we have a random-number generator that draws with replacement.

We could complicate the discussion and consider solutions that would run a bit more efficiently when n=N, which is of special interest in statistics because it is a key ingredient in bootstrapping, but we will not. The above solution works fine in the n=N case, and I always advise researchers to favor simple-even-if-slower solutions because they will probably save you time. Writing complicated code takes longer than writing simple code, and testing complicated code takes even longer. I know because that’s what we do at StataCorp.