## Monte Carlo simulations using Stata

**Overview**

A Monte Carlo simulation (MCS) approximates the sampling distribution of an estimator by simulation for a particular data-generating process (DGP) and sample size. I use an MCS to learn how well estimation techniques perform for specific DGPs. In this post, I show how to perform an MCS study of an estimator in Stata and how to interpret the results.

Large-sample theory tells us that the sample average is a good estimator for the mean when the true DGP is a random sample from a \(\chi^2\) distribution with 1 degree of freedom, denoted by \(\chi^2(1)\). But a friend of mine claims this estimator will not work well for this DGP because the \(\chi^2(1)\) distribution will produce outliers. In this post, I use an MCS to see if the large-sample theory works well for this DGP in a sample of 500 observations.
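Before turning to Stata, it is worth seeing what this DGP looks like. The sketch below is a quick look in Python (assuming NumPy is available; it is not part of the Stata workflow): a single sample of 500 draws from \(\chi^2(1)\) has mean near 1 and variance near 2, and its heavy right tail produces the occasional large draw that my friend calls an outlier.

```python
import numpy as np

rng = np.random.default_rng(12345)
y = rng.chisquare(df=1, size=500)  # one sample of 500 draws from the post's DGP

print(round(y.mean(), 3))       # near the true mean of 1
print(round(y.var(ddof=1), 3))  # near the true variance of 2
print(round(y.max(), 3))        # the heavy right tail produces a few large draws
```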

**A first pass at an MCS**

I begin by showing how to draw a random sample of size 500 from a \(\chi^2(1)\) distribution and how to estimate the mean and a standard error for the mean.

**Example 1: The mean of simulated data**

```
. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate y = rchi2(1)

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
             |       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------
```

I specified **set seed 12345** to set the seed of the random-number generator so that the results will be reproducible. The sample average estimate of the mean from this random sample is \(0.91\), and the estimated standard error is \(0.055\).

If I had many estimates, each from an independently drawn random sample, I could estimate the mean and the standard deviation of the sampling distribution of the estimator. To obtain many estimates, I need to repeat the following process many times:

- Draw from the DGP
- Compute the estimate
- Store the estimate.

I need to know how to store the many estimates to proceed with this process. I also need to know how to repeat the process many times and how to access Stata estimates, but I put these details into appendices I and II, respectively, because many readers are already familiar with these topics and I want to focus on how to store the results from many draws.

I want to put the many estimates someplace where they will become part of a dataset that I can subsequently analyze. I use the commands **postfile**, **post**, and **postclose** to store the estimates in memory and write all the stored estimates out to a dataset when I am done. Example 2 illustrates the process with three draws.

**Example 2: Estimated means of three draws**

```
. set seed 12345

. postfile buffer mhat using mcs, replace

. forvalues i=1/3 {
  2.     quietly drop _all
  3.     quietly set obs 500
  4.     quietly generate y = rchi2(1)
  5.     quietly mean y
  6.     post buffer (_b[y])
  7. }

. postclose buffer

. use mcs, clear

. list

     +----------+
     |     mhat |
     |----------|
  1. | .9107645 |
  2. |  1.03821 |
  3. | 1.039254 |
     +----------+
```

The command

```
postfile buffer mhat using mcs, replace
```

creates a place in memory called **buffer** in which I can store the results that will eventually be written out to a dataset. **mhat** is the name of the variable that will hold the estimates in the new dataset called **mcs.dta**. The keyword **using** separates the new variable name from the name of the new dataset. I specified the option **replace** to replace any previous version of **mcs.dta** with the one created here.

I used

```
forvalues i=1/3 {
```

to repeat the process three times. (See appendix I if you want a refresher on this syntax.) The commands

```
quietly drop _all
quietly set obs 500
quietly generate y = rchi2(1)
quietly mean y
```

drop the previous data, draw a sample of size 500 from a \(\chi^2(1)\) distribution, and estimate the mean. (The **quietly** before each command suppresses the output.) The command

```
post buffer (_b[y])
```

stores the estimated mean for the current draw in **buffer** for what will be the next observation on **mhat**. The command

```
postclose buffer
```

writes the stuff stored in **buffer** to the file **mcs.dta**. The commands

```
use mcs, clear
list
```

drop the last \(\chi^2(1)\) sample from memory, read in the **mcs** dataset, and list the observations.

Example 3 below is a modified version of example 2; I increased the number of draws and summarized the results.

**Example 3: The mean of 2,000 estimated means**

```
. set seed 12345

. postfile buffer mhat using mcs, replace

. forvalues i=1/2000 {
  2.     quietly drop _all
  3.     quietly set obs 500
  4.     quietly generate y = rchi2(1)
  5.     quietly mean y
  6.     post buffer (_b[y])
  7. }

. postclose buffer

. use mcs, clear

. summarize

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
        mhat |      2,000     1.00017    .0625367   .7792076    1.22256
```

The average of the \(2,000\) estimates is an estimator for the mean of the sampling distribution of the estimator, and it is close to the true value of \(1.0\). The sample standard deviation of the \(2,000\) estimates is an estimator for the standard deviation of the sampling distribution of the estimator, and it is close to the true value of \(\sqrt{\sigma^2/N}=\sqrt{2/500}\approx 0.0632\), where \(\sigma^2\) is the variance of the \(\chi^2(1)\) random variable.
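To see where the comparison values come from, here is a minimal cross-check of the same experiment outside Stata (a sketch in Python, assuming NumPy; the replication count and sample size match example 3):

```python
import numpy as np

rng = np.random.default_rng(12345)
reps, n = 2000, 500

# Repeat the draw-and-estimate loop: each entry of mhat is one sample average
mhat = np.array([rng.chisquare(df=1, size=n).mean() for _ in range(reps)])

print(round(mhat.mean(), 3))       # estimates the mean of the sampling distribution (true value 1)
print(round(mhat.std(ddof=1), 4))  # estimates its standard deviation, sqrt(2/500) ~ 0.0632
print(round(np.sqrt(2 / n), 4))    # the large-sample value sqrt(sigma^2/N) with sigma^2 = 2
```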

**Including standard errors**

The standard error of the estimator reported by **mean** is an estimate of the standard deviation of the sampling distribution of the estimator. If the large-sample distribution is doing a good job of approximating the sampling distribution of the estimator, the mean of the estimated standard errors should be close to the sample standard deviation of the many mean estimates.

To compare the standard deviation of the estimates with the mean of the estimated standard errors, I modify example 3 to also store the standard errors.

**Example 4: The mean of 2,000 standard errors**

```
. set seed 12345

. postfile buffer mhat sehat using mcs, replace

. forvalues i=1/2000 {
  2.     quietly drop _all
  3.     quietly set obs 500
  4.     quietly generate y = rchi2(1)
  5.     quietly mean y
  6.     post buffer (_b[y]) (_se[y])
  7. }

. postclose buffer

. use mcs, clear

. summarize

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
        mhat |      2,000     1.00017    .0625367   .7792076    1.22256
       sehat |      2,000    .0629644    .0051703   .0464698   .0819693
```

Mechanically, the command

```
postfile buffer mhat sehat using mcs, replace
```

makes room in **buffer** for the new variables **mhat** and **sehat**, and

```
post buffer (_b[y]) (_se[y])
```

stores each estimated mean in the memory for **mhat** and each estimated standard error in the memory for **sehat**. (As in example 3, the command **postclose buffer** writes what is stored in memory to the new dataset.)

The sample standard deviation of the \(2,000\) estimates is \(0.0625\), and it is close to the mean of the \(2,000\) estimated standard errors, which is \(0.0630\).

You may be thinking I should have written “very close”, but how close is \(0.0625\) to \(0.0630\)? Honestly, I cannot tell if these two numbers are sufficiently close to each other because the distance between them does not automatically tell me how reliable the resulting inference will be.

**Estimating a rejection rate**

In frequentist statistics, we reject a null hypothesis if the *p*-value is below a specified size. If the large-sample distribution approximates the finite-sample distribution well, the rejection rate of the test against the true null hypothesis should be close to the specified size.

To compare the rejection rate with the size of 5%, I modify example 4 to compute and store an indicator for whether I reject a Wald test against the true null hypothesis. (See appendix III for a discussion of the mechanics.)

**Example 5: Estimating the rejection rate**

```
. set seed 12345

. postfile buffer mhat sehat reject using mcs, replace

. forvalues i=1/2000 {
  2.     quietly drop _all
  3.     quietly set obs 500
  4.     quietly generate y = rchi2(1)
  5.     quietly mean y
  6.     quietly test _b[y]=1
  7.     local r = (r(p)<.05)
  8.     post buffer (_b[y]) (_se[y]) (`r')
  9. }

. postclose buffer

. use mcs, clear

. summarize

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
        mhat |      2,000     1.00017    .0625367   .7792076    1.22256
       sehat |      2,000    .0629644    .0051703   .0464698   .0819693
      reject |      2,000       .0475     .212759          0          1
```

The rejection rate of \(0.048\) is very close to the size of \(0.05\).
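The rejection-rate calculation can also be cross-checked outside Stata. The sketch below (in Python, assuming NumPy) forms the Wald statistic by hand and compares it with the two-sided 5% critical value of a *t* distribution with 499 degrees of freedom, approximately 1.9647:

```python
import numpy as np

rng = np.random.default_rng(12345)
reps, n, true_mean = 2000, 500, 1.0
tcrit = 1.9647  # two-sided 5% critical value, t distribution with 499 df

reject = 0
for _ in range(reps):
    y = rng.chisquare(df=1, size=n)
    se = y.std(ddof=1) / np.sqrt(n)   # estimated standard error of the sample mean
    t = (y.mean() - true_mean) / se   # Wald statistic against the true null
    reject += abs(t) > tcrit

print(reject / reps)  # should be close to the nominal size of 0.05
```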

**Done and undone**

In this post, I have shown how to perform an MCS of an estimator in Stata. I discussed the mechanics of using the **post** commands to store the many estimates and how to interpret the mean of the many estimates and the mean of the many estimated standard errors. I also recommended using an estimated rejection rate to evaluate the usefulness of the large-sample approximation to the sampling distribution of an estimator for a given DGP and sample size.

The example illustrates that the sample average performs as predicted by large-sample theory as an estimator for the mean. This conclusion does not mean that my friend's concerns about outliers were entirely misplaced. Other estimators that are more robust to outliers may have better properties. I plan to illustrate some of the trade-offs in future posts.

**Appendix I: Repeating a process many times**

This appendix provides a quick introduction to local macros and how to use them to repeat some commands many times; see **[P] macro** and **[P] forvalues** for more details.

I can store and access string information in local macros. Below, I store the string "hello" in the local macro named **value**.

```
local value "hello"
```

To access the stored information, I adorn the name of the local macro. Specifically, I precede it with the single left quote (**`**) and follow it with the single right quote (**'**). Below, I access and display the value stored in the local macro **value**.

```
. display "`value'"
hello
```

I can also store numbers as strings, as follows:

```
. local value "2.134"

. display "`value'"
2.134
```

To repeat some commands many times, I put them in a **forvalues** loop. For example, the code below repeats the **display** command three times.

```
. forvalues i=1/3 {
  2.     display "i is now `i'"
  3. }
i is now 1
i is now 2
i is now 3
```

The above example illustrates that **forvalues** defines a local macro that takes on each value in the specified list of values. In the above example, the name of the local macro is **i**, and the specified list **1/3** expands to \(\{1, 2, 3\}\).

**Appendix II: Accessing estimates**

After a Stata estimation command, you can access the point estimate of a parameter named **y** by typing **_b[y]**, and you can access the estimated standard error by typing **_se[y]**. The example below illustrates this process.

**Example 6: Accessing estimated values**

```
. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate y = rchi2(1)

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
             |       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------

. display _b[y]
.91076444

. display _se[y]
.05486467
```

**Appendix III: Getting a *p*-value computed by test**

This appendix explains the mechanics of creating an indicator for whether a Wald test rejects the null hypothesis at a specific size.

I begin by generating some data and performing a Wald test against the true null hypothesis.

**Example 7: Wald test results**

```
. drop _all

. set obs 500
number of observations (_N) was 0, now 500

. set seed 12345

. generate y = rchi2(1)

. mean y

Mean estimation                   Number of obs   =        500

--------------------------------------------------------------
             |       Mean   Std. Err.     [95% Conf. Interval]
-------------+------------------------------------------------
           y |   .9107644   .0548647      .8029702    1.018559
--------------------------------------------------------------

. test _b[y]=1

 ( 1)  y = 1

       F(  1,   499) =    2.65
            Prob > F =    0.1045
```

The results reported by **test** are stored in **r()**. Below, I use **return list** to see them; type **help return list** for details.

**Example 8: Results stored by test**

```
. return list

scalars:
               r(drop) =  0
               r(df_r) =  499
                  r(F) =  2.645393485924886
                 r(df) =  1
                  r(p) =  .1044817353734439
```

The *p*-value reported by **test** is stored in **r(p)**. Below, I store a 0/1 indicator for whether the *p*-value is less than \(0.05\) in the local macro **r**. (See appendix I for an introduction to local macros.) I complete the illustration by displaying that the local macro contains the value \(0\).

```
. local r = (r(p)<.05)

. display "`r'"
0
```
