As of Stata 16, see [BAYES] bayesstats grubin and Bayesian analysis: Gelman–Rubin convergence diagnostic.
The original blog entry, posted May 26, 2016, omitted the initrandom option from the bayesmh command. The code and text of the entry were updated on August 9, 2018, to reflect this.
Overview
MCMC algorithms used for simulating posterior distributions are indispensable tools in Bayesian analysis. A major consideration in MCMC simulations is convergence: has the simulated Markov chain fully explored the target posterior distribution, or do we need longer simulations? A common approach to assessing MCMC convergence is to run multiple chains and analyze the differences between them.
For a given Bayesian model, bayesmh can produce multiple Markov chains with randomly dispersed initial values via the initrandom option, available as of the update on May 19, 2016. In this post, I demonstrate the Gelman–Rubin diagnostic as a more formal test for convergence using multiple chains. For graphical diagnostics, see Graphical diagnostics using multiple chains in [BAYES] bayesmh. To compute the Gelman–Rubin diagnostic, I use an unofficial command, grubin, which can be installed by typing the following in Stata: Read more…
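To illustrate the multiple-chains setup described above, here is a minimal sketch that runs bayesmh twice with initrandom; the data, model, priors, seeds, and filenames are illustrative choices of mine, not the ones used in the full post.

    sysuse auto, clear

    // first chain: random initial values, simulation results saved
    bayesmh mpg weight, likelihood(normal({var}))   ///
        prior({mpg:}, normal(0, 100))               ///
        prior({var}, igamma(0.01, 0.01))            ///
        initrandom rseed(14) saving(chain1)

    // second chain: same model, different seed, new random initial values
    bayesmh mpg weight, likelihood(normal({var}))   ///
        prior({mpg:}, normal(0, 100))               ///
        prior({var}, igamma(0.01, 0.01))            ///
        initrandom rseed(29) saving(chain2)

Each call draws its own random initial values, so the two saved chains start from dispersed points in the parameter space, which is what the Gelman–Rubin diagnostic requires.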
Initial thoughts
Estimating causal relationships from data is one of the fundamental endeavors of researchers. Ideally, we would conduct a controlled experiment to estimate causal relations. However, conducting a controlled experiment may be infeasible. For example, education researchers cannot randomize educational attainment, so they must learn from observational data.
In the absence of experimental data, we use observational data to construct models that capture the relevant features of the causal relationship of interest. Models are successful if the features we did not include can be ignored without affecting our ability to ascertain that causal relationship. Sometimes, however, ignoring some features of reality yields models whose estimated relationships cannot be interpreted causally. In a regression framework, depending on our discipline or research question, we give this phenomenon different names: endogeneity, omitted confounders, omitted-variable bias, simultaneity bias, selection bias, and so on.
Below I show how we can understand many of these problems in a unified regression framework and use simulated data to illustrate how they affect estimation and inference. Read more…
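As a hedged preview of the kind of simulation the post builds on, the following sketch generates an unobserved confounder c that drives both the regressor and the outcome; the data-generating process and coefficient values are my own illustration, not the post's.

    clear
    set seed 12345
    set obs 10000

    generate c = rnormal()               // unobserved confounder
    generate x = 0.5*c + rnormal()       // regressor correlated with c
    generate y = 1 + x + c + rnormal()   // true coefficient on x is 1

    regress y x      // omits c: coefficient on x is biased upward
    regress y x c    // conditions on c: recovers the true coefficient of 1

Because x and y share the unobserved c, the short regression attributes part of c's effect to x; with these values, the omitted-variable bias is cov(x,c)/var(x) = 0.5/1.25 = 0.4, so the estimate centers near 1.4 instead of 1.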
I write ado-commands that estimate the parameters of an exponential conditional mean (ECM) model and a probit conditional mean (PCM) model by nonlinear least squares, using the methods that I discussed in the post Programming an estimation command in Stata: Nonlinear least-squares estimators. These commands will either share lots of code or repeat lots of code, because they are so similar. It is almost always better to share code than to repeat code. Shared code only needs to be changed in one place to add a feature or to fix a problem; repeated code must be changed everywhere. I introduce Mata libraries to share Mata functions across ado-commands, and I introduce wrapper commands to share ado-code.
This is the 27th post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
Ado-commands for ECM and PCM models
I now convert the examples of Read more…
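A minimal sketch of the two sharing mechanisms described above; the library, function, and command names (lmynls, mynls_*(), myecm, mynlswork) are hypothetical stand-ins of mine, not the names used in the post.

    // share Mata code: collect the common functions in a Mata library
    mata: mata mlib create lmynls, dir(PERSONAL) replace
    mata: mata mlib add lmynls mynls_*()   // hypothetical shared functions
    mata: mata mlib index

    // share ado-code: each command is a thin wrapper around one worker
    program define myecm
        version 15
        syntax varlist [if] [in] [, *]
        // mynlswork is a hypothetical worker that does the actual NLS fit;
        // model() selects the exponential conditional mean
        mynlswork `varlist' `if' `in', model(exp) `options'
    end

With the shared worker in place, a feature or bug fix touches only mynlswork and the library functions, and a PCM wrapper would differ from myecm only in the argument passed to model().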
I want to write ado-commands to estimate the parameters of an exponential conditional mean (ECM) model and a probit conditional mean (PCM) model by nonlinear least squares (NLS). Before I can write these commands, I need to show how to trick optimize() into performing the Gauss–Newton algorithm and apply this trick to these two problems.
This is the 26th post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
Gauss–Newton algorithm
Gauss–Newton algorithms frequently perform better than Read more…
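To preview the trick in Mata: supply optimize() with a type-d2 evaluator whose "Hessian" is minus the Gauss–Newton cross-product of the Jacobian. The sketch below does this for the ECM case; the function name ecm_gn, the starting values, and the maximize-minus-half-SSR formulation are my own framing of the idea, not code from the post.

    mata:
    void ecm_gn(real scalar todo, real rowvector b,
                real colvector y, real matrix X,
                val, grad, hess)
    {
        real colvector mu, r
        real matrix    G

        mu   = exp(X*b')             // exponential conditional mean
        r    = y - mu                // residuals
        val  = -quadcross(r, r)/2    // maximize minus half the sum of squares
        G    = mu:*X                 // Jacobian of the conditional mean
        grad = r'G                   // exact gradient of the objective
        hess = -quadcross(G, G)      // Gauss-Newton Hessian approximation
    }

    S = optimize_init()
    optimize_init_evaluator(S, &ecm_gn())
    optimize_init_evaluatortype(S, "d2")
    optimize_init_argument(S, 1, y)      // y and X assumed already in Mata
    optimize_init_argument(S, 2, X)
    optimize_init_params(S, J(1, cols(X), 0.01))
    bh = optimize(S)
    end

Because the supplied Hessian drops the second-derivative term, optimize() takes Gauss–Newton steps even though it believes it is running Newton–Raphson.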
Autoregressive (AR) and moving-average (MA) models are combined to obtain ARMA models. The parameters of an ARMA model are typically estimated by maximizing a likelihood function that assumes independently and identically distributed Gaussian errors. This is a rather strict assumption. If the underlying distribution of the errors is nonnormal, does maximum likelihood estimation still work? The short answer is yes, under certain regularity conditions, and the estimator is then known as the quasi-maximum likelihood estimator (QMLE) (White 1982).
In this post, I use Monte Carlo simulations (MCS) to verify that the QMLE of a stationary and invertible ARMA model is consistent and asymptotically normal. See Yao and Brockwell (2006) for a formal proof. For an overview of performing MCS in Stata, refer to Monte Carlo simulations using Stata. Also see A simulation-based explanation of consistency and asymptotic normality for a discussion of performing such an exercise in Stata.
Simulation
Let’s begin by Read more…
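To give a flavor of the setup, here is a hedged sketch of one possible simulation design: an ARMA(1,1) with demeaned chi-squared (hence nonnormal) errors, estimated by arima. The parameter values, error distribution, and program name are illustrative choices of mine, not necessarily those used in the post.

    program define armaqmle, rclass
        version 15
        drop _all
        set obs 1000
        generate time = _n
        tsset time

        generate eps = rchi2(1) - 1          // nonnormal errors with mean 0
        generate y   = eps in 1              // no burn-in; the initial
        replace  y   = 0.5*L.y + eps + 0.3*L.eps in 2/l   // transient is small

        arima y, ar(1) ma(1)
        return scalar ar = _b[ARMA:L.ar]
        return scalar ma = _b[ARMA:L.ma]
    end

    simulate ar=r(ar) ma=r(ma), reps(2000) seed(98765): armaqmle
    summarize ar ma    // Monte Carlo means should be close to 0.5 and 0.3

If the QMLE is consistent, the Monte Carlo means of the estimates center on the true values 0.5 and 0.3 even though the errors are not Gaussian.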