Posts Tagged ‘time series’

Vector autoregressions in Stata

Introduction

In a univariate autoregression, a stationary time-series variable \(y_t\) can often be modeled as depending on its own lagged values:

\begin{align}
y_t = \alpha_0 + \alpha_1 y_{t-1} + \alpha_2 y_{t-2} + \dots
+ \alpha_k y_{t-k} + \varepsilon_t
\end{align}
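
As a concrete illustration, an AR(k) like this can be fit in Stata with the arima command or, conditioning on the first k observations, by least squares on the lags. A minimal sketch, assuming the data are already tsset and the series is named y (a placeholder name):

    * fit an AR(2) by maximum likelihood
    arima y, ar(1/2)

    * conditional least-squares analogue using lag operators
    regress y L(1/2).y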

When one analyzes multiple time series, the natural extension of the autoregressive model is the vector autoregression, or VAR, in which each variable is modeled as depending on its own lags and on the lags of every other variable in the system. A two-variable VAR with one lag looks like

\begin{align}
y_t &= \alpha_{0} + \alpha_{1} y_{t-1} + \alpha_{2} x_{t-1}
+ \varepsilon_{1t} \\
x_t &= \beta_0 + \beta_{1} y_{t-1} + \beta_{2} x_{t-1}
+ \varepsilon_{2t}
\end{align}

Applied macroeconomists use models of this form to describe macroeconomic data, to perform causal inference, and to provide policy advice.

In this post, I will estimate a three-variable VAR using the U.S. unemployment rate, the inflation rate, and the nominal interest rate. This VAR is similar to those used in macroeconomics for monetary policy analysis. I focus on basic issues in estimation and postestimation. Data and do-files are provided at the end. Additional background and theoretical details can be found in Ashish Rajbhandari’s [earlier post], which explored VAR estimation using simulated data. Read more…
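
As a preview of the mechanics, here is a minimal sketch of the kind of commands involved, assuming a tsset dataset with series named unrate, inflation, and ffr (placeholder names, not necessarily those used in the post):

    * select the lag length by information criteria
    varsoc unrate inflation ffr, maxlag(8)

    * fit a VAR with the chosen number of lags (say, 2)
    var unrate inflation ffr, lags(1/2)

    * verify that the estimated VAR is stable
    varstable

    * orthogonalized impulse-response functions
    irf create model1, set(myirfs, replace) step(20)
    irf graph oirf, impulse(ffr) response(unrate inflation)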

Unit-root tests in Stata

\(\newcommand{\mub}{{\boldsymbol{\mu}}}
\newcommand{\eb}{{\boldsymbol{e}}}
\newcommand{\betab}{\boldsymbol{\beta}}\)Determining the stationarity of a time series is a key step before embarking on any analysis. The statistical properties of most estimators in time series rely on the data being (weakly) stationary. Loosely speaking, a weakly stationary process is characterized by a time-invariant mean, variance, and autocovariance.

In most observed series, however, the presence of a trend component results in the series being nonstationary. The trend can be either deterministic or stochastic, and its type determines the appropriate transformation to obtain a stationary series. A stochastic trend, commonly known as a unit root, is eliminated by differencing the series. However, differencing a series that in fact contains a deterministic trend induces a unit root in the moving-average process. Similarly, subtracting a deterministic trend from a series that in fact contains a stochastic trend does not render the series stationary. Hence, it is important to identify whether nonstationarity is due to a deterministic or a stochastic trend before applying the proper transformation.
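
For example, here is a minimal sketch of how such tests look in Stata, assuming a tsset series named y (a placeholder name); the trend option allows a deterministic trend under the alternative hypothesis:

    * augmented Dickey-Fuller test with a trend term and 4 lags
    dfuller y, trend lags(4)

    * Phillips-Perron test as a robustness check
    pperron y, trend

    * if a unit root is not rejected, test the first difference
    dfuller D.y, lags(4)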

In this post, Read more…

ARMA processes with nonnormal disturbances

Autoregressive (AR) and moving-average (MA) models are combined to obtain ARMA models. The parameters of an ARMA model are typically estimated by maximizing a likelihood function assuming independently and identically distributed Gaussian errors. This is a rather strict assumption. If the underlying distribution of the errors is nonnormal, does maximum likelihood estimation still work? The short answer is yes, under certain regularity conditions, and the resulting estimator is known as the quasi-maximum likelihood estimator (QMLE) (White 1982).

In this post, I use Monte Carlo simulations (MCS) to verify that the QMLE of a stationary and invertible ARMA model is consistent and asymptotically normal; see Yao and Brockwell (2006) for a formal proof. For an overview of performing MCS in Stata, refer to Monte Carlo simulations using Stata. Also see A simulation-based explanation of consistency and asymptotic normality for a discussion of performing such an exercise in Stata.
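
To preview the logic of one replication, the sketch below draws ARMA(1,1) data whose errors are demeaned chi-squared draws (skewed, hence nonnormal) and fits the model by Gaussian maximum likelihood with arima; the parameter values and the error distribution here are illustrative assumptions, not necessarily those used in the post.

    clear
    set seed 2016
    set obs 1000
    generate t = _n
    tsset t

    * skewed, nonnormal errors: demeaned chi-squared(1) draws
    generate eps = rchi2(1) - 1

    * ARMA(1,1): y_t = 0.6 y_{t-1} + eps_t + 0.3 eps_{t-1}
    * replace runs observation by observation, so the lag recursion works
    generate y = eps
    replace y = 0.6*L.y + eps + 0.3*L.eps if _n > 1

    * Gaussian ML acts as a QMLE here; the point estimates
    * remain consistent despite the nonnormal errors
    arima y, ar(1) ma(1) noconstant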

Simulation

Let’s begin by Read more…

Vector autoregression—simulation, estimation, and inference in Stata

\(\newcommand{\epsb}{{\boldsymbol{\epsilon}}}
\newcommand{\mub}{{\boldsymbol{\mu}}}
\newcommand{\thetab}{{\boldsymbol{\theta}}}
\newcommand{\Thetab}{{\boldsymbol{\Theta}}}
\newcommand{\etab}{{\boldsymbol{\eta}}}
\newcommand{\Sigmab}{{\boldsymbol{\Sigma}}}
\newcommand{\Phib}{{\boldsymbol{\Phi}}}
\newcommand{\Phat}{\hat{{\bf P}}}\)Vector autoregression (VAR) is a useful tool for analyzing the dynamics of multiple time series. VAR expresses a vector of observed variables as a function of its own lags.

Simulation

Let’s begin by simulating a bivariate VAR(2) process using the following specification,

\[
\begin{bmatrix} y_{1,t}\\ y_{2,t}
\end{bmatrix}
= \mub + {\bf A}_1 \begin{bmatrix} y_{1,t-1}\\ y_{2,t-1}
\end{bmatrix} + {\bf A}_2 \begin{bmatrix} y_{1,t-2}\\ y_{2,t-2}
\end{bmatrix} + \epsb_t
\]

where \(y_{1,t}\) and \(y_{2,t}\) are the observed series at time \(t\), \(\mub\) is a \(2 \times 1\) vector of intercepts, \({\bf A}_1\) and \({\bf A}_2\) are \(2\times 2\) parameter matrices, and \(\epsb_t\) is a \(2\times 1\) vector of serially uncorrelated innovations. I assume an \(N({\bf 0},\Sigmab)\) distribution for the innovations \(\epsb_t\), where \(\Sigmab\) is a \(2\times 2\) covariance matrix.

I set my sample size to 1,100 and Read more…
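
For reference, here is a minimal sketch of one way to carry out such a simulation in Stata, with purely illustrative values for \(\mub\), \({\bf A}_1\), \({\bf A}_2\), and \(\Sigmab\) (placeholders, not the values used in the post); the first 100 observations are dropped as a burn-in so that the initial conditions wash out.

    clear
    set seed 2016
    set obs 1100
    generate t = _n
    tsset t

    * draw correlated innovations eps_t ~ N(0, Sigma)
    matrix Sigma = (1, 0.3 \ 0.3, 1)
    drawnorm e1 e2, cov(Sigma)

    * initialize the first two observations at the innovations
    generate y1 = e1
    generate y2 = e2

    * recursively build the VAR(2) with illustrative, stable coefficients
    forvalues i = 3/1100 {
        quietly replace y1 = 0.1 + 0.5*L.y1 + 0.1*L.y2 ///
            + 0.2*L2.y1 + 0.1*L2.y2 + e1 in `i'
        quietly replace y2 = 0.2 + 0.2*L.y1 + 0.4*L.y2 ///
            + 0.1*L2.y1 + 0.2*L2.y2 + e2 in `i'
    }

    * discard the burn-in sample
    drop in 1/100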