
Posts Tagged ‘time series’

Unit-root tests in Stata

\(\newcommand{\mub}{{\boldsymbol{\mu}}}
\newcommand{\eb}{{\boldsymbol{e}}}
\newcommand{\betab}{\boldsymbol{\beta}}\)Determining the stationarity of a time series is a key step before embarking on any analysis. The statistical properties of most estimators in time series rely on the data being (weakly) stationary. Loosely speaking, a weakly stationary process is characterized by a time-invariant mean, variance, and autocovariance.

In most observed series, however, the presence of a trend component makes the series nonstationary. Moreover, the trend can be either deterministic or stochastic, and the transformation needed to obtain a stationary series depends on which kind is present. For example, a stochastic trend, commonly known as a unit root, is eliminated by differencing the series. However, differencing a series that in fact contains a deterministic trend introduces a unit root in the moving-average component. Similarly, subtracting a deterministic trend from a series that in fact contains a stochastic trend does not render the series stationary. Hence, it is important to identify whether nonstationarity is due to a deterministic or a stochastic trend before applying the appropriate transformation.
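
Stata offers several unit-root tests, including dfuller (augmented Dickey–Fuller), pperron (Phillips–Perron), and dfgls. As a minimal preview sketch, not the post's full analysis, one can test a series in levels allowing for a deterministic trend and then test its first difference; the dataset and lag length below are chosen purely for illustration.

* Augmented Dickey-Fuller tests on the airline-passenger data
webuse air2, clear
dfuller air, trend lags(4)      // test in levels, allowing a deterministic trend
dfuller D.air, lags(4)          // test the first difference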

In this post, Read more…

ARMA processes with nonnormal disturbances

Autoregressive (AR) and moving-average (MA) models are combined to obtain ARMA models. The parameters of an ARMA model are typically estimated by maximizing a likelihood function that assumes independently and identically distributed Gaussian errors. This is a rather strict assumption. If the underlying distribution of the errors is nonnormal, does maximum likelihood estimation still work? The short answer is yes, under certain regularity conditions, and the resulting estimator is known as the quasi-maximum likelihood estimator (QMLE) (White 1982).
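
For reference, in one common parameterization, an ARMA(\(p\),\(q\)) model for a series \(y_t\) is written as

\[
y_t = \mu + \sum_{i=1}^{p} \phi_i (y_{t-i} - \mu) + \epsilon_t + \sum_{j=1}^{q} \theta_j \epsilon_{t-j}
\]

where \(\mu\) is the mean of the series, \(\phi_1,\dots,\phi_p\) are the autoregressive parameters, \(\theta_1,\dots,\theta_q\) are the moving-average parameters, and \(\epsilon_t\) is the error term.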

In this post, I use Monte Carlo simulations (MCS) to verify that the QMLE of a stationary and invertible ARMA model is consistent and asymptotically normal; see Yao and Brockwell (2006) for a formal proof. For an overview of performing MCS in Stata, refer to Monte Carlo simulations using Stata. Also see A simulation-based explanation of consistency and asymptotic normality for a related exercise in Stata.

Simulation

Let’s begin by Read more…
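
As a sketch of the kind of data-generating step such a simulation begins with (the ARMA(1,1) parameter values 0.6 and 0.3, the chi-squared(1) error distribution, the sample size, and the seed are my illustrative assumptions, not necessarily the post's choices), one draw can be generated and fit as follows:

* One draw from an ARMA(1,1) with demeaned chi-squared(1) errors
clear
set seed 2016
set obs 1000
generate t = _n
tsset t

generate double eps = rchi2(1) - 1              // nonnormal, mean-zero disturbance
generate double y = eps in 1                    // crude startup value
replace y = 0.6*y[_n-1] + eps + 0.3*eps[_n-1] in 2/l

* Gaussian ML applied to nonnormal data: the QMLE
arima y, ar(1) ma(1) vce(robust)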

Vector autoregression—simulation, estimation, and inference in Stata

\(\newcommand{\epsb}{{\boldsymbol{\epsilon}}}
\newcommand{\mub}{{\boldsymbol{\mu}}}
\newcommand{\thetab}{{\boldsymbol{\theta}}}
\newcommand{\Thetab}{{\boldsymbol{\Theta}}}
\newcommand{\etab}{{\boldsymbol{\eta}}}
\newcommand{\Sigmab}{{\boldsymbol{\Sigma}}}
\newcommand{\Phib}{{\boldsymbol{\Phi}}}
\newcommand{\Phat}{\hat{{\bf P}}}\)Vector autoregression (VAR) is a useful tool for analyzing the dynamics of multiple time series. VAR expresses a vector of observed variables as a function of its own lags.

Simulation

Let’s begin by simulating a bivariate VAR(2) process using the following specification,

\[
\begin{bmatrix} y_{1,t}\\ y_{2,t}
\end{bmatrix}
= \mub + {\bf A}_1 \begin{bmatrix} y_{1,t-1}\\ y_{2,t-1}
\end{bmatrix} + {\bf A}_2 \begin{bmatrix} y_{1,t-2}\\ y_{2,t-2}
\end{bmatrix} + \epsb_t
\]

where \(y_{1,t}\) and \(y_{2,t}\) are the observed series at time \(t\), \(\mub\) is a \(2 \times 1\) vector of intercepts, \({\bf A}_1\) and \({\bf A}_2\) are \(2\times 2\) parameter matrices, and \(\epsb_t\) is a \(2\times 1\) vector of serially uncorrelated innovations. I assume an \(N({\bf 0},\Sigmab)\) distribution for the innovations \(\epsb_t\), where \(\Sigmab\) is a \(2\times 2\) covariance matrix.

I set my sample size to 1,100 and Read more…
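
A minimal sketch of this simulation step follows; the intercepts, coefficient matrices, innovation covariance, and seed below are illustrative placeholders (assumed to yield a stable VAR), not the values used in the post.

clear
set seed 2016
set obs 1100
generate t = _n
tsset t

* Placeholder innovation covariance and correlated normal draws
matrix Sigma = (1, .5 \ .5, 1)
drawnorm e1 e2, cov(Sigma)

generate double y1 = e1 in 1/2                  // startup values
generate double y2 = e2 in 1/2

* Build the recursion one observation at a time so each cross-lag is available
forvalues i = 3/1100 {
    quietly replace y1 = .1 + .5*L.y1 + .1*L.y2 - .2*L2.y1 + .1*L2.y2 + e1 in `i'
    quietly replace y2 = .2 + .2*L.y1 + .4*L.y2 + .1*L2.y1 - .1*L2.y2 + e2 in `i'
}

drop in 1/100                                   // discard burn-in before estimating
var y1 y2, lags(1/2)
varstable                                       // check stability of the fitted model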