## Using the lasso for inference in high-dimensional models

__Why use lasso to do inference about coefficients in high-dimensional models?__

High-dimensional models, which have too many potential covariates for the sample size at hand, are increasingly common in applied research. The lasso, discussed in the previous post, can be used to estimate the coefficients of interest in such models. This post discusses Stata 16 commands that estimate the coefficients of interest in a high-dimensional model and provide reliable inference about them.

An example helps us discuss the issues at hand. We have an extract of the data Sunyer et al. (2017) used to estimate the effect of air pollution on the response time of primary school children. The model is

$$

{\tt htime}_i = {\tt no2\_class}_{\,i}\gamma + {\bf x}_i \boldsymbol{\beta} + \epsilon_i

$$

where

| variable | description |
| --- | --- |
| **htime** | measures the response time of child \(i\) on a test |
| **no2_class** | measures the pollution level in the school attended by child \(i\) |
| \({\bf x}_i\) | vector of control covariates that might need to be included |

We want to estimate the effect of **no2_class** on **htime** and to estimate a confidence interval for the size of this effect. The problem is that there are 252 controls in \({\bf x}\), but we have only 1,084 observations. The usual method of regressing **htime** on **no2_class** and all the controls in \({\bf x}\) will not produce reliable estimates for \(\gamma\) when we include all 252 controls.

Looking a little more closely at our problem, we see that many of the controls are second-order terms. We think that we need to include some of these terms, but not too many, along with **no2_class** to get a good approximation to the process that generated the data.

In technical terms, our model is an example of a sparse high-dimensional model. The model is high-dimensional in that the number of controls in \({\bf x}\) that could potentially be included is too large to reliably estimate \(\gamma\) when all of them are included in the regression. The model is sparse in that the number of controls that actually need to be included is small, relative to the sample size.

Returning to our example, let’s define \(\tilde{{\bf x}}\) to be the subset of \({\bf x}\) that must be included to get a good estimate of \(\gamma\) for the sample size. If we knew \(\tilde{{\bf x}}\), we could use the model

\begin{equation*}

{\tt htime}_i = {\tt no2\_class}_{\,i}\gamma + \tilde{{\bf x}}_i \tilde{\boldsymbol{\beta}} + \tilde{\epsilon}_i

\end{equation*}

The sparse structure implies that we could estimate \(\gamma\) by regressing **htime** on **no2_class** and \(\tilde{{\bf x}}\), if we knew \(\tilde{{\bf x}}\). But we don’t know which of the 252 potential controls in \({\bf x}\) belong in \(\tilde{{\bf x}}\). So we have a covariate-selection problem, and we have to solve it to estimate \(\gamma\).

The lasso, discussed in the last post, immediately offers two possible solutions. First, it seems that we could use the lasso estimates of the coefficients themselves. This does not work, because the penalty term in the lasso biases its coefficient estimates toward zero; the lasso also provides no standard errors for those estimates. Second, it seems that using the covariates selected by the lasso would allow us to estimate \(\gamma\). Some versions of this second option work, but some explanation is required.

One approach that suggests itself is the following simple postselection (SPS) estimator. The SPS estimator is a multistep estimator. First, the SPS estimator uses a lasso of the dependent variable on the covariates of interest and the control covariates to select which control covariates should be included. (The covariates of interest are not penalized so that they are always included in the model.) Second, it regresses the dependent variable on the covariates of interest and the control covariates included in the lasso run in the first step.

The SPS estimator produces unreliable inference for \(\gamma\). Leeb and Pötscher (2008) showed that estimators like the SPS that include the control covariates selected by the lasso in a subsequent regression produce unreliable inference. Formally, Leeb and Pötscher (2008) showed that estimators like the SPS estimator generally do not have a large-sample normal distribution and that using the usual large-sample theory can produce unreliable inference in finite samples.

The root of the problem is that the lasso cannot always find the covariates with small coefficients. An example of a small coefficient is one whose magnitude is not zero but is less than twice its standard error. (The technical definition includes a broader range but is harder to explain.) In repeated samples, the lasso sometimes includes covariates with small coefficients, and it sometimes excludes these covariates. The sample-to-sample variation of which covariates are included and the intermittent omitted-variable bias caused by missing some relevant covariates prevent the large-sample distribution of the SPS estimator from being normal. This lack of normality is not just a theoretical issue. Many simulations have shown that the inference produced by estimators like the SPS is unreliable in finite samples; see, for example, Belloni, Chernozhukov, and Hansen (2014) and Belloni, Chernozhukov, and Wei (2016).

Belloni et al. (2012), Belloni, Chernozhukov, and Hansen (2014), Belloni, Chernozhukov, and Wei (2016), and Chernozhukov et al. (2018) derived three types of estimators that provide reliable inference for \(\gamma\) after using covariate selection to determine which covariates belong in \(\tilde{{\bf x}}\). These types are known as partialing-out (PO) estimators, double-selection (DS) estimators, and cross-fit partialing-out (XPO) estimators. Figure 1 details the commands in Stata 16 that implement these types of estimators for several different models.

**Figure 1. Stata 16 commands**

| model | PO command | DS command | XPO command |
| --- | --- | --- | --- |
| linear | poregress | dsregress | xporegress |
| logit | pologit | dslogit | xpologit |
| Poisson | popoisson | dspoisson | xpopoisson |
| linear IV | poivregress | | xpoivregress |

__A PO estimator__

In the remainder of this post, we discuss some examples using a linear model and provide some intuition behind the three types of estimators. We discuss a PO estimator first, and we begin our discussion of a PO estimator with an example.

We use **breathe7.dta**, which is an extract of the data used by Sunyer et al. (2017), in our examples. We use local macros to store the lists of control covariates. In the output below, we put the list of continuous controls in the local macro **ccontrols**, and we put the list of factor-variable controls in the local macro **fcontrols**. We then use Stata's factor-variable notation to put all the potential controls in the local macro **ctrls**. **ctrls** contains the continuous controls, the indicators created from the factor controls, and the interactions between the continuous controls and those indicators.

```
. use breathe7, clear

. local ccontrols "sev_home sev_sch age ppt age_start_sch oldsibl "

. local ccontrols "`ccontrols' youngsibl no2_home ndvi_mn noise_sch "

. local fcontrols "grade sex lbweight lbfeed smokep "

. local fcontrols "`fcontrols' feduc4 meduc4 overwt_who "

. local ctrls "i.(`fcontrols') c.(`ccontrols') "

. local ctrls "`ctrls' i.(`fcontrols')#c.(`ccontrols') "
```

The **c.**, **i.**, and **#** notations are Stata's way of specifying whether variables are continuous or categorical (factor) and whether they are interacted. **c.(`ccontrols')** specifies that each variable in the local macro **ccontrols** enter the list of potential controls as a continuous variable. **i.(`fcontrols')** specifies that each variable in the local macro **fcontrols** enter the list of potential controls as a set of indicators, one for each level of the variable. **i.(`fcontrols')#c.(`ccontrols')** specifies that each level of each factor variable in **fcontrols** be interacted with each continuous variable in **ccontrols**.

We now describe the outcome variable **htime**, the covariate of interest (**no2_class**), the continuous controls, and the factor-variable controls. The potential controls in the model will include the continuous controls, the factor controls, and interactions between the continuous and the factor controls.

```
. describe htime no2_class `fcontrols' `ccontrols'

              storage   display    value
variable name   type    format     label      variable label
-------------------------------------------------------------------------------
htime           double  %10.0g                ANT: mean hit reaction time (ms)
no2_class       float   %9.0g                 Classroom NO2 levels (μg/m3)
grade           byte    %9.0g      grade      Grade in school
sex             byte    %9.0g      sex        Sex
lbweight        float   %9.0g                 1 if low birthweight
lbfeed          byte    %19.0f     bfeed      duration of breastfeeding
smokep          byte    %3.0f      noyes      1 if smoked during pregnancy
feduc4          byte    %17.0g     edu        Paternal education
meduc4          byte    %17.0g     edu        Maternal education
overwt_who      byte    %32.0g     over_wt    WHO/CDC-overweight 0:no/1:yes
sev_home        float   %9.0g                 Home vulnerability index
sev_sch         float   %9.0g                 School vulnerability index
age             float   %9.0g                 Child's age (in years)
ppt             double  %10.0g                Daily total precipitation
age_start_sch   double  %4.1f                 Age started school
oldsibl         byte    %1.0f                 Older siblings living in house
youngsibl       byte    %1.0f                 Younger siblings living in house
no2_home        float   %9.0g                 Residential NO2 levels (μg/m3)
ndvi_mn         double  %10.0g                Home greenness (NDVI), 300m buffer
noise_sch       float   %9.0g                 Measured school noise (in dB)
```

Now, we use the linear partialing-out estimator implemented in **poregress** to estimate the effect of **no2_class** on **htime**. The option **controls()** specifies potential control covariates. In this example, we included the levels of the factor controls, the levels of the continuous controls, and the interactions between the factor controls and the continuous controls. We used **estimates store** to store these results in memory under the name **poplug**.

```
. poregress htime no2_class, controls(`ctrls')

Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin

Partialing-out linear model         Number of obs               =      1,036
                                    Number of controls          =        252
                                    Number of selected controls =         11
                                    Wald chi2(1)                =      24.19
                                    Prob > chi2                 =     0.0000

------------------------------------------------------------------------------
             |               Robust
       htime |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
   no2_class |   2.354892   .4787494     4.92   0.000     1.416561    3.293224
------------------------------------------------------------------------------
Note: Chi-squared test is a Wald test of the coefficients of the variables
      of interest jointly equal to zero. Lassos select controls for model
      estimation. Type lassoinfo to see number of selected variables in
      each lasso.

. estimates store poplug
```

For the moment, let’s focus on the estimate and its interpretation. The results imply that another microgram of NO2 per cubic meter increases the mean reaction time by 2.35 milliseconds.

Note that only the coefficient on the covariate of interest is estimated. The cost of using covariate-selection methods is that they do not produce estimates of the coefficients on the control covariates.

The PO estimators extend the standard partialing-out method of obtaining some regression coefficients after removing the effects of other covariates; see section 3-2f in Wooldridge (2020) for an introduction to the standard method. The PO estimators use multiple lassos to select the control covariates whose impacts should be removed from the dependent variable and from the covariates of interest. A regression of the partialed-out dependent variable on the partialed-out covariates of interest estimates the coefficients of interest.
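
The standard partialing-out method rests on the Frisch-Waugh-Lovell result: regressing the residualized outcome on the residualized covariate of interest reproduces the coefficient on that covariate from the full regression. A small numerical sketch in Python illustrates this (the simulated data, names, and setup here are purely illustrative and are not part of the Stata workflow):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 3))                 # controls
d = 0.5 * X[:, 0] + rng.standard_normal(n)      # covariate of interest
y = 2.0 * d + X @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(n)

# full regression: y on d, X, and a constant; the first coefficient is gamma
Z = np.column_stack([d, X, np.ones(n)])
gamma_full = np.linalg.lstsq(Z, y, rcond=None)[0][0]

def resid(v, W):
    """Residuals from OLS of v on W plus a constant."""
    W1 = np.column_stack([W, np.ones(len(v))])
    return v - W1 @ np.linalg.lstsq(W1, v, rcond=None)[0]

# partialing out: residualize y and d on X, then run the simple regression
y_t, d_t = resid(y, X), resid(d, X)
gamma_po = (d_t @ y_t) / (d_t @ d_t)

# gamma_full and gamma_po agree up to floating-point error
```

The two estimates coincide exactly when the same control set is partialed out of both variables; the inferential estimators replace this fixed control set with lasso-selected controls.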

The mechanics of the PO estimators provide some context for some more advanced comments on this approach. For simplicity, we consider a linear model for the outcome \(y\) with one covariate of interest (\(d\)) and the potential controls \({\bf x}\).

\begin{equation}

y = d\gamma + {\bf x}\boldsymbol{\beta} + \epsilon

\tag{1}

\end{equation}

Here are the steps involved in a PO estimator for \(\gamma\) in (1).

1. Use a lasso of \(y\) on \({\bf x}\) to select covariates \(\tilde{{\bf x}}_y\) that predict \(y\).
2. Regress \(y\) on \(\tilde{{\bf x}}_y\), and let \(\tilde{y}\) be the residuals from this regression.
3. Use a lasso of \(d\) on \({\bf x}\) to select covariates \(\tilde{{\bf x}}_d\) that predict \(d\).
4. Regress \(d\) on \(\tilde{{\bf x}}_d\), and let \(\tilde{d}\) be the residuals from this regression.
5. Regress \(\tilde{y}\) on \(\tilde{d}\) to get an estimate of and a standard error for \(\gamma\).
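
The steps above can be sketched in Python with a hand-rolled coordinate-descent lasso. This is a minimal illustration, not Stata's implementation: `lasso_cd`, `po_estimate`, the fixed penalty `lam`, and the simulated data are all assumptions of the sketch, and a real estimator would also choose the penalty and compute a robust standard error.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    ss = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]            # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / ss[j]
    return b

def resid(v, X, sel):
    """Residuals from OLS of v on the selected columns of X plus a constant."""
    Z = np.column_stack([X[:, sel], np.ones(len(v))])
    return v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]

def po_estimate(y, d, X, lam):
    sel_y = np.flatnonzero(lasso_cd(X, y, lam))       # step 1: lasso for y
    y_t = resid(y, X, sel_y)                          # step 2: residualize y
    sel_d = np.flatnonzero(lasso_cd(X, d, lam))       # step 3: lasso for d
    d_t = resid(d, X, sel_d)                          # step 4: residualize d
    return (d_t @ y_t) / (d_t @ d_t)                  # step 5: final regression

# simulated example with true gamma = 1.5 and three relevant controls
rng = np.random.default_rng(0)
n, p = 1000, 20
X = rng.standard_normal((n, p))
d = X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(n)
y = 1.5 * d + 2.0 * X[:, 0] - 2.0 * X[:, 2] + rng.standard_normal(n)
gamma_hat = po_estimate(y, d, X, lam=0.1)             # close to 1.5
```

Even if the lassos make small selection mistakes, the final regression of residual on residual remains a reasonable estimate of \(\gamma\), which is the point of the orthogonalization discussed next.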

Heuristically, the moment conditions used in step 5 are unrelated to the selected covariates. Formally, the moment conditions used in step 5 have been orthogonalized, or "immunized," against small mistakes in covariate selection. This robustness to the mistakes that the lasso makes in covariate selection is why the estimator provides a reliable estimate of \(\gamma\). See Chernozhukov, Hansen, and Spindler (2015a,b) for formal discussions.

Now that we are familiar with the PO approach, let’s take another look at the output.

```
. poregress

Partialing-out linear model         Number of obs               =      1,036
                                    Number of controls          =        252
                                    Number of selected controls =         11
                                    Wald chi2(1)                =      24.19
                                    Prob > chi2                 =     0.0000

------------------------------------------------------------------------------
             |               Robust
       htime |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
   no2_class |   2.354892   .4787494     4.92   0.000     1.416561    3.293224
------------------------------------------------------------------------------
Note: Chi-squared test is a Wald test of the coefficients of the variables
      of interest jointly equal to zero. Lassos select controls for model
      estimation. Type lassoinfo to see number of selected variables in
      each lasso.
```

The output indicates that the estimator used a plug-in-based lasso for **htime** and a plug-in-based lasso for **no2_class** to select the controls. The plug-in is the default method for selecting the lasso penalty parameter. We discuss the tradeoffs of using other methods below in the section *Selecting the lasso penalty parameter*.

We also see that only 11 of the 252 potential controls were selected as controls by these lassos. We can use **lassocoef** to find out which controls were included in each lasso.

```
. lassocoef ( ., for(htime)) ( ., for(no2_class))

----------------------------------------
                  |   htime   no2_class
------------------+---------------------
              age |     x
                  |
  grade#c.ndvi_mn |
              4th |     x
                  |
grade#c.noise_sch |
              2nd |     x
                  |
        sex#c.age |
                0 |     x
                  |
     feduc4#c.age |
                4 |     x
                  |
          sev_sch |               x
              ppt |               x
         no2_home |               x
          ndvi_mn |               x
        noise_sch |               x
                  |
  grade#c.sev_sch |
              2nd |               x
                  |
            _cons |     x         x
----------------------------------------
Legend: b - base level
        e - empty cell
        o - omitted
        x - estimated
```

We see that **age** and four interaction terms were included in the lasso for **htime**. We also see that **sev_sch**, **ppt**, **no2_home**, **ndvi_mn**, **noise_sch**, and an interaction term were included in the lasso for **no2_class**. Some of the variables used in interaction terms were included in both lassos, but otherwise the sets of included controls are distinct.

We could have used **lassoknots** instead of **lassocoef** to find out which controls were included in each lasso, which we illustrate in the output below.

```
. lassoknots , for(htime)

-------------------------------------------------------------------------------
       |              No. of              |
       |             nonzero   In-sample  |   Variables (A)dded, (R)emoved,
    ID |   lambda      coef.   R-squared  |        or left (U)nchanged
-------+----------------------------------+------------------------------------
   * 1 | .1375306          5      0.1619  | A age  0.sex#c.age
       |                                  |   3.grade#c.ndvi_mn
       |                                  |   1.grade#c.noise_sch
       |                                  |   4.feduc4#c.age
-------------------------------------------------------------------------------
* lambda selected by plugin assuming heteroskedastic.

. lassoknots , for(no2_class)

-------------------------------------------------------------------------------
       |              No. of              |
       |             nonzero   In-sample  |   Variables (A)dded, (R)emoved,
    ID |   lambda      coef.   R-squared  |        or left (U)nchanged
-------+----------------------------------+------------------------------------
   * 1 | .1375306          6      0.3411  | A ppt  sev_sch  ndvi_mn
       |                                  |   no2_home  noise_sch
       |                                  |   1.grade#c.sev_sch
-------------------------------------------------------------------------------
* lambda selected by plugin assuming heteroskedastic.
```

__A DS estimator__

The DS estimators extend the PO approach. In short, the DS estimators include the extra control covariates that make the estimator robust to the mistakes that the lasso makes in selecting covariates that affect the outcome.

To be more concrete, we use the linear DS estimator implemented in **dsregress** to estimate the effect of **no2_class** on **htime**. We use the option **controls()** to specify the same set of potential control covariates as we did for **poregress**. We store the results in memory under the name **dsplug**.

```
. dsregress htime no2_class, controls(`ctrls')

Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin

Double-selection linear model       Number of obs               =      1,036
                                    Number of controls          =        252
                                    Number of selected controls =         11
                                    Wald chi2(1)                =      23.71
                                    Prob > chi2                 =     0.0000

------------------------------------------------------------------------------
             |               Robust
       htime |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
   no2_class |   2.370022   .4867462     4.87   0.000     1.416017    3.324027
------------------------------------------------------------------------------
Note: Chi-squared test is a Wald test of the coefficients of the variables
      of interest jointly equal to zero. Lassos select controls for model
      estimation. Type lassoinfo to see number of selected variables in
      each lasso.

. estimates store dsplug
```

The interpretation is the same as for **poregress**, and the point estimate is almost the same.

The mechanics of the DS estimators also provide some context for some more advanced comments on this approach. Here are steps for the DS for the linear model in (1).

1. Use a lasso of \(y\) on \({\bf x}\) to select covariates \(\tilde{{\bf x}}_y\) that predict \(y\).
2. Use a lasso of \(d\) on \({\bf x}\) to select covariates \(\tilde{{\bf x}}_d\) that predict \(d\).
3. Let \(\tilde{{\bf x}}_u\) be the union of the covariates in \(\tilde{{\bf x}}_y\) and \(\tilde{{\bf x}}_d\).
4. Regress \(y\) on \(d\) and \(\tilde{{\bf x}}_u\).

The estimation results for the coefficient on \(d\) are the estimation results for \(\gamma\).
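
These steps can also be sketched in Python with a hand-rolled coordinate-descent lasso (a minimal illustration, not Stata's implementation; `lasso_cd`, `ds_estimate`, the fixed penalty, and the simulated data are all assumptions of the sketch):

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    ss = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]            # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / ss[j]
    return b

def ds_estimate(y, d, X, lam):
    sel_y = np.flatnonzero(lasso_cd(X, y, lam))       # step 1: lasso for y
    sel_d = np.flatnonzero(lasso_cd(X, d, lam))       # step 2: lasso for d
    sel_u = np.union1d(sel_y, sel_d)                  # step 3: union
    # step 4: one OLS of y on d and the union of selected controls
    Z = np.column_stack([d, X[:, sel_u], np.ones(len(y))])
    return np.linalg.lstsq(Z, y, rcond=None)[0][0]    # coefficient on d

rng = np.random.default_rng(0)
n, p = 1000, 20
X = rng.standard_normal((n, p))
d = X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(n)
y = 1.5 * d + 2.0 * X[:, 0] - 2.0 * X[:, 2] + rng.standard_normal(n)
gamma_hat = ds_estimate(y, d, X, lam=0.1)             # close to 1.5
```

The design choice that distinguishes DS is visible in step 4: the union of the controls found by either lasso enters a single regression together with \(d\).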

The DS estimator has two chances to find the relevant controls. Belloni, Chernozhukov, and Wei (2016) report that the DS estimator performed a little better than the PO estimator in their simulations, although the two estimators have the same large-sample properties. The better finite-sample performance of the DS estimator might come from including the controls found by either lasso in a single regression instead of using the selected controls in separate regressions.

Comparing the DS and PO steps, we see that the PO and the DS estimators use the same lassos to select controls in this model with one covariate of interest.

As with **poregress**, we could use **lassocoef** or **lassoknots** to see which controls were selected in each of the lassos. We omit these examples because they produce the same results as for the example using **poregress** above.

__An XPO estimator__

Cross-fitting, which is also known as double machine learning (DML), is a split-sample estimation technique that Chernozhukov et al. (2018) derived to create versions of PO estimators with better theoretical properties and better finite-sample performance. The most important theoretical difference is that the XPO estimators require a weaker sparsity condition than the single-sample PO estimators. In practice, this means that XPO estimators can provide reliable inference about processes that include more controls than single-sample PO estimators can handle.

The XPO estimators have better properties because the split-sample techniques further reduce the impact of covariate selection on the estimator for \(\gamma\).

It’s the combination of a sample-splitting technique with a PO estimator that gives XPO estimators their reliability. Chernozhukov et al. (2018) show that sample splitting alone does not make an inferential estimator that uses the lasso, or another machine-learning method that selects covariates, robust to covariate-selection mistakes.

We now use the linear XPO estimator implemented in **xporegress** to estimate the effect of **no2_class** on **htime**.

```
. xporegress htime no2_class, controls(`ctrls')

Cross-fit fold 1 of 10 ...
Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin
Cross-fit fold 2 of 10 ...
Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin
Cross-fit fold 3 of 10 ...
Estimating lasso for htime using plugin
note: 1.meduc4#c.youngsibl dropped because it is constant
Estimating lasso for no2_class using plugin
note: 1.meduc4#c.youngsibl dropped because it is constant
Cross-fit fold 4 of 10 ...
Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin
Cross-fit fold 5 of 10 ...
Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin
Cross-fit fold 6 of 10 ...
Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin
Cross-fit fold 7 of 10 ...
Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin
Cross-fit fold 8 of 10 ...
Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin
Cross-fit fold 9 of 10 ...
Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin
Cross-fit fold 10 of 10 ...
Estimating lasso for htime using plugin
Estimating lasso for no2_class using plugin

Cross-fit partialing-out            Number of obs                =      1,036
linear model                        Number of controls           =        252
                                    Number of selected controls  =         18
                                    Number of folds in cross-fit =         10
                                    Number of resamples          =          1
                                    Wald chi2(1)                 =      24.99
                                    Prob > chi2                  =     0.0000

------------------------------------------------------------------------------
             |               Robust
       htime |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
   no2_class |   2.393125   .4787479     5.00   0.000     1.454796    3.331453
------------------------------------------------------------------------------
Note: Chi-squared test is a Wald test of the coefficients of the variables
      of interest jointly equal to zero. Lassos select controls for model
      estimation. Type lassoinfo to see number of selected variables in
      each lasso.

. estimates store xpoplug
```

The interpretation is the same as for the previous estimators, and the point estimate is similar.

The output shows that part of the process was repeated over 10 folds. Split-sample techniques divide the data into subsets called “folds”. To clarify what is being done, let’s consider the steps performed by a linear XPO estimator for the \(\gamma\) in (1) when there are only 2 folds.

- Split the data into 2 folds, called A and B.
- Use the data in fold A to select the controls and to estimate the postselection coefficients:
  - Use a lasso of \(y\) on \({\bf x}\) to select the controls \(\tilde{{\bf x}}_y\) that predict \(y\).
  - Regress \(y\) on \(\tilde{{\bf x}}_y\), and let \(\tilde{\boldsymbol{\beta}}_A\) be the estimated coefficients.
  - Use a lasso of \(d\) on \({\bf x}\) to select the controls \(\tilde{{\bf x}}_d\) that predict \(d\).
  - Regress \(d\) on \(\tilde{{\bf x}}_d\), and let \(\tilde{\boldsymbol{\delta}}_A\) be the estimated coefficients.
- Using the data in fold B, fill in the residuals for \(y\) and for \(d\) with the coefficients estimated from fold A:
  - \(\tilde{y}=y-\tilde{{\bf x}}_y\tilde{\boldsymbol{\beta}}_A\)
  - \(\tilde{d}=d-\tilde{{\bf x}}_d\tilde{\boldsymbol{\delta}}_A\)
- Use the data in fold B to select the controls and to estimate the postselection coefficients:
  - Use a lasso of \(y\) on \({\bf x}\) to select the controls \(\tilde{{\bf x}}_y\) that predict \(y\).
  - Regress \(y\) on \(\tilde{{\bf x}}_y\), and let \(\tilde{\boldsymbol{\beta}}_B\) be the estimated coefficients.
  - Use a lasso of \(d\) on \({\bf x}\) to select the controls \(\tilde{{\bf x}}_d\) that predict \(d\).
  - Regress \(d\) on \(\tilde{{\bf x}}_d\), and let \(\tilde{\boldsymbol{\delta}}_B\) be the estimated coefficients.
- Using the data in fold A, fill in the residuals with the coefficients estimated from fold B:
  - \(\tilde{y}=y-\tilde{{\bf x}}_y\tilde{\boldsymbol{\beta}}_B\)
  - \(\tilde{d}=d-\tilde{{\bf x}}_d\tilde{\boldsymbol{\delta}}_B\)
- Now that the residuals are filled in for the whole sample, regress \(\tilde{y}\) on \(\tilde{d}\) to estimate \(\gamma\).

When there are 10 folds instead of 2, the algorithm has the same structure. The data are divided into 10 folds. For each fold \(k\), the data in the other folds are used to select the controls and to estimate the postselection coefficients. The postselection coefficients are then used to fill in the residuals for fold \(k\). With the full sample, regressing the residuals for \(y\) on the residuals for \(d\) estimates \(\gamma\).

These steps explain the fold-specific output produced by **xporegress**.
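
A two-fold version of this cross-fitting logic can be sketched in Python with a hand-rolled coordinate-descent lasso. Everything here is an illustrative assumption (`lasso_cd`, `xpo_estimate`, the fixed penalty, the simulated data); **xporegress** additionally computes robust standard errors and defaults to 10 folds.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    ss = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / ss[j]
    return b

def xpo_estimate(y, d, X, lam, rng):
    """2-fold cross-fit PO: select and fit on one fold, fill in residuals
    on the other fold, then pool all residuals for the final regression."""
    n = len(y)
    idx = rng.permutation(n)
    folds = [idx[: n // 2], idx[n // 2:]]
    y_t, d_t = np.empty(n), np.empty(n)
    for a, h in [(0, 1), (1, 0)]:
        tr, ho = folds[a], folds[h]                   # fit on tr, fill in ho
        for v, out in [(y, y_t), (d, d_t)]:
            sel = np.flatnonzero(lasso_cd(X[tr], v[tr], lam))
            Ztr = np.column_stack([X[tr][:, sel], np.ones(len(tr))])
            coef = np.linalg.lstsq(Ztr, v[tr], rcond=None)[0]
            Zho = np.column_stack([X[ho][:, sel], np.ones(len(ho))])
            out[ho] = v[ho] - Zho @ coef              # hold-out residuals
    return (d_t @ y_t) / (d_t @ d_t)                  # pooled final step

rng = np.random.default_rng(0)
n, p = 1000, 20
X = rng.standard_normal((n, p))
d = X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(n)
y = 1.5 * d + 2.0 * X[:, 0] - 2.0 * X[:, 2] + rng.standard_normal(n)
gamma_hat = xpo_estimate(y, d, X, lam=0.1, rng=rng)   # close to 1.5
```

The key design point is that the residuals for each fold are computed with coefficients estimated on the other fold, so selection mistakes in one fold do not contaminate the moments built from that fold's own data.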

We recommend **xporegress** over **poregress** and **dsregress** because it has better large-sample and finite-sample properties. The cost is that **xporegress** takes longer to compute because of its fold-level computations.

__Selecting the lasso penalty parameter__

The inferential-lasso commands that implement the PO, DS, and XPO estimators use the plug-in method to select the lasso penalty parameter (\(\lambda\)) by default. The value of \(\lambda\) determines the importance of the penalty term in the objective function that the lasso minimizes. When \(\lambda\) is zero, the lasso yields the ordinary least-squares estimates; as \(\lambda\) increases, the lasso includes fewer covariates.
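
The effect of \(\lambda\) on the number of included covariates can be seen in a small Python sketch with a hand-rolled coordinate-descent lasso on simulated data with three nonzero coefficients (everything here is an illustrative assumption, not Stata output):

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    ss = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / ss[j]
    return b

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 10))
beta = np.array([3.0, -2.0, 1.5, 0, 0, 0, 0, 0, 0, 0])
y = X @ beta + rng.standard_normal(400)

ols = np.linalg.lstsq(X, y, rcond=None)[0]
counts = [int((lasso_cd(X, y, lam) != 0).sum())
          for lam in (0.0, 0.1, 0.5, 1.0)]
# lam = 0 reproduces OLS (all 10 coefficients nonzero); as lam grows,
# the lasso keeps fewer covariates, eventually only the three large ones
```

The path of `counts` mirrors the tradeoff discussed in this section: small penalties keep extra covariates whose true coefficients are zero, while larger penalties prune them.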

When the plug-in method is used to select \(\lambda\), the PO, DS, and XPO estimators have proven large-sample properties, as discussed by Belloni et al. (2012) and Belloni, Chernozhukov, and Wei (2016). The plug-in method tends to do a good job of finding the important covariates and does an excellent job of not including extra covariates whose coefficients are zero in the model that best approximates the true process.

The inferential-lasso commands allow you to use cross-validation (CV) or the adaptive lasso to select \(\lambda\). CV and the adaptive lasso tend to do an excellent job of finding the important covariates, but they tend to include extra covariates whose coefficients are zero in the model that best approximates the true process. Including these extra covariates can affect the reliability of the resulting inference, even though the point estimates do not change by that much.

CV and the adaptive lasso can be used for a sensitivity analysis that investigates whether reasonable changes in \(\lambda\) cause large changes in the point estimates. To illustrate, we compare the DS estimates obtained when \(\lambda\) is selected by the plug-in method with the DS estimates obtained when \(\lambda\) is selected by CV and by the adaptive lasso.

In the output below, we quietly estimate the effect using **dsregress** when using CV and the adaptive lasso to select \(\lambda\). First, we use option **selection(cv)** to make **dsregress** use CV-based lassos. We use **estimates store** to store these results in memory under the name **dscv**. Second, we use option **selection(adaptive)** to make **dsregress** use the adaptive lasso for covariate selection. We use **estimates store** to store these results in memory under the name **dsadaptive**. We specify option **rseed()** to make the sample splits used by CV and by the adaptive lasso reproducible.

```
. quietly dsregress htime no2_class, controls(`ctrls')
>      selection(cv) rseed(12345)

. estimates store dscv

. quietly dsregress htime no2_class, controls(`ctrls')
>      selection(adaptive) rseed(12345)

. estimates store dsadaptive
```

Now, we use **lassoinfo** to look at the numbers of covariates selected by each lasso in each of the three versions of **dsregress**.

```
. lassoinfo dsplug dscv dsadaptive

Estimate: dsplug
Command:  dsregress

------------------------------------------------------
             |                              No. of
             |           Selection         selected
    Variable |  Model     method    lambda  variables
-------------+----------------------------------------
       htime | linear     plugin  .1375306          5
   no2_class | linear     plugin  .1375306          6
------------------------------------------------------

Estimate: dscv
Command:  dsregress

-----------------------------------------------------------------
             |                                          No. of
             |          Selection  Selection            selected
    Variable |  Model    method    criterion    lambda  variables
-------------+---------------------------------------------------
       htime | linear    cv        CV min.    9.129345         12
   no2_class | linear    cv        CV min.     .280125         25
-----------------------------------------------------------------

Estimate: dsadaptive
Command:  dsregress

-----------------------------------------------------------------
             |                                          No. of
             |          Selection  Selection            selected
    Variable |  Model    method    criterion    lambda  variables
-------------+---------------------------------------------------
       htime | linear    adaptive  CV min.    11.90287          7
   no2_class | linear    adaptive  CV min.    .0185652         20
-----------------------------------------------------------------
```

We see that CV selected more covariates than the adaptive lasso and that the adaptive lasso selected more covariates than the plug-in method. This result is typical.

We now use **estimates table** to display the point estimates produced by the three versions of the DS estimator.

```
. estimates table dsplug dscv dsadaptive, b se t

-----------------------------------------------------
    Variable |   dsplug       dscv      dsadaptive
-------------+---------------------------------------
   no2_class |  2.3700223   2.5230818   2.4768917
             |  .48674624   .50743626   .50646957
             |       4.87        4.97        4.89
-----------------------------------------------------
                                      legend: b/se/t
```

The point estimates are similar across the different methods for selecting \(\lambda\). The sensitivity analysis found no instability in the plug-in-based estimates.

__Hand-specified sensitivity analysis__

In this section, we illustrate how to use a particular value for \(\lambda\) in a sensitivity analysis. We begin by using **estimates restore** to restore the **dscv** results that used CV-based lassos in computing the DS estimator.

```
. estimates restore dscv
(results dscv are active now)
```

We now use **lassoknots** to display the knots table from doing CV in a lasso of **no2_class** on the potential controls.

```
. lassoknots, for(no2_class)

-------------------------------------------------------------------------------
       |             No. of   CV mean  |
       |            nonzero     pred.  |  Variables (A)dded, (R)emoved,
    ID |   lambda     coef.     error  |       or left (U)nchanged
-------+-------------------------------+---------------------------------------
     2 | 4.159767         2  94.42282  | A ndvi_mn  noise_sch
     5 | 3.146711         3  83.02421  | A ppt
    12 | 1.640698         4  70.46862  | A no2_home
    15 | 1.241128         6  68.11599  | A sev_sch  1.grade#c.sev_sch
    16 | 1.130869         7  67.06458  | A 0.smokep#c.ndvi_mn
    21 |  .710219         8  63.26906  | A 4.feduc4#c.sev_sch
    23 | .5896363        10  62.51624  | A sev_home  1.feduc4#c.ndvi_mn
    25 | .4895264        11  62.08823  | A 0.overwt_who#c.youngsibl
    26 | .4460382        14  61.94206  | A 1.lbfeed#c.oldsibl
       |                               |   2.lbfeed#c.youngsibl
       |                               |   1.overwt_who#c.ppt
    27 | .4064134        16  61.82037  | A 1.grade#c.oldsibl
       |                               |   0.overwt_who#c.sev_home
    28 | .3703088        20  61.70179  | A age  1.sex#c.ppt
       |                               |   3.lbfeed#c.no2_home
       |                               |   1.overwt_who#c.youngsibl
    30 | .3074368        22  61.57447  | A 3.feduc4#c.no2_home
       |                               |   1.feduc4#c.youngsibl
  * 31 |  .280125        25  61.54342  | A 0.smokep#c.sev_sch
       |                               |   4.meduc4#c.ndvi_mn
       |                               |   1.meduc4#c.youngsibl
    32 | .2552395        28  61.55544  | A 1.sex#c.no2_home
       |                               |   1.lbfeed#c.sev_sch
       |                               |   1.feduc4#c.oldsibl
       |                               |   1.smokep#c.no2_home
       |                               |   0.lbweight#c.sev_sch
    32 | .2552395        28  61.55544  | R 0.smokep#c.sev_sch
       |                               |   0.smokep#c.ndvi_mn
    33 | .2325647        32  61.64949  | A 2.grade#c.ppt
       |                               |   2.grade#c.no2_home
       |                               |   3.grade#c.youngsibl
       |                               |   1.meduc4#c.ppt
       |                               |   1.lbweight#c.youngsibl
    33 | .2325647        32  61.64949  | R 0.lbweight#c.sev_sch
    34 | .2119043        35  61.83715  | A 0.sex#c.youngsibl
       |                               |   2.feduc4#c.ppt
       |                               |   2.feduc4#c.youngsibl
    35 | .1930793        38  62.03954  | A 1.sex#c.oldsibl
       |                               |   2.feduc4#c.ndvi_mn
       |                               |   1.meduc4#c.ndvi_mn
-------------------------------------------------------------------------------
* lambda selected by cross-validation.
```

The \(\lambda\) selected by CV has **ID=31**. This \(\lambda\) produced a CV mean prediction error of 61.5 and caused the lasso to include 25 controls. By contrast, the \(\lambda\) with **ID=23** would produce a CV mean prediction error of 62.5, only slightly larger, while causing the lasso to include just 10 controls.

The \(\lambda\) with **ID=23** therefore seems like a good candidate for sensitivity analysis. In the output below, we use **lassoselect** to specify the \(\lambda\) with **ID=23** as the selected \(\lambda\) for the lasso of **no2_class** on the controls, and we store these results in memory under the name **hand**.

```
. lassoselect id = 23, for(no2_class)
ID = 23  lambda = .5896363 selected

. estimates store hand
```

Now, we use **dsregress** with the option **reestimate** to estimate \(\gamma\) by DS using the hand-specified value of \(\lambda\).

```
. dsregress , reestimate

Double-selection linear model          Number of obs               =     1,036
                                       Number of controls          =       252
                                       Number of selected controls =        22
                                       Wald chi2(1)                =     23.09
                                       Prob > chi2                 =    0.0000

------------------------------------------------------------------------------
             |               Robust
       htime |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
   no2_class |   2.399296   .4993065     4.81   0.000     1.420674    3.377919
------------------------------------------------------------------------------
Note: Chi-squared test is a Wald test of the coefficients of the variables of
      interest jointly equal to zero. Lassos select controls for model
      estimation. Type lassoinfo to see number of selected variables in each
      lasso.
```

The point estimate for \(\gamma\) changes little, which suggests that our results are not overly sensitive to the value of \(\lambda\) selected by CV.
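To make this comparison explicit, one could store the reestimated results and tabulate them against the CV-based results stored in **dscv**. A minimal sketch follows; the name **hand2** is one we introduce for illustration, and **estimates table** is used only to display the two point estimates and standard errors side by side.

```
. estimates store hand2        // store the reestimated DS results (illustrative name)

. estimates table dscv hand2, b se keep(no2_class)
```

If the coefficient on **no2_class** and its standard error are similar across the two columns, the inference is robust to this choice of \(\lambda\).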

__Conclusion__

This post discussed the problems involved in estimating the coefficients of interest in a high-dimensional model. It also presented several methods implemented in Stata 16 for estimating these coefficients.

This post discussed only estimators for linear models with exogenous covariates. The Stata 16 LASSO manual discusses methods and commands for logit models, Poisson models, and linear models with endogenous covariates of interest.

__References__

Belloni, A., D. Chen, V. Chernozhukov, and C. Hansen. 2012. Sparse models and methods for optimal instruments with an application to eminent domain. *Econometrica* 80: 2369–2429.

Belloni, A., V. Chernozhukov, and C. Hansen. 2014. Inference on treatment effects after selection among high-dimensional controls. *Review of Economic Studies* 81: 608–650.

Belloni, A., V. Chernozhukov, and Y. Wei. 2016. Post-selection inference for generalized linear models with many controls. *Journal of Business & Economic Statistics* 34: 606–619.

Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, and J. Robins. 2018. Double/debiased machine learning for treatment and structural parameters. *Econometrics Journal* 21: C1–C68.

Chernozhukov, V., C. Hansen, and M. Spindler. 2015a. Post-selection and post-regularization inference in linear models with many controls and instruments. *American Economic Review* 105: 486–490.

——. 2015b. Valid post-selection and post-regularization inference: An elementary, general approach. *Annual Review of Economics* 7: 649–688.

Leeb, H., and B. M. Pötscher. 2008. Sparse estimators and the oracle property, or the return of Hodges' estimator. *Journal of Econometrics* 142: 201–211.

Sunyer, J., E. Suades-González, R. García-Esteban, I. Rivas, J. Pujol, M. Alvarez-Pedrerol, J. Forns, X. Querol, and X. Basagaña. 2017. Traffic-related air pollution and attention in primary school children: Short-term association. *Epidemiology* 28: 181–189.

Wooldridge, J. M. 2020. *Introductory Econometrics: A Modern Approach*. 7th ed. Boston, MA: Cengage Learning.