On the use of cross-validation for the calibration of the tuning parameter in the adaptive lasso

20 May 2020  ·  Nadim Ballout, Lola Etievant, Vivian Viallon ·

The adaptive lasso is a popular extension of the lasso, which has been shown to generally enjoy better theoretical performance at no additional computational cost compared to the lasso. The adaptive lasso relies on a weighted version of the $L_1$-norm penalty used in the lasso, where the weights are typically derived from an initial estimate of the parameter vector. Irrespective of the method chosen to obtain this initial estimate, the performance of the corresponding version of the adaptive lasso critically depends on the value of the tuning parameter, which controls the magnitude of the weighted $L_1$-norm in the penalized criterion. In this article, we show that standard cross-validation, although very popular in this context, has a severe defect when used to calibrate the tuning parameter in the adaptive lasso. We further propose a simple cross-validation scheme that corrects this defect. Empirical results from a simulation study confirm the superiority of our approach, in terms of both support recovery and prediction error. Although we focus on the adaptive lasso under linear regression models, our work likely extends to other regression models, as well as to the adaptive versions of other penalized approaches, including the group lasso, fused lasso, and data shared lasso.
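The abstract does not spell out the defect or the corrected scheme. A well-known leakage pitfall when cross-validating the adaptive lasso is computing the adaptive weights from an initial estimate fitted on the *full* data before splitting, so that every held-out fold has already influenced the penalty. The sketch below is our illustrative reconstruction of a fold-consistent scheme, not the authors' code: all function names are hypothetical, ridge is an arbitrary choice of initial estimator, and the weighted $L_1$ penalty is implemented via the standard column-rescaling trick ($\tilde{X}_j = X_j / w_j$). The key point is that `beta0`, and hence the weights, are recomputed inside each training fold.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import KFold

def adaptive_lasso(X, y, lam, weights):
    """Weighted-L1 lasso via column rescaling: fit a plain lasso on
    X / weights, then rescale the coefficients back."""
    Xw = X / weights  # broadcasts over columns
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(Xw, y)
    return model.coef_ / weights

def cv_adaptive_lasso(X, y, lams, gamma=1.0, n_splits=5, seed=0):
    """Fold-consistent CV: the initial estimate (here ridge, an
    illustrative choice) and the adaptive weights are recomputed on each
    training fold, so held-out data never influences the penalty."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    errs = np.zeros(len(lams))
    for tr, te in kf.split(X):
        beta0 = Ridge(alpha=1.0, fit_intercept=False).fit(X[tr], y[tr]).coef_
        w = 1.0 / (np.abs(beta0) ** gamma + 1e-8)  # adaptive weights, fold-local
        for i, lam in enumerate(lams):
            beta = adaptive_lasso(X[tr], y[tr], lam, w)
            errs[i] += np.mean((y[te] - X[te] @ beta) ** 2)
    errs /= n_splits
    return lams[int(np.argmin(errs))], errs

# Toy usage on simulated sparse linear data
rng = np.random.default_rng(0)
n, p = 120, 10
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 1.0] + [0.0] * (p - 3))
y = X @ beta_true + 0.5 * rng.standard_normal(n)
lams = np.logspace(-3, 0, 10)
best_lam, errs = cv_adaptive_lasso(X, y, lams)
```

The leaky variant the abstract warns against would instead compute `beta0` once on `(X, y)` before the `kf.split` loop; with this folding, the tuning parameter is selected under the same two-stage procedure that will be applied at deployment.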
