Probabilistic Best Subset Selection via Gradient-Based Optimization

11 Jun 2020  ·  Mingzhang Yin, Nhat Ho, Bowei Yan, Xiaoning Qian, Mingyuan Zhou

In high-dimensional statistics, variable selection aims to recover the latent sparse pattern from all possible covariate combinations. This paper proposes a novel optimization method to solve the exact L0-regularized regression problem, also known as best subset selection. We reformulate the optimization problem from a discrete space to a continuous one via probabilistic reparameterization. The new objective function is differentiable, but its gradient often cannot be computed in closed form. We therefore propose a family of unbiased gradient estimators to optimize the best subset selection objective by stochastic gradient descent, and within this family we identify the estimator with uniformly minimum variance. Theoretically, we study general conditions under which the method is guaranteed to converge to the ground truth in expectation. The proposed method can find the true regression model from thousands of covariates in seconds. On a wide variety of synthetic and semi-synthetic data, it outperforms existing variable selection tools based on relaxed penalties, coordinate descent, and mixed integer optimization in both sparse pattern recovery and out-of-sample prediction.
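The recipe the abstract describes — reparameterize the discrete subset indicator as independent Bernoulli variables and minimize the expected L0-penalized loss by stochastic gradient descent — can be sketched as follows. This is a minimal illustration using a generic score-function (REINFORCE) estimator with a simple shared-mean baseline, not the paper's uniformly-minimum-variance estimator; the synthetic data, the penalty weight `lam`, and the optimizer settings are all hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse regression: only the first 3 of 20 covariates are active.
n, p, k_true = 200, 20, 3
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k_true] = [2.0, -3.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 0.05  # L0 penalty weight per selected covariate (hypothetical choice)

def l0_loss(mask):
    """Least-squares loss on the selected covariates plus an L0 penalty."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return float(np.mean(y ** 2))
    coef, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    return float(np.mean((y - X[:, idx] @ coef) ** 2)) + lam * idx.size

# Probabilistic reparameterization: logits phi define independent Bernoulli
# inclusion probabilities, turning the discrete search into minimizing
# E_{z ~ Bernoulli(sigmoid(phi))}[l0_loss(z)] over continuous phi.
phi = np.zeros(p)
lr, n_samples = 0.5, 8
for _ in range(300):
    prob = 1.0 / (1.0 + np.exp(-phi))
    zs = (rng.random((n_samples, p)) < prob).astype(float)
    losses = np.array([l0_loss(z) for z in zs])
    # Score-function (REINFORCE) gradient with a shared mean baseline for
    # variance reduction: grad_phi log Bernoulli(z; sigmoid(phi)) = z - prob.
    grad = ((losses - losses.mean())[:, None] * (zs - prob)).mean(axis=0)
    phi -= lr * grad

selected = np.flatnonzero(phi > 0)  # covariates with inclusion prob > 1/2
print(sorted(selected.tolist()))
```

After training, thresholding the inclusion probabilities at 1/2 yields the selected subset; covariates with large true coefficients are driven toward inclusion quickly, while the L0 penalty pushes the logits of irrelevant covariates downward.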

