
AI News Hub

Highly Adaptive Principal Component Regression

stat.ML updates on arXiv.org
Mingxun Wang, Alejandro Schuler, Mark van der Laan, Carlos García Meixide

arXiv:2602.10613v2 Announce Type: replace

Abstract: The Highly Adaptive Lasso (HAL) is a nonparametric regression method that achieves almost dimension-free convergence rates under minimal smoothness assumptions, but its implementation can be computationally prohibitive in high dimensions because of the large design matrix it requires. The Highly Adaptive Ridge (HAR) has been proposed as a ridge-regularized analogue. Building on both procedures, we introduce the Principal Component Highly Adaptive Lasso (PCHAL) and the Principal Component Highly Adaptive Ridge (PCHAR). These estimators apply an outcome-blind principal-component reduction to the HAL basis, offering substantial computational gains over HAL while achieving empirical performance comparable to that of HAL and HAR. We also describe an early-stopped gradient-descent variant that provides a convenient form of smooth spectral regularization without requiring an explicit hard cutoff on the number of principal components. Finally, we show that, in a special case, the HAL kernel is identical to the covariance function of Brownian motion.
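The pipeline the abstract describes (HAL-style basis, outcome-blind PCA, ridge fit) can be sketched in a few lines. The code below is a minimal illustration in one dimension, not the authors' implementation: the zero-order indicator basis, the number of retained components `k`, and the ridge penalty `lam` are all assumptions chosen for the example.

```python
import numpy as np

# Hypothetical sketch of a PCHAR-style fit (design choices are illustrative):
# 1) HAL-style zero-order spline basis: indicators 1{x >= knot}, with knots
#    at the observed data points (one covariate, for simplicity).
# 2) Outcome-blind PCA of the basis matrix (uses X only, never y).
# 3) Ridge regression on the scores of the top principal components.

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, n)

# Indicator basis: H[i, j] = 1{x_i >= x_j}.
# Note: (H @ H.T)[i, j] counts points below min(x_i, x_j) -- an empirical
# analogue of the Brownian-motion covariance min(s, t) mentioned in the
# abstract's special case.
H = (x[:, None] >= x[None, :]).astype(float)

# Outcome-blind PCA via SVD of the centered basis matrix.
Hc = H - H.mean(axis=0)
U, s, Vt = np.linalg.svd(Hc, full_matrices=False)

k = 20      # number of retained components (tuning choice)
lam = 1e-3  # ridge penalty (tuning choice)
Z = Hc @ Vt[:k].T  # scores on the top-k components

# Ridge fit on the reduced design: beta = (Z'Z + lam I)^{-1} Z'(y - ybar).
beta = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ (y - y.mean()))
fitted = Z @ beta + y.mean()

mse = np.mean((fitted - y) ** 2)
print(f"in-sample MSE with k={k} components: {mse:.4f}")
```

Because the PCA step never looks at `y`, the component selection adds no outcome-dependent overfitting; the early-stopped gradient-descent variant in the abstract would replace the hard `k` cutoff with a smooth downweighting of small-singular-value directions.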