Statistical Learning with Sparsity: The Lasso and Generalizations

Discover New Methods for Dealing with High-Dimensional Data

A sparse statistical model has only a small number of nonzero parameters or weights; it is therefore much easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data.
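To make the blurb's claim concrete, here is a minimal sketch (not taken from the book's text) of how an ℓ1 penalty produces sparsity: proximal gradient descent (ISTA) for the lasso applies a soft-thresholding step that sets most coefficients exactly to zero. The function names and the simulated data below are illustrative assumptions, not the authors' code.

```python
# Minimal lasso sketch via ISTA (proximal gradient descent):
#   minimize (1/2n) * ||y - X b||^2 + lam * ||b||_1
# The soft-thresholding prox step zeroes out small coefficients,
# which is exactly why lasso estimates are sparse.
import numpy as np

def soft_threshold(z, t):
    # Prox operator of t * ||.||_1: shrink toward zero, clip at zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - step * grad, step * lam)
    return b

# Simulated data: 20 predictors, only the first 3 truly matter.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
b_true = np.zeros(p)
b_true[:3] = [2.0, -1.5, 1.0]
y = X @ b_true + 0.1 * rng.standard_normal(n)

b_hat = lasso_ista(X, y, lam=0.1)
print(np.count_nonzero(np.abs(b_hat) > 1e-8))  # only a handful of nonzeros survive
```

In a dense fit (ordinary least squares) all 20 coefficients would typically be nonzero; the lasso estimate keeps only the few that carry signal, which is what makes it easier to interpret.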
Contents

The Lasso for Linear Models | 7
Generalized Linear Models | 29
Generalizations of the Lasso Penalty | 55
Optimization Methods | 95
Statistical Inference | 139
Matrix Decompositions, Approximations, and Completion | 167
Sparse Multivariate Methods | 201
Graphs and Model Selection | 241
Signal Approximation and Compressed Sensing | 269
Theoretical Results for the Lasso | 289
Back Cover | 337
Other editions: Statistical Learning with Sparsity: The Lasso and Generalizations, Trevor Hastie, Robert Tibshirani, Martin Wainwright (2015 and 2020 editions; no preview available).
Common terms and phrases

algorithm, analysis, approach, approximation, arg min, Bayesian, bootstrap, bound, Chapter, clustering, columns, compute, condition, consider, constraint, convergence, convex, convex function, coordinate descent, correlation, corresponding, covariance, criterion, cross-validation, defined, discuss, distribution, entries, Equation, equivalent, error, example, Exercise, Figure, Frobenius norm, function f, fused lasso, Gaussian, given, gradient, gradient descent, graph, graphical lasso, graphical model, group lasso, Hastie, Ising model, iterative, l1, l2-norm, Lagrangian, lasso estimate, lasso solution, least-squares, linear regression, logistic regression, loss function, matrix completion, matrix decomposition, method, minimize, Netflix, nonconvex, nonzero coefficients, nuclear norm, objective function, observations, optimal solution, optimization problem, orthogonal, p-values, parameter, penalized, penalty, predictors, principal components, procedure, rank, regularization, right panel, sample, Section, selection, shows, signal, singular value decomposition, singular values, singular vectors, soft-thresholding, solving, sparse, sparsity, Statistical, subgradient, subset, Tibshirani, update, variables, zero