APPROX: Accelerated, Parallel and PROXimal Coordinate Descent
Peter Richtárik, IPAM, February 2014 (joint work with Olivier Fercoq; arXiv:1312.5799)
Contributions
Variants of Randomized Coordinate Descent Methods
• Block – can operate on “blocks” of coordinates, as opposed to just on individual coordinates
• Parallel – operates on multiple blocks / coordinates in parallel, as opposed to just 1 block / coordinate at a time
• General – applies to “general” (= smooth convex) functions, as opposed to special ones such as quadratics
• Accelerated – achieves O(1/k^2) convergence rate for convex functions, as opposed to O(1/k)
• Proximal – admits a “nonsmooth regularizer” that is kept intact in solving subproblems (the regularizer is not smoothed, nor approximated)
• Efficient – avoids adding two full feature vectors
Brief History of Randomized Coordinate Descent Methods + new long stepsizes
Introduction
I. Block Structure II. Block Sampling III. Proximal Setup IV. Fast or Normal?
I. Block Structure: N = # coordinates (variables), n = # blocks
II. Block Sampling: at each iteration a random subset of blocks is selected; τ denotes the average # blocks selected by the sampling.
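A minimal sketch (my illustration, not from the slides) of the simplest such sampling, the τ-nice sampling used in the papers cited later: each iteration selects a subset of τ blocks uniformly at random, so the average # blocks selected is exactly τ.

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_nice_sampling(n, tau, rng):
    """Pick a subset of tau distinct blocks uniformly at random.

    Every subset of size tau is equally likely, so the expected
    number of selected blocks is exactly tau.
    """
    return rng.choice(n, size=tau, replace=False)

# Example: 8 blocks, 3 of them updated per iteration.
print(tau_nice_sampling(8, 3, rng))
```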
III. Proximal Setup: minimize F(x) = f(x) + ψ(x), where the loss f is convex & smooth and the regularizer ψ is convex & nonsmooth.
III. Proximal Setup — Loss Functions: Examples. Quadratic loss, logistic loss, square hinge loss, L-infinity regression, L1 regression, exponential loss (BKBG’11, RT’11b, TBRS’13, RT’13a, FR’13).
III. Proximal Setup — Regularizers: Examples. No regularizer; weighted L1 norm (e.g., LASSO); weighted L2 norm; box constraints (e.g., SVM dual).
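For concreteness, here is a hedged sketch of the proximal operators behind two of these regularizers (weighted L1 soft-thresholding and box projection); the function names are illustrative, not from the slides.

```python
import numpy as np

def prox_weighted_l1(x, w, t):
    """Prox of psi(x) = sum_i w_i * |x_i| with stepsize t: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t * w, 0.0)

def prox_box(x, lo, hi):
    """Prox of the indicator of the box [lo, hi]: Euclidean projection."""
    return np.clip(x, lo, hi)
```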
The Algorithm
APPROX — Olivier Fercoq and P. R. Accelerated, parallel and proximal coordinate descent, arXiv:1312.5799, December 2013
Olivier Fercoq and P. R. Accelerated, parallel and proximal coordinate descent, arXiv:1312.5799, Dec 2013
Part B: GRADIENT METHODS — B1 gradient descent; B2 projected gradient descent; B3 proximal gradient descent (ISTA); B4 fast proximal gradient descent (FISTA)
Part C: RANDOMIZED COORDINATE DESCENT — C1 proximal coordinate descent; C2 parallel coordinate descent; C3 distributed coordinate descent; C4 fast parallel coordinate descent (new)
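A hedged Python sketch of the APPROX iteration as described in arXiv:1312.5799, specialized to single-coordinate blocks (N = n), a τ-nice sampling, and an L1-regularized least-squares objective. The naive gradient computation below ignores the paper’s efficient implementation (no full-vector operations), and the helper name approx_lasso is mine.

```python
import numpy as np

def approx_lasso(A, b, lam, v, tau, iters, rng):
    """Sketch of APPROX for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    single-coordinate blocks, tau-nice sampling.
    `v` holds the ESO stepsize parameters v_i (see the ESO slides)."""
    n = A.shape[1]
    x = np.zeros(n)
    z = x.copy()
    theta = tau / n
    for _ in range(iters):
        y = (1.0 - theta) * x + theta * z        # interpolation step
        S = rng.choice(n, size=tau, replace=False)  # tau-nice sampling
        r = A @ y - b                            # residual at y (naive)
        z_new = z.copy()
        for i in S:
            g = A[:, i] @ r                      # partial derivative
            step = tau / (n * theta * v[i])
            t = z[i] - step * g
            z_new[i] = np.sign(t) * max(abs(t) - step * lam, 0.0)  # prox
        x = y + (n / tau) * theta * (z_new - z)  # momentum-style update
        z = z_new
        theta = 0.5 * (np.sqrt(theta**4 + 4.0 * theta**2) - theta**2)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
v = (A**2).sum(axis=0)   # serial (tau = 1) stepsizes: squared column norms
x = approx_lasso(A, b, lam=0.1, v=v, tau=1, iters=2000, rng=rng)
```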
PCDM — P. R. and Martin Takáč. Parallel coordinate descent methods for big data optimization, arXiv:1212.0873, December 2012. IMA Leslie Fox Prize in Numerical Analysis, 2013.
2D Example
Convergence Rate
Convergence Rate — Theorem [Fercoq & R. 12/2013]. With k = # iterations, n = # blocks, and τ = average # coordinates updated per iteration, k iterations of APPROX imply an expected accuracy of order O((n/(τk))^2).
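A reconstruction of the rate as stated in arXiv:1312.5799 (the exact constant C below is my reading of the paper, so treat it as an assumption):

```latex
% Reconstructed APPROX rate (Fercoq & Richtarik, arXiv:1312.5799):
% n = # blocks, tau = average # blocks updated per iteration.
\mathbf{E}\left[F(x_k) - F^*\right]
  \le \left(\frac{2n}{\tau(k-1) + 2n}\right)^{\!2} C,
\qquad
C = \left(1 - \frac{\tau}{n}\right)\bigl(F(x_0) - F^*\bigr)
  + \frac{1}{2}\,\|x_0 - x^*\|_v^2 .
```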
Special Case: Fully Parallel Variant — all blocks are updated in each iteration (τ = n); the required # iterations is then governed by normalized weights (summing to n).
New Stepsizes
Expected Separable Overapproximation (ESO): How to Choose Block Stepsizes?
P. R. and Martin Takáč. Parallel coordinate descent methods for big data optimization, arXiv:1212.0873, December 2012
Olivier Fercoq and P. R. Smooth minimization of nonsmooth functions by parallel coordinate descent methods, arXiv:1309.5885, September 2013 (SPCDM)
P. R. and Martin Takáč. Distributed coordinate descent methods for learning with big data, arXiv:1310.2059, October 2013
Assumptions on the function f: (a), (b), (c), with an example.
Visualizing Assumption (c)
New ESO — Theorem (Fercoq & R. 12/2013)
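A sketch of how the new stepsizes could be computed, assuming f(x) = Σ_j φ_j(A_j x) with scalar φ_j having 1-Lipschitz derivatives and a τ-nice sampling; the formula for v_i follows my reading of the new ESO theorem, and the helper name eso_stepsizes is mine.

```python
import numpy as np
import scipy.sparse as sp

def eso_stepsizes(A, tau):
    """Sketch of the 'new stepsizes': for a tau-nice sampling take
        v_i = sum_j (1 + (omega_j - 1)(tau - 1)/(n - 1)) * A[j, i]**2,
    where omega_j = # nonzeros in row j of A (rowwise degree of
    separability). Sparser data (small omega_j) gives smaller v_i,
    i.e., longer steps."""
    A = sp.csr_matrix(A)
    n = A.shape[1]
    omega = np.diff(A.indptr)                  # nonzeros per row
    scale = 1.0 + (omega - 1.0) * (tau - 1.0) / max(1, n - 1)
    A2 = A.multiply(A).tocsr()                 # elementwise square
    return np.asarray(A2.T @ scale).ravel()    # one v_i per column
```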
Comparison with Other Stepsizes for Parallel Coordinate Descent Methods Example:
Complexity for New Stepsizes. With the new stepsizes, the complexity depends on the average degree of separability and on an “average” of the Lipschitz constants.
Work in 1 Iteration
Cost of 1 Iteration of APPROX. Assume N = n (all blocks are of size 1). Then the average cost of 1 iteration of APPROX is proportional to the average # nonzeros in a column of A (sparse matrix arithmetic ops); evaluating a scalar function’s derivative is O(1).
Bottleneck: Computation of Partial Derivatives — auxiliary vectors are maintained across iterations, as sketched below.
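A sketch of the maintenance trick the slide alludes to, for the least-squares loss: keep the residual r = Ax − b up to date so a partial derivative costs only the nonzeros of one column of A. The function names are illustrative, not the paper’s exact implementation.

```python
import numpy as np
import scipy.sparse as sp

def coord_grad(A_csc, r, i):
    """Partial derivative of 0.5*||Ax - b||^2 in coordinate i, given the
    maintained residual r = A x - b: O(# nonzeros in column i of A)."""
    s, e = A_csc.indptr[i], A_csc.indptr[i + 1]
    return float(A_csc.data[s:e] @ r[A_csc.indices[s:e]])

def apply_update(A_csc, r, i, delta):
    """After the step x_i += delta, refresh r = A x - b at the same cost."""
    s, e = A_csc.indptr[i], A_csc.indptr[i + 1]
    r[A_csc.indices[s:e]] += delta * A_csc.data[s:e]
```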
Preliminary Experiments
L1-Regularized L1 Regression — comparison of the gradient method, Nesterov’s accelerated gradient method, SPCDM and APPROX on the Dorothea dataset.
L1-Regularized L1 Regression
L1-Regularized Least Squares (LASSO) — PCDM vs. APPROX on the KDDB dataset.
Training Linear SVMs — Malicious URL dataset.
Importance Sampling
APPROX with Importance Sampling
Zheng Qu and P. R. Accelerated coordinate descent with importance sampling, manuscript, 2014
P. R. and Martin Takáč. On optimal probabilities in stochastic coordinate descent methods, arXiv:1310.3438, 2013
Convergence Rate Theorem [Qu & R. 2014]
Serial Case: Optimal Probabilities — nonuniform serial sampling: uniform probabilities vs. optimal probabilities.
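A hedged sketch of the two distributions, assuming (per arXiv:1310.3438) that the optimal serial probabilities are proportional to the coordinate Lipschitz constants L_i; the helper name is mine.

```python
import numpy as np

def sampling_probabilities(L, uniform=False):
    """Serial sampling distributions from the slide: uniform p_i = 1/n
    versus probabilities proportional to the coordinate Lipschitz
    constants L_i (assumed optimal, per arXiv:1310.3438)."""
    n = len(L)
    if uniform:
        return np.full(n, 1.0 / n)
    return np.asarray(L, dtype=float) / np.sum(L)

rng = np.random.default_rng(0)
L = np.array([10.0, 1.0, 1.0, 1.0])
p = sampling_probabilities(L)
i = rng.choice(len(L), p=p)   # coordinate 0 is drawn ~77% of the time
```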