Semi-Stochastic Gradient Descent Methods
Jakub Konečný, University of Edinburgh
BASP Frontiers Workshop, January 28, 2014
Based on

- Basic method (S2GD): Konečný and Richtárik. Semi-Stochastic Gradient Descent Methods, December 2013.
- Mini-batching & proximal setting (mS2GD): Konečný, Liu, Richtárik and Takáč. mS2GD: Minibatch Semi-Stochastic Gradient Descent in the Proximal Setting, October 2014.
- Coordinate descent variant (S2CD): Konečný, Qu and Richtárik. Semi-Stochastic Coordinate Descent, December 2014.
Introduction
Large scale problem setting

- Problems are often structured
- Frequently arising in machine learning
- Structure: a sum of functions,
  $$\min_{x} \; F(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),$$
  where the number of functions $n$ is BIG
Examples

- Linear regression (least squares)
- Logistic regression (classification)
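For concreteness, these are the standard per-example losses, written here in our own notation with data pairs $(a_i, b_i)$:

$$f_i(x) = \tfrac{1}{2}\big(a_i^\top x - b_i\big)^2 \quad \text{(least squares)}, \qquad f_i(x) = \log\big(1 + \exp(-b_i\, a_i^\top x)\big), \;\; b_i \in \{-1, +1\} \quad \text{(logistic regression)}.$$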
Assumptions

- Lipschitz continuity of the gradient of each $f_i$, with constant $L$: $\|\nabla f_i(x) - \nabla f_i(y)\| \le L\|x - y\|$
- Strong convexity of $F$, with constant $\mu$: $F(y) \ge F(x) + \nabla F(x)^\top (y - x) + \tfrac{\mu}{2}\|y - x\|^2$
Gradient Descent (GD)

- Update rule: $x_{k+1} = x_k - h \nabla F(x_k)$
- Fast (linear) convergence rate
- Alternatively: for accuracy $\varepsilon$ we need $O(\kappa \log(1/\varepsilon))$ iterations, where $\kappa = L/\mu$ is the condition number
- Complexity of a single iteration: $n$ (measured in gradient evaluations)
Stochastic Gradient Descent (SGD)

- Update rule: $x_{k+1} = x_k - h_k \nabla f_i(x_k)$, with $i$ picked uniformly at random and $h_k$ a step-size parameter
- Why it works: $\mathbb{E}[\nabla f_i(x)] = \nabla F(x)$, i.e., the stochastic gradient is an unbiased estimate of the gradient
- Slow (sublinear) convergence
- Complexity of a single iteration: $1$ (measured in gradient evaluations)
Goal

| GD | SGD |
| --- | --- |
| Fast convergence | Slow convergence |
| $n$ gradient evaluations in each iteration | Complexity of an iteration independent of $n$ |

Combine both benefits in a single algorithm.
Semi-Stochastic Gradient Descent
S2GD
Intuition

- The gradient does not change drastically between nearby iterates
- We could reuse the information from an "old" gradient
Modifying the "old" gradient

- Imagine someone gives us a "good" point $\tilde{x}$ and the full gradient $\nabla F(\tilde{x})$
- The gradient at a point $x$ near $\tilde{x}$ can be expressed as
  $$\nabla F(x) = \underbrace{\nabla F(\tilde{x})}_{\text{already computed gradient}} + \underbrace{\big(\nabla F(x) - \nabla F(\tilde{x})\big)}_{\text{gradient change; we can try to estimate it}}$$
- Approximation of the gradient, using a single random index $i$:
  $$g = \nabla F(\tilde{x}) + \nabla f_i(x) - \nabla f_i(\tilde{x})$$
The S2GD Algorithm

Simplification: the size of the inner loop is random, following a geometric rule.
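A minimal Python sketch of the scheme just described: an outer loop that recomputes the full gradient at a reference point, and a randomly sized inner loop of cheap variance-reduced steps. The function `grad_i`, the parameter names, and the sampling details are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def s2gd(grad_i, n, x0, h, m, nu, epochs, seed=0):
    """Sketch of S2GD: grad_i(x, i) returns the gradient of f_i at x;
    h is the stepsize, m caps the inner-loop size, nu is a lower bound
    on the strong convexity constant (nu = 0 is allowed)."""
    rng = np.random.default_rng(seed)
    # Geometric rule: P(inner loop length = t) is proportional to (1 - nu*h)^(m - t)
    t_vals = np.arange(1, m + 1)
    probs = (1.0 - nu * h) ** (m - t_vals)
    probs /= probs.sum()

    x_tilde = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        # One full gradient evaluation per epoch (n component gradients)
        g_full = sum(grad_i(x_tilde, i) for i in range(n)) / n
        t_max = rng.choice(t_vals, p=probs)  # random inner-loop size
        x = x_tilde.copy()
        for _ in range(t_max):
            i = rng.integers(n)  # one component sampled uniformly
            # Variance-reduced estimate: old full gradient + estimated change
            g = g_full + grad_i(x, i) - grad_i(x_tilde, i)
            x -= h * g
        x_tilde = x  # new reference point for the next epoch
    return x_tilde
```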
Theorem

For $0 \le \nu \le \mu$ and a sufficiently small stepsize (in particular $h < 1/(2L)$),

$$\mathbb{E}\big[F(x_j) - F(x^*)\big] \le c^{\,j}\,\big(F(x_0) - F(x^*)\big),$$

where $x_j$ is the output after $j$ epochs and the rate $c = c(h, m, \nu) < 1$ is a sum of two terms, discussed on the next slide.
Convergence rate

How to set the parameters $h$ (stepsize) and $m$ (inner-loop size)? The rate $c$ is a sum of two terms:

- One term can be made arbitrarily small by decreasing $h$
- For any fixed $h$, the other term can be made arbitrarily small by increasing $m$
Setting the parameters

Fix a target accuracy $\varepsilon$. The accuracy is achieved by setting the stepsize $h$, the number of inner iterations $m$, and the number of epochs appropriately, giving total complexity (in gradient evaluations)

$$\underbrace{O\big(\log(1/\varepsilon)\big)}_{\#\text{ of epochs}} \times \Big( \underbrace{n}_{\text{full gradient evaluation}} + \underbrace{O(\kappa)}_{\text{cheap iterations}} \Big).$$
Complexity

- S2GD total complexity: $O\big((n + \kappa)\log(1/\varepsilon)\big)$ gradient evaluations
- GD total complexity: $O\big(\kappa \log(1/\varepsilon)\big)$ iterations $\times$ $n$ gradient evaluations per iteration $= O\big(n\,\kappa\,\log(1/\varepsilon)\big)$
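To make the gap concrete, a worked example with illustrative numbers (not from the slides): take $n = 10^6$ examples, $\kappa = 10^4$, and $\varepsilon = 10^{-6}$, so $\log(1/\varepsilon) \approx 14$. Then GD needs roughly $n\,\kappa\,\log(1/\varepsilon) \approx 1.4 \times 10^{11}$ gradient evaluations, while S2GD needs roughly $(n + \kappa)\log(1/\varepsilon) \approx 1.4 \times 10^{7}$, i.e., about four orders of magnitude fewer.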
Related Methods

- SAG – Stochastic Average Gradient (Mark Schmidt, Nicolas Le Roux, Francis Bach, 2013)
  - Refreshes a single stochastic gradient in each iteration
  - Needs to store $n$ gradients
  - Similar convergence rate
  - Cumbersome analysis
- SAGA (Aaron Defazio, Francis Bach, Simon Lacoste-Julien, 2014)
  - Refined analysis
- MISO – Minimization by Incremental Surrogate Optimization (Julien Mairal, 2014)
  - Similar to SAG, more general
  - Elegant analysis, but not really applicable to sparse data
  - Julien is here
Related Methods

- SVRG – Stochastic Variance Reduced Gradient (Rie Johnson, Tong Zhang, 2013)
  - Arises as a special case of S2GD
- Prox-SVRG (Tong Zhang, Lin Xiao, 2014)
  - Extends SVRG to the proximal setting
- EMGD – Epoch Mixed Gradient Descent (Lijun Zhang, Mehrdad Mahdavi, Rong Jin, 2013)
  - Handles simple constraints
  - Worse convergence rate
Experiment (logistic regression on: ijcnn, rcv, real-sim, url)
Extensions
Sparse data

- For linear/logistic regression, the stochastic gradient $\nabla f_i(x)$ copies the sparsity pattern of the example $a_i$
- But the update direction is fully dense: in $g = \nabla F(\tilde{x}) + \nabla f_i(x) - \nabla f_i(\tilde{x})$, the old full gradient $\nabla F(\tilde{x})$ is DENSE, while the change $\nabla f_i(x) - \nabla f_i(\tilde{x})$ is SPARSE
- Can we do something about it?
Sparse data

Yes we can!

- To compute $\nabla f_i(x)$, we only need the coordinates of $x$ corresponding to nonzero elements of $a_i$
- For each coordinate, remember the iteration in which it was last updated
- Before computing $\nabla f_i(x)$ in an inner iteration, bring the required coordinates up to date: apply the "old gradient" step $-h \nabla F(\tilde{x})$ once for each iteration in which the coordinate was not updated
- Then compute the direction and make a single sparse update
Sparse data implementation
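A minimal sketch of the lazy ("just-in-time") update just described, under the same illustrative conventions as the earlier S2GD sketch. All names (`inner_step_lazy`, `last_touch`, `grad_diff_i`) are our own choices, not from the original slide.

```python
import numpy as np

def inner_step_lazy(x, g_full, grad_diff_i, nnz_idx, h, last_touch, t):
    """One inner S2GD iteration with lazy updates over sparse data.

    x            -- current iterate (dense array, modified in place)
    g_full       -- full gradient at the reference point x_tilde (dense)
    grad_diff_i  -- callable returning grad f_i(x) - grad f_i(x_tilde)
                    restricted to nnz_idx (depends on x[nnz_idx] only)
    nnz_idx      -- indices of the nonzero features of the sampled example
    last_touch   -- per-coordinate record of the last caught-up iteration
    t            -- current inner iteration (0-based)
    """
    # 1) Catch up on the dense "old gradient" term for all iterations this
    #    coordinate missed (iterations last_touch[s] .. t-1), in one shot.
    skipped = t - last_touch[nnz_idx]
    x[nnz_idx] -= h * skipped * g_full[nnz_idx]

    # 2) Evaluate the sparse gradient difference at the caught-up point.
    diff = grad_diff_i(x)

    # 3) Take the current step on the active coordinates only.
    x[nnz_idx] -= h * (g_full[nnz_idx] + diff)
    last_touch[nnz_idx] = t + 1

# After the last inner iteration t_max - 1, flush the remaining dense
# steps for every coordinate:  x -= h * (t_max - last_touch) * g_full
```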
High Probability Result

- The convergence result holds only in expectation. Can we say anything about the concentration of the result in practice?
- Yes: for any confidence level $0 < \rho < 1$, running for $j \ge \log\big(1/(\varepsilon\rho)\big) / \log(1/c)$ epochs gives $F(x_j) - F(x^*) \le \varepsilon\,\big(F(x_0) - F(x^*)\big)$ with probability at least $1 - \rho$
- We pay just the logarithm of the probability $\rho$, independent of the other parameters
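This follows from a standard Markov-inequality argument, sketched here in our own notation using the linear rate in expectation:

$$\mathbb{P}\Big(F(x_j) - F(x^*) > \varepsilon\,\big(F(x_0) - F(x^*)\big)\Big) \;\le\; \frac{\mathbb{E}\big[F(x_j) - F(x^*)\big]}{\varepsilon\,\big(F(x_0) - F(x^*)\big)} \;\le\; \frac{c^{\,j}}{\varepsilon} \;\le\; \rho \quad \text{whenever} \quad j \;\ge\; \frac{\log\big(1/(\varepsilon\rho)\big)}{\log(1/c)}.$$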
Code

An efficient implementation for logistic regression is available at MLOSS:
http://mloss.org/software/view/556/
mS2GD (mini-batch S2GD)

How does mini-batching influence the algorithm? Replace

$$g = \nabla F(\tilde{x}) + \nabla f_i(x) - \nabla f_i(\tilde{x})$$

by

$$g = \nabla F(\tilde{x}) + \frac{1}{b} \sum_{i \in B} \big(\nabla f_i(x) - \nabla f_i(\tilde{x})\big),$$

where $B$ is a random mini-batch of size $b$. This provides a two-fold speedup:

- Provably fewer gradient evaluations are needed (up to a certain number of mini-batches)
- Easy possibility of parallelism (see the sketch below)
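A minimal sketch of the mini-batch direction, under the same illustrative conventions as the S2GD sketch earlier (`grad_i(x, i)` returns $\nabla f_i(x)$; all names are our assumptions):

```python
def minibatch_direction(grad_i, x, x_tilde, g_full, batch):
    """Variance-reduced mS2GD direction for a mini-batch of indices.

    Replaces the single-sample gradient difference with an average over
    the batch; the batch elements can be evaluated in parallel.
    """
    diff = sum(grad_i(x, i) - grad_i(x_tilde, i) for i in batch) / len(batch)
    return g_full + diff
```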
S2CD (Semi-Stochastic Coordinate Descent)

- SGD-type methods: sample rows (training examples) of the data matrix
- Coordinate descent type methods: sample columns (features) of the data matrix
- Question: Can we do both, i.e., sample both rows and columns?
S2CD (Semi-Stochastic Coordinate Descent)

Complexity (compared with S2GD): the rate has the same $O\big((n + \cdot)\,\log(1/\varepsilon)\big)$ form, with the condition number $\kappa$ replaced by a coordinate-wise analogue.