Gradient descent method 
2013.11.10 
Sanghyuk Chun 
Much of the content is from 
Large Scale Optimization Lecture 4 & 5 by Caramanis & Sanghavi 
Convex Optimization Lecture 10 by Boyd & Vandenberghe 
Convex Optimization textbook Chapter 9 by Boyd & Vandenberghe 1
Contents 
•Introduction 
•Example code & Usage 
•Convergence Conditions 
•Methods & Examples 
•Summary 
2
Introduction 
Unconstrained minimization problems, Description, Pros and Cons 
3
Unconstrained minimization problems 
•Recall: Constrained minimization problems 
•From Lecture 1, the formulation of a general constrained convex optimization problem is as follows 
•min f(x) s.t. x ∈ χ 
•where f: χ → R is convex and smooth 
•From Lecture 1, the formulation of an unconstrained optimization problem is as follows 
•min f(x) 
•where f: R^n → R is convex and smooth 
•In this problem, the necessary and sufficient condition for an optimal solution x_0 is 
•∇f(x) = 0 at x = x_0 
4
Unconstrained minimization problems 
•Minimize f(x) 
•When f is differentiable and convex, a necessary and sufficient condition for a point x* to be optimal is ∇f(x*) = 0 
•Minimizing f(x) is therefore the same as finding a solution of ∇f(x*) = 0 
•min f(x): can occasionally be solved by analytically solving the optimality equation 
•∇f(x*) = 0: usually solved by an iterative algorithm 
5
Description of Gradient Descent Method 
•The idea relies on the fact that −∇f(x^(k)) is a descent direction 
•x^(k+1) = x^(k) − η_k ∇f(x^(k)) with f(x^(k+1)) < f(x^(k)) 
•Δx^(k) is the step, or search direction 
•η_k is the step size, or step length 
•If η_k is too small, convergence is slow 
•If η_k is too large, the iterate can overshoot the minimum and diverge 
6
Description of Gradient Descent Method 
•Algorithm (Gradient Descent Method) 
•given a starting point x ∈ dom f 
•repeat 
1. Δx := −∇f(x) 
2. Line search: choose a step size η via exact or backtracking line search 
3. Update: x := x + ηΔx 
•until the stopping criterion is satisfied 
•The stopping criterion is usually ‖∇f(x)‖_2 ≤ ε 
•Very simple, but often very slow; rarely used in practice 
7
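The algorithm above translates directly into code. Below is a minimal NumPy sketch (my own illustration, not from the slides; the name gradient_descent and its parameters are assumptions), using a fixed step size in place of the line-search step and the gradient-norm stopping criterion:

```python
import numpy as np

def gradient_descent(grad_f, x0, step=0.1, tol=1e-6, max_iter=10000):
    """Minimal gradient descent: repeat x := x + eta * dx with dx = -grad_f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = -grad_f(x)                  # 1. descent direction
        if np.linalg.norm(dx) <= tol:    # stopping criterion: ||grad f(x)||_2 <= eps
            break
        x = x + step * dx                # 3. update (fixed step size instead of a line search)
    return x
```

A backtracking line-search version of step 2 is sketched later in the line-search section.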
Pros and Cons 
•Pros 
•Can be applied in any dimension and space (even infinite-dimensional spaces) 
•Easy to implement 
•Cons 
•Local optima problem 
•Relatively slow close to the minimum 
•For non-differentiable functions, gradient methods are ill-defined 
8
Example Code & Usage 
Example Code, Usage, Questions 
9
Gradient Descent Example Code 
•http://mirlab.org/jang/matlab/toolbox/machineLearning/ 
10
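The linked page hosts a MATLAB toolbox. As a hypothetical usage example, assuming the gradient_descent sketch shown earlier, one can minimize the convex quadratic f(x) = (x_1 − 1)^2 + 10 x_2^2:

```python
import numpy as np

# gradient of f(x) = (x1 - 1)^2 + 10 * x2^2, which is minimized at (1, 0)
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * x[1]])

x_star = gradient_descent(grad_f, x0=[5.0, 5.0], step=0.05)
print(x_star)   # approximately [1, 0]
```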
Usage of Gradient Descent Method 
•Linear Regression 
•Minimize a loss function to choose the best hypothesis 
11 
Example of a loss function: 
(data_predicted − data_observed)^2 
Find the hypothesis (function) which minimizes the loss function (a small sketch follows below)
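A small sketch of this use case (my own, with synthetic data): fitting least-squares linear regression by running gradient descent on the squared loss:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.uniform(-1.0, 1.0, 100)]   # design matrix with a bias column
w_true = np.array([2.0, -3.0])
y = X @ w_true + 0.1 * rng.standard_normal(100)        # observed data

w = np.zeros(2)                                        # hypothesis parameters
eta = 0.1
for _ in range(2000):
    residual = X @ w - y                               # data_predicted - data_observed
    grad = 2.0 * X.T @ residual / len(y)               # gradient of the mean squared loss
    w -= eta * grad

print(w)   # close to [2, -3]
```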
Usage of Gradient Descent Method 
•Neural Network 
•Back propagation 
•SVM (Support Vector Machine) 
•Graphical models 
•Least Mean Squared Filter 
…and many other applications! 
12
Questions 
•Does Gradient Descent Method always converge? 
•If not, what are the conditions for convergence? 
•How can we make the Gradient Descent Method faster? 
•What is a proper value for the step size η_k? 
13
Convergence Conditions 
L-Lipschitz function, Strong Convexity, Condition number 
14
L-Lipschitz function 
•Definition 
•A function f: R^n → R is called L-Lipschitz if and only if ‖∇f(x) − ∇f(y)‖_2 ≤ L‖x − y‖_2, ∀x, y ∈ R^n 
•We denote this condition by f ∈ C_L, where C_L is the class of L-Lipschitz functions 
15
L-Lipschitz function 
•Lemma 4.1 
•If f ∈ C_L, then f(y) − f(x) − ⟨∇f(x), y − x⟩ ≤ (L/2)‖y − x‖_2^2 
•Theorem 4.2 
•If f ∈ C_L and f* = min_x f(x) > −∞, then the gradient descent algorithm with a fixed step size satisfying η < 2/L will converge to a stationary point 
16
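A small numerical check of Theorem 4.2 (a toy example of my own): for the convex quadratic f(x) = ½ xᵀQx, the gradient ∇f(x) = Qx is L-Lipschitz with L = λ_max(Q), and any fixed step η < 2/L drives the gradient norm to zero:

```python
import numpy as np

Q = np.array([[10.0, 2.0],
              [2.0, 1.0]])            # symmetric positive definite
grad = lambda x: Q @ x                # gradient of f(x) = 0.5 * x^T Q x
L = np.linalg.eigvalsh(Q).max()       # Lipschitz constant of the gradient

x = np.array([3.0, -4.0])
eta = 1.0 / L                         # any fixed eta < 2/L works (Theorem 4.2)
for _ in range(500):
    x = x - eta * grad(x)

print(np.linalg.norm(grad(x)))        # ~0: a stationary point has been reached
```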
Strong Convexity and implications 
•Definition 
•If there exists a constant m > 0 such that ∇²f(x) ⪰ mI for ∀x ∈ S, then the function f(x) is strongly convex on S 
17
Strong Convexity and implications 
•Lemma 4.3 
•If f is strongly convex on S, we have the following inequality: 
•f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (m/2)‖y − x‖_2^2 for ∀x, y ∈ S 
•Proof 
18 
useful as a stopping criterion (if you know m)
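One way to see the stopping-criterion remark (a standard one-line argument): minimizing the right-hand side of Lemma 4.3 over y gives y = x − (1/m)∇f(x), hence f* ≥ f(x) − (1/2m)‖∇f(x)‖_2^2. Therefore ‖∇f(x)‖_2^2 ≤ 2mε guarantees f(x) − f* ≤ ε.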
Strong Convexity and implications 
19 
Proof
Upper Bound of ∇²f(x) 
•Lemma 4.3 implies that the sublevel sets contained in S are bounded, so in particular S is bounded. Therefore the maximum eigenvalue of ∇²f(x) is bounded above on S 
•There exists a constant M such that ∇²f(x) ⪯ MI for ∀x ∈ S 
•Lemma 4.4 
•For any x, y ∈ S, if ∇²f(x) ⪯ MI for all x ∈ S, then f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (M/2)‖y − x‖_2^2 
20
Condition Number 
•From Lemma 4.3 and 4.4 we have 
mI ⪯ ∇²f(x) ⪯ MI for ∀x ∈ S, m > 0, M > 0 
•The ratio κ = M/m is thus an upper bound on the condition number of the matrix ∇²f(x) 
•When the ratio is close to 1, we call the problem well-conditioned 
•When the ratio is much larger than 1, we call it ill-conditioned 
•When the ratio is exactly 1, we have the best case: a single step leads to the optimal solution (there is no wrong direction) 
21
Condition Number 
•Theorem 4.5 
•Gradient descent for a strongly convex function f with step size η = 1/M will converge as 
•f(x^(k)) − f* ≤ c^k (f(x^(0)) − f*), where c ≤ 1 − m/M 
•This rate of convergence is known as linear convergence 
•Since we usually do not know the value of M, we use a line search 
•For exact line search, c = 1 − m/M 
•For backtracking line search, c = 1 − min{2mα, 2βαm/M} < 1 
22
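A quick numerical illustration of the linear rate (again a toy example of my own): for a strongly convex quadratic with m = λ_min(Q) and M = λ_max(Q), gradient descent with fixed step η = 1/M reduces the suboptimality f(x) − f* by at least the factor c = 1 − m/M per iteration:

```python
import numpy as np

Q = np.diag([1.0, 10.0])              # m = 1, M = 10, condition number 10
f = lambda x: 0.5 * x @ Q @ x         # f* = 0, attained at x* = 0
grad = lambda x: Q @ x

m, M = 1.0, 10.0
x = np.array([1.0, 1.0])
for _ in range(10):
    gap_before = f(x)
    x = x - (1.0 / M) * grad(x)
    print(f(x) / gap_before)          # observed ratio, bounded above by c = 1 - m/M = 0.9
```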
Methods & Examples 
Exact Line Search, Backtracking Line Search, Coordinate Descent Method, Steepest Descent Method 
23
Exact Line Search 
• The optimal line search method, in which η is chosen to minimize f along the ray {x − η∇f(x) : η ≥ 0}, i.e., η = argmin_{s ≥ 0} f(x − s∇f(x)) 
• Exact line search is used when the cost of the minimization problem with one variable is low compared to the cost of computing the search direction itself 
• It is not very practical (a sketch for the quadratic case follows below) 
24
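Exact line search is tractable when the one-dimensional minimization has a closed form. A sketch for the quadratic case f(x) = ½ xᵀQx − bᵀx (my own illustration), where the exact minimizer of f along the ray x − ηg, with g = ∇f(x), is η = gᵀg / (gᵀQg):

```python
import numpy as np

def exact_line_search_gd(Q, b, x0, tol=1e-8, max_iter=1000):
    """Gradient descent with exact line search for f(x) = 0.5 x^T Q x - b^T x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = Q @ x - b                        # gradient at the current point
        if np.linalg.norm(g) <= tol:
            break
        eta = (g @ g) / (g @ Q @ g)          # exact minimizer of f along x - eta * g
        x = x - eta * g
    return x

Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
print(exact_line_search_gd(Q, b, x0=[0.0, 0.0]))   # approximately Q^{-1} b
```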
Exact Line Search 
•Convergence Analysis 
•f(x^(k)) − f* ≤ c^k (f(x^(0)) − f*) with c = 1 − m/M < 1 
•f(x^(k)) − f* decreases by at least a constant factor in every iteration 
•It converges to 0 geometrically fast (linear convergence) 
25 
Backtracking Line Search 
•It depends on two constants α, β with 0 < α < 0.5, 0 < β < 1 
•It starts with a unit step size and then reduces it by the factor β until the stopping condition holds: 
f(x − η∇f(x)) ≤ f(x) − αη‖∇f(x)‖_2^2 
•Since −∇f(x) is a descent direction and −‖∇f(x)‖_2^2 < 0, for a small enough step size η we have 
f(x − η∇f(x)) ≈ f(x) − η‖∇f(x)‖_2^2 < f(x) − αη‖∇f(x)‖_2^2 
•This shows that the backtracking line search eventually terminates 
•α is typically chosen between 0.01 and 0.3 
•β is often chosen to be between 0.1 and 0.8 
26
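A minimal sketch of the backtracking rule just described, applied to the negative-gradient direction (the helper name and default constants are my own choices):

```python
import numpy as np

def backtracking_step(f, grad_f, x, alpha=0.3, beta=0.8):
    """Return eta satisfying f(x - eta*grad) <= f(x) - alpha * eta * ||grad||_2^2."""
    g = grad_f(x)
    eta = 1.0                                              # start with unit step size
    while f(x - eta * g) > f(x) - alpha * eta * (g @ g):   # stopping condition not yet met
        eta *= beta                                        # reduce by the factor beta
    return eta

# usage on a simple quadratic
f = lambda x: x @ x
grad_f = lambda x: 2.0 * x
x = np.array([3.0, -1.0])
eta = backtracking_step(f, grad_f, x)
x_next = x - eta * grad_f(x)
```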
Backtracking Line Search 
27
Backtracking Line Search 
•Convergence Analysis 
•Claim: η ≤ 1/M always satisfies the stopping condition 
•Proof 
28
Backtracking Line Search 
•Proof (cont) 
29
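One standard way to verify the claim, using Lemma 4.4 with y = x − η∇f(x): f(x − η∇f(x)) ≤ f(x) − η‖∇f(x)‖_2^2 + (Mη^2/2)‖∇f(x)‖_2^2 = f(x) − η(1 − Mη/2)‖∇f(x)‖_2^2. For η ≤ 1/M we have 1 − Mη/2 ≥ 1/2 > α, so f(x − η∇f(x)) ≤ f(x) − αη‖∇f(x)‖_2^2, i.e., the stopping condition holds. Consequently, backtracking terminates with η = 1 or η ≥ β/M.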
Line search types 
•Slide from Optimization Lecture 10 by Boyd 30
Line search example 
•Slide from Optimization Lecture 10 by Boyd 31
Coordinate Descent Method 
• Coordinate descent belongs to the class of non-derivative methods used for minimizing differentiable functions 
• Here, the cost is minimized along one coordinate direction in each iteration 
32
Coordinate Descent Method 
•Pros 
•It is well suited for parallel computation 
•Cons 
•May not reach the minimum even for a convex function 
33
Convergence of Coordinate Descent 
•Lemma 5.4 
34
Coordinate Descent Method 
•Methods of selecting the coordinate for the next iteration (a sketch of the cyclic variant follows below) 
•Cyclic Coordinate Descent 
•Greedy Coordinate Descent 
•(Uniform) Random Coordinate Descent 
35
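A minimal sketch of cyclic coordinate descent (my own illustration) for the quadratic f(x) = ½ xᵀQx − bᵀx, where each coordinate subproblem has a closed-form minimizer:

```python
import numpy as np

def cyclic_coordinate_descent(Q, b, x0, sweeps=100):
    """Cyclic coordinate descent for f(x) = 0.5 x^T Q x - b^T x, Q positive definite."""
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(len(x)):                        # one cycle over the coordinates
            # minimize f over x[i] with all other coordinates held fixed
            x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(cyclic_coordinate_descent(Q, b, x0=[0.0, 0.0]))   # approximately Q^{-1} b
```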
Steepest Descent Method 
•The gradient descent method can take many iterations 
•The Steepest Descent Method aims at choosing the best direction at each iteration 
•Normalized steepest descent direction 
•Δx_nsd = argmin{∇f(x)^T v : ‖v‖ = 1} 
•Interpretation: for small v, f(x + v) ≈ f(x) + ∇f(x)^T v; the direction Δx_nsd is the unit-norm step with the most negative directional derivative 
•Iteratively, the algorithm follows these steps: 
•Calculate the direction of descent Δx_nsd 
•Calculate the step size t 
•x^+ = x + tΔx_nsd 
36
Steepest Descent for various norms 
•The choice of norm used for the steepest descent direction can have a dramatic effect on the convergence rate 
•ℓ2 norm 
•The steepest descent direction is as follows 
•Δx_nsd = −∇f(x) / ‖∇f(x)‖_2 
•ℓ1 norm 
•For ‖x‖_1 = Σ_i |x_i|, a descent direction is as follows 
•Δx_nsd = −sign(∂f(x)/∂x_{i*}) e_{i*} 
•i* = argmax_i |∂f(x)/∂x_i| 
•ℓ∞ norm 
•For ‖x‖_∞ = max_i |x_i|, a descent direction is as follows 
•Δx_nsd = −sign(∇f(x)) 
37
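A small sketch (my own) of the normalized steepest descent directions listed above, given a gradient vector g = ∇f(x):

```python
import numpy as np

def nsd_l2(g):
    """Normalized steepest descent direction for the l2 norm: -g / ||g||_2."""
    return -g / np.linalg.norm(g)

def nsd_l1(g):
    """For the l1 norm: move along the coordinate with the largest |partial derivative|."""
    d = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    d[i] = -np.sign(g[i])
    return d

def nsd_linf(g):
    """For the l-infinity norm: move against the sign of every partial derivative."""
    return -np.sign(g)

g = np.array([0.5, -2.0, 1.0])
print(nsd_l2(g), nsd_l1(g), nsd_linf(g))
```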
Steepest Descent for various norms 
38 
(Figures: steepest descent direction in a quadratic norm and in the ℓ1 norm)
Steepest Descent for various norms 
•Example 
39
Steepest Descent Convergence Rate 
•Fact: any norm can be bounded in terms of ‖·‖_2, i.e., there exist γ, γ̃ ∈ (0, 1] such that ‖x‖ ≥ γ‖x‖_2 and ‖x‖_* ≥ γ̃‖x‖_2 
•Theorem 5.5 
•If f is strongly convex with constants m and M, and ‖·‖ has γ, γ̃ as above, then steepest descent with backtracking line search has linear convergence with rate 
•c = 1 − 2mαγ̃² min{1, βγ²/M} 
•Proof: will be given in Lecture 6 
40
Summary 
41
Summary 
•Unconstrained Convex Optimization Problem 
•Gradient Descent Method 
•Step size: trade-off between safety and speed 
•Convergence Conditions 
•L-Lipschitz Function 
•Strong Convexity 
•Condition Number 
42
Summary 
•Exact Line Search 
•Backtracking Line Search 
•Coordinate Descent Method 
•Good for parallel computation, but does not always converge 
•Steepest Descent Method 
•The choice of norm is important 
43
44 
END OF DOCUMENT
