Gradient Descent and Convexity
Last lecture we proved that GD converges, but that doesn't guarantee it minimizes the loss. So when can we say it minimizes the loss? When the loss is convex.
Convexity¶
Definition¶
To visualize it graphically: a function is convex if, whenever we draw a line segment between any two points on its graph, that segment lies on or above the function. Consider, for example, the function $f(x) = x^2$: every chord between two points on the parabola sits above the curve.
0th Order Definition¶
A function $f$ is convex if for all $x, y$ and all $\lambda \in [0, 1]$,
$$f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y).$$
1st Order Definition¶
A local linear approximation to $f$ always lies below it: for all $x, y$,
$$f(y) \ge f(x) + \nabla f(x)^\top (y - x).$$
2nd Order Definition¶
The second derivative is always nonnegative: in one dimension, $f''(x) \ge 0$ for all $x$. Or, in terms of the Hessian,
$$\nabla^2 f(x) \succeq 0,$$
which is equivalent to saying the Hessian matrix is positive semi-definite.
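These definitions are easy to check numerically. Below is a minimal sketch (not from the lecture; the matrix $A$, vector $b$, and all names are illustrative assumptions) that tests the 0th-order inequality at random points and the 2nd-order condition for the convex least-squares loss $f(w) = \|Aw - b\|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))  # illustrative data
b = rng.normal(size=20)

f = lambda w: np.sum((A @ w - b) ** 2)  # convex least-squares loss
hessian = 2 * A.T @ A                   # Hessian is constant for a quadratic

# 0th-order check: f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y)
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    lam = rng.uniform()
    assert f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-9

# 2nd-order check: all Hessian eigenvalues are nonnegative (positive semi-definite)
assert np.all(np.linalg.eigvalsh(hessian) >= -1e-9)
```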
Good Properties¶
We want to optimize convex functions because they have the really nice property that any local minimum is a global minimum.
There is a neat proof directly from the 1st order definition: take a point $x^*$ such that $\nabla f(x^*) = 0$, so $x^*$ is a local minimum. From the definition, for any $y$ we have
$$f(y) \ge f(x^*) + \nabla f(x^*)^\top (y - x^*) = f(x^*).$$
So $x^*$ is also a global minimum.
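As a quick illustration of this property (a sketch of my own, not the in-class code), the snippet below runs gradient descent on the same convex least-squares loss from several random starting points; every run reaches the same minimum value, since there are no spurious local minima to get stuck in. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

f = lambda w: np.sum((A @ w - b) ** 2)
grad = lambda w: 2 * A.T @ (A @ w - b)   # gradient of ||Aw - b||^2

L = 2 * np.linalg.eigvalsh(A.T @ A).max()  # smoothness constant
for _ in range(5):                         # several random initializations
    w = rng.normal(size=5) * 10
    for _ in range(2000):
        w -= (1 / L) * grad(w)             # constant step size alpha = 1/L
    print(f(w))                            # same value every time: the global minimum
```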
$\mu$-Strongly Convex¶
Definition¶
For a $\mu > 0$, we call a function $f$ $\mu$-strongly convex when, for all $x, y$ and $\lambda \in [0, 1]$,
$$f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y) - \frac{\mu}{2} \lambda (1 - \lambda) \|x - y\|^2.$$
1st Order Definition¶
For strong convexity, $f$ not only has to lie above its local linear approximation, it has to lie above that approximation plus a parabola with positive curvature $\mu$:
$$f(y) \ge f(x) + \nabla f(x)^\top (y - x) + \frac{\mu}{2} \|y - x\|^2.$$
2nd Order Definition¶
This is the most natural way to define it: the curvature is bounded below by $\mu$ in every direction,
$$\nabla^2 f(x) \succeq \mu I.$$
Another Very Important Definition¶
A function $f$ is $\mu$-strongly convex if and only if $x \mapsto f(x) - \frac{\mu}{2} \|x\|^2$ is convex. This is very useful in ML because it tells us that any function is $\mu$-strongly convex as long as it can be written as a convex function plus the L2 regularizer $\frac{\mu}{2} \|x\|^2$.
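A minimal numerical sketch of this fact (illustrative, not from the lecture): adding the regularizer $\frac{\mu}{2}\|w\|^2$ to a convex quadratic loss shifts every Hessian eigenvalue up by exactly $\mu$, which certifies $\mu$-strong convexity via the 2nd order definition.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 5))
mu = 0.1

H_loss = 2 * A.T @ A             # Hessian of the convex loss ||Aw - b||^2
H_reg = H_loss + mu * np.eye(5)  # Hessian after adding the regularizer (mu/2)||w||^2

# The regularizer shifts every eigenvalue up by exactly mu, so the
# regularized objective is guaranteed to be mu-strongly convex.
print(np.linalg.eigvalsh(H_loss).min())  # >= 0 (convex)
print(np.linalg.eigvalsh(H_reg).min())   # >= mu (mu-strongly convex)
```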
PL Property¶
If a function $f$ is $\mu$-strongly convex, it has the Polyak-Lojasiewicz property, which says (where $f^*$ is the global minimum value)
$$\|\nabla f(x)\|^2 \ge 2\mu \left( f(x) - f^* \right).$$
This inequality says that our function's gradient can only be close to 0 when the function value is close to the global minimum.
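As a sanity check (an illustrative sketch, not from the lecture), we can verify the PL inequality at random points for a strongly convex quadratic, where $f^*$ is available in closed form; the strong convexity constant $\mu$ is the smallest Hessian eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

f = lambda w: np.sum((A @ w - b) ** 2)
grad = lambda w: 2 * A.T @ (A @ w - b)

mu = 2 * np.linalg.eigvalsh(A.T @ A).min()        # smallest Hessian eigenvalue
f_star = f(np.linalg.lstsq(A, b, rcond=None)[0])  # global minimum in closed form

# PL inequality: ||grad f(w)||^2 >= 2*mu*(f(w) - f*) at random points
for _ in range(1000):
    w = rng.normal(size=5) * 5
    assert np.sum(grad(w) ** 2) >= 2 * mu * (f(w) - f_star) - 1e-6
```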
GD Converges Linearly on PL Functions¶
At the end of last lecture's proof, we had the inequality
$$f(w_{t+1}) \le f(w_t) - \alpha \left( 1 - \frac{\alpha L}{2} \right) \|\nabla f(w_t)\|^2.$$
By making the assumption that $\alpha \le \frac{1}{L}$, we concluded that GD converges at a rate proportional to $\alpha$ and the norm of the gradient. What happens when we have a PL function?
To simplify this inequality first, we get rid of the big parenthesized term: since $\alpha \le \frac{1}{L}$ implies $1 - \frac{\alpha L}{2} \ge \frac{1}{2}$,
$$f(w_{t+1}) \le f(w_t) - \frac{\alpha}{2} \|\nabla f(w_t)\|^2.$$
With our PL property, we can replace the squared gradient norm with the gap to the global minimum:
$$f(w_{t+1}) \le f(w_t) - \alpha \mu \left( f(w_t) - f^* \right).$$
Subtracting $f^*$ from both sides gives
$$f(w_{t+1}) - f^* \le (1 - \alpha \mu) \left( f(w_t) - f^* \right).$$
Call this multiplicative factor $\gamma = 1 - \alpha \mu$, so we have
$$f(w_{t+1}) - f^* \le \gamma \left( f(w_t) - f^* \right).$$
Since the left side of the inequality is a nonnegative number and $f(w_t) - f^*$ is also nonnegative, $\gamma$ must be a positive number. From here, we can safely apply this inequality recursively $T$ times and get
$$f(w_T) - f^* \le \gamma^T \left( f(w_0) - f^* \right).$$
Recall $\gamma = 1 - \alpha \mu$, and it holds true for all $x$ that $1 + x \le e^x$, so $\gamma \le e^{-\alpha \mu}$ and we can rewrite the above inequality as
$$f(w_T) - f^* \le e^{-\alpha \mu T} \left( f(w_0) - f^* \right).$$
This shows that, for $\mu$-strongly convex functions, gradient descent with a constant step size converges exponentially quickly to the optimum. This is sometimes called convergence at a linear rate (I know it's confusing to have an exponential convergence rate and call it a "linear rate"; the name comes from numerical optimization, where the error shrinking by a constant factor per iteration means it decreases linearly on a log scale).
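The sketch below (illustrative, not from the lecture) runs GD with $\alpha = 1/L$ on a strongly convex quadratic and checks that the suboptimality gap after $t$ steps is indeed bounded by $(1 - \mu/L)^t$ times the initial gap.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

f = lambda w: np.sum((A @ w - b) ** 2)
grad = lambda w: 2 * A.T @ (A @ w - b)

eigs = np.linalg.eigvalsh(2 * A.T @ A)  # Hessian eigenvalues
mu, L = eigs.min(), eigs.max()
f_star = f(np.linalg.lstsq(A, b, rcond=None)[0])

w = rng.normal(size=5)
gap0 = f(w) - f_star
for t in range(1, 101):
    w -= (1 / L) * grad(w)  # constant step size alpha = 1/L
    # linear rate: gap after t steps is at most (1 - mu/L)^t times the initial gap
    assert f(w) - f_star <= (1 - mu / L) ** t * gap0 + 1e-12
```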
What's Really Interesting¶
Now take a closer look at the value $\gamma$. To get the quickest convergence, we know from last time that we need to set $\alpha = \frac{1}{L}$, so we have
$$f(w_T) - f^* \le \left( 1 - \frac{\mu}{L} \right)^T \left( f(w_0) - f^* \right) \le e^{-\mu T / L} \left( f(w_0) - f^* \right).$$
Set our goal as: for a given margin $\epsilon > 0$, by running GD for $T$ iterations, we can output a prediction $w_T$ such that $f(w_T) - f^* \le \epsilon$.
We look at the bound with the exponential term instead, so it suffices that
$$e^{-\mu T / L} \left( f(w_0) - f^* \right) \le \epsilon.$$
To solve this, we take logarithms on both sides and get
$$T \ge \frac{L}{\mu} \log\left( \frac{f(w_0) - f^*}{\epsilon} \right).$$
In CS, when we see a $\log$ term, we tend to say we can ignore it, so this inequality actually tells us that, up to that logarithmic factor, the number of iterations we need for GD to converge doesn't depend on the initialization or on how accurate we want our solution to be. It depends instead only on the ratio $\frac{L}{\mu}$.
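To get a feel for the numbers (an illustrative calculation with hypothetical values, not from the lecture): with $\frac{L}{\mu} = 10^4$, initial gap $f(w_0) - f^* = 10^3$, and target $\epsilon = 10^{-6}$,
$$T \ge \frac{L}{\mu} \log\left( \frac{f(w_0) - f^*}{\epsilon} \right) = 10^4 \cdot \log\left( 10^9 \right) \approx 10^4 \cdot 20.7 \approx 2.1 \times 10^5,$$
so even a nine-order-of-magnitude accuracy requirement contributes only a factor of about 20, while the ratio $\frac{L}{\mu}$ dominates.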
Condition Number¶
We will call $\kappa = \frac{L}{\mu}$ the condition number of our problem. The condition number encodes how hard a strongly convex problem is to solve. This ratio is invariant to scaling of the objective function $f$. Recall the definitions of $L$ and $\mu$: $L$ is the upper bound on $f$'s second derivative (its curvature) and $\mu$ is the lower bound.
The name "condition number" comes from numerical analysis, where it describes the ratio between the largest singular value and the smallest singular value in the SVD of a matrix. The $\kappa$ we have here is indeed the condition number of the Hessian matrix of our loss. Since the Hessian of a convex function is symmetric and positive semi-definite, its singular values are its eigenvalues, so the condition number of the Hessian is just the ratio between its biggest and smallest eigenvalues.
The in-class demo showed an example where we run GD on linear regression: it works horribly on the raw data but works well on normalized data. One nice thing about linear regression is that we can actually calculate the exact Hessian matrix. Looking at the Hessian, we see that with unnormalized data we had a very large condition number (about $10^{14}$), so it would take forever for GD to converge.
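The sketch below reproduces the spirit of that demo on synthetic data (this is not the in-class dataset; the feature scales are hypothetical): features with wildly different scales give the least-squares Hessian a huge condition number, and normalizing each feature brings $\kappa$ down near 1.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
# Hypothetical features with wildly different scales (not the in-class data)
X = rng.normal(size=(n, 3)) * np.array([1.0, 1e3, 1e6])

def cond(X):
    # For least squares f(w) = ||Xw - y||^2 / n, the Hessian is 2 X^T X / n
    eigs = np.linalg.eigvalsh(2 * X.T @ X / n)
    return eigs.max() / eigs.min()

print(f"raw data:        kappa = {cond(X):.2e}")       # huge condition number
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)          # normalize each feature
print(f"normalized data: kappa = {cond(X_norm):.2e}")  # close to 1
```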
Time Complexity¶
We conclude that running gradient descent takes $O\left( \kappa \log \frac{1}{\epsilon} \right)$ steps and $O\left( nd \cdot \kappa \log \frac{1}{\epsilon} \right)$ total time (it takes $O(nd)$ time to compute each gradient descent step over $n$ examples in $d$ dimensions).