Adaptive learning rate methods are an extension of gradient descent methods; their goal is to minimize the objective function of a network by using the gradient of that function together with the parameters of the network.

 

 


 

Author: Johannes Kuhn

 

Gradient Descent

Before the adaptive learning rate methods were introduced, the gradient descent algorithms, including Batch Gradient Descent (BGD), Stochastic Gradient Descent (SGD) and mini-Batch Gradient Descent (mini-BGD, a mixture of BGD and SGD), were state-of-the-art. In essence, these methods try to update the weights $\theta$ of the network with the help of a learning rate $\eta$, the objective function $J(\theta)$ and its gradient $\nabla_{\theta} J(\theta)$. What all gradient descent algorithms and their improvements have in common is the goal of minimizing $J(\theta)$ in order to find the optimal weights $\theta$.

The simplest of the three is BGD.

$$\theta = \theta - \eta \cdot \nabla_{\theta} J(\theta)$$

It tries to reach the minimum of $J(\theta)$ by subtracting the gradient of $J(\theta)$ from $\theta$ (refer to Figure 1 for a visualization). The algorithm computes the gradient over the whole data set for every single update. This makes BGD the slowest of the three and prevents it from updating online. Additionally, it performs redundant computations for large data sets, recomputing similar examples at each update, and it converges to the closest minimum for the given data, potentially resulting in suboptimal solutions.

An often used algorithm is SGD.

$$\theta = \theta - \eta \cdot \nabla_{\theta} J(\theta; x^{(i)}; y^{(i)})$$

Contrary to BGD, SGD updates for each training example $(x^{(i)}, y^{(i)})$, performing one update step per example. This fluctuation enables SGD to jump to minima farther away, potentially reaching a better minimum. However, the same fluctuation also makes SGD prone to overshooting. This can be counteracted by slowly decreasing the learning rate. In the exemplary code shown in Figure 2, SGD and mini-BGD additionally use a shuffle function compared to BGD. This is done because it is often preferable to avoid a meaningful order of the data and thereby avoid biasing the optimization algorithm, although better results can sometimes be achieved with the data in order; in that case, the shuffle operation should be removed.
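The slow decrease of the learning rate mentioned above can be realized with a simple schedule. The following minimal Python sketch uses an assumed decay factor and halving interval chosen purely for illustration; they are not values prescribed by the text.

# Step-based learning-rate decay: halve the rate every `decay_every` epochs.
def decayed_learning_rate(initial_lr, epoch, decay_rate=0.5, decay_every=10):
    return initial_lr * (decay_rate ** (epoch // decay_every))

# Example: 0.1 for epochs 0-9, 0.05 for epochs 10-19, 0.025 for epochs 20-29, ...
for epoch in (0, 10, 20):
    print(epoch, decayed_learning_rate(0.1, epoch))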

Lastly, there is mini-BGD.

$$\theta = \theta - \eta \cdot \nabla_{\theta} J(\theta; x^{(i:i+n)}; y^{(i:i+n)})$$

Mini-BGD performs an update for every mini-batch of $n$ training examples. This leads to a more stable convergence by reducing the variance of the parameter updates. When people talk about an SGD algorithm, they often refer to this version.

 

batch gradient descent (BGD):

for i in range(nb_epochs):
    params_grad = evaluate_gradient(loss_function, data, params)
    params = params - learning_rate * params_grad

stochastic gradient descent (SGD):

for i in range(nb_epochs):
    np.random.shuffle(data)
    for example in data:
        params_grad = evaluate_gradient(loss_function, example, params)
        params = params - learning_rate * params_grad

mini-batch gradient descent (mini-BGD):

for i in range(nb_epochs):
    np.random.shuffle(data)
    for batch in get_batches(data, batch_size=50):
        params_grad = evaluate_gradient(loss_function, batch, params)
        params = params - learning_rate * params_grad

Figure 2: (1) Pseudocode of the three gradient descent algorithms.

 

 

 

 

 

 

Figure 1: (5) Local minima may occur in $J(\theta)$ (here $J(w)$), which may result in suboptimal solutions for some gradient descent methods.

 

 

Adaptive Learning Rate Methods

As an improvement over the traditional gradient descent algorithms, adaptive gradient descent optimization algorithms, also known as adaptive learning rate methods, can be utilized. Several of these algorithms are described below.

Momentum can be seen as an evolution of SGD.

$$v_t = \gamma v_{t-1} + \eta \nabla_{\theta} J(\theta)$$
$$\theta = \theta - v_t$$

While SGD has problems with surfaces that curve much more steeply in one direction of the gradient than in another, Momentum circumvents this by adding the update vector of the previous time step, multiplied by a factor $\gamma$, usually around 0.9 (1). As an analogy, one can think of a ball rolling down the gradient, gathering momentum (hence the name), while still being affected by air resistance ($0 < \gamma < 1$).
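The following is a minimal Python sketch of the Momentum update. The quadratic toy objective $J(\theta) = \frac{1}{2}\|\theta\|^2$ (with gradient $\nabla_{\theta} J(\theta) = \theta$) is an assumption made only to keep the example self-contained.

import numpy as np

def grad_J(theta):
    return theta  # gradient of the toy objective J(theta) = 0.5 * ||theta||^2

theta = np.array([2.0, -3.0])   # initial weights
v = np.zeros_like(theta)        # update vector of the previous time step
eta, gamma = 0.1, 0.9           # learning rate and momentum factor

for t in range(100):
    v = gamma * v + eta * grad_J(theta)  # v_t = gamma * v_{t-1} + eta * grad J(theta)
    theta = theta - v                    # theta = theta - v_t

print(theta)  # approaches the minimum at the origin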

 

Nesterov accelerated gradient can be seen as a further enhancement to momentum.

$$v_t = \gamma v_{t-1} + \eta \nabla_{\theta} J(\theta - \gamma v_{t-1})$$
$$\theta = \theta - v_t$$

This algorithm adds a guess of the next step, in the form of the term $-\gamma v_{t-1}$, as an input to the gradient of the cost function $J(\theta)$. A comparison of the first two steps of Momentum and Nesterov accelerated gradient can be found in Figure 4. The additional term takes the error of the previous step into account, accelerating progress in comparison to Momentum.
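A minimal Python sketch of the Nesterov accelerated gradient update follows, using the same assumed toy objective $J(\theta) = \frac{1}{2}\|\theta\|^2$ as before; the only change compared to Momentum is that the gradient is evaluated at the look-ahead point $\theta - \gamma v_{t-1}$.

import numpy as np

def grad_J(theta):
    return theta  # gradient of the toy objective J(theta) = 0.5 * ||theta||^2

theta = np.array([2.0, -3.0])
v = np.zeros_like(theta)
eta, gamma = 0.1, 0.9

for t in range(100):
    lookahead = theta - gamma * v            # guess of the next position
    v = gamma * v + eta * grad_J(lookahead)  # gradient taken at the look-ahead point
    theta = theta - v

print(theta)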

 

Contrary to the Nesterov accelerated gradient, Adagrad adapts its learning rate $\eta$ during its run-time and updates each parameter $\theta_i$ separately at every time step $t$. It has to do so because $\eta$ adapts for every $\theta_i$ individually.

$$\theta_{t+1,i} = \theta_{t,i} - \frac{\eta}{\sqrt{G_{t,ii} + \epsilon}} \cdot \nabla_{\theta} J(\theta_i)$$

$G_t$ is a matrix containing the squared sums of the past gradients with respect to all parameters $\theta$ along its diagonal. $\epsilon$ is a correction term used to avoid division by zero and is generally insignificantly small (~$10^{-8}$).

Due to the accumulation of the squared gradients in $G_t$, the effective learning rate $\frac{\eta}{\sqrt{G_{t,ii} + \epsilon}}$ gets smaller over time, eventually becoming so small that the algorithm no longer acquires new knowledge.
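Below is a minimal Python sketch of the Adagrad update. The diagonal of $G_t$ is stored as a vector of accumulated squared gradients, and the toy objective $J(\theta) = \frac{1}{2}\|\theta\|^2$ is again an assumption made only for illustration.

import numpy as np

def grad_J(theta):
    return theta  # gradient of the toy objective J(theta) = 0.5 * ||theta||^2

theta = np.array([2.0, -3.0])
G = np.zeros_like(theta)   # accumulated squared gradients (diagonal of G_t)
eta, eps = 0.5, 1e-8       # global learning rate and correction term

for t in range(100):
    g = grad_J(theta)
    G += g ** 2                                  # accumulate squared gradients per parameter
    theta = theta - eta / np.sqrt(G + eps) * g   # per-parameter adaptive step size

print(theta)  # the effective step size shrinks as G grows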

 

 

 

 

Figure 3: (1) Visualization of the analogy for Momentum using $\gamma = 0.9$.

Figure 4: (1) Comparison of Momentum (blue) and Nesterov accelerated gradient (green). The brown arrow is the first prediction of the Nesterov accelerated gradient, before the gradient is calculated. The green arrow is the final result of the Nesterov accelerated gradient, with the gradient taken into account.

Literature

(1) Ruder, Sebastian. "An overview of gradient descent optimization algorithms." arXiv preprint arXiv:1609.04747 (2016).

(2) Duchi, John, Elad Hazan, and Yoram Singer. "Adaptive subgradient methods for online learning and stochastic optimization." Journal of Machine Learning Research 12.Jul (2011): 2121-2159.

(3) http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf

(4) https://www.quora.com/Why-do-we-need-adaptive-learning-rates-for-Deep-Learning

(5) http://sebastianraschka.com/Articles/2015_singlelayer_neurons.html