# Adaptive Learning Rate Methods

Adaptive learning rate methods are optimizations of gradient descent methods; they aim to minimize the objective function of a network by using the gradient of that function and the parameters of the network.


Author: Johannes Kuhn

Before adaptive learning rate methods were introduced, the gradient descent algorithms Batch Gradient Descent (BGD), Stochastic Gradient Descent (SGD) and mini-Batch Gradient Descent (mini-BGD, a mixture of BGD and SGD) were state of the art. In essence, these methods try to update the weights $\theta$ of the network with the help of a learning rate $\eta$, the objective function $J(\theta)$ and its gradient $\nabla J(\theta)$. What all gradient descent algorithms and their improvements have in common is the goal of minimizing $J(\theta)$ in order to find the optimal weights $\theta$.

The simplest of the three is BGD.

$$\theta = \theta - \eta \cdot \nabla_{\theta} J(\theta)$$

It tries to reach the minimum of $J(\theta)$ by subtracting the gradient $\nabla_{\theta} J(\theta)$, scaled by the learning rate, from $\theta$ (refer to Figure 1 for a visualization). The algorithm always computes over the whole data set for each update. This makes BGD the slowest of the three and leaves it unable to update online. Additionally, it performs redundant computations for big data sets, re-computing the gradients of similar examples at each update, and it converges to the closest minimum for the given data, resulting in potentially suboptimal results.
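The BGD update can be sketched end-to-end on a toy least-squares problem; the data, the loss, and all variable names below are illustrative assumptions, not part of the original text.

```python
import numpy as np

# Toy least-squares problem: J(theta) = ||X @ theta - y||^2 / (2m).
# X, y, eta and the epoch count are illustrative choices.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])       # exactly y = 1 + x, so the optimum is [1, 1]
theta = np.zeros(2)
eta = 0.1

for epoch in range(500):
    # gradient over the *whole* data set, one update per epoch
    grad = X.T @ (X @ theta - y) / len(y)
    theta = theta - eta * grad      # theta = theta - eta * grad_theta J(theta)

print(theta)  # approaches [1, 1]
```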

An often used algorithm is SGD.

$$\theta = \theta - \eta \cdot \nabla_{\theta} J(\theta; x^{(i)}; y^{(i)})$$

Contrary to BGD, SGD updates for each training example $(x^{(i)}; y^{(i)})$, i.e. it performs one update step per example. This fluctuation enables SGD to jump to minima farther away, potentially reaching a better minimum, but it also allows SGD to overshoot. This can be counteracted by slowly decreasing the learning rate. In the exemplary code shown in Figure 2, the SGD and mini-BGD algorithms additionally use a shuffle function, compared to BGD. This is done because it is often preferable to avoid a meaningful order of the data and thereby avoid biasing the optimization algorithm, although sometimes better results can be achieved with ordered data; in that case the shuffle operation is to be removed.
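Both ingredients, shuffling and a slowly decreasing learning rate, can be combined in a minimal SGD loop; the toy data, the decay schedule and all names are illustrative assumptions.

```python
import numpy as np

# Minimal SGD sketch on noiseless linear data y = 2 + 3x.
rng = np.random.default_rng(0)
X = np.c_[np.ones(20), np.linspace(0.0, 1.0, 20)]
y = 2.0 + 3.0 * X[:, 1]
theta = np.zeros(2)

for epoch in range(200):
    eta = 0.5 / (1.0 + 0.01 * epoch)         # slowly decreasing learning rate
    for i in rng.permutation(len(y)):        # shuffle the examples each epoch
        grad = (X[i] @ theta - y[i]) * X[i]  # gradient for one example (x_i, y_i)
        theta = theta - eta * grad

print(theta)  # close to the true parameters [2, 3]
```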

Lastly, there is mini-BGD.

$$\theta = \theta - \eta \cdot \nabla_{\theta} J(\theta; x^{(i:i+n)}; y^{(i:i+n)})$$

Mini-BGD updates for every mini-batch of $n$ training examples. This leads to a more stable convergence by reducing the variance of the parameter updates. When people talk about an SGD algorithm, they often refer to this version.
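The pseudo code in Figure 2 relies on a `get_batches` helper; one possible implementation (the name and signature are assumptions taken from the figure, not a fixed API) is:

```python
import numpy as np

def get_batches(data, batch_size=50):
    """Yield successive mini-batches of at most `batch_size` rows of `data`."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

# 120 examples split into mini-batches of 50: the last batch is smaller
data = np.arange(120).reshape(120, 1)
sizes = [len(batch) for batch in get_batches(data, batch_size=50)]
print(sizes)  # [50, 50, 20]
```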

```python
# BGD: one update per pass over the whole data set
for i in range(nb_epochs):
    params_grad = evaluate_gradient(loss_function, data, params)
    params = params - learning_rate * params_grad

# SGD: one update per training example
for i in range(nb_epochs):
    np.random.shuffle(data)
    for example in data:
        params_grad = evaluate_gradient(loss_function, example, params)
        params = params - learning_rate * params_grad

# mini-BGD: one update per mini-batch of 50 examples
for i in range(nb_epochs):
    np.random.shuffle(data)
    for batch in get_batches(data, batch_size=50):
        params_grad = evaluate_gradient(loss_function, batch, params)
        params = params - learning_rate * params_grad
```

Figure 2: (1) Pseudo code of the three gradient descent algorithms.

Figure 1: (5) Local minima may occur in $J(\theta)$ (here $J(w)$), which may result in suboptimal solutions for some gradient descent methods.

As an improvement over traditional gradient descent algorithms, adaptive gradient descent optimization algorithms, also called adaptive learning rate methods, can be utilized. Several of these algorithms are described below.

Momentum can be seen as an evolution of SGD.

$$v_t = \gamma v_{t-1} + \eta \nabla_{\theta} J(\theta)$$
$$\theta = \theta - v_t$$

While SGD has problems with data that have steep curves in one direction of the gradient, Momentum circumvents that by adding the update vector of the previous time step, multiplied by a momentum term $\gamma$, usually around 0.9 (1). As an analogy, one can think of a ball rolling down the gradient, gathering momentum (hence the name), while still being affected by air resistance ($0 < \gamma < 1$).
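The two Momentum equations can be sketched on a one-dimensional quadratic $J(\theta) = \theta^2/2$, whose gradient is simply $\theta$; the loss, $\eta$ and the step count are illustrative choices, while $\gamma = 0.9$ follows the text.

```python
# Momentum sketch on J(theta) = theta**2 / 2, so grad J(theta) = theta.
gamma, eta = 0.9, 0.1
theta, v = 5.0, 0.0

for t in range(300):
    grad = theta                 # nabla_theta J(theta)
    v = gamma * v + eta * grad   # v_t = gamma * v_{t-1} + eta * grad
    theta = theta - v            # theta = theta - v_t

print(theta)  # oscillates toward the minimum at 0
```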

Nesterov accelerated gradient can be seen as a further enhancement of Momentum.

$$v_t = \gamma v_{t-1} + \eta \nabla_{\theta} J(\theta - \gamma v_{t-1})$$
$$\theta = \theta - v_t$$

This algorithm adds a guess of the next step, in the form of the term $-\gamma v_{t-1}$, as an input to the gradient of the cost function $J(\theta)$. A comparison of the first two steps of Momentum and Nesterov accelerated gradient can be found in Figure 4. The additional term results in a consideration of the error of the previous step, accelerating the progress in comparison to Momentum.
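The look-ahead term can be made concrete on the same kind of one-dimensional quadratic $J(\theta) = \theta^2/2$ used to illustrate Momentum; all constants are illustrative assumptions.

```python
# Nesterov accelerated gradient sketch on J(theta) = theta**2 / 2.
# The gradient is evaluated at the look-ahead position theta - gamma * v.
gamma, eta = 0.9, 0.1
theta, v = 5.0, 0.0

for t in range(300):
    lookahead = theta - gamma * v  # guess of the next position
    grad = lookahead               # nabla_theta J(theta - gamma * v_{t-1})
    v = gamma * v + eta * grad
    theta = theta - v

print(theta)  # converges toward the minimum at 0
```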

Adagrad (2) adapts the learning rate $\eta$ during its run-time and updates its parameters $\theta_i$ separately during each time step $t$. It has to do that, since a single global $\eta$ cannot adapt to each $\theta_i$ on its own.

$$\theta_{t+1,i} = \theta_{t,i} - \frac{\eta}{\sqrt{G_{t,ii} + \epsilon}} \cdot \nabla_{\theta} J(\theta_i)$$

$G_t$ is a matrix containing the squared sums of the past gradients with regard to all $\theta_i$ along its diagonal. $\epsilon$ is a correction term which is utilized to avoid dividing by 0 and is generally insignificantly small ($\sim 10^{-8}$).
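A minimal Adagrad sketch, assuming a two-dimensional quadratic loss with very different curvatures per coordinate, shows how the division by $\sqrt{G_{t,ii} + \epsilon}$ gives each parameter its own effective step size; the loss and all constants are illustrative.

```python
import numpy as np

# Adagrad sketch on J(theta) = (a * theta_0**2 + b * theta_1**2) / 2,
# so grad = [a * theta_0, b * theta_1]; a, b and eta are illustrative.
a, b = 10.0, 0.1
eta, eps = 0.5, 1e-8
theta = np.array([1.0, 1.0])
G = np.zeros(2)   # diagonal of G_t: running sum of squared gradients

for t in range(100):
    grad = np.array([a * theta[0], b * theta[1]])
    G += grad ** 2                              # accumulate squared gradients
    theta = theta - eta / np.sqrt(G + eps) * grad

print(theta)  # both coordinates shrink toward 0 despite the curvature gap
```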

Due to the accumulation of the squared gradients in $G_t$, the effective learning rate $\frac{\eta}{\sqrt{G_{t,ii} + \epsilon}}$ gets smaller over time, finally leading to a vanishingly small rate, which causes the algorithm to obtain no new knowledge.

Figure 3: (1) Visualization of the ball analogy for Momentum, using $\gamma = 0.9$.

Figure 4: (1) Comparison of Momentum (blue) and Nesterov accelerated gradient (green). The brown arrow is the first prediction of the Nesterov accelerated gradient, before the gradient is calculated. The green arrow is the final result of the Nesterov accelerated gradient, now with the gradient taken into account.

## Literature

(1) Ruder, Sebastian. "An overview of gradient descent optimization algorithms." arXiv preprint arXiv:1609.04747 (2016).

(2) Duchi, John, Elad Hazan, and Yoram Singer. "Adaptive subgradient methods for online learning and stochastic optimization." Journal of Machine Learning Research 12.Jul (2011): 2121-2159.
