
optim_adagrad

Adagrad optimizer


Description

Adagrad is an optimizer that works especially well on sparse data. It adapts the learning rate for each parameter individually, dividing the base learning rate by the square root of the accumulated sum of squared gradients for that parameter, so rarely occurring features receive larger updates. Its main drawback is that the effective learning rate can shrink too quickly, at which point the model can no longer learn.

Usage

optim_adagrad(
  params,
  lr = 0.01,
  lr_decay = 0,
  weight_decay = 0,
  initial_accumulator_value = 0,
  eps = 1e-10
)
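
For orientation, here is a minimal sketch of how this optimizer is typically constructed and stepped inside a training loop. The toy data, linear model, and loss below are illustrative assumptions, not part of this help page.

library(torch)

# Toy data and a small linear model (illustrative only)
x <- torch_randn(100, 3)
y <- torch_randn(100, 1)
model <- nn_linear(3, 1)

# Construct the optimizer over the model's parameters
opt <- optim_adagrad(model$parameters, lr = 0.01)

for (epoch in 1:20) {
  opt$zero_grad()                      # clear gradients from the previous step
  loss <- nnf_mse_loss(model(x), y)    # forward pass and loss
  loss$backward()                      # backpropagate
  opt$step()                           # Adagrad parameter update
}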

Arguments

params

(iterable): list of parameters to optimize or list of parameter groups

lr

(float, optional): learning rate (default: 1e-2)

lr_decay

(float, optional): learning rate decay (default: 0)

weight_decay

(float, optional): weight decay (L2 penalty) (default: 0)

initial_accumulator_value

the initial value for the accumulator (default: 0)

eps

(float, optional): term added to the denominator to improve numerical stability (default: 1e-10)
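
All of the arguments above can be set at construction time; the following is a hedged sketch in which the hyperparameter values are arbitrary and `model` is an assumed placeholder for an existing nn_module.

# `model` is an assumed placeholder for an existing nn_module
opt <- optim_adagrad(
  params = model$parameters,
  lr = 0.05,                       # base learning rate
  lr_decay = 1e-4,                 # learning rate decay
  weight_decay = 1e-5,             # L2 penalty
  initial_accumulator_value = 0.1, # starting value of the squared-gradient sum
  eps = 1e-10                      # numerical-stability term in the denominator
)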

Note

Update rule:

\theta_{t+1} = \theta_{t} - \frac{\eta}{\sqrt{G_{t} + \epsilon}} \odot g_{t}

The update rule above and some of the remarks are quoted from An overview of gradient descent optimization algorithms by Sebastian Ruder.
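
To make the notation concrete, here is a hand-rolled sketch of the update rule on plain R vectors. It is illustrative only, not the package's internal implementation, and it omits lr_decay and weight_decay.

# G accumulates the element-wise squared gradients; eps avoids division by zero
adagrad_step <- function(theta, grad, G, lr = 0.01, eps = 1e-10) {
  G <- G + grad^2
  theta <- theta - lr / sqrt(G + eps) * grad
  list(theta = theta, G = G)
}

# Apply one update step starting from an empty accumulator
s <- adagrad_step(theta = c(0.5, -0.2), grad = c(0.1, 2.0), G = c(0, 0))
s$theta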


