
optim_rmsprop

RMSprop optimizer


Description

Implements the RMSprop optimizer, proposed by G. Hinton in his Coursera course Neural Networks for Machine Learning.

Usage

optim_rmsprop(
  params,
  lr = 0.01,
  alpha = 0.99,
  eps = 1e-08,
  weight_decay = 0,
  momentum = 0,
  centered = FALSE
)

Arguments

params

(iterable): iterable of parameters to optimize or list defining parameter groups

lr

(float, optional): learning rate (default: 1e-2)

alpha

(float, optional): smoothing constant (default: 0.99)

eps

(float, optional): term added to the denominator to improve numerical stability (default: 1e-8)

weight_decay

(float, optional): weight decay (L2 penalty) (default: 0)

momentum

(float, optional): momentum factor (default: 0)

centered

(bool, optional): if TRUE, compute the centered RMSprop, in which the gradient is normalized by an estimate of its variance (default: FALSE)
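
A minimal training-loop sketch showing the optimizer in use (the toy data and single-layer model are illustrative assumptions, not part of the original documentation):

library(torch)

# Toy regression data: y = 2x + 1 plus noise
x <- torch_randn(100, 1)
y <- 2 * x + 1 + 0.1 * torch_randn(100, 1)

model <- nn_linear(1, 1)
optimizer <- optim_rmsprop(model$parameters, lr = 0.01, alpha = 0.99)

for (epoch in 1:200) {
  optimizer$zero_grad()             # clear gradients from the previous step
  loss <- nnf_mse_loss(model(x), y) # forward pass and loss
  loss$backward()                   # backpropagate
  optimizer$step()                  # apply the RMSprop update
}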

Note

The centered version first appears in Generating Sequences With Recurrent Neural Networks. The implementation here takes the square root of the gradient average before adding epsilon (note that TensorFlow interchanges these two operations). The effective learning rate is thus α / (√v + ε), where α is the scheduled learning rate and v is the weighted moving average of the squared gradient.

Update rule:

\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t} + \epsilon} \, g_t
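
To make the update rule and the epsilon placement concrete, here is a plain-R sketch of a single RMSprop step for one scalar parameter (the function rmsprop_step is hypothetical, written only to mirror the formula above):

# v_t         = alpha * v_{t-1} + (1 - alpha) * g_t^2
# theta_{t+1} = theta_t - eta / (sqrt(v_t) + eps) * g_t
rmsprop_step <- function(theta, g, v, eta = 0.01, alpha = 0.99, eps = 1e-8) {
  v <- alpha * v + (1 - alpha) * g^2          # weighted moving average of squared gradients
  theta <- theta - eta / (sqrt(v) + eps) * g  # square root taken before adding epsilon
  list(theta = theta, v = v)
}

# One step starting from theta = 1 with gradient g = 0.5 and v = 0
rmsprop_step(theta = 1, g = 0.5, v = 0)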


torch

Tensors and Neural Networks with 'GPU' Acceleration

Version 0.3.0. License: MIT + file LICENSE.

Authors: Daniel Falbel [aut, cre, cph], Javier Luraschi [aut], Dmitriy Selivanov [ctb], Athos Damiani [ctb], Christophe Regouby [ctb], Krzysztof Joachimiak [ctb], RStudio [cph]
