RMSprop optimizer
Implements the RMSprop algorithm, proposed by G. Hinton in his Coursera course "Neural Networks for Machine Learning".
optim_rmsprop(
  params,
  lr = 0.01,
  alpha = 0.99,
  eps = 1e-08,
  weight_decay = 0,
  momentum = 0,
  centered = FALSE
)
Arguments:

params: (iterable) iterable of parameters to optimize or list defining parameter groups
lr: (float, optional) learning rate (default: 1e-2)
alpha: (float, optional) smoothing constant (default: 0.99)
eps: (float, optional) term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay: (float, optional) weight decay penalty (default: 0)
momentum: (float, optional) momentum factor (default: 0)
centered: (bool, optional) if TRUE, compute the centered RMSprop, in which the gradient is normalized by an estimate of its variance (default: FALSE)
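A minimal usage sketch with the torch package; the toy model and data below are illustrative assumptions, not part of this reference:

library(torch)

# toy regression data: y = 2x + 1 plus noise (illustrative)
x <- torch_randn(100, 1)
y <- 2 * x + 1 + 0.1 * torch_randn(100, 1)

model <- nn_linear(1, 1)
optimizer <- optim_rmsprop(model$parameters, lr = 0.01)

for (epoch in 1:100) {
  optimizer$zero_grad()              # reset accumulated gradients
  loss <- nnf_mse_loss(model(x), y)
  loss$backward()                    # backpropagate
  optimizer$step()                   # apply the RMSprop update
}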
The centered version first appears in Generating Sequences With Recurrent Neural Networks. The implementation here takes the square root of the gradient average before adding epsilon (note that TensorFlow interchanges these two operations). The effective learning rate is thus η/(√{v} + ε), where η is the scheduled learning rate and v is the weighted moving average of the squared gradient.
Update rule:
E[g^2]_{t} = α * E[g^2]_{t-1} + (1 - α) * g_{t}^2
θ_{t+1} = θ_{t} - η / (√{E[g^2]_{t}} + ε) * g_{t}
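The update can be sketched directly in base R; this is a minimal sketch of the plain variant (no momentum, not centered), with illustrative function and variable names:

# one RMSprop step; avg_sq carries the running average E[g^2]
rmsprop_step <- function(theta, grad, avg_sq, lr = 0.01, alpha = 0.99, eps = 1e-8) {
  avg_sq <- alpha * avg_sq + (1 - alpha) * grad^2   # E[g^2]_t
  # square root taken before adding eps, as noted above
  theta <- theta - lr / (sqrt(avg_sq) + eps) * grad
  list(theta = theta, avg_sq = avg_sq)
}

# usage: minimize f(theta) = theta^2 starting from theta = 5
theta <- 5; avg_sq <- 0
for (t in 1:500) {
  s <- rmsprop_step(theta, 2 * theta, avg_sq)   # gradient of theta^2 is 2 * theta
  theta <- s$theta; avg_sq <- s$avg_sq
}
theta   # approaches 0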