lr_step

Step learning rate decay


Description

Decays the learning rate of each parameter group by gamma every step_size epochs. Note that this decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch = -1, the schedule starts from the optimizer's initial learning rate.
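The schedule has a simple closed form: after epoch e (0-based), the learning rate is the initial rate multiplied by gamma^(e %/% step_size). A minimal sketch in plain R, assuming an initial rate of 0.05 as in the example below:

# Closed-form step decay with step_size = 30, gamma = 0.1
epochs <- 0:99
lr <- 0.05 * 0.1 ^ (epochs %/% 30)
lr[c(30, 31, 61)]  # epochs 29, 30, 60 -> 0.05 0.005 5e-04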

Usage

lr_step(optimizer, step_size, gamma = 0.1, last_epoch = -1)

Arguments

optimizer

(Optimizer): Wrapped optimizer.

step_size

(int): Period of learning rate decay.

gamma

(float): Multiplicative factor of learning rate decay. Default: 0.1.

last_epoch

(int): The index of last epoch. Default: -1.

Examples

if (torch_is_installed()) {
## Not run: 
# Assuming optimizer uses lr = 0.05 for all groups
# lr = 0.05     if epoch < 30
# lr = 0.005    if 30 <= epoch < 60
# lr = 0.0005   if 60 <= epoch < 90
# ...
scheduler <- lr_step(optimizer, step_size = 30, gamma = 0.1)
for (epoch in 1:100) {
  train(...)     # placeholder for your training step
  validate(...)  # placeholder for your validation step
  scheduler$step()
}
}

## End(Not run)

}
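
For a self-contained version that actually runs, the sketch below builds a throwaway parameter and an SGD optimizer, then steps the scheduler and reads the current rate back from the optimizer. Reading the rate via optimizer$param_groups[[1]]$lr assumes the package mirrors PyTorch's param-group layout:

library(torch)

p <- torch_tensor(1, requires_grad = TRUE)  # throwaway parameter
optimizer <- optim_sgd(list(p), lr = 0.05)
scheduler <- lr_step(optimizer, step_size = 30, gamma = 0.1)

for (epoch in 1:60) {
  # (training and validation would go here)
  scheduler$step()
  if (epoch %in% c(29, 30, 60)) {
    # assumed layout: param_groups is a list of groups, each with an lr field
    cat("epoch", epoch, "lr", optimizer$param_groups[[1]]$lr, "\n")
  }
}
# Expected (mirroring PyTorch's StepLR): 0.05 at epoch 29,
# 0.005 from epoch 30, 0.0005 at epoch 60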

torch

Tensors and Neural Networks with 'GPU' Acceleration

v0.3.0, MIT + file LICENSE
Authors: Daniel Falbel [aut, cre, cph], Javier Luraschi [aut], Dmitriy Selivanov [ctb], Athos Damiani [ctb], Christophe Regouby [ctb], Krzysztof Joachimiak [ctb], RStudio [cph]
