
nn_l1_loss

L1 loss


Description

Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y.

Usage

nn_l1_loss(reduction = "mean")

Arguments

reduction

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction is applied; 'mean': the sum of the output is divided by the number of elements in the output; 'sum': the output is summed. Default: 'mean'.
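The three reduction modes can be compared on a small hand-checkable tensor. This is a minimal sketch assuming the torch package is installed; the element-wise absolute errors here are 1, 0, and 2:

```r
library(torch)

x <- torch_tensor(c(1, 2, 3))
y <- torch_tensor(c(2, 2, 5))

# 'none': keeps the element-wise absolute errors |x_n - y_n|
nn_l1_loss(reduction = "none")(x, y)  # tensor 1, 0, 2

# 'mean' (the default): averages the absolute errors -> (1 + 0 + 2) / 3 = 1
nn_l1_loss(reduction = "mean")(x, y)

# 'sum': totals the absolute errors -> 1 + 0 + 2 = 3
nn_l1_loss(reduction = "sum")(x, y)
```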

Details

The unreduced (i.e. with reduction set to 'none') loss can be described as:

\ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad l_n = \left| x_n - y_n \right|,

where N is the batch size. If reduction is not 'none' (default 'mean'), then:

\ell(x, y) = \begin{cases} \mbox{mean}(L), & \mbox{if reduction} = \mbox{'mean';} \\ \mbox{sum}(L), & \mbox{if reduction} = \mbox{'sum'.} \end{cases}

x and y are tensors of arbitrary shapes with a total of n elements each.

With reduction = 'mean', the sum still runs over all n elements and is then divided by n; setting reduction = 'sum' skips the division by n.
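The relationship between the two reduced forms can be checked numerically: 'mean' is exactly 'sum' divided by the total element count. A small sketch, assuming the torch package is installed:

```r
library(torch)

x <- torch_randn(4, 6)
y <- torch_randn(4, 6)
n <- x$numel()  # total number of elements (4 * 6 = 24)

l_sum  <- nn_l1_loss(reduction = "sum")(x, y)
l_mean <- nn_l1_loss(reduction = "mean")(x, y)

# 'mean' equals 'sum' divided by the total element count n
all.equal(as.numeric(l_mean), as.numeric(l_sum) / n)  # TRUE
```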

Shape

  • Input: (N, *), where * means any number of additional dimensions

  • Target: (N, *), same shape as the input

  • Output: scalar. If reduction is 'none', then (N, *), same shape as the input
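The shape behavior above can be observed directly: the reduced loss is a 0-dimensional (scalar) tensor, while reduction = 'none' preserves the input shape. A sketch, assuming the torch package is installed:

```r
library(torch)

input  <- torch_randn(3, 5)
target <- torch_randn(3, 5)

# default 'mean' reduction: a 0-dimensional scalar tensor
out_scalar <- nn_l1_loss()(input, target)
out_scalar$dim()  # 0

# 'none': output has the same shape as the input
out_full <- nn_l1_loss(reduction = "none")(input, target)
out_full$shape  # 3 5
```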

Examples

if (torch_is_installed()) {
  loss <- nn_l1_loss()
  input <- torch_randn(3, 5, requires_grad = TRUE)
  target <- torch_randn(3, 5)
  output <- loss(input, target)
  output$backward()
}

torch

Tensors and Neural Networks with 'GPU' Acceleration

v0.3.0
MIT + file LICENSE
Authors
Daniel Falbel [aut, cre, cph], Javier Luraschi [aut], Dmitriy Selivanov [ctb], Athos Damiani [ctb], Christophe Regouby [ctb], Krzysztof Joachimiak [ctb], RStudio [cph]
Initial release
