MSE loss
Creates a criterion that measures the mean squared error (squared L2 norm) between
each element in the input x and target y.
nn_mse_loss(reduction = "mean")

reduction: (string, optional) Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum
of the output will be divided by the number of elements in the output; 'sum':
the output will be summed.

The unreduced (i.e. with reduction set to 'none') loss can be described as:
\ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad l_n = \left( x_n - y_n \right)^2,
where N is the batch size. If reduction is not 'none' (default 'mean'), then:
\ell(x, y) = \begin{cases} \mbox{mean}(L), & \mbox{if reduction} = \mbox{'mean';}\\ \mbox{sum}(L), & \mbox{if reduction} = \mbox{'sum'.} \end{cases}
x and y are tensors of arbitrary shapes with a total of n elements each.
The mean operation still operates over all the elements, and divides by n.
The division by n can be avoided if one sets reduction = 'sum'.
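A brief sketch (not part of the original reference; it assumes the torch package is attached) showing that the 'mean' and 'sum' reductions are simply the mean and sum of the unreduced element-wise loss:

library(torch)

x <- torch_randn(4, 3)
y <- torch_randn(4, 3)

# reduction = 'none': one squared difference per element, same shape as x
l_none <- nn_mse_loss(reduction = "none")(x, y)

# reduction = 'mean': sum of the squared differences divided by n (= 12 here)
l_mean <- nn_mse_loss(reduction = "mean")(x, y)

# reduction = 'sum': the division by n is skipped
l_sum <- nn_mse_loss(reduction = "sum")(x, y)

torch_allclose(l_mean, torch_mean(l_none))  # TRUE
torch_allclose(l_sum, torch_sum(l_none))    # TRUE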
Input: (N, *) where * means any number of additional dimensions
Target: (N, *), same shape as the input
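As a quick illustration (a sketch, not part of the original documentation, assuming the torch package is attached), any number of trailing dimensions is accepted, and with reduction = 'none' the output keeps the input's shape:

library(torch)

input  <- torch_randn(8, 3, 16, 16)   # (N, *) with N = 8 and * = (3, 16, 16)
target <- torch_randn(8, 3, 16, 16)   # same shape as the input

out <- nn_mse_loss(reduction = "none")(input, target)
out$shape                              # 8 3 16 16, same shape as the input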
if (torch_is_installed()) {
  loss <- nn_mse_loss()
  input <- torch_randn(3, 5, requires_grad = TRUE)
  target <- torch_randn(3, 5)
  # forward pass: mean squared error over all 15 elements
  output <- loss(input, target)
  # backward pass: gradients accumulate in input$grad
  output$backward()
}