
nn_multilabel_soft_margin_loss

Multi-label soft margin loss


Description

Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C).

Usage

nn_multilabel_soft_margin_loss(weight = NULL, reduction = "mean")

Arguments

weight

(Tensor, optional): a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if it contained all ones.

reduction

(string, optional): Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated; in the meantime, specifying either of those two args will override reduction. Default: 'mean'
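
For example, a minimal usage sketch (the tensor names input and target below are illustrative, not part of the API; targets are expected as 0/1 floats):

library(torch)

# create the criterion with the default "mean" reduction
loss_fn <- nn_multilabel_soft_margin_loss()

# N = 8 samples, C = 4 classes
input  <- torch_randn(8, 4)                                   # raw scores (logits)
target <- (torch_rand(8, 4) > 0.5)$to(dtype = torch_float())  # 0/1 labels

loss_fn(input, target)  # scalar loss tensor

# per-class rescaling requires a weight Tensor of size C
weighted_fn <- nn_multilabel_soft_margin_loss(weight = torch_ones(4))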

Details

For each sample in the minibatch:

loss(x, y) = - \frac{1}{C} * \sum_i y[i] * \log\left( (1 + \exp(-x[i]))^{-1} \right) + (1 - y[i]) * \log\left( \frac{\exp(-x[i])}{1 + \exp(-x[i])} \right)

where i \in \left\{ 0, \; \cdots, \; \mbox{x.nElement}() - 1 \right\}, y[i] \in \left\{ 0, \; 1 \right\}.
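
Since \log((1 + \exp(-x))^{-1}) = \log(\sigma(x)) and \log(\exp(-x)/(1 + \exp(-x))) = \log(\sigma(-x)), where \sigma is the sigmoid, the 'mean'-reduced loss can be checked by hand. A sketch of that check (tensor names are illustrative):

library(torch)

x <- torch_randn(8, 4)
y <- (torch_rand(8, 4) > 0.5)$to(dtype = torch_float())

builtin <- nn_multilabel_soft_margin_loss()(x, y)

# averaging over classes (the 1/C factor) and then over the batch
# (reduction = "mean") amounts to a mean over all N * C elements
manual <- -(y * torch_log(torch_sigmoid(x)) +
            (1 - y) * torch_log(torch_sigmoid(-x)))$mean()

builtin
manual  # should agree with builtin up to floating-point error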

Shape

  • Input: (N, C) where N is the batch size and C is the number of classes.

  • Target: (N, C), label targets padded by -1, ensuring the same shape as the input.

  • Output: scalar. If reduction is 'none', then (N).
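
A short sketch of the shape behaviour under each reduction (names illustrative):

library(torch)

input  <- torch_randn(8, 4)
target <- (torch_rand(8, 4) > 0.5)$to(dtype = torch_float())

nn_multilabel_soft_margin_loss()(input, target)$shape
# empty shape: a 0-dim scalar under the default "mean" reduction

nn_multilabel_soft_margin_loss(reduction = "none")(input, target)$shape
# 8: one loss value per sample, i.e. shape (N)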


