Group normalization
Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization.
nn_group_norm(num_groups, num_channels, eps = 1e-05, affine = TRUE)
num_groups (int): number of groups to separate the channels into
num_channels (int): number of channels expected in input
eps: a value added to the denominator for numerical stability. Default: 1e-5
affine: a boolean value that when set to TRUE, this module has learnable per-channel affine parameters initialized to ones (for weights) and zeros (for biases). Default: TRUE
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The input channels are separated into num_groups groups, each containing num_channels / num_groups channels. The mean and standard deviation are calculated separately over each group. γ and β are learnable per-channel affine transform parameter vectors of size num_channels if affine is TRUE.
The standard deviation is calculated via the biased estimator, equivalent to torch_var(input, unbiased = FALSE).
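To make the formula concrete, the following sketch (not part of the original examples) reproduces the module's output by hand; affine = FALSE is assumed so that γ and β drop out, and the default eps of 1e-5 is used:

library(torch)

x <- torch_randn(20, 6, 10, 10)
m <- nn_group_norm(num_groups = 3, num_channels = 6, affine = FALSE)

# Collapse each group (6 / 3 = 2 channels plus all spatial positions)
# into one dimension, so mean and variance are taken per (sample, group)
xg <- x$reshape(c(20, 3, -1))
mu <- xg$mean(dim = 3, keepdim = TRUE)
v  <- xg$var(dim = 3, unbiased = FALSE, keepdim = TRUE)  # biased estimator
y  <- ((xg - mu) / torch_sqrt(v + 1e-5))$reshape(c(20, 6, 10, 10))

torch_allclose(y, m(x), atol = 1e-5)  # TRUE up to floating-point tolerance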
Input: (N, C, *) where C = num_channels
Output: (N, C, *) (same shape as input)
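For instance (a quick check, not from the original examples), a 1-d input of shape (N, C, L) is handled the same way as the 4-d inputs used below:

m <- nn_group_norm(3, 6)
x <- torch_randn(4, 6, 25)
dim(m(x))  # 4 6 25: same shape as the input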
This layer uses statistics computed from input data in both training and evaluation modes.
if (torch_is_installed()) {
  input <- torch_randn(20, 6, 10, 10)
  # Separate 6 channels into 3 groups
  m <- nn_group_norm(3, 6)
  # Separate 6 channels into 6 groups (equivalent to [nn_instance_norm])
  m <- nn_group_norm(6, 6)
  # Put all 6 channels into a single group (equivalent to [nn_layer_norm])
  m <- nn_group_norm(1, 6)
  # Activating the module
  output <- m(input)
}
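Because the statistics always come from the current input, switching the module to evaluation mode leaves the output unchanged (unlike batch norm, which switches to running statistics); a small sketch illustrating this:

m <- nn_group_norm(3, 6)
x <- torch_randn(20, 6, 10, 10)
out_train <- m(x)
m$eval()                             # no running statistics to switch to
out_eval <- m(x)
torch_allclose(out_train, out_eval)  # TRUE: same behavior in both modes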