Build a PyTorch Multilayer Perceptron
Utility function to build an MLP with a choice of activation function and weight initialization, and with optional dropout and batch normalization.
Usage

build_pytorch_net(
  n_in,
  n_out,
  nodes = c(32, 32),
  activation = "relu",
  act_pars = list(),
  dropout = 0.1,
  bias = TRUE,
  batch_norm = TRUE,
  batch_pars = list(eps = 1e-05, momentum = 0.1, affine = TRUE),
  init = "uniform",
  init_pars = list()
)
Arguments

n_in: (integer(1)) Number of input features.

n_out: (integer(1)) Number of output features (targets).

nodes: (numeric()) Number of nodes per hidden layer; the length of the vector gives the number of hidden layers. Default is c(32, 32).

activation: (character(1)) Name of the activation function, e.g. "relu" (default), "selu", "elu".

act_pars: (list()) Named parameters passed to the activation function.

dropout: (numeric(1)) Dropout probability applied after each hidden layer. Default is 0.1.

bias: (logical(1)) If TRUE (default), bias terms are included in the linear layers.

batch_norm: (logical(1)) If TRUE (default), batch normalization is applied after each hidden layer.

batch_pars: (list()) Parameters passed to the batch normalization layers (eps, momentum, affine).

init: (character(1)) Name of the weight initialization method, e.g. "uniform" (default), "kaiming_uniform".

init_pars: (list()) Named parameters passed to the weight initializer.
Details

This function is a helper for R users with limited Python experience. It is currently restricted to simple MLPs; more advanced architectures must be created manually with reticulate, as sketched below.
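For example, a non-standard network can be assembled directly from torch layers. The sketch below is illustrative only and assumes the Python torch package is importable in the active reticulate environment; none of these objects are created by build_pytorch_net.

library(reticulate)
torch <- import("torch")   # Python torch, assumed to be installed
nn <- torch$nn

# manually chain layers into a sequential module
net <- nn$Sequential(
  nn$Linear(4L, 16L),
  nn$ReLU(),
  nn$Linear(16L, 16L),
  nn$Tanh(),
  nn$Linear(16L, 2L)
)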
Examples

if (requireNamespaces("reticulate")) {
  build_pytorch_net(4L, 2L, nodes = c(32, 64, 32), activation = "selu")

  # pass parameters to activation and initializer functions
  build_pytorch_net(4L, 2L, activation = "elu", act_pars = list(alpha = 0.1),
                    init = "kaiming_uniform", init_pars = list(mode = "fan_out"))
}
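The returned object is a Python torch module. A minimal sketch of using it, assuming it behaves like any other torch nn.Module (this is not part of the package's documented examples):

torch <- reticulate::import("torch")
net <- build_pytorch_net(4L, 2L)

# forward pass on a batch of 10 observations with 4 features each;
# the output should have shape (10, 2)
x <- torch$randn(10L, 4L)
out <- net(x)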