LASSO Kullback-Leibler divergence based regression
LASSO Kullback-Leibler divergence based regression.
lasso.klcompreg(y, x, alpha = 1, lambda = NULL, nlambda = 100, type = "grouped", xnew = NULL)
y |
A numerical matrix with compositional data. Zero values are allowed. |
x |
A numerical matrix containing the predictor variables. |
alpha |
The elastic net mixing parameter, with 0 ≤ α ≤ 1. The penalty is a weighted combination of the ridge and LASSO penalties, (1 - α)/2 ||β||₂² + α ||β||₁. When α = 1 the LASSO is applied, while α = 0 yields ridge regression. |
lambda |
This information is taken from the package glmnet. A user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warm starts for speed, and it is often faster to fit a whole path than to compute a single fit. |
nlambda |
This information is taken from the package glmnet. The number of lambda values, default is 100. |
type |
This information is taken from the package glmnet. If "grouped", then a grouped LASSO penalty is used on the multinomial coefficients of a variable. This ensures they are all in or out together. The default in our case is "grouped". |
xnew |
A matrix with new predictor values at which fitted values are to be computed; otherwise leave it NULL. |
The function uses the glmnet package to perform LASSO penalised regression. For more details see the function in that package.
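As a minimal sketch (not the package's implementation), the quantity that gives the method its name is the Kullback-Leibler divergence between the observed compositions y and the fitted compositions m, namely Σᵢⱼ yᵢⱼ log(yᵢⱼ / mᵢⱼ); the penalised regression minimises this plus the elastic net penalty. The function name kl.obj below is hypothetical, for illustration only:

```r
# Kullback-Leibler divergence between observed compositions y and fitted
# compositions m (rows sum to 1). Terms with y_ij = 0 contribute 0, which
# is why zero values are allowed in y.
kl.obj <- function(y, m) {
  ok <- y > 0
  sum( y[ok] * log( y[ok] / m[ok] ) )
}

# Toy compositions: each row of y and m sums to 1.
y <- matrix(c(0.2, 0.3, 0.5,
              0.0, 0.4, 0.6), nrow = 2, byrow = TRUE)
m <- matrix(c(0.25, 0.25, 0.50,
              0.10, 0.30, 0.60), nrow = 2, byrow = TRUE)
kl.obj(y, m)  # non-negative; equals 0 only when m matches y
```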
A list including:
mod |
We decided to keep the same list that is returned by glmnet. So, see the function in that package for more information. |
est |
If you supply a matrix in the "xnew" argument this will return an array of matrices with the fitted compositional values, one matrix for each value of λ. |
Michail Tsagris and Abdulaziz Alenazi.
R implementation and documentation: Michail Tsagris mtsagris@uoc.gr and Abdulaziz Alenazi a.alenazi@nbu.edu.sa.
Aitchison J. (1986). The statistical analysis of compositional data. Chapman & Hall.
Friedman, J., Hastie, T. and Tibshirani, R. (2010) Regularization Paths for Generalized Linear Models via Coordinate Descent. Journal of Statistical Software, Vol. 33(1), 1-22.
y <- as.matrix(iris[, 1:4])
y <- y / rowSums(y)
x <- matrix( rnorm(150 * 30), ncol = 30 )
a <- lasso.klcompreg(y, x)