Multi Criterion Feature Selection Instance
Specifies a general feature selection scenario, including objective function and archive for feature selection algorithms to act upon. This class stores an ObjectiveFSelect object that encodes the black box objective function which an FSelector has to optimize. It allows the basic operations of querying the objective at feature subsets ($eval_batch()), storing the evaluations in the internal bbotk::Archive, and accessing the final result ($result).
Evaluations of feature subsets are performed in batches by calling mlr3::benchmark() internally. Before a batch is evaluated, the bbotk::Terminator is queried for the remaining budget. If the available budget is exhausted, an exception is raised, and no further evaluations can be performed from this point on.
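A minimal sketch of this budget check, assuming the bbotk "evals" terminator and the mlr3 sugar functions tsk(), lrn(), rsmp(), msrs() and trm(); with a budget of only two evaluations, a second batch raises an error:

library(mlr3)
library(mlr3fselect)
library(data.table)

inst = FSelectInstanceMultiCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.ce", "classif.acc")),
  terminator = trm("evals", n_evals = 2)
)

# first batch of two subsets fits into the budget
inst$eval_batch(data.table(
  Petal.Length = c(TRUE, FALSE), Petal.Width = c(FALSE, TRUE),
  Sepal.Length = c(TRUE, TRUE), Sepal.Width = c(FALSE, FALSE)
))

# budget exhausted: a further call raises an exception
try(inst$eval_batch(data.table(
  Petal.Length = TRUE, Petal.Width = TRUE,
  Sepal.Length = TRUE, Sepal.Width = TRUE
)))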
The FSelector is also supposed to store its final result, consisting of the selected feature subsets and associated estimated performance values, by calling the method instance$assign_result().
bbotk::OptimInstance
-> bbotk::OptimInstanceMultiCrit
-> FSelectInstanceMultiCrit
result_feature_set
(list() of character())
Feature sets for task subsetting.
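As a hedged sketch (assuming an instance inst on which a result has already been assigned, as in the examples below), the field can be handed directly to task subsetting:

# list of character vectors, one per Pareto-optimal point
sets = inst$result_feature_set

# keep only the features of the first selected subset
task$select(sets[[1]])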
new()
Creates a new instance of this R6 class.
FSelectInstanceMultiCrit$new(
  task,
  learner,
  resampling,
  measures,
  terminator,
  store_models = FALSE,
  check_values = TRUE,
  store_benchmark_result = TRUE
)
task
(mlr3::Task)
Task to operate on.
learner
(mlr3::Learner)
Learner to evaluate the feature subsets with.
resampling
(mlr3::Resampling)
Uninstantiated resamplings are instantiated during construction
so that all configurations are evaluated on the same data splits.
measures
(list of mlr3::Measure)
Measures to optimize. If NULL, mlr3's default measure is used.
terminator
(bbotk::Terminator)
Stop criterion of the feature selection.
store_models
(logical(1))
Store models in benchmark result?
check_values
(logical(1))
Check the parameters before the evaluation and the results for
validity?
store_benchmark_result
(logical(1))
Store benchmark result in archive?
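A short construction sketch, assuming the mlr3 sugar functions and the "sonar" toy task; it mainly illustrates where the logical flags are passed:

inst = FSelectInstanceMultiCrit$new(
  task = tsk("sonar"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("cv", folds = 3),
  measures = msrs(c("classif.ce", "classif.fpr")),
  terminator = trm("evals", n_evals = 20),
  store_models = FALSE,
  check_values = TRUE,
  store_benchmark_result = TRUE
)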
assign_result()
The FSelector object writes the best found feature subsets and estimated performance values here. For internal use.
FSelectInstanceMultiCrit$assign_result(xdt, ydt)
xdt
(data.table::data.table())
x values as data.table. Each row is one point. Contains the value in the search space of the FSelectInstanceMultiCrit object. Can contain additional columns for extra information.
ydt
(data.table::data.table())
Optimal outcomes, e.g. the Pareto front.
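A hedged sketch of a manual call, assuming an instance inst constructed as in the examples below (normally the FSelector performs this step internally); the xdt columns match the task's feature names and ydt holds one illustrative outcome column per measure:

xdt = data.table(
  Petal.Length = c(TRUE, TRUE),
  Petal.Width  = c(TRUE, FALSE),
  Sepal.Length = c(FALSE, FALSE),
  Sepal.Width  = c(FALSE, FALSE)
)
# illustrative performance values, one row per subset in xdt
ydt = data.table(
  classif.ce  = c(0.06, 0.08),
  classif.acc = c(0.94, 0.92)
)
inst$assign_result(xdt, ydt)
inst$result_feature_set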
clone()
The objects of this class are cloneable with this method.
FSelectInstanceMultiCrit$clone(deep = FALSE)
deep
Whether to make a deep clone.
library(mlr3)
library(data.table)

# Objects required to define the performance evaluator
task = tsk("iris")
measures = msrs(c("classif.ce", "classif.acc"))
learner = lrn("classif.rpart")
resampling = rsmp("cv")
terminator = trm("evals", n_evals = 8)

inst = FSelectInstanceMultiCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measures = measures,
  terminator = terminator
)

# Try some feature subsets
xdt = data.table(
  Petal.Length = c(TRUE, FALSE),
  Petal.Width = c(FALSE, TRUE),
  Sepal.Length = c(TRUE, FALSE),
  Sepal.Width = c(FALSE, TRUE)
)
inst$eval_batch(xdt)

# Get archive data
as.data.table(inst$archive)
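As a follow-up, the instance can also be handed to an optimizer instead of evaluating batches by hand; this sketch assumes the fs() sugar function and the "random_search" FSelector from mlr3fselect:

library(mlr3fselect)

# The FSelector evaluates batches until the terminator stops it,
# then writes the result back via $assign_result()
fselector = fs("random_search", batch_size = 4)
fselector$optimize(inst)

# Pareto-optimal feature subsets and estimated performance values
inst$result

# Feature sets ready for task subsetting
inst$result_feature_set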