
FSelectInstanceSingleCrit

Single Criterion Feature Selection Instance


Description

Specifies a general feature selection scenario, including objective function and archive for feature selection algorithms to act upon. This class stores an ObjectiveFSelect object that encodes the black box objective function which an FSelector has to optimize. It allows the basic operations of querying the objective at feature subsets ($eval_batch()), storing the evaluations in the internal bbotk::Archive and accessing the final result ($result).

Evaluations of feature subsets are performed in batches by calling mlr3::benchmark() internally. Before a batch is evaluated, the bbotk::Terminator is queried for the remaining budget. If the available budget is exhausted, an exception is raised, and no further evaluations can be performed from this point on.

The FSelector is also supposed to store its final result, consisting of a selected feature subset and associated estimated performance values, by calling the method instance$assign_result().
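
A minimal sketch of the intended workflow, assuming the sugar functions tsk(), lrn(), rsmp(), msr(), trm() and fs() from mlr3 and mlr3fselect are attached; random search is just one possible FSelector:

library(mlr3)
library(mlr3fselect)

instance = FSelectInstanceSingleCrit$new(
  task = tsk("pima"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 10)
)

# The FSelector queries the objective via $eval_batch() and, once the
# terminator stops it, writes the best subset via $assign_result()
fselector = fs("random_search")
fselector$optimize(instance)

instance$result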

Super classes

bbotk::OptimInstance -> bbotk::OptimInstanceSingleCrit -> FSelectInstanceSingleCrit

Active bindings

result_feature_set

(character())
Feature set for task subsetting.
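
An illustrative sketch (assuming instance is a terminated FSelectInstanceSingleCrit and task and learner are the objects it was constructed with):

# Keep only the selected features and refit the final model
task$select(instance$result_feature_set)
learner$train(task)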

Methods

Public methods


Method new()

Creates a new instance of this R6 class.

Usage
FSelectInstanceSingleCrit$new(
  task,
  learner,
  resampling,
  measure,
  terminator,
  store_models = FALSE,
  check_values = TRUE,
  store_benchmark_result = TRUE
)
Arguments
task

(mlr3::Task)
Task to operate on.

learner

(mlr3::Learner)
Learner used to evaluate the feature subsets.

resampling

(mlr3::Resampling)
Resampling used to evaluate the performance of the feature subsets. Uninstantiated resamplings are instantiated during construction so that all feature subsets are evaluated on the same data splits.

measure

(mlr3::Measure)
Measure to optimize.

terminator

(bbotk::Terminator)
Stops the feature selection once its budget (e.g. a number of evaluations or a runtime limit) is exhausted.

store_models

(logical(1))
Store models in benchmark result?

check_values

(logical(1))
Should the parameters be checked for validity before evaluation, and the results afterwards?

store_benchmark_result

(logical(1))
Store benchmark result in archive?


Method assign_result()

The FSelector writes the best found feature subset and estimated performance value here. For internal use.

Usage
FSelectInstanceSingleCrit$assign_result(xdt, y)
Arguments
xdt

(data.table::data.table())
x values as data.table. Each row is one point. Contains the value in the search space of the FSelectInstanceSingleCrit object. Can contain additional columns for extra information.

y

(numeric(1))
Optimal outcome.
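
Although assign_result() is called by the FSelector internally, the expected shapes can be sketched as follows (assuming inst is the instance from the Examples below; the performance value is purely hypothetical):

library(data.table)

# One row of logical columns, one per feature of the task
xdt = data.table(
  Petal.Length = TRUE, Petal.Width = TRUE,
  Sepal.Length = FALSE, Sepal.Width = FALSE
)

# Scalar outcome, named after the optimized measure (hypothetical value)
inst$assign_result(xdt, y = c(classif.ce = 0.04))
inst$result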


Method clone()

The objects of this class are cloneable with this method.

Usage
FSelectInstanceSingleCrit$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

Examples

library(mlr3)
library(mlr3fselect)
library(data.table)

# Objects required to define the objective function
task = tsk("iris")
measure = msr("classif.ce")
learner = lrn("classif.rpart")
resampling = rsmp("cv")

# Create instance
terminator = trm("evals", n_evals = 8)
inst = FSelectInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  terminator = terminator
)

# Try some feature subsets
xdt = data.table(
  Petal.Length = c(TRUE, FALSE),
  Petal.Width = c(FALSE, TRUE),
  Sepal.Length = c(TRUE, FALSE),
  Sepal.Width = c(FALSE, TRUE)
)

inst$eval_batch(xdt)

# Get archive data
as.data.table(inst$archive)
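
As a follow-up sketch (assuming the two evaluations above fit within the budget of 8), the best subset evaluated so far can be inspected via the archive:

# Feature subset with the lowest classif.ce among the evaluated ones
inst$archive$best()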

