General Interface for CoBC model
Co-Training by Committee (CoBC) is a semi-supervised learning algorithm with a co-training style. The algorithm trains N classifiers, each with the learning scheme defined in the learner argument, using a reduced set of labeled examples. In each iteration, an unlabeled example is labeled for a classifier if the most confident classifications assigned by the other N-1 classifiers agree on the proposed label. The unlabeled candidates are selected randomly from a pool of size u. In regression tasks, the final prediction is the average of the estimates of the N regressors.
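A minimal sketch of the agreement test described above, using fake probability matrices in place of real committee predictions (illustrative only; this is not SSLR's internal code):

# Three committee members each return class probabilities for a pool of
# u unlabeled candidates; a candidate is labeled for classifier k when
# the other N-1 members agree on its most confident class.
set.seed(1)
N <- 3; u <- 5; classes <- c("A", "B")

# Fake probability predictions: one u x 2 matrix per committee member.
probs <- lapply(seq_len(N), function(k) {
  p <- matrix(runif(u * 2), nrow = u)
  p / rowSums(p)  # normalize rows so they behave like probabilities
})

# For classifier k = 1, check agreement among the other N-1 members.
k <- 1
votes <- sapply(probs[-k], function(p) classes[max.col(p)])
agree <- apply(votes, 1, function(v) length(unique(v)) == 1)

# Candidates whose companions agree would be added, with the agreed
# label, to classifier k's enlarged labeled set.
data.frame(candidate = seq_len(u), agreed = agree,
           label = ifelse(agree, votes[, 1], NA))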
coBC(learner, N = 3, perc.full = 0.7, u = 100, max.iter = 50)
learner | Model from the parsnip package for training a supervised base classifier using a set of instances. The model must provide probability predictions in classification mode (see the sketch after this table). |
N | The number of classifiers used as committee members. All these classifiers are trained using the learner argument. Default is 3. |
perc.full | A number between 0 and 1. If the percentage of new labeled examples reaches this value, the self-labeling process is stopped. Default is 0.7. |
u | Number of unlabeled instances in the pool. Default is 100. |
max.iter | Maximum number of iterations to execute in the self-labeling process. Default is 50. |
For regression tasks, the self-labeling process is computationally expensive, so fitting can be slow. This method trains an ensemble of diverse classifiers. To promote the initial diversity, the classifiers are trained from the reduced set of labeled examples by bagging. The self-labeling process stops when one of the following criteria is fulfilled: the algorithm reaches the number of iterations defined in the max.iter parameter, or the portion of the unlabeled set defined in the perc.full parameter has been moved to the enlarged labeled set of the classifiers (a sketch of this rule follows).
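A hedged sketch of the two stopping tests just described (illustrative only; iter, n.labeled.new, and n.unlabeled.start are assumed bookkeeping values, not SSLR internals):

stop_self_labeling <- function(iter, n.labeled.new, n.unlabeled.start,
                               max.iter = 50, perc.full = 0.7) {
  # Stop when the iteration budget is spent, or when the moved portion
  # of the unlabeled set reaches perc.full.
  iter >= max.iter || (n.labeled.new / n.unlabeled.start) >= perc.full
}

stop_self_labeling(iter = 10, n.labeled.new = 75, n.unlabeled.start = 100)
# TRUE: 75% of the unlabeled set has been moved, exceeding perc.full = 0.7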
(When the model is fitted) A list object of class "coBC" containing:

- The final N base classifiers trained using the enlarged labeled set.
- model.index: a list of N vectors of indexes of the training instances used by each classifier. These indexes are relative to the y argument.
- instances.index: the indexes of all training instances used to train the N models, including both the initial labeled instances and the newly labeled instances. These indexes are relative to the y argument.
- A list of N vectors with the same information as model.index, but with indexes relative to the instances.index vector.
- The levels of the y factor (classification only).
- The function provided in the pred argument.
- The list provided in the pred.pars argument.
Avrim Blum and Tom Mitchell.
Combining labeled and unlabeled data with co-training.
In Eleventh Annual Conference on Computational Learning Theory, COLT '98, pages 92-100, New York, NY, USA, 1998. ACM.
ISBN 1-58113-057-0. doi: 10.1145/279943.279962.
Mohamed Farouk Abdel-Hady and Günther Palm.
Semi-supervised Learning for Regression with Co-training by Committee.
Institute of Neural Information Processing, University of Ulm, D-89069 Ulm, Germany.
library(tidyverse)
library(tidymodels)
library(caret)
library(SSLR)

data(wine)

set.seed(1)
train.index <- createDataPartition(wine$Wine, p = .7, list = FALSE)
train <- wine[ train.index,]
test  <- wine[-train.index,]

cls <- which(colnames(wine) == "Wine")

#% LABELED
labeled.index <- createDataPartition(wine$Wine, p = .2, list = FALSE)
train[-labeled.index, cls] <- NA

#We need a model with probability predictions from parsnip
#https://tidymodels.github.io/parsnip/articles/articles/Models.html
#It should be with mode = classification

#For example, with Random Forest
rf <- rand_forest(trees = 100, mode = "classification") %>%
  set_engine("randomForest")

m <- coBC(learner = rf, N = 3, perc.full = 0.7, u = 100, max.iter = 3) %>%
  fit(Wine ~ ., data = train)

#Accuracy
predict(m, test) %>%
  bind_cols(test) %>%
  metrics(truth = "Wine", estimate = .pred_class)
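The description also covers regression (the final prediction averages the estimates of the N regressors). A hedged sketch of the same workflow with a regression-mode learner; the response column and the train_reg/test_reg objects are illustrative placeholders, not data shipped with the package:

library(tidymodels)
library(SSLR)

rf_reg <- rand_forest(trees = 100, mode = "regression") %>%
  set_engine("randomForest")

# `train_reg` is assumed to be a data frame whose unlabeled rows have NA
# in the numeric response column, mirroring the classification example.
# m_reg <- coBC(learner = rf_reg, N = 3, perc.full = 0.7, u = 100, max.iter = 3) %>%
#   fit(Sepal.Length ~ ., data = train_reg)
#
# predict(m_reg, test_reg) %>%
#   bind_cols(test_reg) %>%
#   metrics(truth = Sepal.Length, estimate = .pred)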