Kappa
Kappa is a similar measure to accuracy(), but is normalized by the accuracy that would be expected by chance alone and is very useful when one or more classes have large frequency distributions.
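As a quick illustration of the chance correction, unweighted kappa is (p_o - p_e) / (1 - p_e), where p_o is the observed accuracy and p_e is the accuracy expected if predictions matched the marginal class frequencies at random. The sketch below works this out by hand for a hypothetical 2x2 confusion matrix; the matrix values are made up for illustration.

# Hand-worked sketch of the chance correction; the confusion
# matrix below is hypothetical (true classes in the columns).
cm <- matrix(c(45, 15,
               5, 35), nrow = 2)

p_o <- sum(diag(cm)) / sum(cm)                    # observed accuracy: 0.8
p_e <- sum(rowSums(cm) * colSums(cm)) / sum(cm)^2 # chance accuracy: 0.5
(p_o - p_e) / (1 - p_e)                           # kappa: 0.6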
Usage

kap(data, ...)

## S3 method for class 'data.frame'
kap(data, truth, estimate, weighting = "none", na_rm = TRUE, ...)

kap_vec(truth, estimate, weighting = "none", na_rm = TRUE, ...)
Arguments

data
Either a data.frame containing the columns specified by the truth and estimate arguments, or a table/matrix where the true class results should be in the columns of the table.

...
Not currently used.

truth
The column identifier for the true class results (that is a factor). This should be an unquoted column name. For kap_vec(), a factor vector.

estimate
The column identifier for the predicted class results (that is also a factor). As with truth, this should be an unquoted column name. For kap_vec(), a factor vector.

weighting
A weighting to apply when computing the scores. One of: "none", "linear", or "quadratic". Linear and quadratic weightings penalize disagreements more the further apart the predicted and true classes are, so they are most useful when the class levels are ordered (see the sketch after this list). In the binary case, all 3 weightings produce the same value, since it is only ever possible to be 1 unit away from the true value.

na_rm
A logical value indicating whether NA values should be stripped before the computation proceeds.
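To make the weighting behavior concrete, here is the sketch referenced from the weighting entry above. It compares the three weightings on a small invented ordered outcome; the level names and vectors are assumptions for illustration, not data from this package.

# Made-up 3-level ordered outcome: once a prediction can land
# more than 1 unit from the truth, the weightings diverge.
library(yardstick)

lvls <- c("low", "mid", "high")
truth    <- factor(c("low", "low",  "mid", "high", "high"), levels = lvls)
estimate <- factor(c("low", "high", "mid", "mid",  "high"), levels = lvls)

kap_vec(truth, estimate, weighting = "none")
kap_vec(truth, estimate, weighting = "linear")
kap_vec(truth, estimate, weighting = "quadratic")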
Value

A tibble with columns .metric, .estimator, and .estimate, and 1 row of values.

For grouped data frames, the number of rows returned will be the same as the number of groups.

For kap_vec(), a single numeric value (or NA).
Multiclass

Kappa extends naturally to multiclass scenarios. Because of this, macro and micro averaging are not implemented.
Author

Max Kuhn

Jon Harmon
References

Cohen, J. (1960). "A coefficient of agreement for nominal scales". Educational and Psychological Measurement. 20 (1): 37-46.

Cohen, J. (1968). "Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit". Psychological Bulletin. 70 (4): 213-220.
Examples

library(yardstick)
library(dplyr)

data("two_class_example")
data("hpc_cv")

# Two class
kap(two_class_example, truth, predicted)

# Multiclass
# kap() has a natural multiclass extension
hpc_cv %>%
  filter(Resample == "Fold01") %>%
  kap(obs, pred)

# Groups are respected
hpc_cv %>%
  group_by(Resample) %>%
  kap(obs, pred)
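A weighted kappa can be requested the same way. This is an assumed extension of the examples above, treating the hpc_cv speed classes as ordered; it is not part of the original example code.

# Assumed extension: treat the hpc_cv classes (VF, F, M, L) as
# ordered and penalize predictions further from the truth.
hpc_cv %>%
  filter(Resample == "Fold01") %>%
  kap(obs, pred, weighting = "quadratic")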