
force_plot

Additive force plots


Description

Visualize Shapley values with additive force-style layouts from the Python shap package.

Usage

force_plot(object, ...)

## S3 method for class 'explain'
force_plot(
  object,
  baseline = NULL,
  feature_values = NULL,
  display = c("viewer", "html"),
  ...
)

Arguments

object

An object of class "explain".

...

Additional optional arguments. (Currently ignored.)

baseline

Numeric giving the average prediction across all of the training observations. NOTE: It is recommended to provide this argument whenever object contains approximate Shapley values. (A short sketch of how to compute this value follows the argument descriptions.)

feature_values

A matrix-like R object (e.g., a data frame or matrix) containing the corresponding feature values for the explanations in object.

display

Character string specifying how to display the results. Current options are "viewer" (default) and "html". The latter is necessary for viewing the display inside an R Markdown document.
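
In practice, the baseline is simply the average training prediction. A minimal sketch, assuming a hypothetical fitted model fit and training feature matrix X:

baseline <- mean(predict(fit, newdata = X))  # average training prediction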

Details

The resulting plot shows how each feature contributes to pushing the model output from the baseline prediction (i.e., the average predicted outcome over the entire training set X) to the corresponding model output. Features pushing the prediction higher are shown in red, while those pushing the prediction lower are shown in blue.

Note

Note that only exact Shapley explanations (i.e., calling fastshap::explain() with exact = TRUE) satisfy the so-called additivity property, where the feature contributions for a case x must sum to the difference between the corresponding prediction for x and the average of all the training predictions (i.e., the baseline). Consequently, if you don't set adjust = TRUE in the call to explain() before using fastshap::force_plot(), the output value displayed on the plot will not make much sense.
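
For example, an adjusted explanation can be checked against the additivity property directly. A minimal sketch, assuming a hypothetical fitted model fit, training feature matrix X, and prediction wrapper pfun:

pfun <- function(object, newdata) predict(object, newdata = newdata)
shap <- explain(fit, X = X, nsim = 50, pred_wrapper = pfun, adjust = TRUE)
baseline <- mean(pfun(fit, newdata = X))
# With adjust = TRUE, baseline + rowSums(shap) should (approximately)
# reproduce the model predictions
all.equal(baseline + rowSums(shap), pfun(fit, newdata = X),
          check.attributes = FALSE)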

References

Lundberg, Scott M., Bala Nair, Monica S. Vavilala, Mayumi Horibe, Michael J. Eisses, Trevor Adams, David E. Liston, et al. 2018. "Explainable Machine-Learning Predictions for the Prevention of Hypoxaemia During Surgery." Nature Biomedical Engineering 2(10): 749.

Examples

## Not run: 
#
# A projection pursuit regression (PPR) example
#
library(fastshap)  # for explain() and force_plot()

# Load the sample data; see ?datasets::mtcars for details
data(mtcars)

# Fit a projection pursuit regression model
mtcars.ppr <- ppr(mpg ~ ., data = mtcars, nterms = 1)

# Compute approximate Shapley values using 10 Monte Carlo simulations
set.seed(101)  # for reproducibility
shap <- explain(mtcars.ppr, X = subset(mtcars, select = -mpg), nsim = 10, 
                pred_wrapper = predict, adjust = TRUE)

# Visualize first explanation
preds <- predict(mtcars.ppr, newdata = mtcars)
x <- subset(mtcars, select = -mpg)[1L, ]  # take first row of feature values
force_plot(shap[1L, ], baseline = mean(preds), feature_values = x)
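
# The same plot can be embedded in an R Markdown document by requesting
# HTML output instead of the default viewer
force_plot(shap[1L, ], baseline = mean(preds), feature_values = x,
           display = "html")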

## End(Not run)
