
An introduction to explainable AI with Shapley values — SHAP latest documentation
We will take a practical, hands-on approach, using the shap Python package to explain progressively more complex models. This is a living document, and serves as an introduction to the shap Python package.
shap.Explainer
This is the primary explainer interface for the SHAP library. It takes any combination of a model and masker and returns a callable subclass object that implements the particular estimation algorithm that was chosen.
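The model-plus-masker split can be sketched in plain Python. This is an illustrative analogy only: `IndependentMasker` below is a made-up class, not the library's `shap.maskers.Independent`. The masker defines what it means to "hide" a feature, and an explainer scores the model on masked inputs:

```python
import random

class IndependentMasker:
    """Hides features by replacing them with values drawn from background data.

    Illustrative analogy to the masker role in shap.Explainer; not the real API.
    """
    def __init__(self, background):
        self.background = background

    def __call__(self, mask, x):
        # mask[j] is True where feature j stays visible; hidden features
        # take their value from a randomly chosen background row.
        ref = random.choice(self.background)
        return [xj if keep else rj for xj, rj, keep in zip(x, ref, mask)]

def f(z):
    # Any black-box model works here; a toy linear function keeps it checkable.
    return 2 * z[0] + 3 * z[1]

masker = IndependentMasker(background=[[0.0, 0.0]])
x = [1.0, 1.0]
print(f(masker([True, False], x)))  # feature 1 hidden -> 2.0
print(f(masker([False, True], x)))  # feature 0 hidden -> 3.0
```

An estimation algorithm then compares such masked evaluations across many feature subsets to attribute the prediction to individual features.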
shap.plots.force
For SHAP values, the base value should be the value of explainer.expected_value. However, it is recommended to pass in a SHAP Explanation object instead, in which case shap_values is not necessary.
Basic SHAP Interaction Value Example in XGBoost
This notebook shows how the SHAP interaction values for a very simple function are computed. We start with a simple linear function, and then add an interaction term to see how it changes the SHAP values.
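The effect can be reproduced without XGBoost. Below is a pure-Python sketch (my own illustration, not the notebook's code, and it assumes a baseline of zeros): exact Shapley values are computed by enumerating coalitions, first for a linear function and then with an x1*x2 interaction term added:

```python
from itertools import combinations
from math import factorial

def shapley(model, x, baseline=(0.0, 0.0)):
    """Exact Shapley values by enumerating all feature coalitions.

    Features outside a coalition are replaced by the baseline value.
    Exponential cost, so only sensible for a handful of features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                on = lambda keep: [x[j] if j in keep else baseline[j]
                                   for j in range(n)]
                phi[i] += w * (model(on(set(S) | {i})) - model(on(set(S))))
    return phi

linear = lambda z: z[0] + z[1]                 # no interaction
interact = lambda z: z[0] + z[1] + z[0] * z[1]  # adds an x1*x2 term

print(shapley(linear, (1.0, 1.0)))    # [1.0, 1.0]
print(shapley(interact, (1.0, 1.0)))  # [1.5, 1.5]
```

With the interaction term, the extra credit of 1.0 at x = (1, 1) is split evenly between the two features, which is exactly the behaviour the notebook's SHAP interaction values make explicit.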
Image examples
These examples explain machine learning models applied to image data. They are all generated from Jupyter notebooks available on GitHub; topics include image classification.
Explaining quantitative measures of fairness
By using SHAP (a popular explainable AI tool), we can decompose measures of fairness and allocate responsibility for any observed disparity among each of the model's input features.
shap.DeepExplainer
Meant to approximate SHAP values for deep learning models. This is an enhanced version of the DeepLIFT algorithm (Deep SHAP) where, similar to Kernel SHAP, we approximate the conditional expectations of SHAP values using a selection of background samples.
Be careful when interpreting predictive models in search of causal insights
SHAP and other interpretability tools can be useful for causal inference, and SHAP is integrated into many causal inference packages, but those use cases are explicitly causal in nature.
Release notes
Nov 11, 2025 · This release incorporates many changes that were originally contributed by the SHAP community via @dsgibbons's Community Fork, which has now been merged into the main shap repository.
scatter plot
The y-axis is the SHAP value for that feature (stored in explanation.values), which represents how much knowing that feature’s value changes the output of the model for that sample’s prediction.
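For the special case of a linear model over independent features this quantity has a closed form, phi_j = b_j * (x_j - mean_j), so a scatter plot of a feature against its SHAP values is a straight line with slope b_j. A minimal sketch of the quantity being plotted (toy data and coefficients assumed; this is not the library's implementation):

```python
# Toy linear model f(x) = b0 + sum_j b[j] * x[j] over a tiny dataset.
b = [2.0, -1.0]
data = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
mean = [sum(col) / len(data) for col in zip(*data)]  # per-feature means

def shap_values_linear(x):
    # Closed-form SHAP values for a linear model with independent features.
    return [b[j] * (x[j] - mean[j]) for j in range(len(b))]

f = lambda x: 1.0 + sum(bj * xj for bj, xj in zip(b, x))
expected_value = sum(f(row) for row in data) / len(data)  # base value E[f(X)]

for row in data:
    phi = shap_values_linear(row)
    # Local accuracy: the SHAP values sum to f(x) minus the base value.
    assert abs(sum(phi) + expected_value - f(row)) < 1e-9
    print(row, phi)
```

Plotting each feature's column of `data` against the corresponding entries of `phi` reproduces the straight-line pattern a scatter plot reveals for linear effects; curved or dispersed patterns indicate nonlinearity or interactions.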