SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of machine learning models. It assigns each feature an importance value for a particular prediction, showing how much that feature contributed to the model's output. The method is based on Shapley values from cooperative game theory, which guarantee a fair distribution of the “payout” (in this case, the model's prediction relative to its average output) among the features.
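As a minimal sketch of how this looks in practice, the snippet below fits a small tree-based model on synthetic data and computes SHAP values with the `shap` package; the dataset, model choice, and parameters are illustrative assumptions rather than anything prescribed above.

```python
# Minimal sketch: SHAP values for a tree model (assumes `shap` and scikit-learn are installed).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy regression data: 200 samples, 4 features (illustrative only).
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature contributions for the first sample.
print(shap_values[0])

# Additivity: base value + contributions recover the model's prediction.
print(explainer.expected_value + shap_values[0].sum(), model.predict(X[:1])[0])
```

The last two lines illustrate the “additive” part of the name: for each prediction, the feature contributions plus a base value (the model's average output) sum to the model's actual output for that sample.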