LIME vs SHAP

LIME

LIME, or Local Interpretable Model-agnostic Explanations, explains an individual prediction by fitting a simple, interpretable surrogate model (typically a weighted linear model) to the black-box model's behavior in the neighborhood of that prediction.

Example: When a neural network predicts the sentiment of a piece of text, LIME can highlight which words most influenced that specific prediction.
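The sentiment example can be sketched in a few lines. This is a simplified, self-contained illustration of LIME's core idea, not the real `lime` library: the "black box" is a toy sentiment scorer, the word list and text are made up, and the local linear fit is approximated by comparing average predictions with each word kept versus dropped.

```python
import random

# Toy "black box" for illustration: scores sentiment as the fraction of
# known-positive words present, out of the original text length.
POSITIVE_WORDS = {"great", "love"}

def black_box(words, total_len):
    return sum(w in POSITIVE_WORDS for w in words) / total_len

def lime_word_importance(text, n_samples=400, seed=0):
    """Simplified LIME: perturb the input by randomly dropping words,
    query the black box on each perturbation, and estimate each word's
    local effect as the mean prediction with the word kept minus the
    mean prediction with it dropped."""
    rng = random.Random(seed)
    words = text.split()
    n = len(words)
    kept = {w: [] for w in words}
    dropped = {w: [] for w in words}
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in range(n)]
        sample = [w for w, keep in zip(words, mask) if keep]
        pred = black_box(sample, n)
        for w, keep in zip(words, mask):
            (kept if keep else dropped)[w].append(pred)
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0
    return {w: mean(kept[w]) - mean(dropped[w]) for w in words}

importance = lime_word_importance("great movie i love it")
```

Running this, the sentiment-bearing words "great" and "love" receive clearly higher importance scores than neutral words like "movie", which is exactly the kind of per-prediction highlighting LIME produces.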

SHAP

SHAP, or SHapley Additive exPlanations, assigns each feature a Shapley value from cooperative game theory, quantifying that feature's contribution to the model's output for a given prediction.

Example: In credit scoring, SHAP can reveal how much variables like income and credit history pushed an applicant's score above or below a baseline.
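For a model with only a few features, Shapley values can be computed exactly by enumerating feature coalitions. The sketch below assumes a hypothetical scoring formula with invented coefficients and a made-up applicant and baseline; it does not use the real `shap` library, but it implements the same subset-weighted marginal-contribution formula that SHAP approximates at scale.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy credit-scoring model with an income/credit-history
# interaction term; names and coefficients are illustrative only.
def score(income, credit_history, debt):
    return (300 + 2.0 * income + 400.0 * credit_history
            - 3.0 * debt + 0.5 * income * credit_history)

def shapley_values(model, x, baseline):
    """Exact Shapley values via subset enumeration. A feature outside
    the coalition is replaced by its baseline value; each feature's
    value is the weighted average of its marginal contributions."""
    names = list(x)
    n = len(names)

    def payoff(coalition):
        args = {f: (x[f] if f in coalition else baseline[f]) for f in names}
        return model(**args)

    phi = {}
    for i in names:
        others = [f for f in names if f != i]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (payoff(set(subset) | {i}) - payoff(set(subset)))
        phi[i] = total
    return phi

applicant = {"income": 80.0, "credit_history": 0.9, "debt": 20.0}
baseline = {"income": 50.0, "credit_history": 0.5, "debt": 30.0}
phi = shapley_values(score, applicant, baseline)
```

A useful sanity check is SHAP's efficiency property: the per-feature values sum exactly to the difference between the applicant's score and the baseline score, so the attribution fully accounts for the prediction.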