SHAP for Credit Risk: Interpreting the Machine Learning Black Box

Valoores in'Analytics
6 min read · Apr 20, 2021


How does SHAP analysis interpret a Black Box in a Credit Scoring problem?

VALOORES sees model interpretability as an important step in the data science workflow. Being able to explain how a model works serves many purposes, including building trust in the model’s output, satisfying regulatory requirements and verifying model safety.

Model Interpretation and its Business Need

The intersection of finance and technology (fintech) draws on a range of technologies to manage processes and deliver efficient, reliable solutions. Growing volumes of Big Data, combined with Artificial Intelligence algorithms and Machine Learning systems, are used to reduce risk, improve outcomes, expand investment opportunities, and analyze more data with greater speed and accuracy.

As machine learning black boxes are increasingly being deployed in domains such as fintech, there is growing emphasis on building tools and techniques for explaining these black boxes in an interpretable manner. Such explanations are being leveraged by domain experts to diagnose systematic errors and underlying biases of black boxes.

Detecting default as rapidly as possible is essential for financial institutions. In a previous study, we used eXtreme Gradient Boosting (XGBoost) to predict the Probability of Default of 844,000 unsecured loans originated between 2012 and 2015 and labeled either Fully Paid or Charged-Off (defaulted). The results show that XGBoost can detect the defaulted loans with an accuracy of 70% and an AUC of 73%.

The aim of this work is to determine the impact of the features on the default status by conducting a feature dependency analysis. SHAP (SHapley Additive exPlanations) is employed to interpret the results and analyze the importance of individual features. This article demonstrates the reliability of SHAP, a post hoc explanation technique that relies on input perturbations. We build an explanation model based on SHAP values to reveal the predominant features and to demonstrate the contribution of the new dataset. The same explanation model is also applied to reveal which specific features drive each class of loan.

SHAP Values Origins and Benefits

When advanced machine learning algorithms were first introduced, they outperformed all the traditional existing models. The weak interpretability of such black box models, however, led investors to stick with parametric models such as logistic regression because of their simplicity: regression coefficients, p-values and statistical tests are considered straightforward to interpret.

SHAP, or SHapley Additive exPlanations, is a library that brings together several interpretability strategies; it was published by Lundberg and Lee in 2017. It is fundamentally derived from a game theoretic concept, Shapley values, and can explain the output of any machine learning model, parametric or not. By attributing the output of any predictive model to its input features, SHAP has quickly become a standard tool for interpreting machine learning models.

SHAP values quantify the impact of a feature X on the model’s prediction Y by comparing the model’s output in the absence and presence of X over every possible combination (coalition) of the chosen features.
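For reference, this idea can be written as the classical Shapley value formula from cooperative game theory (a standard result that SHAP builds on, not something specific to our credit model). Here F is the full feature set, S ranges over subsets of features that exclude feature i, and f_S denotes the model output when only the features in S are available:

```latex
\phi_i \;=\; \sum_{S \subseteq F \setminus \{i\}}
  \frac{|S|!\,\bigl(|F| - |S| - 1\bigr)!}{|F|!}
  \Bigl[\, f_{S \cup \{i\}}\bigl(x_{S \cup \{i\}}\bigr) \;-\; f_S\bigl(x_S\bigr) \Bigr]
```

In words, the SHAP value of feature i is the average change in the prediction caused by adding feature i, weighted over all possible orderings in which the features could be introduced.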

For example, in a credit scoring task, if a model predicts the probability of default Y of a person from their Age, Gender and Job, the Shapley computation considers every combination of these features (Fig.1). For a specific person, the SHAP value of a feature X is determined by the change in the predicted output between the absence and presence of that X.

Fig.1: SHAP algorithm schema

What makes the SHAP library especially useful and practical is its ability to identify feature effects at both the individual and the global scale. It explains how much each feature (Age, Gender, Job) of a specific person contributed, positively or negatively, to that person’s individual probability of default; in addition, it shows the impact of every feature across the entire dataset.

SHAP Plots

The SHAP library has built-in plotting tools that display all the needed relations with clear visualizations and decisive interpretations compared to some existing tools. In this section we adopt the XGBoost model built in our previous article to visualize and interpret some of the key plots available in the SHAP library.
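As a minimal sketch of how such an explainer can be set up with the SHAP library; the names `xgb_model` and `X_test` are placeholders for the trained model and feature matrix from the previous article, not the actual code:

```python
import shap

# Assumes `xgb_model` is the trained XGBoost classifier from the previous article
# and `X_test` is a pandas DataFrame of loan features (grade, interest rate, ...).
explainer = shap.TreeExplainer(xgb_model)    # fast, exact explainer for tree ensembles
shap_values = explainer.shap_values(X_test)  # one SHAP value per feature per loan
```

All the plots below reuse this `explainer` and `shap_values`.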

Individual — Force Plot

On the individual level, Fig.2 shows the contribution of each feature in pushing the model’s output from the base value (the initial guess of the target variable Y by a model with no features, i.e., the average model output) to the final value f(x). Features increasing the prediction are shown in red, and those decreasing it in blue.

Fig.2: Individual Force Plot
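A sketch of how such an individual force plot can be produced, continuing the setup above (the row index 0 is an arbitrary illustrative choice):

```python
# Explain a single loan (here row 0 of the test set).
shap.initjs()  # loads the JavaScript needed for interactive plots in a notebook
shap.force_plot(explainer.expected_value,  # base value: the model's average output
                shap_values[0, :],         # SHAP values for this one loan
                X_test.iloc[0, :])         # the loan's actual feature values
```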

Global Plots

It is important to mention that some of the following global plots can also be generated on an individual level, but their main purpose is to reveal global relations rather than individual ones.

1. Bar and Summary plots

For all observations, the bar plot takes the mean of |SHAP value| for each feature. It reveals how much every feature impacts the predicted output, in decreasing order.
In addition to the bar plot, the summary plot uncovers in which direction each feature impacts the predicted output; the color hue encodes the feature value (red high, blue low). It shows, for instance, that having a high grade lowers the predicted probability of default (grade A is the highest and G the lowest after encoding).

Fig.3: Bar and Summary Plots
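Both plots come from the same SHAP call; a minimal sketch, reusing the explainer and data from above:

```python
# Global feature importance: mean(|SHAP value|) per feature, shown as bars.
shap.summary_plot(shap_values, X_test, plot_type="bar")

# Summary (beeswarm) plot: one point per loan per feature,
# colored by the feature's value (red = high, blue = low).
shap.summary_plot(shap_values, X_test)
```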

2. Dependence Plot

The dependence plot shows the marginal effect that a feature has on the model’s predicted output. It also includes the feature that the selected one interacts with most. Fig.4 shows a positive, roughly linear trend between the interest rate and the probability of default; in addition, the interest rate is inversely related to the grade.

Fig.4: Dependence Plot

What a great way to display such important information in an easily interpretable figure!
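A sketch of the corresponding call; the column names `int_rate` and `grade` are assumptions about how the interest rate and grade features are named in our dataset:

```python
# Marginal effect of the interest rate on the prediction.
# interaction_index selects the feature used for coloring; "auto" (the default)
# would pick the strongest interaction, here we force it to the grade.
shap.dependence_plot("int_rate", shap_values, X_test, interaction_index="grade")
```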

3. Decision Plot

Another way to track the contribution of every feature over a selected pool of observations is the decision plot. It conveys the same information as the summary plot while tracking how the prediction evolves for each observation in the selected sample.
It can be displayed with as many observations as needed.

Fig.5: Decision Plot
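A minimal sketch of a decision plot over a small pool of observations (the slice of 50 loans is an arbitrary choice):

```python
# Track how each of the first 50 loans moves from the base value
# to its final prediction as features are accumulated, one line per loan.
shap.decision_plot(explainer.expected_value,
                   shap_values[:50],
                   X_test.iloc[:50])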

4. Global Force plot

“La crème de la crème” of all the plots offered by the SHAP library is the global force plot. This force plot is an interactive, dynamic HTML visualization that can render any needed relation in its own way. It is constructed by taking many individual force plots such as the one shown in Fig.2, rotating them 90 degrees and stacking them horizontally, so that relations across the entire dataset can be interpreted.

Fig.6: Global Force Plot

To go through all the available features provided by this force plot, feel free to download the corresponding HTML file.
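A sketch of how such an interactive plot can be generated and exported (the file name and the subsample of 1,000 loans are illustrative choices, not from the original code):

```python
# Stack many individual force plots into one interactive visualization.
# A subsample keeps the resulting HTML file manageable.
global_force = shap.force_plot(explainer.expected_value,
                               shap_values[:1000],
                               X_test.iloc[:1000])

# Export the plot as a standalone HTML file that can be shared or downloaded.
shap.save_html("global_force_plot.html", global_force)
```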

Conclusion

People fear what they don’t know. In the digital transformation era, advanced machine learning techniques are still used in the shadows or only for research purposes, because the business world remains reluctant to apply them due to the lack of sound interpretability tools. With the rise of libraries such as SHAP, LIME and Anchor, combined with advances in cloud computing power, companies should open their eyes, harness this power, and meet their business needs efficiently in any market.

For more information about us, feel free to check our website.

With great power comes great responsibility.
