It’s time to get rid of the black box and cultivate trust in machine learning.

Imagine that you are a data scientist. In your free time, you predict where your friends will go on vacation in the summer based on their Facebook and Twitter data. If the predictions are accurate, your friends may be impressed and think of you as a magician who can see the future. If the predictions are wrong, they harm no one except your reputation as a "data scientist". Now suppose this is not just a fun project and there is an investment involved. Say you want to invest in properties where your friends are likely to vacation. What happens if the model's predictions go wrong? You lose money. As long as a model does not have a significant impact, its interpretability matters less, but when predictions made by a model carry real consequences, interpretability becomes essential.


Interpretable machine learning

Interpretability means to interpret or to present in understandable terms. In the context of ML systems, interpretability is the ability to explain, or to present in understandable terms, to a human. [Finale Doshi-Velez]

https://easyai.tech/wp-content/uploads/2022/08/3d737-2019-04-10-123433.jpg
Source: Interpretable ML Book

Machine learning models are called "black boxes" by many people. This means that although we can get accurate predictions from them, we cannot clearly explain or identify the logic behind those predictions. But how do we extract important insights from a model? What should we keep in mind, and which features or tools do we need to do it? These are the important questions that come up when the problem of model interpretability is raised.

The importance of interpretability

A question some people often ask is: why aren't we satisfied with the model's results, and why are we so obsessed with knowing why it made a specific decision? This has a lot to do with the possible impact a model can have in the real world. For a model that is only used to recommend movies, the impact is far smaller than for a model created to predict the outcome of a drug.

"The problem is that a single metric, such as classification accuracy, is an incomplete description of most real-world tasks." — Doshi-Velez and Kim 2017

This is the big picture of explainable machine learning. In a sense, we capture the world by collecting raw data and use that data to make further predictions. In essence, interpretability is just another layer on top of the model that helps humans understand the process.

https://easyai.tech/wp-content/uploads/2022/08/bf009-2019-04-10-123520.jpg
The big picture of explainable machine learning.

Some of the benefits that interpretability brings are:

  • Reliability
  • Debugging
  • Informing feature engineering
  • Directing future data collection
  • Informing human decision-making
  • Building trust

Model interpretability

Theory only makes sense as long as we can put it into practice. If you want to really get a grip on this topic, you can try Kaggle's Machine Learning Explainability crash course. It has the right amount of theory and code to put the concepts into perspective and to help you apply model interpretability concepts to real-world problems.

Click on the screenshot below to go directly to the course page. If you want to get a brief overview of the content, you can continue reading.

Kaggle's Machine Learning Explainability crash course

Insights that can be extracted from the model

To explain the model, we need the following insights:

  • Which features in the model are most important.
  • For any single prediction from the model, the effect of each feature in the data on that particular prediction.
  • The effect of each feature over a large number of possible predictions.

Let's discuss a few techniques that help extract the above insights from a model:

1. Permutation importance

What features does the model consider important? Which features might have a greater impact on the model's predictions than others? This concept is called feature importance. Permutation importance is a widely used technique for computing feature importance. It helps us see when our model produces counterintuitive results, and it helps demonstrate to others that our model works the way we intend.

Permutation importance works with many scikit-learn estimators. The idea is simple: randomly permute or shuffle a single column in the validation dataset while leaving all other columns intact. A feature is considered "important" if the model's accuracy drops a lot as a result, causing an increase in error. On the other hand, if shuffling its values leaves the model's accuracy unaffected, the feature is considered "unimportant".
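To make the shuffling idea concrete, here is a minimal from-scratch sketch (not how eli5 implements it internally). It assumes a fitted scikit-learn classifier called model and a pandas validation set val_X, val_y:

import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance_sketch(model, val_X, val_y, n_repeats=5, seed=1):
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(val_y, model.predict(val_X))
    importances = {}
    for col in val_X.columns:
        drops = []
        for _ in range(n_repeats):
            shuffled = val_X.copy()
            # Shuffle only this column; every other column stays untouched
            shuffled[col] = rng.permutation(shuffled[col].values)
            drops.append(baseline - accuracy_score(val_y, model.predict(shuffled)))
        # A large average drop in accuracy means the feature is important
        importances[col] = np.mean(drops)
    return importances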

Working

Consider a model that predicts, based on certain parameters, whether a football team will have the "Man of the Game" winner or not. The player who shows the best performance receives this award.

Permutation importance is calculated after a model has been fitted. So let's train and fit a RandomForestClassifier model, my_model, on the training data. The permutation importance is computed with the ELI5 library.
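For context, here is a minimal sketch of how my_model, val_X and val_y could be prepared. The file name and column names below are hypothetical placeholders chosen for the football example, not something prescribed by this article:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical match-statistics file with a "Man of the Match" column
data = pd.read_csv('FIFA 2018 Statistics.csv')
y = (data['Man of the Match'] == 'Yes')  # binary target: did the team have the award winner?
feature_names = [c for c in data.columns if data[c].dtype == 'int64']  # keep numeric features only
X = data[feature_names]
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
my_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(train_X, train_y)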

ELI5 is a Python library that allows you to visualize and debug various machine learning models using a unified API. It has built-in support for several ML frameworks and provides a way to explain black-box models.

Calculate and display importance using the eli5 library:

(Here val_X and val_y denote the validation set)

import eli5
from eli5.sklearn import PermutationImportance

# Fit the permutation importance estimator on the fitted model, using the validation data
perm = PermutationImportance(my_model, random_state=1).fit(val_X, val_y)
# Display the feature weights, most important first
eli5.show_weights(perm, feature_names=val_X.columns.tolist())

Explanation

  • The features at the top are the most important, and those at the bottom matter least. In this example, Goal Scored is the most important feature.
  • The number after the ± measures how much performance varied from one reshuffling to the next.
  • Some weights are negative. This happens when the predictions on the shuffled data turned out to be more accurate than on the real data.

Practice

Now, for a complete example and to test your understanding, click the link below to go to the Kaggle page.


2. Partial dependence plots

Partial dependence plots (PDP or PD plots for short) show the marginal effect of one or two features on the predictions of a machine learning model (J. H. Friedman, 2001). A PDP shows how a feature affects predictions, and it can display the relationship between the target and a selected feature through a 1D or 2D plot.

Working

PDPs are also calculated after a model has been fitted. In the football problem we discussed above, there are many features, such as passes, shots, goals scored and so on. We start by considering a single row. Suppose the row represents a team that had the ball 50% of the time, made 100 passes, took 10 shots and scored 1 goal.

We proceed to fit our model and calculate the probability that a team has a player who wins the "Man of the Game" award, which is our target variable. Next, we select a variable and keep changing its value. For example, we calculate the outcome if the team scores 1 goal, 2 goals, 3 goals, and so on. All of these values are then plotted, and we get a graph of the predicted outcome against goals scored. The library used to draw PDPs is called the Python Partial Dependence Plot toolbox, or PDPbox for short.

from matplotlib import pyplot as plt
from pdpbox import pdp, get_dataset, info_plots
# Create the data that we will plot
pdp_goals = pdp.pdp_isolate(model=my_model, dataset=val_X, model_features=feature_names, feature='Goal Scored')
# plot it
pdp.pdp_plot(pdp_goals, 'Goal Scored')
plt.show()
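Conceptually, what the 1D PDP computes for each grid value is roughly the following: set the feature to a fixed value for every row, predict, and average. This is a simplified sketch of that idea, ignoring details such as grid selection and centering that pdpbox handles for us:

import numpy as np

def pdp_one_feature_sketch(model, X, feature, grid_values):
    averaged_predictions = []
    for value in grid_values:
        X_modified = X.copy()
        X_modified[feature] = value                    # set the feature to this grid value for every row
        preds = model.predict_proba(X_modified)[:, 1]  # probability of the positive class
        averaged_predictions.append(preds.mean())      # average over all rows gives the marginal effect
    return np.array(averaged_predictions)

# e.g. pdp_one_feature_sketch(my_model, val_X, 'Goal Scored', range(0, 7))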

Explanation

  • The Y-axis represents the change in prediction relative to what would be predicted at the baseline or leftmost value.
  • The blue shaded area indicates the confidence interval.
  • For the "Goal Scored" plot, we observe that scoring a goal increases the probability of winning the "Man of the Game" award, but after a while the effect saturates.

We can also visualize the partial dependence of two features at once using a 2D partial dependence plot.
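With the same pdpbox API used above, a 2D plot could be drawn roughly as follows. The second feature name, 'Distance Covered (Kms)', is an assumed column chosen only for illustration:

# 2D partial dependence sketch using the same pdpbox API as above
features_to_plot = ['Goal Scored', 'Distance Covered (Kms)']
inter = pdp.pdp_interact(model=my_model, dataset=val_X, model_features=feature_names, features=features_to_plot)
pdp.pdp_interact_plot(pdp_interact_out=inter, feature_names=features_to_plot, plot_type='contour')
plt.show()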

Practice


3. SHAP values

SHAP stands for SHapley Additive exPlanations and helps break down a prediction to show the impact of each feature. It is based on Shapley values, a technique from game theory used to determine how much each player in a collaborative game has contributed to its success¹. In general, getting the right trade-off between accuracy and interpretability can be a difficult balancing act, but SHAP values can deliver both.

Working

Again using the football example, we want to predict the probability that a team has a player who wins the "Man of the Game" award. SHAP values explain the effect of a feature having a particular value, compared with the prediction we would make if that feature took some baseline value. SHAP values are computed with the Shap library, which can be easily installed from PyPI or conda.

SHAP values show how much a given feature changed our prediction (compared with the prediction we would make at some baseline value of that feature). Say we want to know what the prediction would be if the team scored 3 goals, instead of some fixed baseline number. If we can answer that, we can perform the same steps for the other features, so that:

sum(SHAP values for all features) = pred_for_team - pred_for_baseline_values

Therefore, the prediction can be broken down into the following diagram:

Here is a link to a larger view.
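A diagram of this kind can be produced with the Shap library. Here is a minimal sketch, assuming my_model is the tree-based classifier trained earlier and that we explain a single, arbitrarily chosen row of the validation data:

import shap

# Pick one row of the validation data to explain (the index is arbitrary)
row_to_explain = val_X.iloc[[5]]

# TreeExplainer works with tree-based models such as the RandomForestClassifier used here
explainer = shap.TreeExplainer(my_model)
shap_values = explainer.shap_values(row_to_explain)

# For a binary classifier, index 1 corresponds to the positive class
shap.initjs()  # enables the interactive visualization in a notebook
shap.force_plot(explainer.expected_value[1], shap_values[1], row_to_explain)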

Explanation

The explanation above shows features each contributing to push the model output from the base value (the average model output over the training dataset we passed in) to the final model output. Features pushing the prediction higher are shown in red; those pushing the prediction lower are shown in blue.

  • The base_value here is 0.4979, while our predicted value is 0.7.
  • Goal Scored = 2 has the greatest effect on increasing the prediction, while
  • the ball possession feature has the greatest effect on decreasing the prediction.

Practice

There is more depth to SHAP values than the theory explained here. Be sure to go through this link to get a complete understanding.


4. Advanced uses of SHAP values

Aggregating many SHAP values provides more detailed insight into the model.

  • SHAP summary plots

To get an overview of which features are most important to the model, we can plot the SHAP value of every feature for every sample. The summary plot shows which features matter most and how they influence the dataset (a code sketch follows the interpretation below).

Summary plot

For each point:

  • The vertical position shows which feature it depicts.
  • The color shows whether that feature was high or low for that row of the dataset.
  • The horizontal position shows whether the effect of that value leads to a higher or lower prediction.

The point in the upper left is a team that scored very few goals, reducing the prediction by 0.25.
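A minimal sketch of how such a summary plot could be produced with the Shap library for the model above (again using index 1 for the positive class of the binary classifier):

import shap

explainer = shap.TreeExplainer(my_model)
# SHAP values for every feature of every row of the validation set
shap_values = explainer.shap_values(val_X)
# Summary plot for the positive class
shap.summary_plot(shap_values[1], val_X)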

  • SHAP dependence contribution plots

While a SHAP summary plot gives an overview of each feature, a SHAP dependence contribution plot shows how the model output varies with the value of a feature. SHAP dependence contribution plots provide insights similar to PDPs, but they add much more detail.

Dependence contribution plot

The dependence contribution plot above suggests that having possession of the ball increases a team's chance of having its player win the award. But if the team scores only one goal, that trend reverses: the award judges may hold it against them that they had so much of the ball yet scored so little.
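A sketch of how such a plot could be drawn with the Shap library; the column names 'Ball Possession %' and 'Goal Scored' are assumptions based on the football example:

import shap

explainer = shap.TreeExplainer(my_model)
shap_values = explainer.shap_values(val_X)
# Dependence contribution plot for one feature, colored by an interacting feature
shap.dependence_plot('Ball Possession %', shap_values[1], val_X, interaction_index='Goal Scored')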


Conclusion

Machine learning no longer has to be a black box. What use is a good model if we cannot explain its results to others? Interpretability is just as important as building the model. To gain wider acceptance among people, it is important that machine learning systems provide satisfactory explanations of their decisions. As Albert Einstein said, "If you can't explain it simply, you don't understand it well enough."

This article was reposted from towardsdatascience. Original address