
EXPLANATION

When using PQL PREDICT, you can add the WITH EXPLANATION clause to generate explanations for the model's predictions. Currently, we support explanations at the feature level, and at the token level when a model includes one or more text features. In the future we plan to add explanations for image and audio features as well!

Example

PREDICT Survived WITH EXPLANATION
GIVEN SELECT * FROM titanic

This query will return explanations that show the relative importance of each feature. By default, we use the Integrated Gradients algorithm, but you can also specify other available algorithms such as SHAP. One way to think about these feature importance values is to imagine fairly splitting the value of a reward among a group of contributors: like dividing up a pie so that each contributor gets a share proportional to how much they contributed. In this case, the pie is the prediction and the contributors are the features.
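
To make the pie analogy concrete, here is a minimal sketch, in Python rather than PQL, of the Shapley-style "fair split" that SHAP is built on. The model, feature names, and numbers below are purely hypothetical: each feature's effect is its average marginal contribution to the prediction, and the effects plus the base value add back up to the full prediction.

# Hypothetical sketch (not part of PQL): a Shapley-style fair split of a toy
# model's prediction between its features. Each feature's effect is its average
# marginal contribution across every order in which the features can be added.
from itertools import permutations

def toy_model(present):
    # Toy Titanic-like model: the prediction starts at a base value and rises
    # when certain feature values are known. Purely illustrative numbers.
    score = 0.30                      # base value (no features known)
    if present.get("sex") == "female":
        score += 0.40
    if present.get("pclass") == 1:
        score += 0.20
    return score

def shapley_effects(full_row):
    names = list(full_row)
    orders = list(permutations(names))
    effects = {name: 0.0 for name in names}
    for order in orders:
        known = {}
        for name in order:
            before = toy_model(known)
            known[name] = full_row[name]
            # Average the marginal contribution of this feature over all orders.
            effects[name] += (toy_model(known) - before) / len(orders)
    return effects

row = {"sex": "female", "pclass": 1}
effects = shapley_effects(row)
print({name: round(value, 2) for name, value in effects.items()})  # {'sex': 0.4, 'pclass': 0.2}
print(round(toy_model({}) + sum(effects.values()), 2))              # 0.9, same as toy_model(row)

Integrated Gradients provides a similar additive decomposition: its attributions sum to the difference between the prediction and a baseline prediction.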

Views

We have three different views for showing these explanation values at the feature level:

Summary Chart

explain-summary.png

The summary chart shows the relative importance of every feature used in the model. The features are sorted by their importance, with the most important feature at the top. For each feature, the chart shows its name, its value, and its effect (importance).

Explanation Table

explain-table.png

The explanation table shows the effect (importance) of every feature in a tabular format.

Force Plot

explain-force.png

The force plot shows the effect (importance) of every feature in a force-directed graph. The features are organized by their importance, with the most important features closest to the center of the force plot.

We also have one view to show explanations at the token level, specifically for text features:

Text Explanation

explain-text.png

The text explanation shows the effect (importance) of every token in the text feature. The sum of the token effects will equal the corresponding text feature's effect. If you hover over a token, you will see the token's individual effect.
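
For instance, with purely hypothetical numbers, a Name feature with an overall effect of 0.12 might break down into token effects of 0.07 for "Mrs", 0.04 for "Smith", and 0.01 for the remaining tokens: 0.07 + 0.04 + 0.01 = 0.12.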

Explanation Algorithms

The explanations returned are generated using one of the following algorithms, which can be specified with the algo key:

  • ig: explanations generated with the Integrated Gradients algorithm (default).
  • shap: explanations generated with the SHAP algorithm.
  • gbm: explanations for GBM models, using feature importance scores of type "gain" (default for GBM models).

Here is an example of how to generate explanations using the SHAP algorithm:

PREDICT Survived WITH EXPLANATION ( algo='shap' )
GIVEN SELECT * FROM titanic
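
Similarly, assuming the model behind Survived is a GBM model, the same syntax could be used to request gain-based importances with the gbm algorithm:

PREDICT Survived WITH EXPLANATION ( algo='gbm' )
GIVEN SELECT * FROM titanic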