Explainable AI in Stardog

Jun 4, 2020, 8 minute read

As data tools become increasingly complex, it is getting more difficult for human users to understand the results of their queries. Correspondingly, it is becoming more critical that intelligent systems can explain their own conclusions.

Human understanding is important for several reasons:

  • Trust: how can users trust a system when they do not understand its results? This is particularly true in domains like medicine, where the price of errors is high.
  • Tuning and debugging: being able to explain conclusions makes it easier to detect and understand errors.
  • Human-Computer Interaction: explanations support a meaningful dialogue between the human user and the system. Understanding how the system drew a certain conclusion may influence the next query posed.

This topic has been receiving increased attention in fields like knowledge representation, reasoning, and ML. Let’s take a look at how Stardog supports reasoning (https://www.stardog.com/trainings/reasoning/) via its Inference Engine and ML models.

Inference vs statistical prediction in Stardog

Stardog is unique in offering a combination of both inference (also called reasoning or logical reasoning) and statistical prediction. Both services are directly accessible via the standard SPARQL query language and have access to all the data in the Knowledge Graph, including virtual graphs and data transformed from unstructured data sources.

Although SPARQL-based integration makes the use of both very similar, the distinction between inference and statistical prediction is rather fundamental in explainable AI.

Inference produces conclusions that necessarily follow from the graph data given its schema. These facts could be explicitly added to the graph without changing the results of any query.

By contrast, conclusions from ML models are only statistically true, and hence, are subject to varying degrees of uncertainty and even falsity. Statistical predictions are, in essence, educated guesses produced by a statistical model, and their accuracy is dependent on the quality of the training data and applicability of the model to the task at hand.

Despite these differences, explanations for both kinds of reasoning attempt to demonstrate the connections between the premises and the conclusions. Both also allow for two high-level approaches to obtaining explanations:

  1. By using specific properties of the logical schema or the ML model, and
  2. By treating the schema or model as a black box

The first approach, so-called glass-box explanations, requires insight into how the reasoner or the model draws a specific conclusion. In the inference world, a typical example would be showing a proof, i.e. the sequence of steps that were taken to produce the result. This makes it especially easy to verify that the conclusion indeed follows. Stardog uses properties of its query rewriting reasoning algorithm to present explanations of inferences in a tree-like form, so the user can reconstruct a proof of the conclusion from the premises.

Since a statistical model does not provide the same degree of transparency, it is treated as an opaque function (i.e. a black box) whose results are explained by posing a series of specific follow-up queries to it. This is how explanations for conclusions from ML models can be obtained in Stardog (more on this later). The advantage here is that the same explanation algorithm is applicable to many different kinds of models, e.g. regression models and neural networks.

Explanations of inferences

The Inference Engine performs reasoning on the graph

Stardog uses expressive ontology languages to support graph schemas. The expressivity allows knowledge engineers to describe complex domains, such as medicine, in which multiple facts, axioms, and rules interact with each other to infer new facts. These new facts can appear in query results and graph validation reports. In both cases, Stardog can explain them using proof trees.

Explanations of query results with reasoning

Proof trees are a hierarchical way of presenting explanations from base assertions to final conclusions. For example, if a query returns the result that :Alice has the type :Employee, which is not explicitly in the graph, it must be an inference. Below is a proof tree explaining this inference based on the fact that she supervises :Bob:

INFERRED :Alice rdf:type :Employee
    ASSERTED :Manager rdfs:subClassOf :Employee
    INFERRED :Alice rdf:type :Manager
        ASSERTED :Alice :supervises :Bob
        ASSERTED :supervises rdfs:domain :Manager

The explanation has a hierarchical structure where each statement follows from the statements directly under it in the hierarchy: Alice is an employee because she’s a Manager and managers are employees. She’s inferred to be a Manager because she supervises Bob and everyone who supervises someone is a Manager.
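
For reference, the kind of query that surfaces this inference is an ordinary type query. The following minimal sketch (assuming the same default namespace as the example) returns :Alice among its results when run with reasoning enabled:

# With reasoning enabled, ?employee is bound to :Alice even though the
# triple :Alice rdf:type :Employee is never asserted in the graph.
SELECT ?employee WHERE {
  ?employee rdf:type :Employee .
}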

Explanations of integrity constraint violations

An important aspect of working with knowledge graphs is validation, i.e. verifying and enforcing integrity constraints. Understanding constraint violation reports is key to modeling constraints effectively and thus to data quality. There are two important aspects to this:

  1. Understanding which graph statements violate which constraints, and
  2. Distinguishing inferred facts from explicitly stored facts in violation reports so that the former can be explained with proof trees.

When reasoning is enabled during the constraint validation process, Stardog enforces constraints on the inferred graph. To the user the result looks exactly the same as if all inferences had been explicitly added to the graph and the graph was then validated. However, since inferred statements are not explicitly in the graph but still impact the validation outcome, they need to be explained similarly to query results, that is, using proof trees.
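
As a minimal sketch of how this plays out, consider a SHACL shape requiring every employee to have an identifier (the :EmployeeShape and :employeeId names are assumptions for illustration, not part of the example above):

# Every :Employee must have at least one :employeeId (assumed property).
:EmployeeShape a sh:NodeShape ;
    sh:targetClass :Employee ;
    sh:property [
        sh:path     :employeeId ;
        sh:minCount 1
    ] .

With reasoning enabled, :Alice is inferred to be an :Employee, so this shape applies to her even though her type is never asserted. If she has no :employeeId, the violation report contains the inferred statement :Alice rdf:type :Employee, which Stardog can then explain with a proof tree like the one above.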

Interpreting statistical predictions

In contrast to inferences, there are no well-established formal definitions for explanations from ML models.

Instead, a desirable informal property of ML models is often called interpretability. It is sometimes formulated as the degree to which a human can understand the causes of particular model predictions, or the degree to which a human can predict the model’s result for a particular input. Similar to explainable logical deductions, interpretable ML predictions are more trusted by human users and help with model tuning.

As briefly mentioned in the introduction, there are two principal approaches to interpreting predictions of ML models. One is restricting attention to so-called interpretable models. The prime example is a decision tree, where every conclusion can easily be traced back through all the decision nodes, thus explaining how the conclusion was drawn. Other examples include decision rule systems and, to a certain extent, linear regression models.

Model-agnostic approaches to interpretation, on the other hand, treat ML models as black boxes, similar to the black-box explanations used for inferences. The main advantage is that they are not specific to a particular kind of ML model, which makes them attractive for systems like Stardog that support multiple ML models. This means more model types may be added in the future, and the interpretability requirement places no restrictions on a model’s internal complexity. Finally, supporting multiple model-agnostic interpretations lets the user see from different perspectives why the model made a particular prediction, something that is typically not possible using only the glass-box interpretations supported by the model itself.

Partial Dependence Plots

Today, Stardog does not include implementations of model-agnostic methods out-of-the-box. However, the seamless integration of ML models makes it easy to generate interpretations by feeding SPARQL query results into the model.

One such interpretation method is Partial Dependence Plots (PDP). A PDP shows the effect of a particular feature on the predicted outcome. Given a set of values for the chosen (numerical or categorical) feature, the goal is to compute, for each value, a model prediction that is characteristic of that value. The easiest way to do this is to assign the given value to all data points while keeping their other features intact, compute predictions, and then average them. The result is the model prediction as a function of a single feature, characterizing the weight of that feature in the model. Describing the model as a collection of single-argument weight functions (one per feature) is a way to interpret the model and its predictions.

Consider an example: a box office earnings prediction model where predictions are made with the following SPARQL query:

SELECT * WHERE {
  graph spa:model {
      :myModel  spa:arguments (?director ?year ?budget) ;
                spa:predict ?boxoffice .
  }

  :TheGodfather :directedBy ?director ;
                :year ?year ;
                :budget ?budget .
}

Each data point (a movie) has three features: director, year, and budget. We can show the effect of a particular feature on a predicted outcome using PDP.

Suppose there are 100 film directors in the knowledge graph and 10,000 movies. For each director, PDP would compute the impact by assigning this director to every movie (while keeping other features fixed), feeding these new data points to the model, computing box office predictions, and averaging them. The same would be done for numerical features such as budget, which require discretization. As a result, the model is interpreted through relations between features and predictions, which are often plotted visually for the user.

The advantage of this approach is its simplicity and clear interpretation: it shows how predictions change (on average) when feature values change. It is also very easy to implement in Stardog, since all inputs to the model can be generated with simple SPARQL queries similar to the one above.
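
For illustration, the partial dependence of the predicted box office on the director could be computed with a single aggregation query along the following lines. This is only a sketch under the assumptions of the example above (the :myModel model and the :directedBy, :year, and :budget properties); it pairs every movie with every known director and averages the predictions per director:

SELECT ?director (AVG(?boxoffice) AS ?partialDependence) WHERE {
  graph spa:model {
      :myModel  spa:arguments (?director ?year ?budget) ;
                spa:predict ?boxoffice .
  }

  # Every movie keeps its own year and budget ...
  ?movie :year ?year ;
         :budget ?budget .

  # ... but is paired with each director in turn instead of its actual one.
  { SELECT DISTINCT ?director WHERE { ?anyMovie :directedBy ?director } }
}
GROUP BY ?director

Plotting ?partialDependence against ?director yields the PDP for the director feature.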

The main weakness of PDP is that it assumes there is no correlation between features. If there is, the newly generated data points may be meaningless, essentially outliers. For example, if a particular director works only on movies with large budgets, it would be illogical to assign that director to data points with low budgets, as the prediction for such a film could be arbitrary.

There are similar methods for constructing such functions relating features to predictions, such as Accumulated Local Effects (ALE), that address these issues. As with PDP, Stardog’s Knowledge Graph makes them easy to implement by leveraging graph queries to generate inputs for ML models, so that the returned results demonstrate the connections between premises and conclusions.
