Understanding Gradient Rollback

For many, including scientific researchers, artificial intelligence (AI) is a mystery: its reasoning is opaque. AI systems and models are often referred to as “black boxes” because we do not understand the logic behind what they do.

Neural networks are powerful artificial intelligence tools trained to recognize meaningful relationships in data and predict new knowledge. Nonetheless, it is not commonly understood how neural networks function or how they arrive at their predictions. When AI systems affect our lives, we need to ensure that their predictions and decisions are reasonable.

NEC Laboratories Europe has recently achieved a milestone in explainable AI (XAI) research by developing the method Gradient Rollback, which opens neural “black box” models and explains their predictions. Gradient Rollback reveals the training data that has the greatest influence on a prediction. Users can judge how plausible a prediction is by viewing its explanation (the training instances with the highest influence). The more plausible a prediction is, the greater the likelihood that it will be trusted, a key factor in AI user adoption.

Creating AI systems that humans can understand
AI neural network systems take data as input and run computations that produce an output. Input and desired output depend on the task of interest. For example, input could be information about a user and output could be movie recommendations.

Typically, the computation that an AI neural network system applies to input data is arrived at using machine-learning techniques and training data. Training data is a set of input-output instances collected for the required task. Training a neural network means iterating over the different training instances and optimizing the network’s computations. Applying the optimized computations to an input then produces the correct output with high probability.
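
As a rough illustration of this training loop, the minimal sketch below uses plain stochastic gradient descent; the names train and loss_grad and the chosen learning rate are our own illustrative assumptions, not part of NEC’s system.

```python
import numpy as np

def train(params, training_data, loss_grad, lr=0.01, epochs=10):
    """Iterate over the training instances and repeatedly nudge the model
    parameters so that each known input-output pair becomes more likely."""
    for _ in range(epochs):
        for x, y in training_data:                          # one input-output training instance
            params = params - lr * loss_grad(params, x, y)  # small step that reduces the error on this instance
    return params
```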

Once the training process is complete, the neural model is ready and we can give the AI system new, previously unseen input to produce novel predictions that are accurate with high probability. Still, a prediction can be wrong. If we want to use predictions that have a real-life impact, it is important that we understand how they were made.

Understanding how training instances affect a neural network
Gradient Rollback keeps track of how each training instance changes a model during training. Based on this information, it can then explain a prediction of the neural model by identifying the training instances that made that prediction likely.
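
The core bookkeeping can be sketched as follows. This is only a simplified reading of the idea, assuming plain stochastic gradient descent over one training instance at a time; the names train_and_track and influence are illustrative, not NEC’s implementation.

```python
import numpy as np
from collections import defaultdict

# Accumulated parameter change attributable to each training instance
# (instances are assumed to be hashable, e.g. (subject, relation, object) tuples).
influence = defaultdict(float)

def train_and_track(params, training_data, loss_grad, lr=0.01, epochs=10):
    """Train with SGD while remembering, for every training instance,
    the total update it contributed to the model parameters."""
    for _ in range(epochs):
        for instance in training_data:
            update = lr * loss_grad(params, instance)
            params = params - update
            influence[instance] = influence[instance] + update  # record this instance's contribution
    return params
```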

We apply Gradient Rollback to a specific class of neural networks called neural matrix factorization models. These are widely used in different industries as the basis for online recommender and knowledge base completion (KBC) systems. Common examples of recommender systems include news websites and aggregators, social media platforms such as LinkedIn, and e-commerce portals like Amazon. Knowledge base completion automatically derives new knowledge from a given set of known facts. Possible applications of KBC are wide-ranging and include investigating medical drug interactions or identifying insurance fraud. Gradient Rollback improves the current application of neural matrix factorization models by providing an explanation of why a prediction is likely.

The importance of Gradient Rollback in generating explanations: mathematically provable bounds on its approximation error
For knowledge base completion, training data consists of training instances called triples, each made up of a subject, a relation and an object. The figure below depicts a simple neural model used to predict how likely it is that one country has an embassy in another country. The figure contains several triples, for example: “China”, “has treaties with” and “Egypt.” “China” is the subject, “has treaties with” the relation and “Egypt” the object. Overall, this triple informs us that China has treaties with Egypt.

Figure: Geopolitical knowledge graph revealing simple relationships between three countries

In this type of example, a neural model will map each country and each relation to a vector representation (a series of numbers that describes a subject, relation or object; in this case, a country or a relation). With a scoring function, the vector representations of a triple can be combined into a numerical score that represents how likely the triple is to be true. A simple yet powerful scoring function computes the inner product of the triple’s three vectors: the vectors are multiplied element by element and the results are summed.
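
As a small illustration, with hand-picked hypothetical numbers rather than learned representations, such a scoring function could look like this:

```python
import numpy as np

# Hypothetical vector representations; in a real system these are learned from data.
embedding = {
    "China":             np.array([0.9, 0.1, 0.4]),
    "Egypt":             np.array([0.7, 0.2, 0.5]),
    "has embassy in":    np.array([0.8, 0.3, 0.6]),
    "has treaties with": np.array([0.6, 0.4, 0.2]),
}

def score(subject, relation, obj):
    """Inner product of the triple's three vectors: multiply them element by
    element and sum the result. A higher score means the triple is judged
    more likely to be true."""
    return float(np.sum(embedding[subject] * embedding[relation] * embedding[obj]))

print(score("China", "has embassy in", "Egypt"))  # -> 0.63
```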

Given a set of known triples, the AI model learns good vector representations by increasing the scores of the known triples. Once training is complete, the AI model can then infer new relationships that are also likely to be true. In the figure above, the system predicts that China is likely to have an embassy in Egypt (depicted by the dashed line). Other models would return only this prediction. In contrast, Gradient Rollback also explains the prediction by revealing the training instances (countries connected by solid lines) that made it likely. For example, the figure tells us that China likely has an embassy in Egypt because China has treaties with Egypt.
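
Building on the influence bookkeeping sketched earlier, the explanation step can be approximated by “rolling back” each training instance’s accumulated updates and checking how much the prediction’s score drops; again, explain and score_fn are illustrative names under those simplifying assumptions, not NEC’s implementation.

```python
def explain(params, prediction, score_fn, candidates):
    """Rank candidate training instances by how much the prediction's score
    drops when that instance's accumulated parameter updates are rolled back."""
    current = score_fn(params, prediction)
    drops = {
        d: current - score_fn(params - influence[d], prediction)  # roll back instance d's contribution
        for d in candidates
    }
    # The instances whose rollback lowers the score the most form the explanation.
    return sorted(drops.items(), key=lambda item: item[1], reverse=True)
```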

Applied to neural matrix factorization models, Gradient Rollback has a mathematically proven bound on its approximation error. We can also show empirically that Gradient Rollback’s explanations are more faithful (correct) than those of previous methods and baselines, and that it derives explanations faster.

Advancing Drug-Drug Interaction Discovery
NEC research applies neural matrix factorization models to great effect. With Gradient Rollback we can enrich these applications by providing explanations alongside predictions to help answer some of technology’s most challenging research questions.

One application example is drug-drug interaction (DDI) discovery. A complicated medical question is, “What happens if somebody takes two pharmaceutical drugs at the same time: how will these two drugs interact with each other?” Testing the effect of every drug-drug combination in a physical environment is impractical; there are so many combinations and variables that experimenting with different drug combinations is prohibitively expensive. A far more efficient process is to train an AI model to make novel predictions about what occurs when a patient takes two drugs at the same time. Biomedical researchers can then focus physical testing on the most promising combinations.

Using NEC technology, biomedical researchers now know not just which drug-drug combinations to test but also why. This allows the researchers to narrow down the selection of combinations further, lowering the cost of DDI experiments.

With Gradient Rollback, neural models can now provide not just a prediction but also an explanation. This offers the potential to enhance any type of deep predictive analysis that uses this type of AI, such as climate change modelling and pandemic infection modelling.

The research about Gradient Rollback was presented at the 35th AAAI Conference on Artificial Intelligence. You can read more about Gradient Rollback in the early release version of the research paper, “Explaining Neural Matrix Factorization with Gradient Rollback” (C. Lawrence et al.).

Dr. Carolin Lawrence
