Axiomatic Local Interpretability of Deep Neural Networks beyond Euclidean Data

Type: Master Thesis

Student: Christina Aigner

Supervisors: Sebastian Pölsterl, Christian Wachinger

Status: Finished on 15.04.2020

Abstract:

Deep Neural Networks (DNNs) have enormous potential to learn from complex biomedical data. In particular, DNNs have been used to seamlessly fuse heterogeneous information from neuroanatomy, genetics, biomarkers, and neuropsychological tests for highly accurate Alzheimer's disease diagnosis. However, their black-box nature remains a barrier to adoption in the clinic, where interpretability is essential. This thesis proposes Shapley Value Explanation of Heterogeneous Neural Networks (SVEHNN), a novel approach for explaining an Alzheimer's diagnosis made by a DNN from neuroanatomical 3D point clouds and tabular biomarkers. The explanations are based on the Shapley value, the unique attribution method that satisfies all fundamental axioms for local explanations previously established in the literature; SVEHNN therefore has many desirable characteristics that previous work lacks. To avoid the exponential time complexity of the exact Shapley value, a given DNN is transformed into a Lightweight Probabilistic Deep Network without re-training, reducing the complexity to quadratic in the number of features. Experiments on synthetic and real data show that we can closely approximate the exact Shapley value at a drastically reduced runtime and reveal the hidden knowledge the network has learned from the data. A medical evaluation confirms that the explanations are plausible and the predictions are likely trustworthy.
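
For reference, the Shapley value invoked above is the standard game-theoretic attribution: given a value function v on the feature set N (here, loosely, the network's prediction when only a coalition of features is present), the contribution of feature i is

\[
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!} \bigl( v(S \cup \{i\}) - v(S) \bigr)
\]

The sum ranges over all 2^{|N|-1} coalitions per feature, which is the exponential cost the thesis sidesteps.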
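
To make the definition concrete, here is a minimal, hypothetical Python sketch of the brute-force computation. It is not the SVEHNN implementation from the thesis; `exact_shapley` and `value_fn` are illustrative names, with `value_fn` standing in for the network's prediction when features outside the coalition are masked or marginalized.

```python
from itertools import combinations
from math import factorial

def exact_shapley(value_fn, n_features):
    """Brute-force Shapley values of each feature under `value_fn`.

    Enumerates all 2^(n_features - 1) coalitions per feature, which is
    the exponential cost the thesis avoids; this sketch only makes the
    definition concrete.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            # Shapley weight for coalitions of this size.
            w = factorial(size) * factorial(n_features - size - 1) / factorial(n_features)
            for subset in combinations(others, size):
                s = frozenset(subset)
                phi[i] += w * (value_fn(s | {i}) - value_fn(s))
    return phi

# Toy value function: the "model output" of a coalition is the sum of
# per-feature weights, so the Shapley values recover those weights.
weights = [0.5, 1.5, -1.0]
print(exact_shapley(lambda s: sum(weights[j] for j in s), 3))
# -> [0.5, 1.5, -1.0] (up to floating-point error)
```

Even in this toy setting the runtime grows as O(2^n), which is why the quadratic-time approximation described in the abstract matters for realistic feature counts.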