Computers & Graphics
Volume 102, February 2022, Pages 502-520

Survey Paper
A survey of visual analytics for Explainable Artificial Intelligence methods

https://doi.org/10.1016/j.cag.2021.09.002

Highlights

  • A comprehensive survey of visual analytics for interpreting neural networks, particularly approaches that adopt explainable artificial intelligence (XAI) methods, is conducted.

  • We reviewed the literature based on model usage and visual approaches.

  • We summarized the visual approaches commonly used to support the illustration of XAI methods for various types of data and machine learning models; however, a generic approach is still needed for the field.

  • We listed several future research directions, including data manipulation, scalability and bias in data representation, and generalizable real-time visualizations integrating XAI.

Abstract

Deep learning (DL) models have achieved impressive performance in various domains such as medicine, finance, and autonomous vehicle systems with advances in computing power and technologies. However, due to the black-box structure of DL models, the decisions of these learning models often need to be explained to end-users. Explainable Artificial Intelligence (XAI) provides explanations of black-box models to reveal their behavior and underlying decision-making mechanisms through tools, techniques, and algorithms. Visualization techniques help to present model and prediction explanations in a more understandable, explainable, and interpretable way. This survey paper aims to review current trends and challenges of visual analytics in interpreting DL models by adopting XAI methods and to present future research directions in this area. We reviewed the literature based on two aspects: model usage and visual approaches. We addressed several research questions based on our findings and then discussed missing points, research gaps, and potential future research directions. This survey provides guidelines for developing a better interpretation of neural networks through XAI methods in the field of visual analytics.

Introduction

Machine learning (ML) techniques have achieved impressive performance in various domains such as medicine, finance, and autonomous vehicle systems with advances in computing power and technologies [1], [2]. Neural Networks (NNs), as a sub-branch of ML, have become a powerful technique for finding complex patterns in high-dimensional datasets and providing high prediction accuracy in many domains [3]. However, NN-based models have a complex structure, which makes them difficult to interpret and understand. NNs are considered black-box models since their inner workings and decision-making mechanisms are not readily understandable by humans. This highlights one of the most important issues with black-box models: transparency and explainability [4].

End-users often want to understand how a classifier makes predictions, particularly in sensitive domains, such as healthcare, transportation, defense, and finance, where decision making often has a critical impact. Explaining how predictions are made by ML models by clarifying their working mechanisms would increase the trustworthiness of ML models. To address this important need, interpretable ML algorithms have been developed rapidly to uncover the inner working mechanisms of black-box models [5]. One of the most important efforts is the development of the re-emerging field of eXplainable Artificial Intelligence (XAI) [6]. According to the Defense Advanced Research Projects Agency (DARPA) technical report [6], XAI is defined as “a suite of machine learning techniques that enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”. Although interpretability and explainability are often used interchangeably by the ML community, there are slight differences between the definitions of interpretable ML and explainable AI. Miller [7] defines interpretability as “the degree to which an observer can understand the cause of a decision” and equates the definitions of interpretability and explainability. In the ML context, interpretability can be defined as understanding, with reasoning, how a decision or prediction is produced by a machine learning algorithm. The term explainability is more related to the internal working mechanisms of black-box models. Therefore, XAI reveals the internal functioning of black-box models and the rationale behind their decisions through various methods. While domain experts who are inexperienced in ML often want to understand, through reasoning and cause–effect relationships, why a certain decision has been made, ML scientists focus on the internal working mechanisms of ML models and try to understand how their components contribute to certain predictions. XAI aims to help end-users and domain experts gain insight into how black-box models make predictions. It also helps ML scientists with the model development process by explaining the decision-making process of black-box models.
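
To make the notion of a post-hoc explanation concrete, the following minimal sketch (not taken from any surveyed system) attributes a single prediction of a black-box classifier by perturbing one feature at a time, in the spirit of the occlusion- and LIME-style methods cited in the references; the dataset, model, and the mean-value perturbation are illustrative assumptions only.

```python
# Minimal sketch (not from any surveyed system): a model-agnostic,
# perturbation-based attribution for one prediction of a black-box
# classifier. The dataset, model, and the "replace a feature with its
# dataset mean" perturbation are illustrative assumptions only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def local_attribution(model, x, background):
    """Score each feature by how much the predicted probability of the
    positive class drops when that feature is replaced by a background value."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = np.zeros(len(x))
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = background[i]          # "remove" feature i
        scores[i] = base - model.predict_proba(x_pert.reshape(1, -1))[0, 1]
    return scores                          # positive = supports the positive class

scores = local_attribution(model, X[0], background=X.mean(axis=0))
for i in np.argsort(-np.abs(scores))[:5]:  # top-5 most influential features
    print(f"{feature_names[i]:25s} {scores[i]:+.4f}")
```

A VA tool would typically render such attribution scores as a ranked bar chart or a heatmap alongside the instance being explained.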

Visual analytics (VA) is an inherent way to represent data and models understandably, particularly to those who are inexperienced in ML. VA has often been used to provide interpretable ML models by understanding [1], [5], diagnosing [8], [9], and steering [10] the model and underlying data through an interactive visual interface. Combining the techniques of VA with XAI algorithms would present an ideal platform for clarifying the black-box structure of ML. However, there are only a few recent works combining VA with the current stage of XAI methods to provide explainable ML models to humans. We therefore target this review at the following groups: (1) VA scientists who would like to adopt XAI methods to interpret NNs, (2) ML scientists, particularly in the field of XAI, who may need VA to interpret their work, and (3) end-users/domain experts who use NNs for data classification and prediction. This survey paper aims to identify current trends and challenges of VA in interpreting black-box models by adopting XAI methods and to present future research directions in this area. Within the study, we would like to discover and present how VA can support a better interpretation of NN models with XAI methods.
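
As an example of the kind of XAI output that a VA system would visualize, the sketch below computes a vanilla gradient saliency map in the spirit of Simonyan et al. (cited among the references); the tiny untrained CNN and the random input are placeholder assumptions, and a real VA front end would normalize the map and overlay it on the input image.

```python
# Minimal sketch (illustrative only): a vanilla gradient saliency map,
# one kind of XAI output that VA systems render as a heatmap overlay.
# The tiny untrained CNN and random input stand in for a real model and image.
import torch
import torch.nn as nn

model = nn.Sequential(                                   # placeholder "black-box" CNN
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)     # stand-in input image
logits = model(image)
target = logits.argmax(dim=1).item()                      # explain the predicted class

# Gradient of the target-class score with respect to the input pixels.
logits[0, target].backward()
saliency = image.grad.abs().max(dim=1).values[0]          # H x W relevance map

# A VA front end would normalize this map and overlay it on the image.
print(saliency.shape)                                     # torch.Size([64, 64])
```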

The explainability and interpretability of black-box models are recent hot topics, and many studies have been conducted in the field. Most of these studies have focused on the interpretation of NNs due to their state-of-the-art performance in various domains. Therefore, we limit our study to reviewing papers that focus on the interpretation of NNs among black-box models. There are several literature reviews of XAI methods [4], [11], [12], [13], [14] and of visual analytics for interpretable machine learning [2], [15], [16], [17], [18], [19], [20], respectively. However, to our knowledge, there is no literature review that focuses on VA research combined with XAI methods. Such a review will help to identify potential future research directions for developing a better interpretation of neural networks through XAI methods in the field of visual analytics. Therefore, we reviewed 55 papers that contributed to the interpretation of NN models via visual analytics, with and without XAI methods, in terms of model usage and visual approach. Model usage refers to the techniques used to explain NN models in the fields of VA and XAI, respectively. The visual approach mainly focuses on analyzing how visualization techniques are used in data and architecture representations, performance analysis, and local and global explanations. Our main contributions are as follows:

  • We present a review of VA research on interpreting deep learning models, both with and without adopting XAI methods.

  • We reviewed the literature based on (1) model usage in visual interpretation and XAI algorithms, respectively, and (2) visual approach, where commonly used visualization techniques are summarized.

  • We highlight the current trends and limitations, and discuss future research directions of VA that adopts XAI for NN models.

The rest of the paper is organized as follows: Section 2 provides theoretical background about black-box models and XAI methods. Section 3 shows the methodology of this review. Section 4 reviews visual interpretation papers and Section 5 reviews visual-based XAI papers based on the model usage and visual approaches, respectively. Section 6 states the current trends and discusses future directions of VA for XAI. Section 7 concludes the paper.

Section snippets

Theoretical background

This section provides basic information about black-box models and definitions, concepts, and techniques related to XAI. The section emphasizes the need for explanations of black-box models through XAI methods.

Methodology

This section presents the paper selection process and defines the strategies to classify the papers for our review.

Visual interpretation

This section focuses on techniques and visualization approaches to explain NN models without adopting XAI.

Visual-based XAI (vXAI)

vXAI is an emerging area of research in the field of VA. Compared with visual interpretation based studies, vXAI papers are very limited. This section summarizes how visual approaches and model usage are used in visual-based XAI papers to make NN models more transparent by adopting XAI techniques. The subsections are created based on the paper categorization scheme shown in Fig. 4.

Discussion, opportunities and future work

In the field of VA, we found very few research studies focusing on adopting XAI methods to explain ML. Therefore, we developed several research questions to address research needs and directions in this area. This section presents the current trends, research challenges, and opportunities for future work in vXAI through the research questions predetermined by this survey.

1. How can VA systems be utilized to support the interpretation of NNs through XAI techniques?

Scalability in data

Conclusion

To gain trust in the decisions of black-box models, XAI research has been growing rapidly, and many XAI methods have been proposed to make the results of AI understandable to humans. This survey summarized the current state, challenges, and future directions of developing better visual analytics for XAI methods in interpreting neural networks. We have reviewed the interpretability of VA with and without involving XAI methods in terms of both model usage and visual approach.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (99)

  • Kahng M. et al. ActiVis: Visual exploration of industry-scale deep neural network models. IEEE Trans Vis Comput Graph (2018)
  • Chatzimparmpas A. et al. The state of the art in enhancing trust in machine learning models with the use of visualizations. Comput Graph Forum (2020)
  • Daglarli E. Explainable artificial intelligence (XAI) approaches and deep meta-learning models
  • Liu M. et al. Towards better analysis of deep convolutional neural networks. IEEE Trans Vis Comput Graph (2017)
  • Gunning D. et al. DARPA’s explainable artificial intelligence (XAI) program. AI Mag (2019)
  • Strobelt H. et al. LSTMVis: A tool for visual analysis of hidden state dynamics in recurrent neural networks. IEEE Trans Vis Comput Graph (2018)
  • Chung S, Suh S, Park C, Kang K, Choo J, Kwon BC. ReVACNN: Real-time visual analytics for convolutional neural network...
  • Ming Y. et al. ProtoSteer: Steering deep sequence model with prototypes. IEEE Trans Vis Comput Graph (2020)
  • Xu F. et al. Explainable AI: A brief survey on history, research areas, approaches and challenges
  • Emmert-Streib F. et al. Explainable artificial intelligence and machine learning: A reality rooted perspective (2020)
  • Das A. et al. Opportunities and challenges in explainable artificial intelligence (XAI): A survey (2020)
  • Chatzimparmpas A. et al. A survey of surveys on the use of visualization for interpreting machine learning models. Inf Vis (2020)
  • Hohman F. et al. Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Trans Vis Comput Graph (2019)
  • Das S. et al. Taxonomy and survey of interpretable machine learning method
  • Adadi A. et al. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access (2018)
  • Choo J. et al. Visual analytics for explainable deep learning. IEEE Comput Graph Appl (2018)
  • Ripley B.D. Pattern recognition and neural networks (2007)
  • Publication-ready NN architecture schematics (2016)
  • Rai A. Explainable AI: From black box to glass box. J Acad Mark Sci (2020)
  • Rodríguez N. et al. Accessible cultural heritage through explainable artificial intelligence
  • Moradi M. et al. Post-hoc explanation of black-box classifiers using confident itemsets. Expert Syst Appl (2021)
  • Arrieta A. et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities, and challenges toward responsible AI. Inf Fusion (2019)
  • Schoenborn J.M. et al. Recent trends in XAI: A broad overview on current approaches. Methodol Interact ICCBR Workshops (2019)
  • Ribeiro M.T. et al. ‘Why should I trust you?’ Explaining the predictions of any classifier
  • Selvaraju R.R. et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization. Int J Comput Vis (2020)
  • Bach S. et al. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS One (2015)
  • Dragoni M. et al. Explainable AI meets persuasiveness: Translating reasoning results into behavioral change advice. Artif Intell Med (2020)
  • Letham B. et al. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model. Ann Appl Stat (2015)
  • Caruana R. et al. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission
  • Tan S. et al. Distill-and-Compare: Auditing black-box models using transparent model distillation
  • Lundberg S.M. et al. A unified approach to interpreting model predictions
  • Shrikumar A. et al. Learning important features through propagating activation differences
  • Breiman L. Manual on setting up, using, and understanding random forests v3. Tech Rep (2002)
  • Sundararajan M. et al. Axiomatic attribution for deep networks
  • Simonyan K. et al. Deep inside convolutional networks: Visualising image classification models and saliency maps
  • Ribeiro M.T. et al. Anchors: High-precision model-agnostic explanations
  • Zhou B. et al. Learning deep features for discriminative localization
  • Brooks M. et al. FeatureInsight: Visual support for error-driven feature ideation in text classification
  • Krause J. et al. INFUSE: Interactive feature selection for predictive modeling of high dimensional data. IEEE Trans Vis Comput Graph (2014)