A survey of visual analytics for Explainable Artificial Intelligence methods
Graphical abstract
Introduction
Machine learning (ML) techniques have achieved impressive performance in various domains such as medicine, finance, and autonomous vehicle systems with advances in computing power and technologies [1], [2]. Neural networks (NNs), a sub-branch of ML, have become a powerful technique for finding complex patterns in high-dimensional datasets and providing high prediction accuracy in many domains [3]. However, NN-based models have a complex structure, which makes them difficult to interpret and understand. NNs are considered black-box models since their inner workings and decision-making mechanisms are not understandable to humans. This reveals one of the most important issues with black-box models: transparency and explainability [4].
End-users often want to understand how a classifier makes predictions, particularly in sensitive domains such as healthcare, transportation, defense, and finance, where decision making often has a critical impact. Explaining how ML models make predictions by clarifying their working mechanisms would increase the trustworthiness of these models. To address this important need, interpretable ML algorithms have been developed rapidly to expose the inner working mechanisms of black-box models [5]. One of the most important efforts is the re-emerging field of eXplainable Artificial Intelligence (XAI) [6]. According to a Defense Advanced Research Projects Agency (DARPA) technical report [6], XAI is defined as “a suite of machine learning techniques that enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”. Although interpretability and explainability are often used interchangeably by the ML community, there are slight differences between interpretable ML and explainable AI. Miller [7] defines interpretability as “the degree to which an observer can understand the cause of a decision” and treats interpretability and explainability as equivalent. In the ML context, interpretability can be defined as understanding, through reasoning, how a decision or prediction is produced by a machine learning algorithm. The term explainability relates more to the internal working mechanisms of black-box models. Therefore, XAI reveals the internal functioning of black-box models and the rationale behind their decisions through various methods. While domain experts who are inexperienced in ML often want to understand, through reasoning and cause-effect relationships, why a certain decision has been made, ML scientists focus on the internal working mechanisms of ML models and try to understand how their components contribute to certain predictions.
XAI aims to help end-users and domain experts to gain insight into how black-box models make predictions. It also helps ML scientists with the model development process by explaining the decision-making process of the black-box models.
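To make the idea of a post-hoc explanation concrete, the sketch below fits a distance-weighted linear surrogate around a single prediction of an opaque model, in the spirit of perturbation-based local explanation methods such as LIME (surveyed later in this paper). It is an illustrative sketch only, not a method proposed by the survey: `black_box` is a hypothetical nonlinear scorer standing in for a trained NN, and `explain_locally` is a hypothetical helper name.

```python
import numpy as np

# Hypothetical black-box scorer: a nonlinear function standing in for a trained NN.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(np.sin(X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2])))

def explain_locally(instance, n_samples=1000, kernel_width=0.75, seed=0):
    """Fit a distance-weighted linear surrogate around one instance (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise and query the black box.
    noise = rng.normal(scale=0.3, size=(n_samples, instance.size))
    samples = instance + noise
    preds = black_box(samples)
    # Weight perturbed samples by proximity to the instance (RBF kernel).
    weights = np.exp(-np.sum(noise ** 2, axis=1) / kernel_width ** 2)
    # Weighted least squares: scale rows by sqrt(weight), append an intercept column.
    A = np.hstack([samples, np.ones((n_samples, 1))]) * np.sqrt(weights)[:, None]
    b = preds * np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

instance = np.array([0.2, 0.5, -0.1])
importance = explain_locally(instance)
```

The signs of the surrogate coefficients indicate which features locally push the black-box score up or down, which is exactly the kind of per-prediction evidence a VA system can then visualize, e.g. as a bar chart of feature contributions.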
Visual analytics (VA) is an inherently suitable way to represent data and models understandably, particularly to those who are inexperienced in ML. VA has often been used to provide interpretable ML models by understanding [1], [5], diagnosing [8], [9], and steering [10] the model and the underlying data through an interactive visual interface. Combining VA techniques with XAI algorithms would provide an ideal platform for clarifying the black-box structure of ML models. However, only a few recent works combine VA with the current generation of XAI methods to make ML models explainable to humans. Therefore, we target this review at the following groups: (1) VA scientists who would like to adopt XAI methods to interpret NNs, (2) ML scientists, particularly in the field of XAI, who may need VA to interpret their work, and (3) end-users and domain experts who use NNs for data classification and prediction. This survey aims to identify current trends and challenges of VA in interpreting black-box models through XAI methods and to present future research directions in this area. Within the study, we discover and present how VA can support a better interpretation of NN models with XAI methods.
The explainability and interpretability of black-box models are recent hot topics, and many studies have been conducted in the field. Most studies have focused on the interpretation of NNs due to their state-of-the-art performance in various domains. Therefore, we limit our study to papers that focus on the interpretation of NNs among black-box models. There are several literature reviews of XAI methods [4], [11], [12], [13], [14] and of visual analytics for interpretable machine learning [2], [15], [16], [17], [18], [19], [20], respectively. However, to our knowledge, there is no literature review that focuses on VA research combined with XAI methods. Such a review will help to identify potential future research directions for a better interpretation of neural networks through XAI methods in the field of visual analytics. Therefore, we reviewed 55 papers that contribute to the interpretation of NN models via visual analytics, with and without XAI methods, in terms of model usage and visual approach. Model usage refers to the techniques used to explain NN models in the fields of VA and XAI, respectively. The visual approach focuses on analyzing how visualization techniques are used in data and architecture representations, performance analysis, and local and global explanations. Our main contributions are as follows:
- We present a review of VA research in interpreting deep learning, both with and without the adoption of XAI methods.
- We review the literature based on (1) model usage in visual interpretation and in XAI algorithms, respectively, and (2) visual approach, summarizing commonly used visualization techniques.
- We highlight the current trends and limitations, and discuss future research directions of VA that adopts XAI methods for NN models.
The rest of the paper is organized as follows: Section 2 provides theoretical background about black-box models and XAI methods. Section 3 shows the methodology of this review. Section 4 reviews visual interpretation papers and Section 5 reviews visual-based XAI papers based on the model usage and visual approaches, respectively. Section 6 states the current trends and discusses future directions of VA for XAI. Section 7 concludes the paper.
Theoretical background
This section provides basic information about black-box models and definitions, concepts, and techniques related to XAI. The section emphasizes the need for explanations of black-box models through XAI methods.
Methodology
This section presents the paper selection process and defines the strategies to classify the papers for our review.
Visual interpretation
This section focuses on techniques and visualization approaches to explain NN models without adopting XAI.
Visual-based XAI (vXAI)
vXAI is an emerging area of research in the field of VA. Compared with visual interpretation based studies, vXAI papers are very limited. This section summarizes how visual approaches and model usage are employed in visual-based XAI papers to make NN models more transparent by adopting XAI techniques. The subsections are created based on the paper categorization scheme shown in Fig. 4.
Discussion, opportunities and future work
In the field of VA, we found very few research studies focusing on adopting XAI methods to explain ML. Therefore, we developed several research questions to address research needs and directions in this area. This section presents the current trends, research challenges, and opportunities for future work in vXAI through the research questions predetermined by this survey.
1. How can VA systems be utilized to support the interpretation of NNs through XAI techniques?
Scalability in data
Conclusion
To gain trust in the decisions of black-box models, XAI research has been growing rapidly, and many XAI methods have been proposed to make the results of AI understandable to humans. This survey summarized the current state, challenges, and future directions of developing better visual analytics for XAI methods in interpreting neural networks. We have reviewed the interpretability of VA, with and without involving XAI methods, in terms of both model usage and visual approach.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References (99)
- et al., Opening the black box: interpretable machine learning for geneticists, Trends Genet (2020)
- Explanation in artificial intelligence: Insights from the social sciences, Artif Intell (2019)
- et al., Explaining the black-box model: A survey of local interpretation methods for deep neural networks, Neurocomputing (2021)
- et al., A task-and-technique centered survey on visual analytics for deep learning model engineering, Comput Graph (2018)
- et al., FeatureExplorer: Interactive feature selection and exploration of regression models for hyperspectral images
- et al., CrossVis: A visual analytics system for exploring heterogeneous multivariate data with applications to materials and climate sciences, Graph Vis Comput (2020)
- et al., A visual analytics system for multi-model comparison on clinical data predictions, Vis Inf (2020)
- et al., Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information, Decis Support Syst (2020)
- et al., Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach, Artif Intell Med (2019)
- et al., Towards explainable deep neural networks (xDNN), Neural Netw (2020)
- ActiVis: Visual exploration of industry-scale deep neural network models, IEEE Trans Vis Comput Graph
- The state of the art in enhancing trust in machine learning models with the use of visualizations, Comput Graph Forum
- Explainable artificial intelligence (XAI) approaches and deep meta-learning models
- Towards better analysis of deep convolutional neural networks, IEEE Trans Vis Comput Graph
- DARPA’s explainable artificial intelligence (XAI) program, AI Mag
- LSTMVis: A tool for visual analysis of hidden state dynamics in recurrent neural networks, IEEE Trans Vis Comput Graph
- ProtoSteer: Steering deep sequence model with prototypes, IEEE Trans Vis Comput Graph
- Explainable AI: A brief survey on history, research areas, approaches and challenges
- Explainable artificial intelligence and machine learning: A reality rooted perspective
- Opportunities and challenges in explainable artificial intelligence (XAI): A survey
- A survey of surveys on the use of visualization for interpreting machine learning models, Inf Vis
- Visual analytics in deep learning: An interrogative survey for the next frontiers, IEEE Trans Vis Comput Graph
- Taxonomy and survey of interpretable machine learning method
- Peeking inside the black-box: a survey on explainable artificial intelligence (XAI), IEEE Access
- Visual analytics for explainable deep learning, IEEE Comput Graph Appl
- Pattern recognition and neural networks
- Publication-ready NN architecture schematics
- Explainable AI: from black box to glass box, J Acad Mark Sci
- Accessible cultural heritage through explainable artificial intelligence
- Post-hoc explanation of black-box classifiers using confident itemsets, Expert Syst Appl
- Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities, and challenges toward responsible AI, Inf Fusion
- Recent trends in XAI: A broad overview on current approaches, Methodol Interact ICCBR Workshops
- ‘Why should i trust you?’ Explaining the predictions of any classifier
- Grad-CAM: Visual explanations from deep networks via gradient-based localization, Int J Comput Vis
- On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation, PLoS One
- Explainable AI meets persuasiveness: Translating reasoning results into behavioral change advice, Artif Intell Med
- Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model, Ann Appl Stat
- Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission
- Distill-and-Compare: auditing black-box models using transparent model distillation
- A unified approach to interpreting model predictions
- Learning important features through propagating activation differences
- Manual on setting up, using, and understanding random forests v3, Tech Rep
- Axiomatic attribution for deep networks
- Deep inside convolutional networks: Visualising image classification models and saliency maps
- Anchors: High-precision model-agnostic explanations
- Learning deep features for discriminative localization
- FeatureInsight: Visual support for error-driven feature ideation in text classification
- INFUSE: Interactive feature selection for predictive modeling of high dimensional data, IEEE Trans Vis Comput Graph