Original Research
Interpreting a recurrent neural network’s predictions of ICU mortality risk

https://doi.org/10.1016/j.jbi.2021.103672

Highlights

  • Introduce Learned Binary Masks (LBM) to interpret an RNN’s ICU mortality predictions.

  • Attribute individual RNN predictions to their input features using LBM & KernelSHAP.

  • Aggregate attributions from KernelSHAP and LBM to interpret the RNN at various scales.

  • Introduce a patient data representation that facilitates use of LBM and KernelSHAP.

Abstract

Deep learning has demonstrated success in many applications; however, its use in healthcare has been limited by the lack of transparency into how models generate predictions. Algorithms such as Recurrent Neural Networks (RNNs), when applied to Electronic Medical Records (EMR), introduce additional barriers to transparency because of the RNN's sequential processing and the multi-modal nature of EMR data. This work seeks to improve transparency by: 1) introducing Learned Binary Masks (LBM) as a method for identifying which EMR variables contributed to an RNN model's risk of mortality (ROM) predictions for critically ill children; and 2) applying KernelSHAP for the same purpose. Given an individual patient, LBM and KernelSHAP each generate an attribution matrix that shows the contribution of each input feature to the RNN's sequence of predictions for that patient. Attribution matrices can be aggregated in many ways to facilitate different levels of analysis of the RNN model and its predictions. Three methods of aggregation and analysis are presented: 1) over volatile time periods within individual patient predictions, 2) over populations of ICU patients sharing specific diagnoses, and 3) across the general population of critically ill children.
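As a rough illustration of the aggregation idea described in the abstract, the sketch below assumes an attribution method (LBM or KernelSHAP) has already produced, for each patient, a matrix of shape (time steps × input features). The feature names, window boundaries, and cohort structure are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical example: attributions[p] is a (T_p x F) matrix for patient p,
# as produced by an attribution method such as LBM or KernelSHAP.
rng = np.random.default_rng(0)
feature_names = ["heart_rate", "lactate", "gcs", "creatinine"]  # assumed features
attributions = {p: rng.normal(size=(rng.integers(10, 20), 4)) for p in range(3)}

def window_importance(attr, start, end):
    """Aggregate one patient's attributions over a time window
    (e.g., a volatile period in the predicted risk trajectory)."""
    return np.abs(attr[start:end]).sum(axis=0)

def cohort_importance(attr_list):
    """Aggregate attributions across a cohort (e.g., patients sharing a
    diagnosis) by averaging each patient's per-feature totals."""
    per_patient = np.stack([np.abs(a).sum(axis=0) for a in attr_list])
    return per_patient.mean(axis=0)

# Rank features for one patient's volatile window and for the whole cohort.
patient_scores = window_importance(attributions[0], start=2, end=8)
cohort_scores = cohort_importance(list(attributions.values()))
for name, score in sorted(zip(feature_names, cohort_scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```

The same per-feature summation underlies all three analyses in the abstract; only the set of time steps and patients being pooled changes.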

Keywords

Model interpretation
Recurrent neural networks
Feature importance
Feature attribution
Electronic medical records
Deep learning
