About this item:

Author Notes:

C.A. Ellis, Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, 55 Park Pl NE, Atlanta, GA, 30303, United States. Email: cae

We thank those who collected the FBIRN dataset. Funding for this study was provided by NIH R01MH118695 and NSF 2112455.

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Keywords:

  • Clinical decision support systems
  • Explainable artificial intelligence
  • Gradient-based explainability methods
  • Monte Carlo batch normalization
  • Monte Carlo dropout
  • Neuroimaging

Towards greater neuroimaging classification transparency via the integration of explainability methods and confidence estimation approaches

Journal Title:

Informatics in Medicine Unlocked

Volume:

37

Type of Work:

Article | Post-print: After Peer Review

Abstract:

The field of neuroimaging has increasingly sought to develop artificial intelligence-based models for the automated diagnosis of neurological and neuropsychiatric disorders and for clinical decision support. However, if these models are to be implemented in a clinical setting, transparency will be vital. Two aspects of transparency are (1) confidence estimation and (2) explainability. Confidence estimation approaches indicate how confident a model is in each individual prediction. Explainability methods give insight into the importance of features to model predictions. In this study, we integrate confidence estimation and explainability approaches for the first time. We demonstrate their viability for schizophrenia diagnosis using resting-state functional magnetic resonance imaging (rs-fMRI) dynamic functional network connectivity (dFNC) data. We compare two confidence estimation approaches: Monte Carlo dropout (MCD) and Monte Carlo batch normalization (MCBN). We combine them with two gradient-based explainability approaches, saliency and layer-wise relevance propagation (LRP), and examine their effects on the resulting explanations. We find that MCD often adversely affects model gradients, making it ill-suited for integration with gradient-based explainability methods. In contrast, MCBN does not affect model gradients. Additionally, we find many participant-level differences between regular explanations and the distributions of explanations produced by the combined explainability and confidence estimation approaches. This suggests that using such a confidence estimation approach in a clinical context while outputting explanations only for the regular model would likely not yield adequate explanations. We hope that our findings will provide a starting point for the integration of the two fields, offer useful guidance for future studies, and accelerate the development of transparent neuroimaging clinical decision support systems.
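
The abstract's core recipe, pairing a stochastic confidence estimation approach with a gradient-based explanation computed on each stochastic forward pass, can be sketched briefly. The example below is a minimal, hypothetical PyTorch illustration combining Monte Carlo dropout with simple input-gradient saliency; the DFNCClassifier architecture, feature dimensionality, and number of MC samples are assumptions made for illustration, not the model, data dimensions, or settings used in the study (which also evaluates MC batch normalization and LRP).

```python
# Minimal, hypothetical sketch: Monte Carlo dropout (MCD) confidence estimation
# combined with input-gradient saliency explanations. The model, feature size,
# and sample count are illustrative assumptions, not the study's settings.
import torch
import torch.nn as nn


class DFNCClassifier(nn.Module):
    """Toy MLP over flattened dFNC features (hypothetical architecture)."""

    def __init__(self, n_features=1378, n_classes=2, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)


def mc_dropout_saliency(model, x, n_samples=30, target_class=1):
    """Run repeated stochastic forward passes with dropout left active,
    collecting a softmax probability and an input-gradient saliency map
    for each pass."""
    model.train()  # keep dropout stochastic at inference time (MCD)
    probs, saliencies = [], []
    for _ in range(n_samples):
        x_in = x.detach().clone().requires_grad_(True)
        logits = model(x_in)
        prob = torch.softmax(logits, dim=1)[0, target_class]
        prob.backward()  # gradient of the class probability w.r.t. the input
        probs.append(prob.detach())
        saliencies.append(x_in.grad.detach().abs().squeeze(0))
    return torch.stack(probs), torch.stack(saliencies)


if __name__ == "__main__":
    model = DFNCClassifier()
    x = torch.randn(1, 1378)  # one participant's flattened dFNC features
    probs, sal = mc_dropout_saliency(model, x)
    # Confidence: spread of the predicted probability across MC samples.
    print(f"p(class=1): mean {probs.mean():.3f}, std {probs.std():.3f}")
    # Explanation distribution: per-feature mean and spread of saliency.
    print("saliency summary shapes:", sal.mean(0).shape, sal.std(0).shape)
```

In this sketch, the spread of the per-sample probabilities serves as the confidence estimate, and the per-feature spread of the saliency maps is the kind of explanation distribution that the abstract contrasts with a single explanation from the regular model; substituting MCBN-style sampling for dropout, or LRP for the saliency step, would follow the same overall pattern.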

Copyright information:

This is an Open Access work distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).