Publication

Validation of machine learning models to detect amyloid pathologies across institutions


Persistent URL
Last modified
  • 05/15/2025
Type of Material
Authors
  • Juan C. Vizcarra, Georgia Institute of Technology
  • Marla Gearing, Emory University
  • Michael J. Keiser, University of California San Francisco
  • Jonathan Glass, Emory University
  • Brittany N. Dugger, University of California Davis
  • David Gutman, Emory University
Language
  • English
Date
  • 2020-04-28
Publisher
  • BMC Publishing
Publication Version
Copyright Statement
  • © The Author(s). 2020.
License
Final Published Version (URL)
Title of Journal or Parent Work
Volume
  • 8
Issue
  • 1
Start Page
  • 59
End Page
  • 59
Grant/Funding Information
  • The work was funded by NIH grants: AG025688 (Goizueta Alzheimer's Disease Center at Emory University), CA194362 (U24, Gutman PI), AG010129 (UC Davis Alzheimer's Disease Research Center), and AG062517 (R01, Dugger PI); by grant number 2018–191905 from the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation (MJK); and by a research grant from the University of California Office of the President (MRI-19-599956, Dugger PI).
Supplemental Material (URL)
Abstract
  • Semi-quantitative scoring schemes such as that of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) are the most commonly used methods in Alzheimer's disease (AD) neuropathology practice. Computational approaches based on machine learning have recently generated quantitative scores for whole slide images (WSIs) that correlate highly with human-derived semi-quantitative scores, such as those of CERAD, for AD pathology. However, the robustness of such models has yet to be tested across cohorts. To validate previously published machine learning algorithms using convolutional neural networks (CNNs) and to determine whether pathological heterogeneity alters algorithm-derived measures, 40 cases from the Goizueta Emory Alzheimer's Disease Center brain bank were evaluated, displaying an array of pathological diagnoses (including AD with and without Lewy body disease (LBD) and/or TDP-43-positive inclusions) and levels of Aβ pathologies. Furthermore, to provide deeper phenotyping, amyloid burden was compared in gray matter versus whole tissue, and quantitative CNN scores for both correlated significantly with CERAD-like scores. Quantitative scores also showed clear stratification of cases with AD pathologies, with or without additional diagnoses (including LBD and TDP-43 inclusions), versus cases with no significant neurodegeneration (control cases), as well as by NIA-Reagan scoring criteria. Specifically, the concomitant diagnosis group of AD + TDP-43 showed a significantly greater CNN score for cored plaques than the AD group. Finally, we report that whole-tissue computational scores correlate better with CERAD-like categories than computational scores from a field of view with the densest pathology, which is the standard of practice in neuropathological assessment per CERAD guidelines.
Together, these findings validate CNN models as robust to cohort variations and provide additional proof of concept for future studies incorporating machine learning algorithms into neuropathological practice.
Author Notes
Keywords
Research Categories
  • Biology, Neuroscience
  • Chemistry, Pharmaceutical
  • Engineering, Biomedical
  • Health Sciences, Pathology
