Vagus nerve stimulation (VNS) is a potential treatment option for gastrointestinal (GI) diseases. The present study aimed to understand the physiological effects of VNS on GI function, which is crucial for developing more effective adaptive closed-loop VNS therapies for GI diseases. Electrogastrography (EGG), which measures gastric electrical activities (GEAs) as a proxy for GI function, was employed in our investigation. We introduced a recording schema that allowed us to deliver electrical VNS and record EGG simultaneously. While this setup created a unique model for studying the effects of VNS on GI function and provided an excellent testbed for designing advanced neuromodulation therapies, the resulting data were noisy and heterogeneous and required specialized analysis tools. We therefore formulated a systematic and interpretable approach to quantifying the physiological effects of electrical VNS on GEAs in ferrets using signal processing and machine learning techniques. Our analysis pipeline included pre-processing steps, feature extraction in both the time and frequency domains, a voting algorithm for feature selection, and model training and validation. Our results indicated that the electrophysiological changes induced by VNS were best characterized by a distinct set of features for each classification scenario. Our findings also demonstrated that feature selection improved classification performance and facilitated representation learning.
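The voting step for feature selection described above can be sketched as follows. This is a hypothetical illustration, not the study's actual pipeline: the selectors, synthetic data, and top-k voting rule are assumptions, with scikit-learn's univariate tests and random-forest importances standing in for the voting ensemble.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif

def vote_features(X, y, top_k=3):
    """Count how many selectors rank each feature among their top_k."""
    scores = [
        mutual_info_classif(X, y, random_state=0),   # nonlinear dependence
        f_classif(X, y)[0],                          # linear (ANOVA F) score
        RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
    ]
    votes = np.zeros(X.shape[1], dtype=int)
    for s in scores:
        votes[np.argsort(s)[-top_k:]] += 1           # one vote per selector
    return votes

# Synthetic stand-in for time/frequency EGG features: only feature 0 is informative
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 40)
X = rng.normal(size=(80, 6))
X[:, 0] += 3.0 * y
```

Features can then be kept when their vote count clears a threshold, which is one way such a voting scheme makes the selection interpretable.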
Research data warehouses integrate research and patient data from one or more sources into a single data model designed for research. Typically, institutions update their warehouse by fully reloading it periodically. The alternative is to update the warehouse incrementally with new, changed, and/or deleted data. Full reloads avoid having to correct and add to a live system, but they can render the data outdated for clinical trial accrual. They also place a substantial burden on source systems, involve intermittent work that is challenging to resource, and may require tight coordination across IT and informatics units. We have implemented daily incremental updating for our i2b2 data warehouse. Incremental updating requires substantial up-front development, and it can expose provisional data to investigators. However, it may support more use cases, it may be a better fit for academic healthcare IT organizational structures, and its ongoing support needs appear to be similar to or lower than those of full reloading.
Glioblastoma (GBM) is the most malignant form of primary brain tumor, and GBM stem-like cells (GSCs) contribute to the rapid growth, therapeutic resistance, and clinical recurrence of these fatal tumors. STAT3 signaling supports the maintenance and proliferation of GSCs, yet the regulatory mechanisms are not completely understood. Here, we report that tripartite motif-containing protein 8 (TRIM8) activates STAT3 signaling to maintain the stemness and self-renewing capabilities of GSCs. TRIM8 (also known as 'glioblastoma-expressed ring finger protein') is expressed equally in GBM and normal brain tissues, despite its hemizygous deletion in the large majority of GBMs, and its expression is highly correlated with stem cell markers. Experimental knockdown of TRIM8 reduced GSC self-renewal and expression of SOX2, NESTIN, and p-STAT3, and promoted glial differentiation. Overexpression of TRIM8 led to higher expression of p-STAT3, c-MYC, SOX2, NESTIN, and CD133, and enhanced GSC self-renewal. We found that TRIM8 activates STAT3 by suppressing the expression of PIAS3, an inhibitor of STAT3, most likely through E3-mediated ubiquitination and proteasomal degradation. Interestingly, we also found that STAT3 activation upregulates TRIM8, providing a mechanism for normalized TRIM8 expression in the setting of hemizygous gene deletion. These data demonstrate that bidirectional TRIM8-STAT3 signaling regulates stemness in GSCs.
Temporal abstraction, a method for specifying and detecting temporal patterns in clinical databases, is very expressive and performs well, but it is difficult for clinical investigators and data analysts to understand. Such patterns are critical for phenotyping patients from their medical records in research and quality improvement. We have previously developed the Analytic Information Warehouse (AIW), which computes such phenotypes using temporal abstraction but requires software engineers to operate. We have extended the AIW's web user interface, Eureka! Clinical Analytics, to support specifying phenotypes using an alternative model that we developed with clinical stakeholders. The software converts phenotypes from this model to that of temporal abstraction prior to data processing. The model can represent all phenotypes in a quality improvement project and a growing set of phenotypes in a multi-site research study. Phenotyping that is accessible to investigators and IT personnel may enable its broader adoption.
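As a loose illustration of the kind of investigator-facing pattern that could be converted to a temporal abstraction, the sketch below evaluates a hypothetical count-within-window phenotype. The spec format, names, and logic are invented for illustration and are not Eureka!'s actual model.

```python
from datetime import date, timedelta

# Hypothetical phenotype in a simplified investigator-facing model:
# "at least min_count qualifying events within any within_days-day window"
PHENOTYPE = {"event": "glucose_gt_200", "min_count": 2, "within_days": 30}

def matches(event_dates, spec):
    """True if min_count events fall within a single within_days window."""
    ts = sorted(event_dates)
    window = timedelta(days=spec["within_days"])
    k = spec["min_count"]
    return any(ts[i + k - 1] - ts[i] <= window
               for i in range(len(ts) - k + 1))
```

A conversion layer like the one described would translate such a declarative spec into the corresponding temporal-abstraction rules before data processing.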
Clinical and Translational Science Award (CTSA) recipients need to create research data marts from their clinical data warehouses to participate in research data networks built on i2b2 and SHRINE technologies. These data marts may have different data requirements and representations, thus necessitating separate extract, transform, and load (ETL) processes for populating each mart. Maintaining duplicative procedural logic for each ETL process is onerous. We have created an entirely metadata-driven ETL process that can be customized for different data marts through separate configurations, each stored in an extension of i2b2's ontology database schema. We extended our previously reported and open-source Eureka! Clinical Analytics software with this capability. The same software has created i2b2 data marts for several projects, the largest being the nascent Accrual for Clinical Trials (ACT) network, for which it has loaded over 147 million facts about 1.2 million patients.
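The core idea of metadata-driven ETL, one generic transform whose behavior is determined entirely by per-mart configuration, can be sketched as below. The config dict, record shapes, and concept prefixes are invented stand-ins for the configuration rows that the real system stores in its extended i2b2 ontology schema.

```python
# Hypothetical per-mart configuration (stands in for database-stored metadata);
# one generic transform serves every mart, so no per-mart procedural code exists.
CONFIG = {
    "act":   {"concept_prefix": "\\ACT\\",   "include": {"dx", "lab"}},
    "local": {"concept_prefix": "\\LOCAL\\", "include": {"dx", "med"}},
}

def transform(records, mart):
    """Map source records to i2b2-style facts using only configuration."""
    cfg = CONFIG[mart]
    return [
        {"patient_num": r["patient_id"],
         "concept_cd": cfg["concept_prefix"] + r["code"]}
        for r in records
        if r["type"] in cfg["include"]
    ]

records = [
    {"patient_id": 1, "type": "dx",  "code": "ICD10:I10"},
    {"patient_id": 1, "type": "med", "code": "RXNORM:197361"},
]
```

Adding a new mart then means adding configuration, not writing a new ETL program, which is the maintenance advantage the abstract describes.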
by
Sophie E Ack;
Shamelia Y Loiseau;
Guneeti Sharma;
Joshua N Goldstein;
India A Lissak;
Sarah M Duffy;
Edilberto Amorim;
Paul Vespa;
J. Randall Moorman;
Xiao Hu;
Gilles Clermont;
Soojin Park;
Rishikesan Kamaleswaran;
Brandon P Foreman;
Eric S Rosenthal
Background: We evaluated the feasibility and discriminability of recently proposed Clinical Performance Measures for Neurocritical Care (Neurocritical Care Society) and Quality Indicators for Traumatic Brain Injury (Collaborative European NeuroTrauma Effectiveness Research in TBI; CENTER-TBI) extracted from electronic health record (EHR) flowsheet data. Methods: At three centers within the Collaborative Hospital Repository Uniting Standards (CHoRUS) for Equitable AI consortium, we examined consecutive neurocritical care admissions exceeding 24 h (03/2015–02/2020) and evaluated the feasibility, discriminability, and site-specific variation of five clinical performance measures and quality indicators: (1) intracranial pressure (ICP) monitoring (ICPM) within 24 h when indicated, (2) ICPM latency when initiated within 24 h, (3) frequency of nurse-documented neurologic assessments, (4) intermittent pneumatic compression device (IPCd) initiation within 24 h, and (5) latency to IPCd application. We additionally explored associations between delayed IPCd initiation and codes for venous thromboembolism documented using the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10) system. Median (interquartile range) statistics are reported. Kruskal–Wallis tests were used to assess differences across centers, and Dunn statistics were reported for between-center differences. Results: A total of 14,985 admissions met inclusion criteria. ICPM was documented in 1514 (10.1%), neurologic assessments in 14,635 (91.1%), and IPCd application in 14,175 (88.5%). ICPM began within 24 h for 1267 (83.7%), with site-specific latency differences among sites 1–3 (0.54 h [2.82], 0.58 h [1.68], and 2.36 h [4.60], respectively; p < 0.001). The frequency of nurse-documented neurologic assessments also varied by site (17.4 per day [5.97], 8.4 per day [3.12], and 15.3 per day [8.34]; p < 0.001) and diurnally (6.90 per day during daytime hours vs. 5.67 per day at night, p < 0.001). IPCds were applied within 24 h for 12,863 (90.7%) patients meeting clinical eligibility (excluding those with EHR documentation of limiting injuries, actively documented as ambulating, or refusing prophylaxis). In-hospital venous thromboembolism varied by site (1.23%, 1.55%, and 5.18%; p < 0.001) and was associated with increased IPCd latency (overall, 1.02 h [10.4] vs. 0.97 h [5.98], p = 0.479; site 1, 2.25 h [10.27] vs. 1.82 h [7.39], p = 0.713; site 2, 1.38 h [5.90] vs. 0.80 h [0.53], p = 0.216; site 3, 0.40 h [16.3] vs. 0.35 h [11.5], p = 0.036). Conclusions: Electronic health record–derived reporting of neurocritical care performance measures is feasible and demonstrates site-specific variation. Future efforts should examine whether performance or documentation drives these measures, what outcomes are associated with performance, and whether EHR-derived performance measures and quality indicators are modifiable.
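The site comparison described above can be sketched with a Kruskal–Wallis test on synthetic latency data; the distributions below are invented, with medians loosely echoing the reported 0.54 h, 0.58 h, and 2.36 h, and scipy's `kruskal` implements the omnibus test (pairwise Dunn comparisons would come from a separate post-hoc step).

```python
import numpy as np
from scipy.stats import kruskal

# Synthetic ICPM latencies (hours) for three sites -- invented exponential
# distributions whose medians roughly match the reported site medians
rng = np.random.default_rng(1)
site1 = rng.exponential(scale=0.8, size=200)   # median ~ 0.55 h
site2 = rng.exponential(scale=0.9, size=200)   # median ~ 0.62 h
site3 = rng.exponential(scale=3.4, size=200)   # median ~ 2.36 h

# Omnibus test for any between-site difference in latency distributions
stat, p = kruskal(site1, site2, site3)
```

A rank-based test is a reasonable choice here because latency data are typically right-skewed, which is also why the abstract reports medians and interquartile ranges rather than means.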
by
Andrea Sikora;
Tianyi Zhang;
David J Murphy;
Susan E. Smith;
Brian Murray;
Rishi Kamaleswaran;
Xianyan Chen;
Mitchell S. Buckley;
Sandra Rowe;
John W. Devlin
Fluid overload, while common in the ICU and associated with serious sequelae, is hard to predict and may be influenced by ICU medication use. Machine learning (ML) approaches may offer advantages over traditional regression techniques to predict it. We compared the ability of traditional regression techniques and different ML-based modeling approaches to identify clinically meaningful fluid overload predictors. This was a retrospective, observational cohort study of adult patients admitted to an ICU for ≥ 72 h between 10/1/2015 and 10/31/2020 with available fluid balance data. Models to predict fluid overload (a positive fluid balance ≥ 10% of the admission body weight) in the 48–72 h after ICU admission were created. Potential patient and medication fluid overload predictor variables (n = 28) were collected at either baseline or 24 h after ICU admission. The optimal traditional logistic regression model was created using backward selection. Supervised, classification-based ML models were trained and optimized, including a meta-modeling approach. Area under the receiver operating characteristic curve (AUROC), positive predictive value (PPV), and negative predictive value (NPV) were compared between the traditional and ML fluid prediction models. A total of 49 of the 391 (12.5%) patients developed fluid overload. Among the ML models, the XGBoost model had the highest performance (AUROC 0.78, PPV 0.27, NPV 0.94) for fluid overload prediction. The XGBoost model performed similarly to the final traditional logistic regression model (AUROC 0.70; PPV 0.20, NPV 0.94). Feature importance analysis revealed severity of illness scores and medication-related data were the most important predictors of fluid overload. In the context of our study, ML and traditional models appear to perform similarly to predict fluid overload in the ICU. Baseline severity of illness and ICU medication regimen complexity are important predictors of fluid overload.
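A minimal version of this model comparison can be sketched as follows, with scikit-learn's gradient boosting standing in for XGBoost. The data are synthetic (28 random predictors with an invented rare outcome), not the study's cohort, so the AUROCs here illustrate the workflow rather than reproduce the reported results.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic cohort: 391 patients, 28 candidate predictors, rare outcome
rng = np.random.default_rng(0)
X = rng.normal(size=(391, 28))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1.0, 391) > 1.7).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

aucs = {}
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("boosting", GradientBoostingClassifier(random_state=0))]:
    proba = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
    aucs[name] = roc_auc_score(yte, proba)
```

With a linear outcome mechanism like this synthetic one, the two model families tend to score similarly, which mirrors the study's finding that ML offered little AUROC advantage over logistic regression.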
Chimpanzees (Pan troglodytes) are, along with bonobos, humans' closest living relatives. The advent of diffusion MRI tractography in recent years has allowed a resurgence of comparative neuroanatomical studies of humans and other primate species. Here we offer, in comparative perspective, the first chimpanzee white matter atlas, constructed from in vivo chimpanzee diffusion-weighted scans. Comparative white matter atlases provide a useful tool for identifying neuroanatomical differences and similarities between humans and other primate species. Until now, comprehensive fascicular atlases have been created for humans (Homo sapiens), rhesus macaques (Macaca mulatta), and several other nonhuman primate species, but never for a nonhuman ape. Information on chimpanzee neuroanatomy is essential for understanding the anatomical specializations of white matter organization that are unique to the human lineage.
Obstructive sleep apnea (OSA) is a disorder characterized by repeated pauses in breathing during sleep, which lead to deoxygenation and voiced chokes at the end of each episode. OSA is associated with daytime sleepiness and an increased risk of serious conditions such as cardiovascular disease, diabetes, and stroke. Between 2 and 7% of the adult population globally has OSA, but it is estimated that up to 90% of those affected are undiagnosed and untreated. Diagnosis of OSA requires expensive and cumbersome screening. Audio offers a potential non-contact alternative, particularly with the ubiquity of excellent signal processing on every phone. Previous studies have focused on the classification of snoring and apneic chokes. However, such approaches require accurate identification of events, which leads to limited accuracy and small study populations. In this work, we propose an alternative approach that presents multiscale entropy (MSE) coefficients to a classifier to identify disorder in vocal patterns indicative of sleep apnea. A database of 858 patients was used, the largest reported in this domain. Apneic choke, snore, and noise events encoded with speech analysis features were input into a linear classifier. Coefficients of MSE derived from the first 4 h of each recording were used to train and test a random forest to classify patients as apneic or not. Standard speech analysis approaches for event classification achieved an out-of-sample accuracy (Ac) of 76.9% with a sensitivity (Se) of 29.2% and a specificity (Sp) of 88.7%, but with high variance. For OSA severity classification, MSE provided an out-of-sample Ac of 79.9%, Se of 66.0%, and Sp of 88.8%. Including demographic information improved the MSE-based classification performance to Ac = 80.5%, Se = 69.2%, and Sp = 87.9%. These results indicate that audio recordings could be used in screening for OSA, but are generally under-sensitive.
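Multiscale entropy itself can be sketched as sample entropy computed over progressively coarse-grained versions of a signal. The implementation below follows the standard Costa-style formulation as an illustration; the parameter choices (m = 2, tolerance r = 0.2·SD of the original series) are conventional defaults, not necessarily those used in the study.

```python
import numpy as np

def sample_entropy(x, m, r):
    """SampEn(m, r): -log of the ratio of (m+1)- to m-length template matches."""
    x = np.asarray(x, dtype=float)
    def match_count(length):
        t = np.lib.stride_tricks.sliding_window_view(x, length)
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=-1)
        return ((d <= r).sum() - len(t)) / 2      # exclude self-matches
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=5, m=2, r_factor=0.2):
    r = r_factor * np.std(x)                      # tolerance fixed at scale 1
    mse = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = np.asarray(x[: n * s], float).reshape(n, s).mean(axis=1)
        mse.append(sample_entropy(coarse, m, r))
    return mse

noise = np.random.default_rng(0).normal(size=1500)
mse = multiscale_entropy(noise)
```

For uncorrelated noise the entropy falls steadily with coarse-graining, whereas signals with long-range structure hold their entropy across scales; it is this scale profile, rather than any single value, that serves as the classifier input.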
In a critical care setting, shock and resuscitation endpoints are often defined based on arterial blood pressure values. Patient-specific fluctuations and interactions between heart rate (HR) and blood pressure (BP), however, may provide additional prognostic value for stratifying individual patients' risks of adverse outcomes at different blood pressure targets. In this work, we use the switching autoregressive (SVAR) dynamics inferred from multivariate vital sign time series to stratify mortality risks of intensive care unit (ICU) patients receiving vasopressor treatment. We model vital sign observations as generated from the latent states of an autoregressive hidden Markov model (AR-HMM) process, and use the proportion of time patients spent in different latent states to predict outcome. We evaluate the performance of our approach using minute-by-minute HR and mean arterial BP (MAP) of an ICU patient cohort while on vasopressor treatment. Our results indicate that the bivariate HR/MAP dynamics (AUC 0.74 [0.64, 0.84]) contain additional prognostic information beyond the MAP values alone (AUC 0.53 [0.42, 0.63]) in mortality prediction. Further, HR/MAP dynamics achieved better performance among a subgroup of patients in a low MAP range (median MAP < 65 mmHg) while on pressors. A real-time implementation of our approach may provide clinicians with a tool to quantify the effectiveness of interventions and to inform treatment decisions.
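The generative picture behind this approach can be sketched with a toy two-state AR(1)-HMM. The transition matrix, AR coefficients, and noise levels below are invented for illustration only; the state-occupancy fractions play the role of the features the abstract describes feeding to the mortality model.

```python
import numpy as np

# Toy 2-state AR(1)-HMM: state 0 = stable dynamics, state 1 = labile dynamics
TRANS = np.array([[0.95, 0.05],
                  [0.10, 0.90]])        # latent-state transition matrix
AR_COEF = (0.9, 0.5)                    # AR(1) coefficient per state
NOISE_SD = (0.5, 2.0)                   # innovation noise per state

def simulate(T=500, seed=0):
    """Simulate one vital-sign-like series; return signal, states, occupancy."""
    rng = np.random.default_rng(seed)
    z = np.zeros(T, dtype=int)
    x = np.zeros(T)
    for t in range(1, T):
        z[t] = rng.choice(2, p=TRANS[z[t - 1]])
        x[t] = AR_COEF[z[t]] * x[t - 1] + rng.normal(0, NOISE_SD[z[t]])
    occupancy = np.bincount(z, minlength=2) / T   # candidate risk feature
    return x, z, occupancy

x, z, occ = simulate()
```

In the real setting the latent states are of course not observed; they are inferred from the HR/MAP series by fitting the AR-HMM, after which the per-patient occupancy vector summarizes each patient's dynamics for outcome prediction.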