Introduction
Accurate identification of venous thromboembolism (VTE) is critical to developing replicable epidemiological studies and rigorous prediction models. Traditionally, VTE studies have relied on International Classification of Diseases (ICD) codes, which are often inaccurate, leading to misclassification bias. Here, we developed ClotCatcher, a novel deep learning model that uses natural language processing to detect VTE from radiology reports.
Methods
Radiology reports used to detect VTE were obtained from patients admitted to Emory University Hospital (EUH) and Grady Memorial Hospital (GMH). Data augmentation was performed using the Google PEGASUS paraphraser. These data were then used to fine-tune ClotCatcher, a novel deep learning model. ClotCatcher was validated separately on the EUH dataset alone and on the GMH dataset alone.
Results
The dataset contained 1358 studies from EUH and 915 studies from GMH (n = 2273). It comprised 1506 ultrasound studies, of which 528 (35.1%) were positive for VTE, and 767 CT studies, of which 91 (11.9%) were positive for VTE. When validated on the EUH dataset, ClotCatcher performed best (AUC = 0.980) when trained on both the EUH and GMH datasets without paraphrasing. When validated on the GMH dataset, ClotCatcher performed best (AUC = 0.995) when trained on both the EUH and GMH datasets with paraphrasing.
Conclusion
ClotCatcher, a novel deep learning model with data augmentation, rapidly and accurately adjudicated the presence of VTE from radiology reports. Applying ClotCatcher to large databases would allow rapid and accurate adjudication of incident VTE. This would reduce misclassification bias and form the foundation for future studies estimating an individual patient's risk of developing incident VTE.
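The augmentation step described above can be reproduced in outline with an off-the-shelf PEGASUS paraphrase checkpoint. The sketch below is a minimal, hypothetical illustration: the abstract names only "the Google PEGASUS paraphraser", so the specific checkpoint (tuner007/pegasus_paraphrase), the generation settings, and the paraphrase-then-relabel loop are assumptions, not the authors' pipeline.

```python
# Minimal sketch of PEGASUS-based paraphrase augmentation for labeled
# radiology impressions. Checkpoint and generation settings are
# assumptions; the abstract names only a Google PEGASUS paraphraser.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

CKPT = "tuner007/pegasus_paraphrase"  # hypothetical checkpoint choice
tokenizer = PegasusTokenizer.from_pretrained(CKPT)
model = PegasusForConditionalGeneration.from_pretrained(CKPT)

def paraphrases(text: str, n: int = 3) -> list[str]:
    batch = tokenizer([text], truncation=True, padding="longest",
                      max_length=128, return_tensors="pt")
    out = model.generate(**batch, max_length=128,
                         num_beams=max(10, n), num_return_sequences=n)
    return tokenizer.batch_decode(out, skip_special_tokens=True)

# Each paraphrase inherits the label of its source report, expanding the
# fine-tuning set without any new annotation.
reports = [("No evidence of deep venous thrombosis.", 0)]
augmented = [(p, y) for text, y in reports for p in paraphrases(text)]
```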
by Peta MA Alexander; Rebecca A Aslakson; Erin F Barreto; Jan Hau Lee; Heather H Meissen; Brenda M Morrow; Lama Nazer; Richard D Branson; Kirby P Mayer; Natalie Napolitano; Meghan B Lane-Fall; Andrea Sikora; Preeti R John; R Phillip Dellinger; Margaret Parker; Andrew Argent; Adjoa Boateng; Thomas P Green; Sapna R Kudchadkar; David M Maslove; Megan A Rech; Lauren R Sorce; Robert C Tasker; Tim Buchman; Paul A Checchia
The Society of Critical Care Medicine (SCCM) Reviewer Academy seeks to train and establish a community of trusted, reliable, and skilled peer reviewers with diverse backgrounds and interests to promote high-quality reviews for each of the SCCM journals. Goals of the Academy include building accessible resources to highlight qualities of excellent manuscript reviews; educating and mentoring a diverse group of healthcare professionals; and establishing and upholding standards for insightful and informative reviews. This manuscript maps the mission of the Reviewer Academy with a succinct summary of the importance of peer review, the process of reviewing a manuscript, and the expected ethical standards of reviewers. We aim to equip readers to provide concise, thoughtful feedback as peer reviewers, advance their understanding of the editorial process, and inspire them to integrate medical journalism into diverse professional careers.
Background: Each year, 200,000 patients undergo an in-hospital cardiac arrest (IHCA), with approximately 15-20% surviving to discharge. Little is known, however, about the long-term prognosis of these patients after discharge. Previous efforts to describe out-of-hospital survival of IHCA patients have been limited by small sample sizes and narrow patient populations. Methods: A single-institution matched cohort study was undertaken to describe mortality following IHCA. Patients surviving to discharge following an IHCA between 2008 and 2010 were matched on age, sex, race, and hospital admission criteria with non-IHCA hospital controls and followed for between 9 and 45 months. Kaplan-Meier curves and Cox proportional hazards (PH) models assessed differences in survival. Results: Of the 1262 IHCAs, 20% survived to hospital discharge. Of those discharged, survival at 1 year post-discharge was 59% for IHCA patients and 82% for controls (p < 0.0001). Hazard ratios (IHCA vs. controls) for mortality were greatest within the 90 days following discharge (HR = 2.90, p < 0.0001) and decreased linearly thereafter, with those surviving to one year post-discharge having an HR for mortality below 1.0. Out-of-hospital survival varied among IHCA survivors by discharge destination: IHCA patients discharged home without services demonstrated no survival difference compared with their non-IHCA controls (HR 1.10, p = 0.72), whereas IHCA patients discharged to long-term hospital care or hospice had significantly higher mortality than matched controls (HR 3.91 and 20.3, respectively; p < 0.0001). Conclusion: Among IHCA patients who survive to hospital discharge, the highest risk of death is within the first 90 days after discharge. Additionally, IHCA survivors overall have increased long-term mortality vs. controls. Survival rates varied widely by discharge destination: those discharged home, to skilled nursing facilities, or to rehabilitation services had survival rates no different from controls, so the increased mortality was primarily driven by patients discharged to long-term care or hospice.
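The matched-cohort comparison above rests on a standard Cox proportional hazards analysis. The sketch below, using the lifelines library on synthetic data, shows the general shape of such an analysis; the column names, the simulated effect size, and the censoring scheme are illustrative assumptions, not the study's data or code.

```python
# Minimal Cox proportional hazards sketch on synthetic data (lifelines).
# Column names, effect size, and censoring are illustrative assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
ihca = rng.integers(0, 2, n)             # 1 = IHCA survivor, 0 = matched control
t = rng.exponential(scale=1500, size=n)  # latent survival time, days
t = t / np.where(ihca == 1, 2.9, 1.0)    # impose an illustrative HR of ~2.9
died = (t < 365 * 3).astype(int)         # administrative censoring at 3 years
t = np.minimum(t, 365 * 3)

df = pd.DataFrame({"time_days": t, "died": died, "ihca": ihca})
cph = CoxPHFitter().fit(df, duration_col="time_days", event_col="died")
cph.print_summary()                      # exp(coef) for 'ihca' recovers ~2.9
```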
Objective: To investigate the significance of functional polymorphisms of inflammatory response genes by analysis of a large population of patients, both with and without severe sepsis, and representative of the diverse populations (geographic diversity, physician diversity, clinical treatment diversity) that would be encountered in critical care clinical practice.
Design: Collaborative case-control study conducted from July 2001 to December 2005.
Setting: A heterogeneous population of patients from 12 USA intensive care units (ICUs) represented by the Genetic Predisposition to Severe Sepsis (GenPSS) archive.
Patients: Eight hundred and fifty-four patients with severe sepsis and an equal number of mortality-, age-, gender-, and race-matched patients also admitted to the ICU without evidence of any infection (matched nonseptic controls).
Measurements and Main Results: We developed assays for six functional single nucleotide polymorphisms (SNPs) present before the first codon of TNF at −308, IL1B at −511, IL6 at −174, IL10 at −819, and CD14 at −159, and in the first intron of LTA (also known as TNF-β) at +252 (LTA(+252)). The Project IMPACT™ critical care clinical database information management system, developed by the Society of Critical Care Medicine and managed by Tri-Analytics, Inc. and Cerner Corporation, was used. A template-directed dye-terminator incorporation assay with fluorescence polarization detection was used as a high-throughput genotyping strategy. Fifty-three percent of the patients were male; 87.3% were Caucasian and 6.4% African American. Overall mortality was 35.1% in both the severe sepsis (SS) and matched nonseptic control (MC) groups. Average ages (SD) of the SS and MC patients were 63.0 (16.05) and 65.0 (15.58) years, respectively. Among the six SNPs, LTA(+252) was most over-represented in the septic patient group (percent with severe sepsis: AA 45.6%, AG 51.1%, GG 56.7%; P = .005). Moreover, the genetic risk effect was most pronounced in males older than 60 years (P = .005).
Conclusions: LTA(+252) may influence predisposition to severe sepsis, a predisposition that is modulated by gender and age. Although the genetic influences can be overwhelmed by both comorbid factors and acute illness in individual cases, population studies suggest that this is an influential biological pathway modulating the risk of critical illness.
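The genotype-by-outcome comparison reported above is, in essence, a test of association on a 2 × 3 contingency table. The sketch below shows such a test with scipy; the counts are fabricated to echo the reported row percentages (AA 45.6%, AG 51.1%, GG 56.7%) and the 854-per-group design, and are not the study's data.

```python
# Chi-square test of LTA(+252) genotype vs. severe sepsis status.
# Counts are illustrative, constructed to match the reported percentages
# and equal group sizes; they are not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

#                  AA   AG   GG
table = np.array([[287, 404, 163],   # severe sepsis (n = 854)
                  [343, 386, 125]])  # matched controls (n = 854)

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
print("percent septic by genotype:",
      np.round(100 * table[0] / table.sum(axis=0), 1))
```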
Introduction: Use of nurse practitioners and physician assistants ("affiliates") is increasing significantly in the intensive care unit (ICU). Despite this, few data exist on how affiliates allocate their time in the ICU. The purpose of this study was to understand the allocation of affiliate time into patient-care and non-patient-care activity, further dividing the time devoted to patient care into billable service and equally important but nonbillable care. Methods: We conducted a quasi-experimental study in seven ICUs in an academic hospital and a hybrid academic/community hospital. After a period of self-reporting, a one-time monetary incentive of $2,500 was offered to each of 39 affiliates in any ICU in which every affiliate documented greater than 75% of their time devoted to patient care over a 6-month period, in an effort to understand how affiliates allocated their time throughout a shift. Documentation included billable time (critical care, evaluation and management, procedures) and a new category ("zero-charge time"), which facilitated record keeping of other patient-care activities. Results: At baseline, no ICU had documentation of 75% patient-care time by all of its affiliates. In the 6 months in which reporting was tied to a group incentive, six of seven ICUs had every affiliate document greater than 75% of their time. Individual time documentation increased from 53% to 84%. Zero-charge time accounted for an average of 21% of each shift. The most common reason was rounding, which accounted for nearly half of all zero-charge time. Sign-out, chart review, and teaching were the next most common zero-charge activities. Documentation of time spent on billable activities also increased, from 53% of an affiliate's shift to 63%. Time documentation was similar regardless of which shift an affiliate worked. Conclusions: Approximately two thirds of an affiliate's shift is spent providing billable services to patients. Greater than 20% of each shift is spent providing equally important but nonreimbursable patient care. Understanding how affiliates spend their time and what proportion of it is billable can be used to plan the financial impact of staffing ICUs with affiliates.
The non-equilibrium fluctuation-dissipation theorem is applied to predict how critically ill patients respond to treatment, based upon data currently collected by standard hospital monitoring devices. This framework is demonstrated on a common procedure in critical care: the spontaneous breathing trial. It is shown that the responses of groups of similar patients to the spontaneous breathing trial can be predicted by the non-equilibrium fluctuation-dissipation approach. This mathematical framework, when fully formed and applied to other clinical interventions, may serve as part of the basis for personalized critical care.
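For orientation, the classical linear-response statement behind this framework can be written as follows. This is the textbook equilibrium form, given only as background; the paper's specific non-equilibrium generalization is not reproduced in the abstract.

```latex
% Equilibrium fluctuation-dissipation relation (background form only):
% the linear response \chi of observable A to a weak perturbation
% coupled to B is fixed by spontaneous fluctuations of the unperturbed
% system.
\[
  \chi_{AB}(t - t') \;=\; -\,\beta\,\theta(t - t')\,
  \frac{\partial}{\partial t}\,\bigl\langle A(t)\,B(t') \bigr\rangle_{0},
  \qquad \beta = \frac{1}{k_B T},
\]
% where \theta is the Heaviside step function and \langle\cdot\rangle_0
% an average over unperturbed fluctuations. The clinical analogy: a
% patient's response to a small intervention (here, a spontaneous
% breathing trial) is predicted from fluctuations in monitored signals
% recorded before the intervention.
```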
This paper examines several different queuing models for intensive care units (ICUs) and their effects on wait times, utilization, return rates, mortalities, and number of patients served. Five separate intensive care units at an urban hospital are analyzed and distributions are fitted for arrivals and service durations. A system-based simulation model is built to capture all possible cases of patient flow after ICU admission. These include mortalities and returns before and after hospital exits. Patients are grouped into 9 different classes that are categorized by severity and length of stay (LOS).
Each queuing model varies by the policies that are permitted and by the order in which patients are admitted. The first set of models does not prioritize patients but examines the advantages of smoothing the operating schedule for elective surgeries. The second set analyzes the differences between prioritizing admissions by expected LOS and by patient severity. The last set permits early ICU discharges, and conservative and aggressive bumping policies are contrasted. Prioritizing patients by severity considerably reduced delays for critical cases but also increased the average waiting time for all patients. Aggressive bumping significantly raised the return and mortality rates, whereas more conservative methods balanced quality and efficiency, lowering wait times without serious consequences.
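A system-based simulation of this kind is typically built around a multi-server queue core. The sketch below, using the SimPy library, shows that core under simple assumptions (Poisson arrivals, exponential LOS, first-come-first-served, no patient classes or bumping); the paper's model layers severity classes, returns, mortality, and bumping policies on top of such a skeleton, and all parameter values here are illustrative.

```python
# Minimal multi-server ICU queue in SimPy: Poisson arrivals, exponential
# length of stay, first-come-first-served. Patient classes, priorities,
# bumping, returns, and mortality are omitted; parameters are illustrative.
import random
import simpy

BEDS = 10           # ICU capacity
ARRIVAL_RATE = 0.4  # patients per hour
MEAN_LOS = 24.0     # mean length of stay, hours

def patient(env, icu, waits):
    arrival = env.now
    with icu.request() as bed:
        yield bed                                        # wait for a free bed
        waits.append(env.now - arrival)
        yield env.timeout(random.expovariate(1.0 / MEAN_LOS))

def arrivals(env, icu, waits):
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))
        env.process(patient(env, icu, waits))

random.seed(42)
env = simpy.Environment()
icu = simpy.Resource(env, capacity=BEDS)
waits = []
env.process(arrivals(env, icu, waits))
env.run(until=24 * 90)                                   # ~90 simulated days
print(f"served {len(waits)} patients, mean wait {sum(waits)/len(waits):.2f} h")
```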
OBJECTIVES: Sepsis is among the leading causes of morbidity, mortality, and cost overruns in critically ill patients. Early intervention with antibiotics improves survival in septic patients. However, no clinically validated system exists for real-time prediction of sepsis onset. We aimed to develop and validate an Artificial Intelligence Sepsis Expert algorithm for early prediction of sepsis. DESIGN: Observational cohort study. SETTING: Academic medical center from January 2013 to December 2015. PATIENTS: Over 31,000 admissions to the ICUs at two Emory University hospitals (development cohort), in addition to over 52,000 ICU patients from the publicly available Medical Information Mart for Intensive Care-III ICU database (validation cohort). Patients who met the Third International Consensus Definitions for Sepsis (Sepsis-3) prior to or within 4 hours of their ICU admission were excluded, resulting in roughly 27,000 and 42,000 patients within our development and validation cohorts, respectively. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: High-resolution vital signs time series and electronic medical record data were extracted. A set of 65 features (variables) was calculated on an hourly basis and passed to the Artificial Intelligence Sepsis Expert algorithm to predict the onset of sepsis in the following T hours (where T = 12, 8, 6, or 4) and to produce a list of the most significant contributing factors. For the 12-, 8-, 6-, and 4-hour-ahead prediction of sepsis, Artificial Intelligence Sepsis Expert achieved areas under the receiver operating characteristic curve in the range of 0.83-0.85. Performance of the Artificial Intelligence Sepsis Expert on the development and validation cohorts was indistinguishable. CONCLUSIONS: Using data available in the ICU in real time, Artificial Intelligence Sepsis Expert can accurately predict the onset of sepsis in an ICU patient 4-12 hours prior to clinical recognition. A prospective study is necessary to determine the clinical utility of the proposed sepsis prediction model.
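The hourly-feature, T-hours-ahead setup lends itself to a simple supervised-learning skeleton. The sketch below uses synthetic patient-hour rows and a generic gradient-boosting classifier from scikit-learn standing in for the study's own algorithm; the features, labels, and train/test split are all fabricated for illustration.

```python
# Toy stand-in for hourly sepsis prediction: rows are patient-hours with
# 65 features; the label marks sepsis onset within the next T hours.
# Data are synthetic and a generic gradient-boosting classifier replaces
# the study's algorithm.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
T = 6                                            # prediction horizon, hours
X = rng.normal(size=(20_000, 65))                # 65 hourly features
signal = X[:, :5].sum(axis=1)                    # pretend 5 features carry risk
y = (signal + rng.normal(scale=2.0, size=len(signal)) > 3.0).astype(int)

split = 15_000                                   # crude train/test split
clf = HistGradientBoostingClassifier().fit(X[:split], y[:split])
auc = roc_auc_score(y[split:], clf.predict_proba(X[split:])[:, 1])
print(f"{T}-hour-ahead AUROC on held-out rows: {auc:.3f}")
```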
COVID-19 has had a profound impact on the critical care community, as one of the front-line areas in the ongoing pandemic, and on its journals. In response to the COVID-19 emergency, ‘observational research’ is being produced at an unprecedented rate. Although randomized trials were initially in the minority, recently more than 500 clinical trials have been formally registered. Consequently, medical journals have been overwhelmed with manuscripts of all types, mostly observational, and often anecdotal and in short format. All critical care journals have recorded a huge increase in the number of submissions in the first quarter of this year compared with the same period in 2019. Clinicians, justifiably, have been eager to get information about this frightening new disease, while the lay press has put COVID-19 developments in the spotlight, often without distinguishing between fake news, anecdotes and solid science. The various social media, as usual, have acted as an amplifier of what seems to be a phenomenon unprecedented in the history of modern medicine.
Introduction: Extracorporeal life support (ECLS) can temporarily support cardiopulmonary function, and is occasionally used in resuscitation. Multi-scale entropy (MSE) derived from heart rate variability (HRV) is a powerful tool in outcome prediction of patients with cardiovascular diseases. Multi-scale symbolic entropy analysis (MSsE), a new method derived from MSE, mitigates the effect of arrhythmia on analysis. The objective is to evaluate the prognostic value of MSsE in patients receiving ECLS. The primary outcome is death or urgent transplantation during the index admission.
Methods: Fifty-seven patients receiving ECLS for less than 24 hours and 23 control subjects were enrolled. Digital 24-hour Holter electrocardiograms were recorded, and three MSsE parameters (slope 5, area 6-20, area 6-40) associated with the multiscale correlation and complexity of heart-beat fluctuation were calculated.
Results: Patients receiving ECLS had significantly lower values of slope 5, area 6-20, and area 6-40 than control subjects. During the follow-up period, 29 patients met the primary outcome. Age, slope 5, area 6-20, area 6-40, Acute Physiology and Chronic Health Evaluation II (APACHE II) score, multiple organ dysfunction score (MODS), logistic organ dysfunction score (LODS), and myocardial infarction history were significantly associated with the primary outcome. Slope 5 showed the greatest discriminatory power. In a net reclassification improvement model, slope 5 significantly improved the predictive power of LODS, while area 6-20 and area 6-40 significantly improved the predictive power of MODS. In an integrated discrimination improvement model, slope 5 added significantly to the predictive power of each clinical parameter, and area 6-20 and area 6-40 significantly improved the predictive power of the Sequential Organ Failure Assessment (SOFA) score.
Conclusions: MSsE provides additional prognostic information in patients receiving ECLS.
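Multiscale entropy underlies the MSsE parameters above: the heart-beat series is coarse-grained at increasing scales and the sample entropy of each coarse-grained series is computed (the symbolic variant inserts a symbolization step to blunt the effect of arrhythmia, which is not shown here). The sketch below computes plain multiscale sample entropy on a synthetic RR-interval series; the parameters m, r, and the scale range are conventional choices, not the study's settings.

```python
# Plain multiscale (sample) entropy on a synthetic RR-interval series.
# The study's multiscale *symbolic* entropy adds a symbolization step,
# omitted here; m, r, and the scale range are conventional assumptions.
import numpy as np

def sample_entropy(x, m=2, tol=0.05):
    """SampEn(m, tol) = -ln(A/B), Chebyshev distance, fixed tolerance."""
    x = np.asarray(x, dtype=float)
    def matches(k):
        tmpl = np.lib.stride_tricks.sliding_window_view(x, k)[: len(x) - m]
        return sum(int((np.abs(tmpl[i + 1:] - tmpl[i]).max(axis=1) <= tol).sum())
                   for i in range(len(tmpl) - 1))
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a and b else np.inf

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(2)
rr = 0.8 + 0.05 * rng.standard_normal(3000)   # synthetic RR intervals, seconds
tol = 0.2 * rr.std()                          # tolerance fixed from scale 1
mse = [sample_entropy(coarse_grain(rr, s), tol=tol) for s in range(1, 21)]
print(np.round(mse, 3))                       # entropy across scales 1..20
```

The area-type parameters in the abstract correspond to summing such a curve over scale bands (e.g., scales 6-20 and 6-40), and slope 5 to the initial slope of the curve.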