OBJECTIVES: Measurements of extravascular lung water (EVLW) correlate with the degree of pulmonary edema and carry substantial prognostic information in critically ill patients. Prior studies using single-indicator thermodilution have reported that 21% to 35% of patients with clinical acute respiratory distress syndrome (ARDS) have normal EVLW (<10 mL/kg). Given that lung size is independent of actual body weight, we sought to determine whether indexing EVLW to predicted or adjusted body weight affects the frequency of increased EVLW in patients with ARDS. DESIGN: Prospective, observational cohort study. SETTING: Medical and surgical intensive care units at two academic hospitals. PATIENTS: Thirty patients within 72 hrs of meeting the American-European Consensus Conference definition of ARDS and 14 severe sepsis patients without ARDS. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: EVLW was measured for 7 days by PiCCO transpulmonary thermodilution; 225 measurements of EVLW indexed to actual body weight (ActBW) were compared with EVLW indexed to predicted body weight (PBW) and adjusted body weight (AdjBW). Mean EVLW indexed to ActBW was 12.7 mL/kg for ARDS patients and 7.8 mL/kg for non-ARDS sepsis patients (p < .0001). In all patients, EVLW increased an average of 1.1 ± 2.1 mL/kg when indexed to AdjBW and 2.0 ± 4.1 mL/kg when indexed to PBW. Indexing EVLW to PBW or AdjBW increased the proportion of ARDS patients with elevated EVLW (each p < .05) without increasing the frequency of elevated EVLW in non-ARDS patients. EVLW indexed to PBW had a stronger correlation with Lung Injury Score (r = .39 vs. r = .17) and Pao2/Fio2 ratio (r = .25 vs. r = .10) than did EVLW indexed to ActBW. CONCLUSIONS: Indexing EVLW to PBW or AdjBW reduces the number of ARDS patients with normal EVLW and correlates better with Lung Injury Score and oxygenation than indexing to ActBW.
Studies are needed to confirm the presumed superiority of this method for diagnosing ARDS and to determine the clinical treatment implications.
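The indexing step itself is simple arithmetic once a body-weight convention is chosen. Below is a minimal sketch assuming the widely used ARDSNet/Devine predicted body weight equations and a common adjusted-body-weight convention (AdjBW = PBW + 0.4 × [ActBW − PBW]); the abstract does not state which formulas the authors used, so these are illustrative assumptions:

```python
def predicted_body_weight(height_cm: float, male: bool) -> float:
    """ARDSNet/Devine predicted body weight (kg) from height:
    50 + 0.91*(height_cm - 152.4) for males,
    45.5 + 0.91*(height_cm - 152.4) for females."""
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def adjusted_body_weight(actual_kg: float, pbw_kg: float) -> float:
    """One common convention: AdjBW = PBW + 0.4 * (ActBW - PBW)."""
    return pbw_kg + 0.4 * (actual_kg - pbw_kg)

def indexed_evlw(evlw_ml: float, weight_kg: float) -> float:
    """EVLW indexed to a chosen body weight, in mL/kg."""
    return evlw_ml / weight_kg

# Hypothetical patient: 165 cm male, actual weight 95 kg, EVLW 900 mL.
pbw = predicted_body_weight(165, male=True)   # ~61.5 kg
print(indexed_evlw(900, 95))    # indexed to ActBW: ~9.5 mL/kg (below the 10 mL/kg cutoff)
print(indexed_evlw(900, pbw))   # indexed to PBW: ~14.6 mL/kg (elevated)
```

The hypothetical example illustrates the abstract's central point: in a heavier patient, the same absolute lung water can appear "normal" when divided by actual weight yet clearly elevated when divided by predicted weight.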
Study Design: Retrospective case series. Objectives: Increasing focus has been placed on removing implicit (unconscious) bias from the surgical selection process. In spine surgery, there is potential for implicit bias to affect the decision of whether to operate on a patient, given the lack of definitive surgical indications for many elective procedures. The objective of this study was to analyze the surgical decision making of a single spine surgeon in an effort to understand surgical decision-making trends based on certain demographic factors. Methods: This was a retrospective study of 484 patients who had undergone a corrective procedure for cervical myelopathy by an orthopedic spine surgeon at our institution. The preoperative modified Japanese Orthopaedic Association score served as the metric of disease severity for cervical myelopathy. The factors associated with implicit bias that were evaluated were smoking status, narcotic use status, gender, body mass index, and age. Results: Multivariate linear regression analysis showed that, even after controlling for comorbidities and confounders, the only variable that predicted the likelihood of operating on a patient with milder symptomatology was age (odds ratio [OR] = −0.138; confidence interval [CI] = −0.034 to −0.006). The other factors (smoking status, narcotic use status, gender, and body mass index) were not associated with surgical decision making. Conclusions: Our study demonstrates an absence of association between commonly studied areas of implicit bias and the decision to operate on a patient with milder symptomatology at initial presentation of cervical spondylotic myelopathy.
Background: Road traffic injuries (RTIs) are the eighth leading cause of death worldwide, with an estimated 90% of RTIs occurring in low- and middle-income countries (LMICs) like Brazil. There has been minimal research evaluating delays in the transport of RTI patients to trauma centers in LMICs. The objective of this study was to determine specific causes of delays in prehospital transport of road traffic injury patients to designated trauma centers in Maringá, Brazil. Methods: A qualitative method was used based on the Consolidated Criteria for Reporting Qualitative Research (COREQ) approach. Eleven health care providers employed in prehospital or hospital settings were interviewed with questions specific to delays in care for RTI patients. A thematic analysis was conducted. Results: Responses on primary causes of delay in treatment of RTI patients fell into the following categories: 1) lack of public education, 2) traffic, 3) insufficient personnel/ambulances, 4) bureaucracy, and 5) poor location of stations. Suggestions for reducing delays fell into the categories of 1) need for a centralized station/avoiding traffic, 2) improving public education, 3) increasing personnel, 4) increasing ambulances, and 5) proper extrication/rapid treatment. Conclusion: Our study found varied responses between hospital and SAMU (Serviço de Atendimento Móvel de Urgência, Brazil's prehospital emergency service) providers regarding specific causes of delay for RTI patients; SAMU providers cited traffic, bureaucracy, and poor station location as primary factors, while hospital employees focused more on public health aspects. These results mirror prehospital system challenges in other developing countries, but also suggest solutions for improvement through better infrastructure and public health campaigns.
by Jesmin Pervin; Allisyn Moran; Monjur Rahman; Abdur Razzaque; Lynn Sibley; Peter K. Streatfield; Laura J. Reichenbach; Marge Koblinsky; Daniel Hruschka; Anisur Rahman
Background: Antenatal care (ANC) during pregnancy can play an important role in the uptake of evidence-based services vital to the health of women and their infants. Studies report positive effects of ANC on the use of facility-based delivery and on perinatal mortality. However, most existing studies are limited to cross-sectional surveys with long recall periods, and generally do not include population-based samples.
Methods: This study was conducted within the Health and Demographic Surveillance System (HDSS) of the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b) in Matlab, Bangladesh. The HDSS area is divided into an icddr,b service area (SA), where women and children receive care from icddr,b health facilities, and a government SA, where people receive care from government facilities. In 2007, a new Maternal, Neonatal, and Child Health (MNCH) program was initiated in the icddr,b SA that strengthened the ongoing maternal and child health services, including ANC. We estimated the association of ANC with facility delivery and perinatal mortality using prospectively collected data from 2005 to 2009. Using a before-after study design, we also determined the role of ANC services in the reduction of perinatal mortality between the periods before (2005-2006) and after (2008-2009) implementation of the MNCH program.
Results: Antenatal care visits were associated with increased facility-based delivery in both the icddr,b and government SAs. In the icddr,b SA, the adjusted odds of perinatal mortality were approximately 2 times higher (odds ratio (OR) 1.91; 95% confidence interval (CI): 1.50, 2.42) among women who received ≤1 ANC visit compared with women who received ≥3 ANC visits. No such association was observed in the government SA. Controlling for ANC visits substantially reduced the observed effect of the intervention on perinatal mortality, from significant (OR 0.64; 95% CI: 0.52, 0.78) to non-significant (OR 0.81; 95% CI: 0.65, 1.01), when comparing cohorts before and after MNCH program initiation (Sobel test of mediation, P < 0.001).
Conclusions: ANC visits are associated with increased uptake of facility-based delivery and improved perinatal survival in the icddr,b SA. Further testing of the icddr,b approach to simultaneously improving quality of ANC and facility delivery care is needed in the existing health system in Bangladesh and in other low-income countries to maximize health benefits to mothers and newborns.
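The Sobel test of mediation cited in the results has a simple closed form: z = ab / sqrt(b²·sa² + a²·sb²), where a and b are the two mediation-path coefficients (intervention → mediator, mediator → outcome) and sa, sb are their standard errors. A minimal sketch follows; the coefficient values are invented for illustration and are not taken from the study:

```python
import math

def sobel_test(a: float, sa: float, b: float, sb: float) -> tuple[float, float]:
    """Sobel z = a*b / sqrt(b^2*sa^2 + a^2*sb^2), with a two-sided
    p-value from the standard normal distribution."""
    z = (a * b) / math.sqrt(b**2 * sa**2 + a**2 * sb**2)
    p = math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > |z|) for standard normal
    return z, p

# Hypothetical path coefficients: intervention -> ANC uptake (a),
# ANC uptake -> log-odds of perinatal death (b).
z, p = sobel_test(a=0.50, sa=0.10, b=-0.40, sb=0.12)
print(round(z, 2), round(p, 4))  # z ≈ -2.77, p < 0.01: mediation effect detected
```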
Introduction: Patients with distributive shock who require high-dose vasopressors have high mortality. Angiotensin II (ATII) may prove useful in patients who remain hypotensive despite catecholamine and vasopressin therapy. The appropriate dose of parenteral angiotensin II for shock is unknown. Methods: In total, 20 patients with distributive shock and a cardiovascular Sequential Organ Failure Assessment score of 4 were randomized to either ATII infusion (N = 10) or placebo (N = 10) plus standard of care. ATII was started at a dose of 20 ng/kg/min and titrated to a goal of maintaining a mean arterial pressure (MAP) of 65 mmHg. The infusion (either ATII or placebo) was continued for 6 hours and then titrated off. The primary endpoint was the effect of ATII on the standing dose of norepinephrine required to maintain a MAP of 65 mmHg. Results: ATII resulted in a marked reduction in norepinephrine dosing in all patients. The mean hour-1 norepinephrine dose for the placebo cohort was 27.6 ± 29.3 mcg/min versus 7.4 ± 12.4 mcg/min for the ATII cohort (P = 0.06). The most common adverse event attributable to ATII was hypertension, which occurred in 20% of patients receiving ATII. Thirty-day mortality was similar for the ATII and placebo cohorts (50% versus 60%, P = 1.00). Conclusion: Angiotensin II is an effective rescue vasopressor agent in patients with distributive shock requiring multiple vasopressors. The initial dose range of ATII that appears appropriate for patients with distributive shock is 2 to 10 ng/kg/min. Trial registration: Clinicaltrials.gov NCT01393782. Registered 12 July 2011.
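Weight-based infusion dosing like the 20 ng/kg/min starting dose above reduces to unit conversion at the bedside pump. A minimal sketch of that arithmetic follows; the drug concentration used is hypothetical, as the trial report does not specify one:

```python
def dose_to_pump_rate_ml_hr(dose_ng_kg_min: float, weight_kg: float,
                            conc_mcg_per_ml: float) -> float:
    """Convert a weight-based ng/kg/min dose to a pump rate in mL/hr.

    ng/kg/min * kg -> ng/min; /1000 -> mcg/min; *60 -> mcg/hr;
    divided by concentration (mcg/mL) -> mL/hr.
    """
    mcg_per_min = dose_ng_kg_min * weight_kg / 1000.0
    return mcg_per_min * 60.0 / conc_mcg_per_ml

# Hypothetical 80 kg patient at the trial's 20 ng/kg/min starting dose,
# with an assumed (illustrative) concentration of 10 mcg/mL:
print(dose_to_pump_rate_ml_hr(20, 80, 10))  # 9.6 mL/hr
```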
Objective: This report describes three patients with Ebola virus disease who were treated in the United States and developed severe critical illness and multiple organ failure secondary to Ebola virus infection. The patients received mechanical ventilation, renal replacement therapy, invasive monitoring, vasopressor support, and investigational therapies for Ebola virus disease.
Data Sources: Patient medical records from three tertiary care centers (Emory University Hospital, University of Nebraska Medical Center, and Texas Health Presbyterian Dallas Hospital).
Study Selection: Not applicable.
Data Extraction: Not applicable.
Data Synthesis: Not applicable.
Conclusion: In its severe form, Ebola virus disease may require life-sustaining therapy, including mechanical ventilation and renal replacement therapy. In conjunction with other reported cases, this series suggests that respiratory and renal failure may occur in severe Ebola virus disease, especially in patients with high viral loads. Ebola virus disease complicated by multiple organ failure can be survivable with the application of advanced life support measures. This collective, multicenter experience is presented in the hope that it may inform the future treatment of patients with Ebola virus disease requiring critical care.
by Sean van Diepen; Saket Girotra; Benjamin S. Abella; Lance B. Becker; Bentley J. Bobrow; Paul S. Chan; Carol Fahrenbruch; Christopher B. Granger; James G. Jollis; Bryan McNally; Lindsay White; Demetris Yannopoulos; Thomas D. Rea
BACKGROUND: The HeartRescue Project is a multistate public health initiative focused on establishing statewide out-of-hospital cardiac arrest (OHCA) systems of care to improve case capture and OHCA care in the community, by emergency medical services (EMS), and at the hospital level. METHODS AND RESULTS: From 2011 to 2015 in the 5 original HeartRescue states, all adults with EMS-treated OHCA due to a presumed cardiac cause were included. In an adult population of 32.8 million, a total of 64 988 OHCAs, including 10 046 patients with a bystander-witnessed OHCA with a shockable rhythm, were treated by 330 EMS agencies. From 2011 to 2015, the case-capture rate for all-rhythm OHCA increased from an estimated 39.0% (n=6762) to 89.2% (n=16 103; P<0.001 for trend). Overall survival to hospital discharge was 11.4% for all rhythms and 34.0% in the subgroup with bystander-witnessed OHCA with a shockable rhythm. We observed modest temporal increases in bystander cardiopulmonary resuscitation (41.8% to 43.5%, P<0.001 for trend) and bystander automated external defibrillator application (3.2% to 5.6%, P<0.001 for trend) in the all-rhythm group, although there were no temporal changes in survival. There were marked all-rhythm survival differences across the 5 states (8.0% to 16.1%, P<0.001) and across participating EMS agencies (2.7% to 26.5%, P<0.001). CONCLUSIONS: In the initial 5 years, the HeartRescue Project developed a population-based OHCA registry and improved statewide case-capture rates and some processes of care, although there were no early temporal changes in survival. The observed survival variation across states and EMS systems presents a future challenge: to elucidate the characteristics of high-performing systems with the goal of improving OHCA care and survival.
Introduction: Rapid growth of the older adult population requires greater epidemiologic characterization of dementia. We developed national prevalence estimates of diagnosed dementia and its subtypes in the highest-risk United States (US) population. Methods: We analyzed Centers for Medicare & Medicaid Services administrative enrollment and claims data for 100% of Medicare fee-for-service beneficiaries enrolled during 2011–2013 and aged ≥68 years as of December 31, 2013 (n = 21.6 million). Results: Over 3.1 million (14.4%) beneficiaries had a claim for a service and/or treatment for any dementia subtype. Dementia not otherwise specified was the most common diagnosis (present in 92.9%). The most common specific subtype was Alzheimer's disease (43.5%), followed by vascular (14.5%), Lewy body (5.4%), frontotemporal (1.0%), and alcohol-induced (0.7%) dementia. The prevalence of other types of diagnosed dementia was 0.2%. Discussion: This study is the first to document the concurrent prevalence of primary dementia subtypes in this US population. The findings can assist in prioritizing dementia research, clinical services, and caregiving resources.
Objective: The interpretation of critical care electroencephalography (EEG) studies is challenging because of the presence of many periodic and rhythmic patterns of uncertain clinical significance. Defining the clinical significance of these patterns requires standardized terminology with high interrater agreement (IRA). We sought to evaluate IRA for the final, published American Clinical Neurophysiology Society (ACNS)-approved version of the critical care EEG terminology (2012 version). Our evaluation included terms not assessed previously and incorporated raters with a broad range of EEG reading experience.
Methods: After reviewing a set of training slides, 49 readers independently completed a Web-based test consisting of 11 identical questions for each of 37 EEG samples (407 questions in total). Questions assessed whether a pattern was an electrographic seizure; pattern location (main term 1) and pattern type (main term 2); and the presence and classification of eight other key features ("plus" modifiers, sharpness, absolute and relative amplitude, frequency, number of phases, fluctuation/evolution, and the presence of "triphasic" morphology).
Results: IRA statistics (κ values) were almost perfect (90-100%) for seizures, main terms 1 and 2, the +S modifier (superimposed spikes/sharp waves or sharply contoured rhythmic delta activity), sharpness, absolute amplitude, frequency, and number of phases. Agreement was substantial for the +F (superimposed fast activity) and +R (superimposed rhythmic delta activity) modifiers (66% and 67%, respectively), moderate for triphasic morphology (58%), and fair for evolution (21%).
Significance: IRA for most terms in the ACNS critical care EEG terminology is high. These terms are suitable for multicenter research on the clinical significance of critical care EEG patterns.
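The κ statistics above are chance-corrected agreement measures of the form κ = (p_o − p_e)/(1 − p_e), where p_o is observed agreement and p_e is agreement expected by chance. A minimal two-rater (Cohen's kappa) sketch follows; the study pooled many raters, so the authors' exact multi-rater variant may differ, and the ratings below are invented:

```python
from collections import Counter

def cohens_kappa(ratings_a: list[str], ratings_b: list[str]) -> float:
    """Two-rater Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is
    observed agreement and p_e is chance-expected agreement from the
    raters' marginal category frequencies."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented example: two readers classifying 10 EEG samples as
# seizure ("sz") or not ("no").
a = ["sz", "sz", "no", "no", "no", "sz", "no", "no", "sz", "no"]
b = ["sz", "sz", "no", "no", "sz", "sz", "no", "no", "no", "no"]
print(round(cohens_kappa(a, b), 2))  # 0.58: "moderate" agreement despite 80% raw agreement
```

The example also shows why κ, not raw percent agreement, is reported: chance agreement inflates the raw figure.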
Objective: Continuous EEG monitoring (cEEG) is an emerging technology for which there are no clear guidelines for patient selection or length of monitoring. The purpose of this study was to identify subgroups of pediatric patients with a high incidence of seizures. Study Design: We conducted a retrospective study of 517 children monitored by cEEG in the intensive care unit (ICU) of a children's hospital. The children were stratified using an age-threshold selection method. Using regression modeling, we analyzed significant risk factors for increased seizure risk in younger and older children. Using two alternative correction procedures, we also considered a relevant comparison group to mitigate selection bias and to provide perspective for our findings. Results: We discovered an approximate risk threshold of 14 months: below this threshold, seizure risk increases dramatically. The older children had an overall seizure rate of 18%, and previous seizures were the only significant risk factor. In contrast, the younger children had an overall seizure rate of 45%, and their seizures were significantly associated with hypoxic-ischemic encephalopathy (HIE; p = 0.007), intracranial hemorrhage (ICH; p = 0.005), and central nervous system (CNS) infection (p = 0.02). Children with HIE, ICH, or CNS infection accounted for 61% of all seizure patients under 14 months diagnosed through cEEG. Conclusions: An extremely high incidence of seizures prevails among critically ill children under 14 months of age, particularly those with HIE, ICH, or CNS infection.