Background: Cortical spreading depolarization (SD) is a propagating depolarization wave of neurons and glial cells in the cerebral gray matter. SD occurs in all forms of severe acute brain injury, as documented using invasive detection methods. On the basis of many experimental studies of mechanical brain deformation and concussion, SDs have often been hypothesized to occur in human concussion. However, this hypothesis cannot be confirmed in humans, as SDs can currently be detected only with invasive methods that would require either a craniotomy or a burr hole to be performed on athletes. Typical electroencephalography electrodes placed on the scalp can suggest the presence of an SD but have not been able to identify SDs accurately and reliably. Methods: To explore a noninvasive method to resolve this hurdle, we developed a finite element numerical model that simulates the scalp voltage changes induced by a brain surface SD. We then compared our simulation results with retrospectively evaluated data in patients with aneurysmal subarachnoid hemorrhage from Drenckhahn et al. (Brain 135:853, 2012). Results: The simulated ratio of peak scalp voltage to peak cortical voltage, Vscalp/Vcortex, was 0.0735, whereas the ratio from the retrospectively evaluated data was 0.0316 (0.0221, 0.0527) (median [1st quartile, 3rd quartile], n = 161, p < 0.001, one-sample Wilcoxon signed-rank test). These differing values nonetheless support the model, because their differences can be attributed to differences in shape between concussive SDs and aneurysmal subarachnoid hemorrhage SDs, as well as to the inherent limitations of the voltage measurements in the human study. The simulated scalp surface potential was then used to design a virtual scalp detection array. Error analysis and visual reconstruction showed that 1 cm is the optimal electrode spacing for visually identifying the propagating scalp voltage of a cortical SD; electrode spacings of 2 cm and above produce distorted images and high errors in the reconstructed image. Conclusions: Our analysis suggests that concussive (and other) SDs can be detected from the scalp, which could confirm SD occurrence in human concussion, provide concussion diagnosis on the basis of an underlying physiological mechanism, and lead to noninvasive SD detection in the setting of severe acute brain injury.
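As an illustration of the statistical comparison above, the following minimal Python sketch runs a one-sample Wilcoxon signed-rank test of measured Vscalp/Vcortex ratios against the simulated value of 0.0735. The ratios array is a synthetic placeholder, not the measurements from Drenckhahn et al.:

    # Hedged sketch: one-sample Wilcoxon signed-rank test of voltage ratios
    # against a hypothesized (simulated) value. Data are fabricated (n = 161).
    import numpy as np
    from scipy import stats

    simulated_ratio = 0.0735  # peak Vscalp/Vcortex from the finite element model
    rng = np.random.default_rng(0)
    ratios = rng.lognormal(np.log(0.0316), 0.4, size=161)  # placeholder measurements

    # One-sample test: are the measured ratios centered on the simulated value?
    res = stats.wilcoxon(ratios - simulated_ratio)
    q1, q3 = np.percentile(ratios, [25, 75])
    print(f"median = {np.median(ratios):.4f}, IQR = ({q1:.4f}, {q3:.4f}), p = {res.pvalue:.3g}")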
Background: After stroke, increases in contralesional primary motor cortex (M1CL) activity and excitability have been reported. In preclinical studies, M1CL reorganization is related to the extent of ipsilesional M1 (M1IL) injury, but this has yet to be tested clinically. Objectives: We tested the hypothesis that the extent of damage to ipsilesional M1 and/or its corticospinal tract (CST) determines the magnitude of M1CL reorganization and its relationship to affected hand function in humans recovering from stroke. Methods: Thirty-five participants with a single subacute ischemic stroke affecting M1 or the CST and hand paresis underwent MRI scans of the brain to measure lesion volume and CST lesion load. Transcranial magnetic stimulation (TMS) of M1IL was used to determine the presence or absence of an electromyographic response (motor evoked potential; MEP+ or MEP−). M1CL reorganization was determined by TMS applied to M1CL at increasing intensities. Hand function was quantified with the Jebsen Taylor Hand Function Test. Results: The extent of M1CL reorganization was related to lesion volume in the MEP− group, but not in the MEP+ group. Greater M1CL reorganization was associated with more impaired hand function in MEP− but not MEP+ participants. Absence of an MEP (MEP−), larger lesion volumes, and higher CST lesion loads, particularly in CST fibers originating in M1, were associated with greater impairment of hand function. Conclusions: In the subacute post-stroke period, stroke volume and M1IL output determine the extent of M1CL reorganization and its relationship to affected hand function, consistent with preclinical evidence. ClinicalTrials.gov Identifier: NCT02544503.
Background: Resistance to malaria infection may be conferred by erythrocyte genetic variations, including glucose-6-phosphate dehydrogenase (G6PD) deficiency and lack of Duffy antigens. In red blood cell (RBC) transfusion, G6PD deficiency may shorten transfusion survival. Because Duffy-null units are commonly transfused in sickle cell disease (SCD) due to antigen-matching protocols, we examined whether Duffy-null donor RBC units have a higher prevalence of G6PD deficiency. Materials and methods: Pediatric patients with SCD on chronic transfusion therapy were followed prospectively over multiple transfusions. RBC unit segments were collected to measure G6PD activity and for RBC genotyping. The decline in donor hemoglobin (ΔHbA) following transfusion was assessed from immediate posttransfusion estimates and HbA measurements approximately 1 month later. Results: Of 564 evaluable RBC units, 59 (10.5%) were G6PD deficient (23 severe, 36 moderate deficiency); 202 (37.6%) units were Duffy-null. G6PD deficiency occurred in 40 (19.8%) Duffy-null units versus 15 (4.5%) Duffy-positive units (p < .0001). In univariate analysis, the fraction of Duffy-null RBC units per transfusion was associated with a greater decline in HbA (p = .038); however, in multivariate analysis, severe G6PD deficiency (p = .0238) but not Duffy-null RBC (p = .0139) was associated with ΔHbA. Conclusion: Selection of Duffy-null RBC units may result in shorter in vivo survival of transfused RBCs due to a higher likelihood of transfusing units from G6PD-deficient donors.
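The multivariate step described in the Results could be set up as in the hedged sketch below: a regression of the post-transfusion decline in donor hemoglobin on severe G6PD deficiency and the fraction of Duffy-null units. The data frame and variable names (severe_g6pd, duffy_null_frac, dHbA) are fabricated for illustration, not the study's model or dataset:

    # Hedged sketch: multivariable regression of dHbA on unit characteristics.
    # All data are synthetic; coefficients have no clinical meaning.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 200
    df = pd.DataFrame({
        "severe_g6pd": rng.integers(0, 2, n),     # any severely deficient unit transfused
        "duffy_null_frac": rng.uniform(0, 1, n),  # fraction of Duffy-null units
    })
    df["dHbA"] = 1.0 + 0.4 * df["severe_g6pd"] + rng.normal(0, 0.5, n)

    model = smf.ols("dHbA ~ severe_g6pd + duffy_null_frac", data=df).fit()
    print(model.summary().tables[1])  # per-covariate estimates and p-values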
Background. Maintenance immunosuppression with belatacept following kidney transplantation results in improved long-term graft function compared with calcineurin inhibitors. However, broad application of belatacept has been limited, in part by logistical barriers surrounding the monthly (q1m) infusion requirement. Methods. To determine whether every-2-mo (q2m) belatacept is noninferior to standard q1m maintenance, we conducted a prospective, single-center randomized trial in low-immunologic-risk, stable renal transplant recipients. Here, a post hoc analysis of 3-y outcomes, including renal function and adverse events, is reported. Results. One hundred sixty-three patients received treatment in the q1m control group (n = 82) or the q2m study group (n = 81). Renal allograft function, measured by baseline-adjusted estimated glomerular filtration rate, was not significantly different between groups (time-averaged mean difference of 0.2 mL/min/1.73 m2; 95% confidence interval: -2.5, 2.9). There were no statistically significant differences in time to death or graft loss, freedom from rejection, or freedom from donor-specific antibodies (DSAs). During the extended 12- to 36-mo follow-up, 3 deaths and 1 graft loss occurred in the q1m group, compared with 2 deaths and 2 graft losses in the q2m group. In the q1m group, 1 patient developed DSAs and acute rejection. In the q2m group, 3 patients developed DSAs, 2 of them with associated acute rejection. Conclusions. On the basis of renal function and survival at 36 mo similar to those with q1m dosing, q2m belatacept is a potentially viable maintenance immunosuppressive strategy in low-immunologic-risk kidney transplant recipients that may facilitate increased clinical utilization of costimulation blockade-based immunosuppression.
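For readers unfamiliar with noninferiority logic, the reported confidence interval is judged against a prespecified margin: q2m is noninferior if the lower CI bound for the eGFR difference lies above that margin. The abstract does not state the trial's margin, so the value used in this minimal sketch is purely illustrative:

    # Hedged sketch of a noninferiority check on the reported eGFR difference.
    # The -5 mL/min/1.73 m2 margin is an assumption for illustration only.
    diff, ci_low, ci_high = 0.2, -2.5, 2.9  # time-averaged mean difference (q2m - q1m), 95% CI
    margin = -5.0
    noninferior = ci_low > margin           # CI must exclude the margin
    print(f"difference = {diff} (95% CI {ci_low}, {ci_high}); noninferior: {noninferior}")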
HIV-1 protease inhibitors (PIs) exhibit different protein binding affinities and achieve variable plasma and tissue concentrations. The degree of plasma protein binding may affect central nervous system penetration. This cross-sectional study assessed cerebrospinal fluid (CSF) unbound PI concentrations, HIV-1 RNA, and neopterin levels in subjects receiving either ritonavir-boosted darunavir (DRV), which is 95% plasma protein bound, or atazanavir (ATV), 86% bound. Unbound PI trough concentrations were measured using rapid equilibrium dialysis and liquid chromatography/tandem mass spectrometry. Plasma and CSF HIV-1 RNA and neopterin were measured by Ampliprep/COBAS® Taqman® 2.0 assay (Roche) and enzyme-linked immunosorbent assay (ALPCO), respectively. The CSF/plasma unbound drug concentration ratio was higher for ATV (0.09; 95% confidence interval [CI] 0.06-0.12) than for DRV (0.04; 95% CI 0.03-0.06). Unbound CSF concentrations were lower than the protein-adjusted wild-type 50% inhibitory concentration (IC50) in all ATV-treated subjects and in 1 DRV-treated subject (P < 0.001). CSF HIV-1 RNA was detected in 2/15 ATV and 4/15 DRV subjects (P = 0.65). CSF neopterin levels were low and similar between arms. ATV had a higher CSF/plasma unbound drug ratio than DRV. Low CSF HIV-1 RNA and neopterin levels suggest that both regimens resulted in CSF virologic suppression and controlled inflammation.
Background and Aim: To develop and evaluate a culture-specific nutrient intake assessment tool for use in adults with pulmonary tuberculosis (TB) in Tbilisi, Georgia.
Methods: We developed an instrument to measure food intake over 3 consecutive days using a questionnaire format. The tool was then compared with 24-hour food recalls. Food intake data from 31 subjects with TB were analyzed using the Nutrient Database System for Research (NDS-R) dietary analysis program. Paired t-tests, Pearson correlations, and intraclass correlation coefficients (ICC) were used to assess agreement between the two dietary assessment methods for calculated nutrient intakes.
Results: The Pearson correlation coefficient for mean daily caloric intake between the 2 methods was 0.37 (P = 0.04), with a mean difference of 171 kcal/day (P = 0.34). The ICC was 0.38 (95% CI: 0.03 to 0.64), suggesting that within-patient variability may be larger than between-patient variability. Results for mean daily intake of total fat, total carbohydrate, total protein, retinol, vitamins D and E, thiamine, calcium, sodium, iron, selenium, copper, and zinc were also similar between the two assessment methods.
Conclusions: This novel nutrient intake assessment tool provided quantitative nutrient intake data from TB patients. These pilot data can inform larger studies in similar populations.
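A minimal sketch of the agreement analysis named in the Methods above (paired t-test, Pearson correlation, and a one-way random-effects ICC for two measurements per subject), using synthetic data rather than the study's 31 subjects:

    # Hedged sketch: agreement statistics between two dietary assessment methods.
    # Data are fabricated placeholders for mean daily caloric intake.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_kcal = rng.normal(2200, 400, size=31)
    tool = true_kcal + rng.normal(0, 350, size=31)      # 3-day questionnaire tool
    recall = true_kcal + rng.normal(171, 350, size=31)  # 24-hour recalls (illustrative bias)

    t, p_t = stats.ttest_rel(tool, recall)  # paired t-test on mean daily kcal
    r, p_r = stats.pearsonr(tool, recall)   # Pearson correlation between methods

    # One-way random-effects ICC(1,1) for k = 2 measurements per subject
    data = np.column_stack([tool, recall])
    n, k = data.shape
    ms_between = k * data.mean(axis=1).var(ddof=1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    print(f"paired t p = {p_t:.3f}, Pearson r = {r:.2f} (p = {p_r:.3f}), ICC = {icc:.2f}")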
Background
Immune-mediated changes in circulating α-1-acid glycoprotein (AAG), a type 1 acute-phase protein that binds protease inhibitors (PIs), may alter protein binding and contribute to the pharmacokinetic (PK) variability of PIs.
Methods
In a prospective, 2-phase intensive PK study of antiretroviral-naive human immunodeficiency virus (HIV)-infected subjects treated with a lopinavir/ritonavir-based regimen, steady-state PK sampling and AAG assays were performed at weeks 2 and 16 of treatment.
Results
Median entry age was 43 years (n = 16). Median plasma HIV-1 RNA, CD4 T-cell count, and AAG were 5.16 log10 copies/mL, 28 cells/μL, and 143 mg/dL, respectively. The total lopinavir area under the concentration-time curve (AUC12_total) and maximum concentration (Cmax_total) changed linearly with AAG, at mean rates (slope ± SE) of 16 ± 7 mg·h/L (P = .04) and 1.6 ± 0.6 mg/L (P = .02) per 100 mg/dL increase in AAG levels, respectively (n = 15). A 29% drop in AAG levels between week 2 and week 16 was associated with 14% (geometric mean ratio [GMR] = 0.86; 90% confidence interval [CI] = 0.74-0.98) and 13% (GMR = 0.87; 90% CI = 0.79-0.95) reductions in AUC12_total and Cmax_total, respectively. Neither free lopinavir PK parameters nor antiviral activity (HIV-1 RNA average AUC minus baseline) was affected by the change in plasma AAG.
Conclusions
Changes in plasma AAG levels alter total lopinavir concentrations but not free lopinavir exposure or antiviral activity. This observation may have implications for therapeutic drug monitoring.
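As a rough illustration of the exposure metrics reported in the Results, the sketch below computes a trapezoidal AUC over a 12-h dosing interval and a geometric mean ratio with 90% CI from log-scale within-subject ratios. All concentration values are fabricated, not study data:

    # Hedged sketch: trapezoidal AUC12 and GMR (week 16 vs week 2) with 90% CI.
    import numpy as np
    from scipy import integrate, stats

    t = np.array([0, 1, 2, 4, 6, 8, 12])                        # h post-dose
    conc_w2 = np.array([6.0, 9.5, 11.0, 10.2, 8.8, 7.5, 5.9])   # mg/L, week 2 (fabricated)
    conc_w16 = conc_w2 * 0.86                                   # illustrative 14% lower exposure
    auc_w2 = integrate.trapezoid(conc_w2, t)                    # AUC12, mg*h/L
    auc_w16 = integrate.trapezoid(conc_w16, t)

    # GMR with 90% CI from log-scale paired ratios across subjects (fabricated, n = 15)
    rng = np.random.default_rng(2)
    log_ratio = rng.normal(np.log(0.86), 0.15, size=15)
    gmr = np.exp(log_ratio.mean())
    ci = np.exp(stats.t.interval(0.90, len(log_ratio) - 1,
                                 loc=log_ratio.mean(), scale=stats.sem(log_ratio)))
    print(f"AUC12: wk2 = {auc_w2:.1f}, wk16 = {auc_w16:.1f}; GMR = {gmr:.2f}, 90% CI = {ci}")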
Transfusion-transmitted cytomegalovirus (TT-CMV) can cause serious morbidity and mortality in low-birth-weight infants (LBWIs). TT-CMV can be minimized in LBWIs born to cytomegalovirus (CMV)-seronegative mothers through the use of CMV-seronegative blood components. Although evidence has independently shown that either leukoreduction or the use of CMV-seronegative components mitigates TT-CMV, the efficacy of combining these 2 strategies has not been substantiated in very LBWIs (<1500 g) born to either CMV-seronegative or CMV-seropositive mothers. Nonetheless, the serious risks of CMV infection posed by allogeneic transfusions and the broad implementation of universal leukoreduction have made this combination strategy the de facto clinical standard for transfusion of LBWIs. Although preferred, this combined approach has not been validated in clinical trials and thus warrants a large prospective study to determine whether it is the optimal transfusion tactic or whether additional safety measures are necessary to prevent TT-CMV in LBWIs born to both CMV-seronegative and CMV-seropositive mothers. The aim of this prospective birth cohort study, therefore, is to estimate the incidence of TT-CMV in 1300 LBWIs (<1500 g) who receive CMV-seronegative plus leukoreduced blood products, in order to evaluate the effectiveness of this coupled strategy. Conducted in Atlanta, GA, the study is registered at the US National Institutes of Health (ClinicalTrials.gov no. NCT00907686).
Objective: Recent studies have detected an association between red blood cell (RBC) transfusions and necrotizing enterocolitis (NEC). We hypothesized that RBC transfusions increase the risk of NEC in premature infants and investigated whether the risk of 'transfusion-associated' NEC is higher in infants with lower hematocrits and advanced postnatal age.
Methods: Retrospective comparison of NEC patients and control patients born at <34 weeks' gestation.
Results: The frequency of RBC transfusions was similar in NEC patients (47/93, 51%) and controls (52/91, 58%). Late-onset NEC (>4 weeks of age) was more frequently associated with a history of transfusion(s) than early-onset NEC (adjusted OR = 6.7; 95% CI = 1.5–31.2; p = 0.02). Compared with non-transfused patients, RBC-transfused patients were born at an earlier gestational age, had greater intensive care needs (including at the time of onset of NEC), and had longer hospital stays. A history of RBC transfusion within 48 hours prior to NEC onset was noted in 38% of patients, most of whom were extremely low birth weight (ELBW) infants.
Conclusions: In most patients, RBC transfusions were temporally unrelated to NEC and may be merely a marker of overall severity of illness. However, the relationship between RBC transfusions and NEC requires further evaluation in ELBW infants using a prospective cohort design.
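The adjusted odds ratio reported in the Results can be obtained from a logistic regression such as the hedged sketch below; the data frame and covariate choice (ga_weeks) are fabricated for illustration and do not reproduce the study's model:

    # Hedged sketch: adjusted OR for late-onset NEC given prior transfusion.
    # All data are synthetic; the resulting OR has no clinical meaning.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 184                                   # 93 NEC + 91 controls, as above
    df = pd.DataFrame({
        "late_onset": rng.integers(0, 2, n),  # NEC onset > 4 weeks of age
        "transfused": rng.integers(0, 2, n),  # any prior RBC transfusion
        "ga_weeks": rng.normal(29, 2, n),     # gestational age covariate
    })
    fit = smf.logit("late_onset ~ transfused + ga_weeks", data=df).fit(disp=0)
    print(np.exp(fit.params["transfused"]))           # adjusted OR for transfusion
    print(np.exp(fit.conf_int().loc["transfused"]))   # 95% CI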