Background: Patients with head and neck cancer (HNC) experience psychoneurological symptoms (PNS; i.e., depression, fatigue, sleep disturbance, pain, and cognitive dysfunction) during intensity-modulated radiotherapy (IMRT) that decrease their functional status, quality of life, and survival. The purposes of this study were to examine and visualize the relationships among PNS within networks over time and to identify demographic and clinical characteristics associated with the symptom networks.
Methods: A total of 172 patients (mean age 59.8±9.9 years, 73.8% male, 79.4% White) completed symptom questionnaires at four time points: prior to IMRT (T1) and one month (T2), three months (T3), and 12 months (T4) post IMRT. Network analysis was used to examine the symptom-symptom relationships among the PNS. Centrality indices, including strength, closeness, and betweenness, were used to describe the degree of interconnection within each symptom network. The network comparison test was used to assess differences between pairs of symptom networks.
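As a minimal sketch of the centrality indices named above (illustrative only; the edge weights are hypothetical, not the study's estimates, and this is not the authors' code), strength and closeness for a small weighted symptom network can be computed directly:

```python
import numpy as np

# Hypothetical association weights among the five PNS (made up for
# illustration); W[i][j] is the edge weight between symptoms i and j.
symptoms = ["depression", "fatigue", "sleep", "pain", "cognition"]
W = np.array([
    [0.00, 0.45, 0.25, 0.30, 0.20],
    [0.45, 0.00, 0.35, 0.15, 0.10],
    [0.25, 0.35, 0.00, 0.10, 0.00],
    [0.30, 0.15, 0.10, 0.00, 0.00],
    [0.20, 0.10, 0.00, 0.00, 0.00],
])

# Strength: sum of a node's edge weights.
strength = W.sum(axis=1)

# Closeness: inverse of the total shortest-path distance to all other
# nodes, taking edge distance as 1/weight (Floyd-Warshall below).
D = np.where(W > 0, 1.0 / np.where(W > 0, W, 1.0), np.inf)
np.fill_diagonal(D, 0.0)
for k in range(len(symptoms)):
    D = np.minimum(D, D[:, [k]] + D[[k], :])
closeness = (len(symptoms) - 1) / D.sum(axis=1)

# Betweenness (the share of shortest paths passing through a node) is
# typically computed with a graph library such as networkx; omitted here.
print(symptoms[int(np.argmax(strength))])  # -> depression
```

With these made-up weights, depression has the largest strength, mirroring the abstract's finding that it is the core symptom of the network.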
Results: Depression was associated with the other four symptoms, and fatigue was associated with the other three symptoms, across the four assessments. Based on the centrality indices (strength = 1.3–1.4, closeness = 0.06–0.08, betweenness = 4–10), depression was the core symptom in all symptom networks, followed by fatigue. Female gender, higher levels of stress, and no alcohol use were associated with greater network global strength (i.e., more strongly interconnected symptom networks) prior to IMRT.
Conclusion: Network analysis provides a novel approach to gain insights into the relationships among self-reported PNS and identify the core symptoms and associated characteristics. Clinicians may use this information to develop symptom management interventions that target core symptoms and interconnections within a network.
by
Eddie Zhang;
Karen J. Ruth;
Mark K. Buyyounouski;
Robert A. Price, Jr.;
Robert G. Uzzo;
Mark L. Sobczak;
Alan Pollack;
J. Karen Wong;
David Y. T. Chen;
Mark A. Hallman;
Richard E. Greenberg;
Deborah W. Bruner;
Tahseen Al-Saleem;
Eric M. Horwitz
Purpose/Objective(s):
To prospectively determine if a novel technique to limit the doses delivered to the penile bulb (PB) and corporal bodies (CB) with IMRT preserves erectile function to a greater degree compared to standard IMRT in men with low to intermediate risk prostate cancer.
Materials/Methods:
From 2003 to 2012, 116 patients with low- to intermediate-risk, clinical T1a-T2c prostate adenocarcinoma were enrolled in a single-institution, prospective, single-blind, phase III randomized trial. All patients received definitive IMRT to 74–80 Gy in 37–40 fractions. Patients were assigned to receive standard IMRT (s-IMRT) or erectile tissue-sparing IMRT (ETS-IMRT), which added planning constraints limiting the D90 to the PB and CB to ≤ 15 Gy and ≤ 7 Gy, respectively. Erectile potency was defined as the ability to attain and maintain a penile erection sufficient for satisfactory sexual performance more than 50% of the time and was measured using the simplified International Index of Erectile Function (IIEF-5) and PDE5 medication records. Potency preservation, biochemical failure, overall survival, and acute/late toxicity were compared between arms.
Results:
In total, 62 patients received ETS-IMRT and 54 received s-IMRT. Treatment arms were balanced with respect to age (median 62 years [range 42–77]), race, smoking status, BMI, baseline AUA score, hypertension, diabetes, and hypercholesterolemia. The majority of patients presented with Gleason 6 disease (ETS-IMRT 79%, s-IMRT 93%, p=0.06), T1c stage (ETS-IMRT 73%, s-IMRT 82%, p=0.32), and pre-treatment PSA < 10 (ETS-IMRT 94%, s-IMRT 93%, p=0.99). Prior to treatment, all patients in both arms reported erectile potency. With a median follow-up of 6.1 years, 85 patients were eligible for the potency preservation analysis. At 24 months after treatment, erectile potency was preserved in 52% of patients in the ETS-IMRT arm and 51% of patients in the s-IMRT arm (p=0.95). PDE5 inhibitors were initiated in 41.7% of ETS-IMRT patients and 35.1% of s-IMRT patients (p=0.54). Among all patients enrolled, there was no difference in freedom from biochemical failure between those treated with ETS-IMRT and s-IMRT (91.8% vs 90.7%, p=0.77) with a median follow-up of 7.4 years. There were no differences in acute or late GI or GU toxicity. Because substantial PB and CB dose sparing also occurred in the control arm (D90 to the PB ≤ 15 Gy was met in 88% of ETS-IMRT patients and 75% of s-IMRT patients; D90 to the CB ≤ 7 Gy was met in 71% of ETS-IMRT patients and 43% of s-IMRT patients), an unplanned per-protocol analysis was performed. No differences in potency preservation or secondary endpoints were seen between patients who exceeded the erectile tissue-sparing constraints and those who met them.
Conclusions:
Erectile tissue sparing IMRT that limits dose to the PB and CB is safe and feasible, although there was no significant difference in potency preservation with long-term follow-up.
by
Ronald Eldridge;
Stephanie L. Pugh;
Andy Trotti;
Kenneth Hu;
Sharon Spencer;
Sue Yom;
David Rosenthal;
Nancy Read;
Anand Desai;
Elizabeth Gore;
George Shenouda;
Mark V. Mishra;
Deborah Bruner;
Canhua Xiao
Background:
To determine whether post-treatment functional status is prognostic of overall survival in patients with head and neck cancer (HNC).
Methods:
In an HNC clinical trial, 495 patients completed two post-treatment functional assessments measuring diet, public eating, and speech within 6 months. Patients were grouped by level of impairment (highly, moderately, modestly, or not impaired), and each patient was classified as improved, declined, or unchanged from the first assessment to the second. Multivariable Cox models estimated overall mortality.
Results:
Across all three scales, the change in post-treatment patient function strongly predicted overall survival. For diet, patients who declined to highly impaired had more than three times the mortality of patients who were not impaired at both assessments (hazard ratio [HR] = 3.60; 95% confidence interval [CI]: 2.02, 6.42). For patients who improved from highly impaired, mortality was statistically similar to that of patients with no impairment (HR = 1.38; 95% CI: 0.82, 2.31).
Conclusions:
Post-treatment functional status is a strong prognostic marker of survival in HNC patients.
by
Stephanie L. Pugh;
Gwen Wyatt;
Raimond K. W. Wong;
Stephen M. Sagar;
Bevan Yueh;
Anurag K. Singh;
Min Yao;
Phuc Felix Nguyen-Tan;
Sue S. Yom;
Francis S. Cardinale;
Khalil Sultanem;
D. Ian Hodson;
Greg A. Krempl;
Ariel Chavez;
Alexander M. Yeh;
Deborah W. Bruner
Context The 15-item University of Washington Quality of Life questionnaire–Radiation Therapy Oncology Group (RTOG) modification (UW-QOL–RTOG modification) has been used in several trials of head and neck cancer conducted by NRG Oncology, such as RTOG 9709, RTOG 9901, RTOG 0244, and RTOG 0537. Objectives This study is an exploratory factor analysis (EFA) to establish validity and reliability of the instrument subscales. Methods EFA on the UW-QOL–RTOG modification was conducted using baseline data from NRG Oncology's RTOG 0537, a trial of acupuncture-like transcutaneous electrical nerve stimulation in treating radiation-induced xerostomia. The Cronbach α coefficient was calculated to measure reliability; correlation with the University of Michigan Xerostomia Related Quality of Life Scale was used to evaluate concurrent validity; and correlations between consecutive time points were used to assess test-retest reliability. Results The EFA of the 15-item modified tool retained 11 items, split into four factors: mucus, eating, pain, and activities. Cronbach α ranged from 0.71 to 0.93 for the factors and the total score, consisting of all 11 items. There were strong correlations (ρ ≥ 0.60) between consecutive time points and between the total score and the Xerostomia Related Quality of Life Scale total score (ρ > 0.65). Conclusion The UW-QOL–RTOG modification is a valid tool that can be used to assess symptom burden of head and neck cancer patients receiving radiation therapy or those who have recently completed radiation. The modified tool has acceptable reliability, concurrent validity, and test-retest reliability in this patient population, as well as the advantage of having been shortened from 15 to 11 items.
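As a sketch of the reliability statistic reported above (with made-up item responses, not the RTOG 0537 data), Cronbach's α for a k-item scale is k/(k−1) · (1 − Σ item variances / variance of the total score):

```python
import numpy as np

# Hypothetical responses: 5 respondents x 4 items (illustrative only).
data = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
], dtype=float)

k = data.shape[1]                          # number of items
item_vars = data.var(axis=0, ddof=1)       # per-item sample variances
total_var = data.sum(axis=1).var(ddof=1)   # variance of the summed score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 2))  # -> 0.95
```

Values of α ≥ 0.70 are conventionally considered acceptable, consistent with the 0.71–0.93 range reported for the factors and total score.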
by
Canhua Xiao;
Jennifer Moughan;
Benjamin Movsas;
Andre A. Konski;
Gerald E. Hanks;
James D. Cox;
Mack Roach, III;
Kenneth L. Zeitzer;
Colleen A. Lawton;
Christopher A. Peters;
Seth A. Rosenthal;
I.-Chow Joe Hsu;
Eric M. Horwitz;
Mark V. Mishra;
Jeff M. Michalski;
Matthew B. Parliament;
David P. D'Souza;
Stephanie L. Pugh;
Deborah W. Bruner
Purpose: A meta-analysis of sociodemographic variables and their association with late (>180 days from the start of radiation therapy [RT]) bowel, bladder, and clustered bowel and bladder toxicities was conducted in patients with high-risk (clinical stage T2c-T4b, Gleason score 8-10, or prostate-specific antigen level >20) prostate cancer. Methods and materials: Three NRG trials (RTOG 9202, RTOG 9413, and RTOG 9406) that accrued from 1992 to 2000 were used. Late toxicities were measured with the Radiation Therapy Oncology Group Late Radiation Morbidity Scale. After controlling for study, age, Karnofsky Performance Status (KPS), and year of accrual, sociodemographic variables were added to the model for each outcome variable of interest in a stepwise fashion using Fine-Gray regression models with an entry criterion of 0.05. Results: A total of 2432 patients were analyzed, most of whom were Caucasian (76%), had a KPS score of 90 to 100 (92%), and received whole-pelvic RT plus hormone therapy (HT) (67%). Of these patients, 13% and 16% experienced late grade ≥2 bowel and bladder toxicities, respectively, and 2% and 3% experienced late grade ≥3 bowel and bladder toxicities, respectively. Late grade ≥2 clustered bowel and bladder toxicities were seen in approximately 1% of patients, and late grade ≥3 clustered toxicities were seen in 2 patients (<1%). The multivariate analysis showed that patients who received prostate-only RT+HT had a lower risk of experiencing grade ≥2 bowel toxicities than those who received whole-pelvic RT plus long-term (LT) HT (hazard ratio: 0.36; 95% confidence interval, 0.18-0.73; P = .0046 and hazard ratio: 0.43; 95% confidence interval, 0.23-0.80; P = .008, respectively). Patients who received whole-pelvic RT had similar rates of grade ≥2 bowel or bladder toxicities whether they received LT or short-term HT.
Conclusions: Patients with high-risk prostate cancer who receive whole-pelvic RT plus LT HT are more likely to have a grade ≥2 bowel toxicity than those who receive prostate-only RT. Late bowel and bladder toxicities were infrequent. Future studies will need to confirm these findings using current radiation technology and patient-reported outcomes.
Purpose: To assess how accrual to clinical trials relates to U.S. minority population density around clinical trial sites and to the distance patients travel to Radiation Therapy Oncology Group (RTOG) trial sites.
Methods: Data included member site addresses and zip codes, patient accrual, and patient race/ethnicity and zip code. Geographic Information System (GIS) maps were developed for overall, Latino, and African American accrual to trials by population density. The Kruskal-Wallis test was used to assess differences in distance traveled by site, type of trial, and race/ethnicity.
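The distance comparison described above can be sketched with SciPy's Kruskal-Wallis test. The travel distances below are hypothetical (not the RTOG accrual data) and are chosen only to show the mechanics of the test:

```python
from scipy.stats import kruskal

# Hypothetical travel distances in miles for three groups (illustrative).
white = [14.2, 12.9, 18.5, 11.0, 16.3, 13.7]
latino = [7.5, 8.2, 9.1, 6.4, 8.8, 7.9]
african_american = [5.1, 6.0, 5.9, 4.8, 6.2, 5.5]

# Kruskal-Wallis is a rank-based test for differences in distributions
# across two or more independent groups.
stat, p = kruskal(white, latino, african_american)
print(p < 0.05)  # -> True for these well-separated samples
```

The test is nonparametric, which suits travel-distance data since such distances are typically right-skewed rather than normally distributed.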
Results: From 2006 to 2009, 6168 patients enrolled in RTOG trials. The RTOG U.S. site distribution is generally concordant with overall population density. Sites with the highest accrual are located throughout the U.S. and parts of Canada and do not cluster, nor does the highest minority accrual cluster in areas of highest U.S. minority population density. Of the 4913 U.S. patients with complete data, patients traveled a median of 11.6 miles to participate in clinical trials. Whites traveled significantly longer distances (median 12.9 miles; p<0.0001), followed by Latinos (8.22 miles) and African Americans (5.85 miles). Patients were willing to travel longer distances to academic sites than to community sites, and there was a trend toward significantly longer median travel for therapeutic vs cancer control or metastatic trials.
Conclusions: Location matters, but only to a degree, for minority compared to non-minority participation in clinical trials. GIS tools help identify gaps in geographic access and travel burden for clinical trials participation. Strategies that emerged using these tools are discussed.
Purpose: To determine individual, organizational, and protocol-specific factors associated with attrition in NRG Oncology's radiation-based clinical trials.
Methods and Materials: This retrospective analysis included 27,443 patients from 134 NRG Oncology radiation-based clinical trials with primary efficacy results published from 1985 to 2011. Trials were separated on the basis of the primary endpoint (fixed-time vs event-driven). The cumulative incidence approach was used to estimate time to attrition, and cause-specific Cox proportional hazards models were used to assess factors associated with attrition.
Results: Most patients (69%) were enrolled in an event-driven trial (n = 18,809), while 31% were enrolled in a fixed-time trial (n = 8634). Median follow-up was 4.1 months for patients enrolled in fixed-time trials and 37.2 months for patients enrolled in event-driven trials. Fixed-time trials with a duration < 6 months had a 5-month attrition rate of 4.3% (95% confidence interval [CI]: 3.4%, 5.5%), and those with a duration ≥ 6 months had a 1-year attrition rate of 1.6% (95% CI: 1.2%, 2.1%). Event-driven trials had 1- and 5-year attrition rates of 0.5% (95% CI: 0.4%, 0.6%) and 13.6% (95% CI: 13.1%, 14.1%), respectively. Younger age, female gender, and Zubrod performance status >0 were associated with greater attrition, as were enrollment at institutions in the West and South regions and participation in fixed-time trials.
Conclusions: Attrition in clinical trials can have a negative effect on trial outcomes. Data on factors associated with attrition can help guide the development of strategies to enhance retention. These strategies should focus on patient characteristics associated with attrition in both fixed-time and event-driven trials as well as in differing geographic regions of the country.
Rationale and Objectives: To investigate the diagnostic accuracy of ultrasound histogram features in the quantitative assessment of radiation-induced parotid gland injury and to identify potential imaging biomarkers for radiation-induced xerostomia (dry mouth), the most common and debilitating side effect after head-and-neck radiotherapy (RT).
Materials and Methods: Thirty-four patients who had developed xerostomia after RT for head-and-neck cancer were enrolled. Radiation-induced xerostomia was defined by the Radiation Therapy Oncology Group/European Organization for Research and Treatment of Cancer morbidity scale. Ultrasound scans were performed bilaterally on each patient's parotid glands. The 34 patients were stratified into an acute-toxicity group (16 patients, ≤ 3 months after treatment) and a late-toxicity group (18 patients, > 3 months after treatment). A separate control group of 13 healthy volunteers underwent similar ultrasound scans of their parotid glands. Six sonographic features were derived from the echo-intensity histograms to assess acute and late toxicity of the parotid glands. The quantitative assessments were compared to a radiologist's clinical evaluations. The diagnostic accuracy of these ultrasound histogram features was evaluated with the receiver operating characteristic (ROC) curve.
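The ROC evaluation described above can be sketched with a rank-based (Mann-Whitney) computation of the area under the curve. The feature values below are hypothetical, not the study's measurements:

```python
import numpy as np

def auc_mann_whitney(pos, neg):
    """AUC as the probability that a positive-group value exceeds a
    negative-group value, counting ties as half (Mann-Whitney U / n1*n2)."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical histogram-feature values: toxicity patients vs healthy controls.
toxicity = [0.90, 0.80, 0.60, 0.45]
control = [0.30, 0.50, 0.20, 0.40]
auc = auc_mann_whitney(toxicity, control)
print(auc)  # -> 0.9375
```

An AUC above 0.90, as reported for several features in this study, indicates excellent discrimination; 0.5 corresponds to chance-level performance.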
Results: With an area under the ROC curve greater than 0.90, several histogram features demonstrated excellent diagnostic accuracy for the evaluation of acute and late toxicity of the parotid glands. Significant differences (P < .05) in all six sonographic features were demonstrated between the control, acute-toxicity, and late-toxicity groups. However, subjective radiologic evaluation could not distinguish between acute and late toxicity of the parotid glands.
Conclusions: We demonstrated that ultrasound histogram features could be used to measure acute and late toxicity of the parotid glands after head-and-neck cancer RT, which may be developed into a low-cost imaging method for xerostomia monitoring and assessment.
PURPOSE: Hot flashes are a common and debilitating symptom among survivors of breast cancer. This study aimed to evaluate the effects of electroacupuncture (EA) versus gabapentin (GP) for hot flashes among survivors of breast cancer, with a specific focus on the placebo and nocebo effects. PATIENTS AND METHODS: We conducted a randomized controlled trial involving 120 survivors of breast cancer who experienced bothersome hot flashes at least twice per day. Participants were randomly assigned to receive 8 weeks of EA or GP once per day, with validated placebo controls (sham acupuncture [SA] or placebo pills [PPs]). The primary end point was the change in the hot flash composite score (HFCS) between SA and PP at week 8; secondary end points included group comparisons and additional evaluation at week 24 for durability of treatment effects. RESULTS: By week 8, SA produced a significantly greater reduction in HFCS than did PP (-2.39; 95% CI, -4.60 to -0.17). Among all treatment groups, the mean reduction in HFCS was greatest in the EA group, followed by SA, GP, and PP (-7.4 v -5.9 v -5.2 v -3.4; P < .001). The pill groups had more treatment-related adverse events than did the acupuncture groups: GP (39.3%), PP (20.0%), EA (16.7%), and SA (3.1%), with P = .005. By week 24, HFCS reduction was greatest in the EA group, followed by SA, PP, and GP (-8.5 v -6.1 v -4.6 v -2.8; P = .002). CONCLUSION: Acupuncture produced larger placebo and smaller nocebo effects than did pills for the treatment of hot flashes. EA may be more effective than GP, with fewer adverse effects, for managing hot flashes among breast cancer survivors; however, these preliminary findings need to be confirmed in larger randomized controlled trials with long-term follow-up.