Correction: Health Econ Rev 13, 34 (2023) Following publication of the original article [1], the authors identified an error in the surname of Lisa B. Hightow-Weidman. The incorrect author name is: Lisa B. Hightow-Wideman. The correct author name is: Lisa B. Hightow-Weidman. The author group has been updated above and the original article [1] has been corrected.
Introduction: Cultural competency has been identified as a barrier to lesbian, gay, bisexual and transgender (LGBT) populations seeking care. Mystery shopping has been widely employed in the formal health care sector as a quality improvement (QI) tool to address specific client needs. The approach has had limited use in community-based organizations, due in part to lack of knowledge and concerns about resource requirements. Several mystery shopping initiatives focused on the LGBT population are now being implemented with the goal of reducing barriers to accessing care. One subset targets men who have sex with men (MSM) to increase uptake of human immunodeficiency virus (HIV) testing. No study has investigated the costs of these initiatives. Get Connected was a randomized controlled trial with the objective of increasing uptake of HIV-prevention services among young men who have sex with men (YMSM) through use of a resource-locator application (App). The initial phase of the trial employed peer-led mystery shopping to identify culturally competent HIV testing sites for inclusion in the App. The second phase randomized YMSM to test the efficacy of the App. Our objective was to determine the resource inputs and costs of peer-led mystery shopping to identify clinics for inclusion in the App, as these costs are critical to informing possible adoption by organizations and the sustainability of this model. Methods: Through consultation with study staff, we created a resource inventory for undertaking the community-based, peer-led mystery shopping program. We used activity-based costing to price each of the inputs. We classified inputs as start-up or on-going implementation. We calculated costs for each category, total costs, and cost per mystery shopper visit for the four-month trial and annually, reflecting standard budgeting periods, for data collected from September 2019 through September 2020.
Results: Recruitment and training of peer mystery shoppers were the most expensive tasks. Average start-up costs were $10,001 (SD $39.8). Four-month average implementation costs per visit were $228 (SD $1.97). Average annual implementation costs per visit were 33% lower at $151 (SD $5.60). Conclusions: Peer-led mystery shopping of HIV-testing sites is feasible and likely affordable for medium to large public health departments.
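The activity-based costing approach described above can be sketched in a few lines: each resource input is priced (quantity × unit cost), inputs are classified as start-up or ongoing implementation, and implementation totals are divided by the number of visits. All input names and figures below are illustrative placeholders, not the study's data.

```python
# Minimal sketch of activity-based costing for a mystery shopping program.
# Every quantity and unit cost here is a hypothetical placeholder.

def cost_per_visit(inputs, n_visits):
    """Sum priced inputs (quantity * unit cost) and divide by visit count."""
    total = sum(qty * unit_cost for qty, unit_cost in inputs.values())
    return total, total / n_visits

# Illustrative implementation-phase inputs: (quantity, unit cost in $)
implementation = {
    "shopper stipends":  (40, 50.0),   # 40 visits x $50 stipend each
    "coordinator hours": (120, 25.0),  # staff time coordinating visits
    "travel":            (40, 10.0),   # transport per visit
}

total, per_visit = cost_per_visit(implementation, n_visits=40)
```

The same function applied to a start-up inventory (recruitment, training) would yield the one-time costs reported separately in the abstract.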
Background: Medicaid insurance in Georgia provides limited reimbursement for heart transplant (HT) and left ventricular assist devices (LVAD). We examined whether insurance type affects eligibility for and survival after receipt of HT or LVAD.
Methods and Results: We retrospectively identified patients evaluated for HT/LVAD from 2012 to 2016. We used multivariable logistic and Cox proportional hazards regression to examine the association of insurance type with treatment eligibility and 1-year survival. Of 569 patients evaluated, 282 (49.6%) had private insurance, 222 (39.0%) had Medicare, and 65 (11.4%) had Medicaid. Patients with Medicaid were younger, more likely to be Black, and had fewer medical comorbidities. In adjusted models, Medicare and Medicaid insurance predicted lower odds of eligibility for HT but did not affect survival after HT. Among those ineligible for HT, Medicaid patients were less likely to receive destination therapy (DT) LVAD (adj OR 0.08, 95% CI 0.01-0.66; P = .02) and had an increased risk of death (adj HR 2.03, 95% CI 1.13-3.63; P = .01).
Conclusions: Despite younger age and fewer comorbidities, patients with Medicaid insurance are less likely to receive DT LVAD and have an increased risk of death once deemed ineligible for HT. Medicaid patients in Georgia need improved access to DT LVAD.
Background: Economic dimensions of implementing quality improvement for diabetes care are understudied worldwide. We describe the economic evaluation protocol within a randomised controlled trial that tested a multi-component quality improvement (QI) strategy for individuals with poorly controlled type 2 diabetes in South Asia. Methods/design: This economic evaluation of the Centre for Cardiometabolic Risk Reduction in South Asia (CARRS) randomised trial involved 1146 people with poorly controlled type 2 diabetes receiving care at 10 diverse diabetes clinics across India and Pakistan. The economic evaluation comprises both a within-trial cost-effectiveness analysis (mean 2.5 years of follow-up) and a microsimulation model-based cost-utility analysis (lifetime horizon). Effectiveness measures include multiple risk factor control (achieving HbA1c < 7% and blood pressure < 130/80 mmHg and/or LDL-cholesterol < 100 mg/dl), and patient-reported outcomes including quality-adjusted life years (QALYs) measured by EQ-5D-3L, hospitalizations, and diabetes-related complications at trial end. Cost measures include direct medical and non-medical costs relevant to outpatient care (consultation fees, medicines, laboratory tests, supplies, food, transport, and escort/accompanying person costs) and inpatient care (hospitalization, transport, and accompanying person costs) of the intervention compared to usual diabetes care. Patient, healthcare system, and societal perspectives will be applied for costing. Both costs and health effects will be discounted at 3% per year for the within-trial cost-effectiveness analysis over 2.5 years and the decision modelling analysis over a lifetime horizon. Outcomes will be reported as incremental cost-effectiveness ratios (ICERs) to achieve multiple risk factor control, avoid diabetes-related complications, or gain QALYs, against varying willingness-to-pay threshold values.
Sensitivity analyses will be performed to assess uncertainty around ICER estimates by varying costs (95% CIs) across public vs. private settings and using conservative estimates of effect size (95% CIs) for multiple risk factor control. Costs will be reported in 2018 US$. Discussion: We hypothesize that the additional upfront costs of delivering the intervention will be counterbalanced by improvements in clinical and patient-reported outcomes, thereby rendering this multi-component QI intervention cost-effective in resource-constrained South Asian settings. Trial registration: ClinicalTrials.gov: NCT01212328.
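The planned calculation can be illustrated with a small sketch: yearly cost and QALY streams in each arm are discounted at 3% per year, and the ICER is the incremental cost divided by the incremental QALYs. Every number below is invented for illustration and is not trial data.

```python
# Illustrative sketch of a discounted incremental cost-effectiveness
# ratio (ICER), as described in the protocol. All figures are invented.

def discounted(values_by_year, rate=0.03):
    """Present value of a stream of yearly values at the given discount rate."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values_by_year))

# Hypothetical per-patient yearly streams over a ~2.5-year trial horizon
cost_intervention = discounted([400.0, 250.0, 250.0])  # QI-arm costs ($)
cost_usual        = discounted([150.0, 150.0, 150.0])  # usual-care costs ($)
qaly_intervention = discounted([0.80, 0.82, 0.83])     # QI-arm QALYs
qaly_usual        = discounted([0.78, 0.78, 0.78])     # usual-care QALYs

# Incremental cost per QALY gained; compared against a
# willingness-to-pay threshold to judge cost-effectiveness.
icer = (cost_intervention - cost_usual) / (qaly_intervention - qaly_usual)
```

With these placeholder streams the intervention costs more but yields more QALYs, so the ICER is a positive dollars-per-QALY figure to be weighed against the threshold.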
In year one of the COVID-19 epidemic, the incidence of infection among US carceral populations was 5.5-fold higher than in the community. Prior to the rapid roll-out of a comprehensive jail surveillance program combining Wastewater-Based Surveillance (WBS) and individual testing for SARS-CoV-2, we sought the perspectives of formerly incarcerated individuals on mitigation strategies against COVID-19 to inform the acceptability of the new program. In focus groups, participants discussed barriers to receiving COVID-19 testing and vaccination. We introduced WBS and individual nasal self-testing, then asked whether wastewater testing, which could improve surveillance of emerging outbreaks before case numbers surge, and specimen self-collection would be valued. The participants’ input gives insight into ways to improve the delivery of COVID-19 interventions. Hearing the voices of those with lived experience of incarceration is critical to understanding their views on infection control strategies and supports including justice-involved individuals in decision-making processes regarding jail-based interventions.
by
Lindsey R Riback;
Peter Dickson;
Keyanna Ralph;
Lindsay B Saber;
Rachel Devine;
Lindsay A Pett;
Alyssa J Clausen;
Jacob A Pluznik;
Chava J Bowden;
Jennifer Sarrett;
Alysse G Wurcel;
Victoria Phillips;
Anne Spaulding;
Matthew J Akiyama
Background: Correctional settings are hotspots for SARS-CoV-2 transmission. Social and biological risk factors contribute to higher rates of COVID-19 morbidity and mortality among justice-involved individuals. Rapidly identifying new cases in congregate settings is essential to promote proper isolation and quarantine. We sought the perspectives of individuals incarcerated during COVID-19 on how to improve carceral infection control, and their perspectives on the acceptability of wastewater-based surveillance (WBS) accompanying individual testing. Methods: We conducted semi-structured interviews with 20 adults who self-reported being incarcerated in the United States between March 2020 and May 2021. We asked participants about facility enforcement of the Centers for Disease Control and Prevention (CDC) COVID-19 guidelines, and the acceptability of integrating WBS into SARS-CoV-2 monitoring strategies at their most recent facility. We used descriptive statistics to characterize the study sample and report on acceptability of WBS. We analyzed qualitative data thematically using an iterative process. Results: Participants were predominantly Black or of multiple races (50%) and men (75%); the mean age was 46 years. Most received a mask during their most recent incarceration (90%), although only 40% received counseling on proper mask wearing. A quarter of participants were tested for SARS-CoV-2 at intake. Most (70%) believed they were exposed to the virus while incarcerated.
Recurring themes included (1) the correctional facility environment leading to a sense of insecurity; (2) perceptions that punitive conditions in correctional settings were exacerbated by the pandemic; (3) the importance of peers as a source of information about mitigation measures; (4) perceptions that the safety of correctional environments differed from that of the community during the pandemic; and (5) WBS as a logical strategy, with most (68%) believing WBS would work in the last correctional facility they were in and 79% preferring monitoring SARS-CoV-2 levels through WBS over relying on individual testing alone. Conclusion: Participants supported routine WBS to monitor for SARS-CoV-2. Integrating WBS into existing surveillance strategies at correctional facilities may minimize the impact of future COVID-19 outbreaks while conserving already constrained resources. To enhance the perception and reality that correctional systems are maximizing mitigation, future measures might include closer adherence to CDC recommendations and clarity with residents about disease pathogenesis.
OBJECTIVES: The authors present preliminary results on health-related outcomes of a randomized trial of telehealth interventions designed to reduce the incidence of secondary conditions among people with mobility impairment resulting from spinal cord injury (SCI).
METHODS: Patients with spinal cord injuries were recruited during their initial stay at a rehabilitation facility in Atlanta. They received a video-based intervention for nine weeks, a telephone-based intervention for nine weeks, or standard follow-up care. Participants were followed for at least one year to monitor days of hospitalization, depressive symptoms, and health-related quality of life.
RESULTS: Health-related quality of life was measured using the Quality of Well-Being (QWB) scale. QWB scores (n = 111) did not differ significantly among the three intervention groups at the end of the intervention period. At year one post-discharge, however, scores for those completing one year of enrollment (n = 47) were significantly higher for the intervention groups compared to standard care. Mean annual hospital days were 3.00 for the video group, 5.22 for the telephone group, and 7.95 for the standard care group.
CONCLUSIONS: Preliminary evidence suggests that in-home telephone or video-based interventions do improve health-related outcomes for newly injured SCI patients. Telehealth interventions may be cost-saving if program costs are more than offset by a reduction in rehospitalization costs, but differential advantages of video-based interventions versus telephone alone warrant further examination.
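The cost-saving condition stated above can be made concrete with a back-of-envelope check: a telehealth program saves money when the hospitalization costs it avoids exceed its delivery cost. The hospital-day means come from the abstract; the per-day hospital cost and program cost below are hypothetical placeholders.

```python
# Back-of-envelope sketch of the cost-saving condition for a telehealth
# program. Hospital-day means are from the abstract; the $1500/day and
# $2000 program-cost figures are invented placeholders.

def net_savings(days_standard, days_intervention, cost_per_day, program_cost):
    """Avoided hospitalization cost minus program cost, per participant."""
    return (days_standard - days_intervention) * cost_per_day - program_cost

# Video arm vs standard care: 7.95 vs 3.00 mean annual hospital days
savings = net_savings(7.95, 3.00, cost_per_day=1500.0, program_cost=2000.0)
cost_saving = savings > 0
```

Under these assumed figures the avoided hospital days more than offset the program cost, which is exactly the offset the conclusion says must hold for the intervention to be cost-saving.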
Objective. We compared the safety and effectiveness of minimally invasive parafascicular surgery (MIPS) as a frontline treatment for spontaneous supratentorial intracerebral hemorrhage (ICH) with medical management. Patients. The sample consisted of 17 patients who underwent MIPS from January 2014 to December 2016 and a comparison group of 23 patients who were medically managed from June 2012 to December 2013. All had an International Classification of Diseases (ICD) diagnosis of 431 and were treated at Grady Memorial Hospital, an urban, public, safety-net hospital. Methods. The primary endpoint was risk of inpatient mortality. Secondary endpoints were rates of inpatient infection and favorable discharge status, defined as discharge to home or a rehabilitation facility. Demographics and pre- and post-treatment clinical outcomes were compared using t-tests, the Mann-Whitney test, and chi-squared tests for continuous, ordinal, and categorical measures, respectively. Cox proportional hazards models were used to estimate time to inpatient death. Logistic regression analyses were used to determine treatment effects on secondary outcomes. We also conducted exploratory subgroup analyses comparing MIPS to two medical management subgroups: those who had surgery during their hospitalization and those who did not. Results. Two patients (12%) died in the MIPS group compared to three (12%) in the medical management group. MIPS did not increase the risk of inpatient mortality relative to medical management. Rates of inpatient infection did not differ significantly between the two groups: eight MIPS patients (47%) and 13 medically managed patients (50%) contracted infections. MIPS significantly increased the likelihood of favorable discharge status (odds ratio (OR) 1.77; 95% CI 1.12-21.9) compared to medical management.
No outcome measures differed significantly between MIPS and the medical management subgroup without surgery, while rates of favorable discharge were higher among MIPS patients compared to the medical management subgroup with surgery. Conclusions. These data suggest that MIPS as a frontline treatment for spontaneous ICH warrants further investigation against medical management.
Background
Stroke survivors are at relatively high risk of injurious falls. The purpose of this study was to document longitudinal fall patterns following inpatient rehabilitation for first-time stroke survivors.
Methods
Participants (n = 231) were recruited at the end of their rehabilitation stay and interviewed monthly via telephone for 1 to 32 months regarding fall incidents. Analyses were conducted on total reports of falls by month over time for first-time and repeat fallers; the incidence of falling in any given month; and factors differing between fallers and non-fallers.
Results
The largest percentage of participants (14%) reported falling in the first month post-discharge. After month five, fewer than 10% of the sample reported falling, except in months 15 (10.4%) and 23 (13.2%). From months one to nine, the percentages of those reporting one fall with and without a prior fall were similar. After month nine, the number of individuals reporting a single fall with a fall history was twice that of those without a prior fall who reported falling; in both cases the percentages were small. A very small subset of the population emerged who fell multiple times each month, most of whom had a prior fall history. At least a third of the sample reported a loss of balance each month. Few factors differed significantly between fallers and non-fallers in months one to six.
Conclusion
Longitudinal data suggest that falls most likely linked to first-time strokes occur in the first six months post-discharge, particularly month one. Data routinely available at discharge do not distinguish fallers from non-fallers. Once a fall has occurred, however, preventive intervention is warranted.
BACKGROUND: The identification of cost-effective glycaemic management strategies is critical to hospitals. Treatment with a basal-bolus insulin (BBI) regimen has been shown to result in better glycaemic control and fewer complications than sliding scale regular insulin (SSI) in general surgery patients with type 2 diabetes mellitus (T2DM), but the effect on costs is unknown. OBJECTIVE: We conducted a post hoc analysis of the RABBIT Surgery trial to examine whether total inpatient costs per day for general surgery patients with T2DM treated with BBI (n = 103) differed from those for patients treated with SSI (n = 99) regimens. METHODS: Data were collected from patient clinical and hospital billing records. Charges were adjusted to reflect hospital costs. Generalized linear models were used to estimate the risk-adjusted effects of BBI versus SSI treatment on average total inpatient costs per day. RESULTS: Risk-adjusted average total inpatient costs per day were $US5404. Treatment with BBI compared with SSI reduced average total inpatient costs per day by $US751 (14%; 95% confidence interval [CI] 20-4). Treatment in a university medical centre, African American race, a bowel procedure, and higher-volume pharmacy use were each associated with significantly lower costs per day. CONCLUSIONS: In general surgery patients with T2DM, a BBI regimen significantly reduced average total hospital costs per day compared with an SSI regimen. BBI has been shown to improve outcomes in a randomized controlled trial. Those results, combined with our findings regarding savings, suggest that hospitals should consider adopting BBI regimens in patients with T2DM undergoing surgery.