by
James W. Tanaka;
Julie M. Wolf;
Cheryl Klaiman;
Kathleen Koenig;
Jeffrey Cockburn;
Lauren Herlihy;
Carla Brown;
Sherin Stahl;
Mikle South;
James McPartland;
Martha D. Kaiser;
Robert T. Schultz
Background: Although impaired social-emotional ability is a hallmark of autism spectrum disorder (ASD), the perceptual skills and mediating strategies contributing to the social deficits of autism are not well understood. A perceptual skill that is fundamental to effective social communication is the ability to accurately perceive and interpret facial emotions. To evaluate the expression processing of participants with ASD, we designed the Let's Face It! Emotion Skills Battery (LFI! Battery), a computer-based assessment composed of three subscales measuring verbal and perceptual skills implicated in the recognition of facial emotions. Methods: We administered the LFI! Battery to groups of participants with ASD and typically developing control (TDC) participants that were matched for age and IQ. Results: On the Name Game labeling task, participants with ASD (N = 68) performed on par with TDC individuals (N = 66) in their ability to name the facial emotions of happy, sad, disgust and surprise and were only impaired in their ability to identify the angry expression. On the Matchmaker Expression task that measures the recognition of facial emotions across different facial identities, the ASD participants (N = 66) performed reliably worse than TDC participants (N = 67) on the emotions of happy, sad, disgust, frightened and angry. In the Parts-Wholes test of perceptual strategies of expression, the TDC participants (N = 67) displayed more holistic encoding for the eyes than the mouths in expressive faces, whereas ASD participants (N = 66) exhibited the reverse pattern of holistic recognition for the mouth and analytic recognition of the eyes. Conclusion: In summary, findings from the LFI! Battery show that participants with ASD were able to label the basic facial emotions (with the exception of the angry expression) on par with age- and IQ-matched TDC participants.
However, participants with ASD were impaired in their ability to generalize facial emotions across different identities and showed a tendency to recognize the mouth feature holistically and the eyes as isolated parts.
Studies on the health impacts of climate change routinely use climate model output as future exposure projections. Uncertainty quantification, usually in the form of sensitivity analysis, has focused predominantly on the variability arising from different emission scenarios or multi-model ensembles. This paper describes a Bayesian spatial quantile regression approach to calibrate climate model output for examining the risks of future temperature on adverse health outcomes. Specifically, we first estimate the spatial quantile process for climate model output using non-linear monotonic regression during a historical period. The quantile process is then calibrated using the quantile functions estimated from the observed monitoring data. Our model also downscales the gridded climate model output to the point level for projecting future exposure over a specific geographical region. The quantile regression approach is motivated by the need to better characterize the tails of the future temperature distribution, where the greatest health impacts are likely to occur. We applied the methodology to calibrate temperature projections from a regional climate model for the period 2041 to 2050. Accounting for calibration uncertainty, we calculated the number of excess deaths attributable to future temperature for three cities in the US state of Alabama.
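As a rough, non-spatial illustration of the calibration idea described above, empirical quantile mapping transfers the observed quantile function onto model output. The sketch below is an assumed simplification (the paper's approach is Bayesian, models the quantile process spatially, and uses monotonic regression; `quantile_map` and its arguments are illustrative names):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_q=99):
    """Empirical quantile mapping: map future model values through the
    historical model quantile function into the observed quantile
    function (a non-Bayesian, non-spatial toy version of calibration)."""
    probs = np.linspace(0.01, 0.99, n_q)
    q_model = np.quantile(model_hist, probs)   # model's historical quantiles
    q_obs = np.quantile(obs_hist, probs)       # observed historical quantiles
    # Piecewise-linear transfer; values outside the fitted range are
    # clamped to the endpoint quantiles by np.interp.
    return np.interp(model_future, q_model, q_obs)
```

For a model with a constant warm bias, this mapping removes the bias quantile-by-quantile while preserving the projected change signal in the interior of the distribution.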
While regression of focal nodular hyperplasia of the liver is not uncommon, reports of near-complete involution or regression of these lesions are rare. We report two cases of focal nodular hyperplasia that underwent near-complete regression: one in a 27-year-old female that regressed over a period of 4 years, and one in a 46-year-old female that regressed over a 7-year period. Both patients discontinued use of exogenous estrogens between the diagnosis of focal nodular hyperplasia and its subsequent regression. Although contemporary cross-sectional imaging has improved the ability to detect and follow these lesions, few studies examining the natural history of focal nodular hyperplasia have been conducted. We discuss pertinent imaging findings on magnetic resonance imaging and computed tomography, and review the literature on regression of focal nodular hyperplasia and the effects of endogenous hormones and exogenous hormone therapy.
Frameless radiosurgery is an attractive alternative to the framed procedure if it can be performed with comparable precision in a reasonable time frame. Here, we present a positioning approach for frameless radiosurgery based on in-room volumetric imaging coupled with an advanced six-degrees-of-freedom (6 DOF) image registration technique that avoids the use of a bite block. Patient motion is restricted with a custom thermoplastic mask. Accurate positioning is achieved by registering a cone-beam CT to the planning CT scan and applying all translational and rotational shifts using a custom couch mount. System accuracy was initially verified on an anthropomorphic phantom. Isocenters of delineated targets in the phantom were computed and aligned by our system with an average accuracy of 0.2 mm, 0.3 mm, and 0.4 mm in the lateral, vertical, and longitudinal directions, respectively. The accuracy in the rotational directions was 0.1°, 0.2°, and 0.1° in the pitch, roll, and yaw, respectively. An additional test was performed using the phantom in which known shifts were introduced. Misalignments up to 10 mm and 3° in all directions/rotations were introduced in our phantom and recovered to an ideal alignment within 0.2 mm, 0.3 mm, and 0.4 mm in the lateral, vertical, and longitudinal directions, respectively, and within 0.3° in any rotational axis. These values are below the precision of couch motion. Our first 28 patients with 38 targets treated over 63 fractions are analyzed in the patient positioning phase of the study. Mean errors in the shifts predicted by the system were less than 0.5 mm in any translational direction and less than 0.3° in any rotation, as assessed by a confirmation CBCT scan. We conclude that accurate and efficient frameless radiosurgery positioning is achievable without the need for a bite block by using our 6 DOF registration method.
This system is inexpensive compared to a couch-based 6 DOF system, improves patient comfort compared to systems that utilize a bite block, and is ideal for the treatment of pediatric patients with or without general anesthesia, as well as patients with dental issues. From this study, it is clear that adjusting for only 4 DOF may, in some cases, lead to significant compromise in planning target volume (PTV) coverage. Since performing the additional match with 6 DOF in our registration system only adds a relatively short amount of time to the overall process, we advocate making the precise match in all cases.
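The 6 DOF correction described above amounts to a rigid-body transform: three rotations plus a translation. A minimal sketch follows, assuming pitch/roll/yaw are rotations in degrees about the x/y/z axes applied in z·y·x order (the actual couch-mount angle convention is not stated in the abstract and may differ):

```python
import numpy as np

def rigid_correction(pitch, roll, yaw, shift_mm, points_mm):
    """Apply a 6 DOF rigid-body correction to an (N, 3) array of points:
    rotate by pitch/roll/yaw (degrees, assumed about x/y/z), then
    translate by shift_mm. Conventions here are illustrative only."""
    p, r, y = np.deg2rad([pitch, roll, yaw])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(r), 0, np.sin(r)],
                   [0, 1, 0],
                   [-np.sin(r), 0, np.cos(r)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                      # composed rotation
    return points_mm @ R.T + shift_mm    # rotate, then translate
```

With zero rotations this reduces to the pure translational (4 DOF-style) shift; the rotational terms are exactly what a 4 DOF couch cannot reproduce.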
The purpose of this study was to develop and validate a technique for unsealed source radiotherapy planning that combines the segmentation and registration tasks of single-photon emission tomography (SPECT) and computed tomography (CT) datasets. The segmentation task is automated by an atlas registration approach that takes advantage of a hybrid scheme using a diffeomorphic demons algorithm to warp a standard template to the patient's CT. To overcome the lack of common anatomical features between the CT and SPECT datasets, registration is achieved through a narrow band approach that matches liver contours in the CT with the gradients of the SPECT dataset. Deposited dose is then computed from the SPECT dataset using a convolution operation with tracer-specific deposition kernels. Automatic segmentation showed good agreement with manual contouring, measured using the Dice similarity coefficient, which ranged from 0.72 to 0.87 for the liver, 0.47 to 0.93 for the kidneys, and 0.74 to 0.83 for the spinal cord. The narrow band registration achieved variations of less than 0.5 mm in translation and 1° in rotation, as measured with convergence analysis. With the proposed combined segmentation-registration technique, the uncertainty of soft-tissue target localization is greatly reduced, ensuring accurate therapy planning.
Controlled expansion and differentiation of pluripotent stem cells (PSCs) using reproducible, high-throughput methods could accelerate stem cell research for clinical therapies. Hydrodynamic culture systems for PSCs are increasingly being used for high-throughput studies and scale-up purposes; however, hydrodynamic cultures expose PSCs to complex physical and chemical environments that include spatially and temporally modulated fluid shear stresses and heterogeneous mass transport. Furthermore, the effects of fluid flow on PSCs cannot easily be attributed to any single environmental parameter since the cellular processes regulating self-renewal and differentiation are interconnected and the complex physical and chemical parameters associated with fluid flow are thus difficult to independently isolate. Regardless of the challenges posed by characterizing fluid dynamic properties, hydrodynamic culture systems offer several advantages over traditional static culture, including increased mass transfer and reduced cell handling. This article discusses the challenges and opportunities of hydrodynamic culture environments for the expansion and differentiation of PSCs in microfluidic systems and larger-volume suspension bioreactors. Ultimately, an improved understanding of the effects of hydrodynamics on the self-renewal and differentiation of PSCs could yield improved bioprocessing technologies to attain scalable PSC culture strategies that will probably be requisite for the development of therapeutic and diagnostic applications.
Background: Selecting an appropriate classifier for a particular biological application poses a difficult problem for researchers and practitioners alike. In particular, choosing a classifier depends heavily on the features selected. For high-throughput biomedical datasets, feature selection is often a preprocessing step that gives an unfair advantage to the classifiers built with the same modeling assumptions. In this paper, we seek classifiers that are suitable to a particular problem independent of feature selection. We propose a novel measure, called "win percentage", for assessing the suitability of machine classifiers to a particular problem. We define win percentage as the probability a classifier will perform better than its peers on a finite random sample of feature sets, giving each classifier equal opportunity to find suitable features. Results: First, we illustrate the difficulty in evaluating classifiers after feature selection. We show that several classifiers can each perform statistically significantly better than their peers given the right feature set among the top 0.001% of all feature sets. We illustrate the utility of win percentage using synthetic data, and evaluate six classifiers in analyzing eight microarray datasets representing three diseases: breast cancer, multiple myeloma, and neuroblastoma. After initially using all Gaussian gene-pairs, we show that precise estimates of win percentage (within 1%) can be achieved using a smaller random sample of all feature pairs. We show that for these data no single classifier can be considered the best without knowing the feature set.
Instead, win percentage captures the non-zero probability that each classifier will outperform its peers based on an empirical estimate of performance. Conclusions: Fundamentally, we illustrate that the selection of the most suitable classifier (i.e., one that is more likely to perform better than its peers) not only depends on the dataset and application but also on the thoroughness of feature selection. In particular, win percentage provides a single measurement that could assist users in eliminating or selecting classifiers for their particular application.
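The win-percentage definition above lends itself to a Monte Carlo estimate: draw random feature sets, score every classifier on each draw, and credit the winner, splitting credit on ties. A minimal sketch, where `estimate_win_percentage` and the toy scoring callables are illustrative assumptions (the paper samples feature pairs and uses real classifier accuracies):

```python
import random

def estimate_win_percentage(classifiers, feature_pool, set_size, n_draws, seed=0):
    """Estimate each classifier's win percentage: the fraction of random
    feature sets on which it outperforms its peers. `classifiers` maps
    name -> score(feature_set); ties split the credit evenly."""
    rng = random.Random(seed)
    wins = {name: 0.0 for name in classifiers}
    for _ in range(n_draws):
        feats = rng.sample(feature_pool, set_size)           # random feature set
        scores = {name: f(feats) for name, f in classifiers.items()}
        best = max(scores.values())
        winners = [n for n, s in scores.items() if s == best]
        for n in winners:
            wins[n] += 1.0 / len(winners)                    # split ties
    return {n: w / n_draws for n, w in wins.items()}
```

By construction the win percentages sum to one, so the output is a probability distribution over classifiers for the given sampling scheme.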
Plasmodium vivax and P. falciparum cause malaria, so proteins essential for their survival in vivo are potential anti-malarial drug targets. Adenosine deaminases (ADA) catalyze the irreversible conversion of adenosine into inosine, and play a critical role in the purine salvage pathways of Plasmodia and their mammalian hosts. Currently, the number of selective inhibitors of Plasmodium ADAs is limited. One potent and widely used inhibitor of the human ADA (hADA), erythro-9-(2-hydroxy-3-nonyl)adenine (EHNA), is a very weak inhibitor (Ki = 120 μM) of P. falciparum ADA (pfADA). EHNA-like compounds are thus excluded from consideration as potential inhibitors of Plasmodium ADA in general. However, EHNA activity against P. vivax ADA (pvADA) has not been reported. Here we applied computational molecular modeling to identify the mechanisms of ligand recognition unique to P. vivax and P. falciparum ADA. Based on the computational studies, we performed molecular biology experiments to show that EHNA is at least 60-fold more potent against pvADA (Ki = 1.9 μM) than against pfADA. EHNA binds the D172A pvADA mutant even more tightly (Ki = 0.9 μM). These results improve our understanding of the mechanisms of ADA ligand recognition and species selectivity, and facilitate the rational design of novel EHNA-based ADA inhibitors as anti-malarial drugs. To demonstrate a practical application of our findings, we have computationally predicted a novel potential inhibitor of pvADA that is selective over the human ADA.
Target localization using single photon emission computed tomography (SPECT) and planar imaging is being investigated for guiding radiation therapy delivery. Previous studies on SPECT-based localization have used computer-simulated or hybrid images with simulated tumors embedded in disease-free patient images, where the tumor position is known and localization can be calculated directly. In the current study, localization was studied using scanner-acquired images. Five fillable spheres were placed in a whole body phantom. The sphere-to-background ratio of 99mTc radioactivity was 6:1. Ten independent SPECT scans were acquired with a Trionix Triad scanner using three detector trajectories: left lateral 180°, 360°, and right lateral 180°. Scan time was equivalent to 4.5 min. Images were reconstructed with and without attenuation correction. True target locations were estimated from 12 hr SPECT and CT images. From the 12 hr SPECT scan, 45 sets of orthogonal planar images were used to assess target localization; total acquisition time per set was equivalent to 4.5 min. A numerical observer localized the center of the targets in the 4.5 min SPECT and planar images. SPECT-based localization errors were compared for the different detector trajectories. Across the four peripheral spheres, and using optimal iteration numbers and postreconstruction smoothing, means and standard deviations in localization errors were 0.90 ± 0.25 mm for proximal 180° trajectories, 1.31 ± 0.51 mm for 360° orbits, and 3.93 ± 1.48 mm for distal 180° trajectories. This rank order in localization performance is predicted by target attenuation and distance from the target to the collimator. For the targets with mean localization errors < 2 mm, attenuation correction reduced localization errors by 0.15 mm on average. The improvement from attenuation correction was 1.0 mm on average for the more poorly localized targets.
Attenuation correction typically reduced localization errors, but for well-localized targets, the detector trajectory generally had a larger effect. Localization performance was found to be robust to iteration number and smoothing. Localization was generally worse using planar images as compared with proximal 180° and 360° SPECT scans. Using a proximal detector trajectory and attenuation correction, localization errors were within 2 mm for the three superficial targets, thus supporting the current role of SPECT in biopsy and surgery, and demonstrating the potential for SPECT imaging inside radiation therapy treatment rooms.
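The numerical-observer localization step can be illustrated with a much simpler surrogate: an intensity-weighted centroid over voxels above a fraction of the image maximum. The paper's observer is more sophisticated, so the function below is only an assumed sketch of the task:

```python
import numpy as np

def localize_target(image, threshold_frac=0.5):
    """Toy target localizer: intensity-weighted centroid of all voxels
    at or above threshold_frac * max. Returns the estimated center in
    voxel coordinates; a stand-in for a true numerical observer."""
    mask = image >= threshold_frac * image.max()
    coords = np.argwhere(mask)                 # (N, ndim) voxel indices
    weights = image[mask]
    return (coords * weights[:, None]).sum(axis=0) / weights.sum()
```

Comparing such an estimate against a known ground-truth center, sphere by sphere, yields the per-target localization errors that the study reports in millimeters once voxel size is applied.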