This review focuses on cross-modal plasticity resulting from visual deprivation, viewed against the background of task-specific visual cortical recruitment that is routine during tactile tasks in the sighted and that may depend in part on visual imagery. Superior tactile perceptual performance in the blind may be practice-related, although unresolved questions remain regarding the effects of Braille-reading experience and the age of onset of blindness. While visual cortical areas are clearly more involved in tactile microspatial processing in the blind than in the sighted, it remains unclear how to reconcile these tactile processes with the growing literature implicating visual cortical activity in a wide range of cognitive tasks in the blind, including those involving language, or with studies of short-term, reversible visual deprivation in the normally sighted that reveal plastic changes over periods of hours or days.
Despite considerable work, the neural basis of perceptual learning remains uncertain. For visual learning, some studies suggest that changes in early sensory representations are responsible, whereas other studies point to decision-level reweighting of perceptual readout. These competing possibilities have not been examined in other sensory systems, where such investigation could help resolve the issue. Here we report a study of human tactile microspatial learning in which participants achieved a greater than six-fold reduction in acuity threshold after multiple training sessions. Functional magnetic resonance imaging was carried out during performance of the tactile microspatial task and a control, tactile temporal task. Effective connectivity between relevant brain regions was estimated using multivariate, autoregressive models of hidden neuronal variables obtained by deconvolution of the hemodynamic response. Training-specific increases in task-selective activation, assessed using the task-by-session interaction, and associated changes in effective connectivity primarily involved subcortical and anterior neocortical regions implicated in motor and/or decision processes, rather than somatosensory cortical regions. A control group of participants tested twice, without intervening training, exhibited neither threshold improvement nor increases in task-selective activation. Our observations argue that the neuroplasticity mediating perceptual learning occurs at the stage of perceptual readout by decision networks. This is consonant with the growing shift away from strictly modular conceptualizations of the brain towards the idea that complex network interactions underlie even simple tasks. The convergence of our findings on tactile learning with recent studies of visual learning reconciles earlier discrepancies in the literature on perceptual learning.
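The Granger-causal logic behind the effective connectivity analyses in these studies can be illustrated with a minimal, bivariate sketch (not the authors' actual multivariate, deconvolution-based pipeline): a series x "Granger-causes" y if the past of x improves prediction of y beyond what y's own past provides. The function names and the synthetic data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def resid_var(target, predictors):
    """Residual variance of a least-squares regression of target on predictors."""
    X = np.column_stack([np.ones(len(target))] + predictors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return (target - X @ beta).var()

def granger_gain(x, y):
    """Lag-1 Granger 'gain' of x onto y: log ratio of residual variance when
    y is predicted from its own past alone vs. from the past of both y and x.
    Positive values mean x's past improves prediction of y."""
    restricted = resid_var(y[1:], [y[:-1]])            # y's own past only
    full = resid_var(y[1:], [y[:-1], x[:-1]])          # plus x's past
    return np.log(restricted / full)

# Synthetic example: x drives y with a one-sample delay; y does not drive x.
T = 2000
x = rng.standard_normal(T)
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

print(granger_gain(x, y))  # large and positive: x Granger-causes y
print(granger_gain(y, x))  # near zero: y does not Granger-cause x
```

The multivariate versions used in these papers extend the same variance-comparison idea to many regions and lags simultaneously, and the "correlation-purged" variants additionally discount zero-lag correlations so that only time-lagged (directional) influences contribute.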
Background: Mild cognitive impairment (MCI) is often a precursor to Alzheimer’s disease. Little research has examined the efficacy of cognitive rehabilitation in patients with MCI, and the relevant neural mechanisms have not been explored. We previously reported on a pilot study showing the behavioral efficacy of cognitive rehabilitation using mnemonic strategies for face-name associations in patients with MCI. Here we used functional magnetic resonance imaging (fMRI) to test whether there were training-specific changes in activation and connectivity within memory-related areas.
Methods: Six patients with amnestic, multi-domain MCI underwent pre- and post-training fMRI scans, during which they encoded 90 novel face-name pairs, and completed a 4-choice recognition memory test immediately after scanning. Patients were taught mnemonic strategies for half the face-name pairs during three intervening training sessions.
Results: Training-specific effects comprised significantly increased activation within a widespread cerebral cortical network involving medial frontal, parietal, and occipital regions, the left frontal operculum and angular gyrus, and regions in left lateral temporal cortex. Increased activation common to trained and untrained stimuli was found in a separate network involving inferior frontal, lateral parietal and occipital cortical regions. Effective connectivity analysis, using multivariate, correlation-purged Granger causality, revealed generally increased connectivity after training, particularly involving the middle temporal gyrus and foci in the occipital cortex and the precuneus.
Conclusion: Our findings suggest that the effectiveness of explicit memory training in patients with MCI is associated with training-specific increases in activation and connectivity in a distributed neural system that includes areas involved in explicit memory.
Although blindness alters neocortical processing of non-visual tasks, previous studies do not allow clear conclusions about purely perceptual tasks. We used functional magnetic resonance imaging (fMRI) to examine the neural processing underlying tactile microspatial discrimination in the blind. Activity during the tactile microspatial task was contrasted against that during a tactile temporal discrimination task. The spatially-selective network included frontoparietal and visual cortical regions. Activation magnitudes in left primary somatosensory cortex and in visual cortical foci predicted acuity thresholds. Effective connectivity was investigated using multivariate Granger causality analyses. Bilateral primary somatosensory cortical foci and a left inferior temporal focus were important sources of connections. Visual cortical regions interacted mainly with one another and with somatosensory cortical regions. Among a set of distributed cortical regions exhibiting greater spatial selectivity in early blind compared to late blind individuals, the age of complete blindness was predicted by activity in a subset of frontoparietal regions, and by the weight of a path from the right lateral occipital complex to right occipitopolar cortex. Thus, many aspects of neural processing during tactile microspatial discrimination differ between the blind and sighted, with some of the key differences reflecting visual cortical engagement in the blind.
Object recognition studies have almost exclusively involved vision, focusing on shape rather than surface properties such as color. Visual object representations are thought to integrate shape and color information because changing the color of studied objects impairs their subsequent recognition. However, little is known about integration of surface properties into visuo-haptic multisensory representations. Here, participants studied objects with distinct patterns of surface properties (color in Experiment 1, texture in Experiments 2 & 3) and had to discriminate between object shapes when color/texture schemes were altered in within-modal (visual and haptic) and cross-modal (visual study/haptic test and vice versa) conditions. In Experiment 1, color changes impaired within-modal visual recognition but had no effect on cross-modal recognition, suggesting that the multisensory representation is not influenced by modality-specific surface properties. In Experiment 2, texture changes impaired recognition in all conditions, suggesting that both unisensory and multisensory representations integrate modality-independent surface properties. However, the cross-modal impairment might have reflected either the texture change or a failure to form the multisensory representation. Experiment 3 attempted to distinguish between these possibilities by combining changes in texture with changes in orientation, taking advantage of the known view-independence of the multisensory representation, but the results were not conclusive owing to the overwhelming effect of texture change. The simplest account is that the multisensory representation integrates shape and modality-independent surface properties. However, more work is required to investigate this and the conditions under which multisensory integration of structural and surface properties occurs.
by Simon A Lacey, Henrik Hagtvedt, Vanessa M. Patrick, Amy Anderson, Randall Stilla, Gopikrishna Deshpande, Xiaoping P Hu, João R. Sato, Srinivas Reddy, and Krish Sathian
A recent study showed that people evaluate products more positively when they are physically associated with art images than with similar non-art images. Neuroimaging studies of visual art have investigated artistic style and esthetic preference but not brain responses attributable specifically to the artistic status of images. Here we tested the hypothesis that the artistic status of images engages reward circuitry, using event-related functional magnetic resonance imaging (fMRI) during viewing of art and non-art images matched for content. Subjects made animacy judgments in response to each image. Relative to non-art images, art images activated, on both subject- and item-wise analyses, reward-related regions: the ventral striatum, hypothalamus and orbitofrontal cortex. Neither response times nor ratings of familiarity or esthetic preference for art images correlated significantly with activity that was selective for art images, suggesting that these variables were not responsible for the art-selective activations. Investigation of effective connectivity, using time-varying, wavelet-based, correlation-purged Granger causality analyses, further showed that the ventral striatum was driven by visual cortical regions when viewing art images but not non-art images, and was not driven by regions that correlated with esthetic preference for either art or non-art images. These findings are consistent with our hypothesis, leading us to propose that the appeal of visual art involves activation of reward circuitry based on artistic status alone, independently of hedonic value.
Although visual cortical engagement in haptic shape perception is well established, its relationship with visual imagery remains controversial. We addressed this using functional magnetic resonance imaging during separate visual object imagery and haptic shape perception tasks. Two experiments were conducted. In the first experiment, the haptic shape task employed unfamiliar, meaningless objects, whereas familiar objects were used in the second experiment. The activations evoked by visual object imagery overlapped more extensively, and their magnitudes were more correlated, with those evoked during haptic shape perception of familiar, compared to unfamiliar, objects. In the companion paper (Deshpande et al., 2009), we used task-specific functional and effective connectivity analyses to provide convergent evidence: these analyses showed that the neural networks underlying visual imagery were similar to those underlying haptic shape perception of familiar, but not unfamiliar, objects. We conclude that visual object imagery is more closely linked to haptic shape perception when objects are familiar, compared to when they are unfamiliar.
In the preceding paper (Lacey et al., 2009), we showed that the activations evoked by visual imagery overlapped more extensively, and their magnitudes were more correlated, with those evoked during haptic shape perception of familiar, compared to unfamiliar, objects. Here we used task-specific analyses of functional and effective connectivity to provide convergent evidence. These analyses showed that the visual imagery and familiar haptic shape tasks activated similar networks, whereas the unfamiliar haptic shape task activated a different network. Multivariate Granger causality analyses of effective connectivity, in both a conventional form and one purged of zero-lag correlations, showed that the visual imagery and familiar haptic shape networks involved top-down paths from prefrontal cortex into the lateral occipital complex (LOC), whereas the unfamiliar haptic shape network was characterized by bottom-up, somatosensory inputs into the LOC. We conclude that shape representations in the LOC are flexibly accessible, either top-down or bottom-up, according to task demands, and that visual imagery is more involved in LOC activation during haptic shape perception when objects are familiar, compared to unfamiliar.
Although it is accepted that visual cortical areas are recruited during touch, it remains uncertain whether this depends on top-down inputs mediating visual imagery or engagement of modality-independent representations by bottom-up somatosensory inputs. Here we addressed this by examining effective connectivity in humans during haptic perception of shape and texture with the right hand. Multivariate Granger causality analysis of functional magnetic resonance imaging (fMRI) data was conducted on a network of regions that were shape- or texture-selective. A novel network reduction procedure was employed to eliminate connections that did not contribute significantly to overall connectivity. Effective connectivity during haptic perception was found to involve a variety of interactions between areas generally regarded as somatosensory, multisensory, visual and motor, emphasizing flexible cooperation between different brain regions rather than rigid functional separation. The left postcentral sulcus (PCS), left precentral gyrus and right posterior insula were important sources of connections in the network. Bottom-up somatosensory inputs from the left PCS and right posterior insula fed into visual cortical areas, both the shape-selective right lateral occipital complex (LOC) and the texture-selective right medial occipital cortex (probable V2). In addition, top-down inputs from left postero-supero-medial parietal cortex influenced the right LOC. Thus, there is strong evidence for the bottom-up somatosensory inputs predicted by models of visual cortical areas as multisensory processors and suggestive evidence for top-down parietal (but not prefrontal) inputs that could mediate visual imagery. This is consistent with modality-independent representations accessible through both bottom-up sensory inputs and top-down processes such as visual imagery.
Conceptual metaphor theory suggests that knowledge is structured around metaphorical mappings derived from physical experience. Segregated processing of object properties in sensory cortex allows testing of the hypothesis that metaphor processing recruits activity in domain-specific sensory cortex. Using functional magnetic resonance imaging (fMRI) we show that texture-selective somatosensory cortex in the parietal operculum is activated when processing sentences containing textural metaphors, compared to literal sentences matched for meaning. This finding supports the idea that comprehension of metaphors is perceptually grounded.