It is often assumed that a fundamental property of language is the arbitrariness of the relationship between sound and meaning. Sound symbolism, which refers to non-arbitrary mapping between the sound of a word and its meaning, contradicts this assumption. Sensitivity to sound symbolism has been studied through crossmodal correspondences (CCs) between auditory pseudowords (e.g. ‘loh-moh’) and visual shapes (e.g. a blob). We used representational similarity analysis to examine the relationships between physical stimulus parameters and perceptual ratings that varied on dimensions of roundedness and pointedness, for a range of auditory pseudowords and visual shapes. We found that perceptual ratings of these stimuli relate to certain physical features of both the visual and auditory domains. Representational dissimilarity matrices (RDMs) of parameters that capture the spatial profile of the visual shapes, such as the simple matching coefficient and Jaccard distance, were significantly correlated with those of the visual ratings. RDMs of certain acoustic parameters of the pseudowords, such as the temporal fast Fourier transform (FFT) and spectral tilt, that reflect spectral composition, as well as shimmer and speech envelope that reflect aspects of amplitude variation over time, were significantly correlated with those of the auditory perceptual ratings. RDMs of the temporal FFT (acoustic) and the simple matching coefficient (visual) were significantly correlated. These findings suggest that sound-symbolic CCs are related to basic properties of auditory and visual stimuli, and thus provide insights into the fundamental nature of sound symbolism and how this might evoke specific impressions of physical meaning in natural language.
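The representational similarity analysis described above can be sketched in a few lines: build a representational dissimilarity matrix (RDM) over stimuli for each parameter or rating set, then correlate the RDMs' upper triangles. The sketch below uses randomly generated placeholder data and invented variable names purely for illustration; in the actual study, the feature matrices would hold measured quantities such as Jaccard distances over shape pixels or the temporal FFT of each pseudoword.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: rows = stimuli, columns = features.
# E.g. binary pixel masks for visual shapes, acoustic parameters for pseudowords.
shape_features = rng.integers(0, 2, size=(20, 100)).astype(float)
ratings = rng.random((20, 2))  # e.g. roundedness and pointedness ratings

def rdm(features, metric):
    """Representational dissimilarity matrix: pairwise distances between stimuli."""
    return squareform(pdist(features, metric=metric))

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs
    (the diagonal and lower triangle are redundant)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu])

physical_rdm = rdm(shape_features, metric="jaccard")   # spatial-profile dissimilarity
rating_rdm = rdm(ratings, metric="euclidean")          # perceptual-rating dissimilarity
rho, p = compare_rdms(physical_rdm, rating_rdm)
```

The same `compare_rdms` step applies unchanged when relating an acoustic RDM to a visual one, since RSA abstracts away from the measurement space of each modality.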
Over the past two decades, there has been growing appreciation of the multisensory nature of perception and its neural basis. Consequently, the concept has arisen that the brain is “metamodal”, with a task-based rather than strictly modality-based organization (Pascual-Leone & Hamilton, 2001; Lacey et al., 2009a; James et al., 2011). Here we focus on interactions between vision and touch in humans, including crossmodal interactions where tactile inputs evoke activity in neocortical regions traditionally considered visual, and multisensory integrative interactions. It is now established that cortical areas in both the ventral and dorsal pathways, previously identified as specialized for various aspects of visual processing, are also routinely recruited during the corresponding aspects of touch (for reviews see Amedi et al., 2005; Sathian & Lacey, 2007; Lacey & Sathian, 2011, 2014). When these regions are in classical visual cortex so that they would traditionally be regarded as unisensory, their engagement is referred to as crossmodal, whereas other regions lie in classically multisensory sectors of the association neocortex. Much of the relevant work concerns haptic perception (active sensing using the hand) of shape; this work is therefore considered in detail. We consider how vision and touch might be integrated in various situations and address the role of mental imagery in visual cortical activity during haptic perception. Finally, we present a model of haptic object recognition and its relationship with mental imagery (Lacey et al., 2014).
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Cervical dystonia (CD) is a neurological disorder characterized by abnormal movements and postures of the head. The brain regions responsible for these abnormal movements are not well understood, because most imaging techniques for assessing regional brain activity cannot be used when the head is moving. Recently, we mapped brain activation in healthy individuals using functional magnetic resonance imaging during isometric head rotation, when muscle contractions occur without actual head movements. In the current study, we used the same methods to explore the neural substrates for head movements in subjects with CD who had predominantly rotational abnormalities (torticollis). Isometric wrist extension was examined for comparison. Electromyography of neck and hand muscles ensured compliance with tasks during scanning, and any head motion was measured and corrected. Data were analyzed in three steps. First, we conducted within-group analyses to examine task-related activation patterns separately in subjects with CD and in healthy controls. Next, we directly compared task-related activation patterns between participants with CD and controls. Finally, considering that the abnormal head movements in CD occur in a consistently patterned direction for each individual, we conducted exploratory analyses that involved normalizing data according to the direction of rotational CD. The between-group comparisons failed to reveal any significant differences, but the normalization procedure in subjects with CD revealed that isometric head rotation in the direction of dystonic head rotation was associated with more activation in the ipsilateral anterior cerebellum, whereas isometric head rotation in the opposite direction was associated with more activity in sensorimotor cortex. These findings suggest that the cerebellum contributes to abnormal head rotation in CD, whereas regions in the cerebral cortex are involved in opposing the involuntary movements.
Synesthesia is a phenomenon in which an experience in one domain is accompanied by an involuntary secondary experience in another, unrelated domain; in classical synesthesia, these associations are arbitrary and idiosyncratic. Cross-modal correspondences refer to universal associations between seemingly unrelated sensory features, e.g., auditory pitch and visual size. Some argue that these phenomena form a continuum, with classical synesthesia being an exaggeration of universal cross-modal correspondences, whereas others contend that the two are quite different, since cross-modal correspondences are non-arbitrary, non-idiosyncratic, and do not involve secondary experiences. Here, we used the implicit association test to compare synesthetes’ and non-synesthetes’ sensitivity to cross-modal correspondences. We tested the associations between auditory pitch and visual elevation, auditory pitch and visual size, and sound-symbolic correspondences between auditory pseudowords and visual shapes. Synesthetes were more sensitive than non-synesthetes to cross-modal correspondences involving sound-symbolic, but not low-level sensory, associations. We conclude that synesthesia heightens universally experienced cross-modal correspondences, but only when these involve sound symbolism. This is only partly consistent with the idea of a continuum between synesthesia and cross-modal correspondences, but accords with the idea that synesthesia is a high-level, post-perceptual phenomenon, with spillover of the abilities of synesthetes into domains outside their synesthesias. To our knowledge, this is the first demonstration that synesthetes, relative to non-synesthetes, experience stronger cross-modal correspondences outside their synesthetic domains.

Introduction Memory deficits characterize Alzheimer's dementia and the clinical precursor stage known as mild cognitive impairment. Nonpharmacologic interventions hold promise for enhancing functioning in these patients, potentially delaying functional impairment that denotes transition to dementia. Previous findings revealed that mnemonic strategy training (MST) enhances long-term retention of trained stimuli and is accompanied by increased blood oxygen level–dependent signal in the lateral frontal and parietal cortices as well as in the hippocampus. The present study was designed to enhance MST generalization, and the range of patients who benefit, via concurrent delivery of transcranial direct current stimulation (tDCS). Methods This protocol describes a prospective, randomized controlled, four-arm, double-blind study targeting memory deficits in those with mild cognitive impairment. Once randomized, participants complete five consecutive daily sessions in which they receive either active or sham high definition tDCS over the left lateral prefrontal cortex, a region known to be important for successful memory encoding and that has been engaged by MST. High definition tDCS (active or sham) will be combined with either MST or autobiographical memory recall (comparable to reminiscence therapy). Participants undergo memory testing using ecologically relevant measures and functional magnetic resonance imaging before and after these treatment sessions as well as at a 3-month follow-up. Primary outcome measures include face-name and object-location association tasks. Secondary outcome measures include self-report of memory abilities as well as a spatial navigation task (near transfer) and prose memory (medication instructions; far transfer). Changes in functional magnetic resonance imaging will be evaluated during both task performance and the resting-state using activation and connectivity analyses. 
Discussion The results will provide important information about the efficacy of cognitive and neuromodulatory techniques as well as the synergistic interaction between these promising approaches. Exploratory results will examine patient characteristics that affect treatment efficacy, thereby identifying those most appropriate for intervention.
This review surveys the recent literature on visuo-haptic convergence in the perception of object form, with particular reference to the lateral occipital complex (LOC) and the intraparietal sulcus (IPS), and discusses how visual imagery or multisensory representations might underlie this convergence. Drawing on a recent distinction between object- and spatially-based visual imagery, we propose a putative model in which LOtv, a subregion of LOC, contains a modality-independent representation of geometric shape that can be accessed either bottom-up from direct sensory inputs or top-down from frontoparietal regions. We suggest that such access is modulated by object familiarity: spatial imagery may be more important for unfamiliar objects and involve IPS foci in facilitating somatosensory inputs to the LOC; by contrast, object imagery may be more critical for familiar objects, being reflected in prefrontal drive to the LOC.
Previous research has shown that there is considerable overlap in the neural networks mediating successful memory encoding and retrieval. However, little is known about how the relevant human brain regions interact during these distinct phases of memory or how such interactions are affected by memory deficits that characterize mild cognitive impairment (MCI), a condition that often precedes dementia due to Alzheimer's disease. Here we employed multivariate Granger causality analysis using autoregressive modeling of inferred neuronal time series obtained by deconvolving the hemodynamic response function from measured blood oxygenation level-dependent (BOLD) time series data, in order to examine the effective connectivity between brain regions during successful encoding and/or retrieval of object location associations in MCI patients and comparable healthy older adults. During encoding, healthy older adults demonstrated a left hemisphere dominant pattern where the inferior frontal junction, anterior intraparietal sulcus (likely involving the parietal eye fields), and posterior cingulate cortex drove activation in most left hemisphere regions and virtually every right hemisphere region tested. These regions are part of a frontoparietal network that mediates top-down cognitive control and is implicated in successful memory formation. In contrast, in the MCI patients, the right frontal eye field drove activation in every left hemisphere region examined, suggesting reliance on more basic visual search processes. Retrieval in the healthy older adults was primarily driven by the right hippocampus with lesser contributions of the right anterior thalamic nuclei and right inferior frontal sulcus, consistent with theoretical models holding the hippocampus as critical for the successful retrieval of memories. 
The pattern differed in MCI patients, in whom the right inferior frontal junction and right anterior thalamus drove successful memory retrieval, reflecting the characteristic hippocampal dysfunction of these patients. These findings demonstrate that neural network interactions differ markedly between MCI patients and healthy older adults. Future efforts will investigate the impact of cognitive rehabilitation of memory on these connectivity patterns.
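The multivariate Granger causality analysis described above (applied after deconvolving the hemodynamic response from the BOLD signal) rests on a simple core idea: time series x "Granger-causes" time series y if the past of x improves prediction of y beyond what y's own past provides. The bivariate sketch below illustrates that idea on simulated data; all signals and parameter choices here are invented for illustration and are not the study's actual pipeline.

```python
import numpy as np

def granger_causality(x, y, order=2):
    """Log ratio of residual variances for predicting y from its own past
    (restricted model) vs. its own past plus the past of x (full model).
    Larger values indicate stronger Granger causality from x to y."""
    n = len(y)
    Y = y[order:]
    own = np.column_stack([y[order - k : n - k] for k in range(1, order + 1)])
    cross = np.column_stack([x[order - k : n - k] for k in range(1, order + 1)])

    def rss(A, b):
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        resid = b - A @ coef
        return resid @ resid

    restricted = np.column_stack([np.ones(len(Y)), own])
    full = np.column_stack([restricted, cross])
    return np.log(rss(restricted, Y) / rss(full, Y))

# Simulate a directed influence: x drives y with a one-sample lag.
rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

gc_xy = granger_causality(x, y)  # should be large: x's past predicts y
gc_yx = granger_causality(y, x)  # should be near zero: x is white noise
```

The study's multivariate version conditions each pairwise influence on all other regions' time series, which distinguishes direct from mediated influences; the restricted-vs-full model comparison is the same.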
Neuroimaging studies investigating somatosensory-based object recognition in humans have revealed activity in the lateral occipital complex (LOC), a cluster of regions primarily associated with visual object recognition. To date, it has been difficult to determine whether this activity occurs during or subsequent to recognition per se, due to the low temporal resolution of the hemodynamic response. To measure the timing of somatosensory object recognition processes more finely, we employed high-density EEG using a modified version of a paradigm previously applied to neuroimaging experiments. Simple geometric shapes were presented to the right index finger of 10 participants while the ongoing EEG was measured time-locked to the stimulus.
In the condition of primary interest participants discriminated the shape of the stimulus. In the alternate condition they judged stimulus duration. Using traditional event-related potential analysis techniques we found significantly greater amplitudes in the evoked potentials of the shape discrimination condition between 140 and 160 ms, a timeframe in which LOC-mediated perceptual processes are believed to occur during visual object recognition. Scalp voltage topography and source analysis procedures indicated the lateral occipital complex as the likely source behind this effect. This finding supports a multisensory role for the lateral occipital complex during object recognition.
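In outline, the traditional event-related potential analysis described in this abstract amounts to averaging epochs across trials per condition and comparing mean amplitudes within a latency window (here 140-160 ms). The sketch below uses simulated data; the sampling rate, channel count, and trial count are assumptions, not the study's actual recording parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical epoched EEG: trials x channels x time samples,
# sampled at 500 Hz, epoch spanning 0-400 ms post-stimulus.
fs = 500
times = np.arange(0, 0.4, 1 / fs)
shape_epochs = rng.standard_normal((60, 64, len(times)))     # shape discrimination
duration_epochs = rng.standard_normal((60, 64, len(times)))  # duration judgment

def erp(epochs):
    """Event-related potential: average across trials, per channel and time point."""
    return epochs.mean(axis=0)

def window_mean(erp_data, times, t_start, t_end):
    """Mean amplitude within a latency window, per channel."""
    mask = (times >= t_start) & (times <= t_end)
    return erp_data[:, mask].mean(axis=1)

shape_amp = window_mean(erp(shape_epochs), times, 0.140, 0.160)
duration_amp = window_mean(erp(duration_epochs), times, 0.140, 0.160)
condition_diff = shape_amp - duration_amp  # per-channel condition difference
```

A between-condition statistical test on these windowed amplitudes (with correction across channels), followed by topographic and source analyses, would correspond to the steps reported in the abstract.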