About this item:

Author Notes:

To whom correspondence should be addressed. E-mail: krish.sathian@emory.edu

Conceived and designed the experiments: JLF-K JMID RDS.

Performed the experiments: JMID RDS DH.

Analyzed the data: JMID RDS DH JLF-K.

Wrote the paper: JLF-K JMID RDS DH.

We thank members of the Departments of Human Genetics, Cell Biology, and Biology at Emory University for many helpful discussions concerning this project.

The authors have declared that no competing interests exist.

Research Funding:

This work was funded by National Institutes of Health (NIH) grant DK046403 (to JLF-K), NIH Training Program in Human Disease Genetics grant 1T32MH087977, and NIH Training grant T32GM08490-16.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Cross-Modal Object Recognition Is Viewpoint-Independent

Journal Title:

PLoS ONE

Volume:

2, Number 9

Publisher:

Public Library of Science

Pages:

e890

Type of Work:

Article | Final Publisher PDF

Abstract:

Background: Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid, as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch.

Methodology/Principal Findings: Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180° about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced more by rotation about the x- and y-axes than about the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores.

Conclusions/Significance: The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly because surface occlusion is important in vision but not touch.
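For readers who want to see the rotation manipulation concretely, the following is a minimal illustrative sketch (not taken from the study's materials; it assumes Python with NumPy) of the 180° rotations about the x-, y-, and z-axes used for the rotated test orientations, showing how a point on an object's surface maps under each rotation.

    import numpy as np

    # 180-degree rotation matrices about each axis: cos(180°) = -1 and
    # sin(180°) = 0, so each rotation simply flips the signs of the two
    # coordinates perpendicular to the rotation axis.
    ROT_180 = {
        "x": np.diag([1, -1, -1]),  # (x, y, z) -> (x, -y, -z)
        "y": np.diag([-1, 1, -1]),  # (x, y, z) -> (-x, y, -z)
        "z": np.diag([-1, -1, 1]),  # (x, y, z) -> (-x, -y, z)
    }

    # Example: a point on a studied object's surface under each test rotation.
    p = np.array([1.0, 2.0, 3.0])
    for axis, R in ROT_180.items():
        print(f"180° about {axis}: {p} -> {R @ p}")

Each of these 180° rotations is its own inverse, so the unrotated and rotated test orientations form matched pairs.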

Copyright information:

© Daenzer et al.

This is an Open Access work distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).