About this item:

Author Notes:

gregory.berns@emory.edu

Acknowledgements: We thank Kate Revill, Raveena Chhibber, and Jon King for their helpful insights in the development of this analysis, Mark Spivak for his assistance recruiting and training dogs for MRI, and Phyllis Guo for her help in video creation and labeling. We also thank our dedicated dog owners, Rebecca Beasley (Daisy) and Ashwin Sakhardande (Bhubo).

Disclosures: None.

Research Funding:

The human studies were supported by a grant from the National Eye Institute (Grant R01 EY029724 to D.D.D.).

Keywords:

  • visual stream
  • pathways
  • what/where dichotomy
  • cortex
  • perception
  • AI-based fMRI decoding
  • dog cortex
  • naturalistic video classification
  • object-based vs. action-based classification
  • cross-species comparison of visual perception

Through a Dog's Eyes: fMRI Decoding of Naturalistic Videos from the Dog Cortex

Journal Title:

Neuroscience

Publisher:

JoVE

Type of Work:

Article | Final Publisher PDF

Abstract:

Recent advancements using machine learning and functional magnetic resonance imaging (fMRI) to decode visual stimuli from the human and nonhuman cortex have resulted in new insights into the nature of perception. However, this approach has yet to be applied substantially to animals other than primates, raising questions about the nature of such representations across the animal kingdom. Here, we used awake fMRI in two domestic dogs and two humans, obtained while each watched specially created dog-appropriate naturalistic videos. We then trained a neural net (Ivis) to classify the video content from a total of 90 min of recorded brain activity from each. We tested both an object-based classifier, attempting to discriminate categories such as dog, human, and car, and an action-based classifier, attempting to discriminate categories such as eating, sniffing, and talking. Compared to the two human subjects, for whom both types of classifier performed well above chance, only action-based classifiers were successful in decoding video content from the dogs. These results demonstrate the first known application of machine learning to decode naturalistic videos from the brain of a carnivore and suggest that the dog's-eye view of the world may be quite different from our own.
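The decoding approach summarized above (classifying video categories such as eating, sniffing, and talking from recorded brain activity) can be illustrated with a minimal sketch. The study used the Ivis neural network for embedding; here scikit-learn components stand in for it, and all data shapes, voxel counts, and the synthetic signal are illustrative assumptions, not the study's actual data or code.

```python
# Minimal sketch of action-based fMRI decoding, assuming synthetic data.
# The paper's pipeline used the Ivis neural net; PCA + logistic regression
# stand in here purely for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_volumes, n_voxels = 600, 2000              # hypothetical scan length / mask size
actions = ["eating", "sniffing", "talking"]  # action categories from the abstract
y = rng.integers(0, len(actions), n_volumes)

# Synthetic BOLD-like data with a weak label-dependent signal in the
# first 50 voxels, so that decoding above chance is possible.
X = rng.normal(size=(n_volumes, n_voxels))
X[:, :50] += y[:, None] * 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=50),
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)

acc = clf.score(X_te, y_te)
chance = 1.0 / len(actions)
print(f"decoding accuracy: {acc:.2f} (chance = {chance:.2f})")
```

The key comparison in the study is exactly this kind of held-out classification accuracy against chance level, run separately for object-based and action-based labelings.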

Copyright information:

2022 JoVE

This is an Open Access work distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License (https://creativecommons.org/licenses/by-nc-nd/3.0/).