
Decoding Canine Cognition

Machine learning gives glimpse of how a dog's brain represents what it sees
September 14, 2022

Bhubo, shown with his owner Ashwin Sakhardande, prepares for his video-watching session in an fMRI scanner. The dog's ears are taped to hold in earplugs that muffle the scanner's noise. Image credit: Emory Canine Cognitive Neuroscience Lab

Scientists have decoded visual images from a dog's brain, offering a first look at how the canine mind reconstructs what it sees. The research, conducted at Emory University, was published in the Journal of Visualized Experiments.

The results suggest that dogs are more attuned to actions in their environment than to who or what is performing them.

The researchers recorded fMRI neural data from two awake, unrestrained dogs as they watched videos in three 30-minute sessions, 90 minutes in total. They then used a machine-learning algorithm to analyze patterns in the neural data.
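The paper's actual decoding pipeline is not reproduced here, but the core idea, training a classifier to predict what a subject was watching from patterns of brain activity, can be illustrated with a minimal sketch. Everything below is hypothetical: the synthetic voxel data, the two video categories, and the linear classifier are stand-ins for the study's real preprocessing and model.

```python
# Hypothetical sketch of fMRI decoding: predict the video category a subject
# was watching from voxel activity patterns. Synthetic data stands in for
# real preprocessed BOLD responses; this is not the study's actual model.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_samples, n_voxels = 600, 500          # fMRI time points x voxels (made up)
labels = rng.integers(0, 2, n_samples)  # 0 = object-based clip, 1 = action-based clip

# Fake voxel activity: noise plus a weak label-dependent signal
signal = np.outer(labels, rng.normal(size=n_voxels)) * 0.3
X = rng.normal(size=(n_samples, n_voxels)) + signal

# A standard decoding pipeline: z-score each voxel, then fit a linear classifier
decoder = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False))
scores = cross_val_score(decoder, X, labels, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy well above chance (0.50 for two categories) is the usual evidence that the brain-activity patterns carry information about what was being viewed, which is the logic behind "reconstructing" what the dogs saw.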

“We showed that we can monitor the activity in a dog’s brain while it is watching a video and, to at least a limited degree, reconstruct what it is looking at,” says Gregory Berns, Emory professor of psychology and corresponding author of the paper. “The fact that we are able to do that is remarkable.”