A depth map of visual space in the primary visual cortex


Abstract

Depth perception is essential for visually guided behavior. Computer vision algorithms use depth maps to encode distances in three-dimensional scenes, but it is unknown whether such depth maps are generated by animal visual systems. To answer this question, we focused on motion parallax, a depth cue relying on visual motion resulting from movement of the observer. As neurons in the mouse primary visual cortex (V1) are broadly modulated by locomotion, we hypothesized that they may integrate vision- and locomotion-related signals to estimate depth from motion parallax. Using recordings in a three-dimensional virtual reality environment, we found that conjunctive coding of visual and self-motion speeds gave rise to depth-selective neuronal responses. Depth-selective neurons could be characterized by three-dimensional receptive fields, responding to specific virtual depths and retinotopic locations. Neurons tuned to a broad range of virtual depths were found across all sampled retinotopic locations, showing that motion parallax generates a depth map of visual space in V1.
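The geometric intuition behind motion parallax can be sketched in a few lines of Python. This is an illustrative toy, not the authors' model: for an observer translating at speed v, a point at distance d perpendicular to the direction of gaze sweeps across the retina at angular speed ω = v / d, so depth is recoverable as d = v / ω whenever both self-motion speed and retinal speed are available, which is the kind of conjunctive signal the abstract describes.

```python
def depth_from_parallax(self_motion_speed, retinal_angular_speed):
    """Estimate depth from motion parallax (toy geometry, not the paper's model).

    self_motion_speed: observer translation speed, e.g. m/s
    retinal_angular_speed: retinal drift of the point, rad/s
    Returns depth in the same length units as self_motion_speed.
    """
    if retinal_angular_speed <= 0:
        raise ValueError("retinal angular speed must be positive")
    # For a point lateral to the heading direction: omega = v / d  =>  d = v / omega
    return self_motion_speed / retinal_angular_speed


# An observer walking at 0.5 m/s sees a landmark drift at 0.25 rad/s:
print(depth_from_parallax(0.5, 0.25))  # -> 2.0 (metres)
```

Note how the same retinal speed implies different depths at different locomotion speeds, which is why a depth estimate requires combining the two signals rather than either one alone.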
