Neural Mechanisms Linking Global Maps to First-Person Perspectives


Abstract

Humans and many animals possess the remarkable ability to navigate environments by seamlessly switching between first-person perspectives (FPP) and global map perspectives (GMP). However, the neural mechanisms that underlie this transformation remain poorly understood. In this study, we developed a variational autoencoder (VAE) model, enhanced with recurrent neural networks (RNNs), to investigate the computational principles behind perspective transformations. Our results reveal that temporal sequence modeling is crucial for maintaining spatial continuity and improving transformation accuracy when switching between FPPs and GMPs. The model’s latent variables capture many representational forms seen in the distributed cognitive maps of the mammalian brain, such as head direction cells, place cells, corner cells, and border cells, but notably not grid cells, suggesting that perspective transformation engages multiple brain regions beyond the hippocampus and entorhinal cortex. Furthermore, our findings demonstrate that landmark encoding, particularly of proximal environmental cues such as boundaries and objects, plays a critical role in enabling successful perspective shifts, whereas distal cues are less influential. These insights into perspective linking provide a new computational framework for understanding spatial cognition and offer valuable directions for future animal and human studies, highlighting the significance of temporal sequences, distributed representations, and proximal cues in navigating complex environments.
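To make the architecture concrete, the sketch below shows one way an RNN-augmented VAE can map a sequence of first-person observations to global-map reconstructions. This is a minimal illustration, not the authors' implementation: all dimensions (`OBS_DIM`, `LATENT_DIM`, `HID_DIM`, `MAP_DIM`) and the randomly initialized weights are assumptions for demonstration, and training (the ELBO objective) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper)
OBS_DIM = 64      # flattened first-person view
LATENT_DIM = 8    # latent spatial code
HID_DIM = 16      # RNN hidden state carrying temporal context
MAP_DIM = 64      # flattened global-map output

# Random weights stand in for trained parameters
W_enc_mu = rng.normal(0, 0.1, (LATENT_DIM, OBS_DIM))
W_enc_lv = rng.normal(0, 0.1, (LATENT_DIM, OBS_DIM))
W_rnn_h = rng.normal(0, 0.1, (HID_DIM, HID_DIM))
W_rnn_z = rng.normal(0, 0.1, (HID_DIM, LATENT_DIM))
W_dec = rng.normal(0, 0.1, (MAP_DIM, HID_DIM))

def encode(obs):
    """Encoder: first-person observation -> latent mean and log-variance."""
    return W_enc_mu @ obs, W_enc_lv @ obs

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def transform_sequence(fpp_frames):
    """Roll the RNN over per-frame latents; decode a global map each step."""
    h = np.zeros(HID_DIM)
    maps = []
    for obs in fpp_frames:
        mu, logvar = encode(obs)
        z = reparameterize(mu, logvar)
        h = np.tanh(W_rnn_h @ h + W_rnn_z @ z)  # temporal sequence modeling
        maps.append(W_dec @ h)                  # decoded global-map view
    return np.stack(maps)

# A ten-frame first-person trajectory yields ten global-map reconstructions
frames = rng.normal(size=(10, OBS_DIM))
gmp_out = transform_sequence(frames)
print(gmp_out.shape)  # (10, 64)
```

The key point the sketch illustrates is that the hidden state `h` accumulates information across frames, so each decoded map depends on the whole trajectory so far rather than on a single snapshot, which is the property the abstract credits for maintaining spatial continuity.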

Significance Statement

Understanding how the brain transforms between different spatial perspectives is crucial for advancing our knowledge of spatial cognition and navigation. This study presents a novel computational approach that bridges the gap between neural recordings and behavior, offering insights into the underlying mechanisms of perspective transformation. Our findings suggest how the brain integrates temporal sequences, distributed representations, and environmental cues to maintain a coherent sense of space. By demonstrating the importance of proximal cues and temporal context, our computational model provides testable predictions for future neurophysiological studies in humans and animals.
