Neural Computational Model Predicts Attentional Dynamics during Immersive Search in Virtual Reality
Abstract
Much of the scientific understanding of visual attention has come from desktop paradigms in which body, head, and eye movements are restricted, in contrast to the largely unconstrained way people search and navigate the real world. To bridge this gap, a computational model parameterized on a wide range of well-controlled desktop search tasks was evaluated on its ability to predict search behavior in a less constrained, immersive virtual reality environment. A set of validation metrics showed that the model's attention map could predict empirical behaviors such as eye gaze and manual responses in VR, even though the field of view varied from moment to moment with the participant's movements. The present work quantifies the real-world applicability of laboratory-based theory and highlights ways to address outstanding limitations in explaining naturalistic visual search behavior.
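As a concrete illustration of the kind of map-to-gaze comparison such validation involves (the abstract does not name the specific metrics used), the sketch below computes Normalized Scanpath Saliency (NSS), one standard measure of how well a 2D attention map predicts observed fixations. The map shape, the (row, col) coordinate convention, and the `normalized_scanpath_saliency` helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def normalized_scanpath_saliency(attention_map, fixations):
    """NSS: z-score the map, then average the values at fixated pixels.

    attention_map : 2D array of model attention values.
    fixations     : iterable of (row, col) gaze positions in map coordinates.
    Higher scores indicate better gaze prediction; 0 corresponds to chance.
    """
    z = (attention_map - attention_map.mean()) / attention_map.std()
    rows, cols = zip(*fixations)
    return float(z[list(rows), list(cols)].mean())

# Hypothetical usage: one frame's attention map and two fixations.
rng = np.random.default_rng(0)
amap = rng.random((480, 640))    # stand-in for a model attention map
fix = [(120, 300), (240, 320)]   # (row, col) fixation coordinates
print(normalized_scanpath_saliency(amap, fix))
```

Because the field of view in VR changes with head and body movement, a per-frame metric like this would in practice be computed in the coordinate frame of each momentary view and then aggregated across frames.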