Tripartite organization of brain state dynamics underlying spoken narrative comprehension
Abstract
Speech comprehension involves the dynamic interplay of multiple cognitive processes, from basic sound perception, to linguistic encoding, and finally to complex semantic-conceptual interpretation. How the brain handles these diverse streams of information processing remains poorly understood. Applying Hidden Markov Modeling to fMRI data obtained during spoken narrative comprehension, we reveal that whole-brain networks predominantly oscillate within a tripartite latent state space. These states are characterized by high activity in the sensory-motor (State #1), bilateral temporal (State #2), and default mode network (DMN; State #3) regions, respectively, with State #2 acting as a transitional hub. The three states are selectively modulated by the acoustic, word-level semantic, and clause-level semantic properties of the narrative. Moreover, participants' alignment with the best performer in brain state expression predicts their narrative comprehension scores. These results are reproducible with a different brain network atlas and generalizable to two independent datasets consisting of young and older adults. Our study suggests that the brain supports narrative comprehension by switching through a tripartite state space, with each state probably dedicated to a specific component of the language faculty, and that effective narrative comprehension relies on engaging those states in a timely manner.
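For readers unfamiliar with the analysis approach, the following is a minimal illustrative sketch of fitting a three-state Gaussian Hidden Markov Model to parcellated fMRI time series. It uses the hmmlearn package and simulated data purely as placeholders; the specific toolbox, covariance structure, and preprocessing are assumptions here and do not reflect the authors' actual pipeline.

```python
# Minimal sketch (not the authors' pipeline): fit a 3-state Gaussian HMM to
# parcellated fMRI time series using hmmlearn. The data below are simulated;
# in practice, X would be z-scored regional BOLD time courses
# (time points x brain regions), typically concatenated across participants.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
n_timepoints, n_regions = 600, 100            # e.g., TRs x atlas parcels
X = rng.standard_normal((n_timepoints, n_regions))

# Hidden Markov model with three latent states, each described by a Gaussian
# distribution over regional activity (diagonal covariance assumed here).
model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=200, random_state=0)
model.fit(X)

# Decode the most likely state sequence and summarize state dynamics.
state_sequence = model.predict(X)             # per-timepoint state labels
fractional_occupancy = np.bincount(state_sequence, minlength=3) / n_timepoints
transition_matrix = model.transmat_           # state-to-state switching probabilities
state_means = model.means_                    # regional activation pattern of each state

print("Fractional occupancy:", fractional_occupancy)
print("Transition matrix:\n", transition_matrix)
```

In a sketch like this, the state means would correspond to the spatial activation patterns of the three reported states, the transition matrix to the switching structure (e.g., State #2 as a transitional hub), and fractional occupancy to how much time each participant spends in each state.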