Low-Rank Tensor Encoding Models Decompose Natural Speech Comprehension Processes


Abstract

How does the brain process language over time? Research suggests that natural human language is processed hierarchically across brain regions over time. However, attempts to characterize this computation have thus far been limited to tightly controlled experimental settings that capture only a coarse picture of the brain dynamics underlying natural language comprehension. The recent emergence of large language model (LLM) encoding models promises a new avenue for discovering and characterizing rich semantic information in the brain, yet interpretable methods for linking information in LLMs to language processing over time remain limited. In this work, we develop a low-rank tensor regression method that decomposes LLM encoding models into interpretable components of semantics, time, and brain-region activation, and we apply it to a magnetoencephalography (MEG) dataset in which subjects listened to narrative stories. With only a few components, the method outperforms a standard ridge regression encoding model, suggesting that low-rank models provide a good inductive bias for language encoding. In addition, it discovers a diverse spectrum of interpretable response components that are sensitive to a rich set of low-level and semantic language features, showing that the method can separate distinct language-processing features in neural signals. After controlling for low-level audio and sentence features, we demonstrate improved capture of semantic features. By decomposing neural responses to language features, low-rank tensor encoding models thus yield improved encoding performance and interpretable processing components, making our method a useful tool for uncovering language processes in naturalistic settings.
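To make the idea of a low-rank tensor encoding model concrete, the sketch below fits a rank-R model of the form Y[n, t, s] ≈ Σ_r (X @ A)[n, r] · B[t, r] · C[s, r], where X holds stimulus features and Y holds responses over time lags and sensors, so the factors A, B, C play the roles of the semantic, temporal, and spatial components described in the abstract. This is a generic CP-style alternating-least-squares fit in NumPy, not the authors' implementation; all function and variable names here are illustrative assumptions.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product: rows indexed by (row-of-U, row-of-V) pairs."""
    n, R = U.shape
    m, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(n * m, R)

def fit_low_rank_encoder(X, Y, rank=3, n_iter=50, seed=0):
    """Alternating least squares for Y[n,t,s] ≈ sum_r (X @ A)[n,r] * B[t,r] * C[s,r].

    Illustrative sketch only: no regularization or convergence check.
    X: (n_samples, n_features) stimulus features (e.g. LLM embeddings).
    Y: (n_samples, n_lags, n_sensors) neural responses.
    Returns the feature, temporal, and spatial factors A, B, C.
    """
    n, T, S = Y.shape
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, rank))
    B = rng.standard_normal((T, rank))
    C = rng.standard_normal((S, rank))
    # Matricizations of Y along each mode.
    Y0 = Y.reshape(n, T * S)
    Y1 = np.moveaxis(Y, 1, 0).reshape(T, n * S)
    Y2 = np.moveaxis(Y, 2, 0).reshape(S, n * T)
    for _ in range(n_iter):
        # Update A: Y0 ≈ X @ A @ khatri_rao(B, C).T, solved in two least-squares steps.
        K = khatri_rao(B, C)
        M = np.linalg.lstsq(X, Y0, rcond=None)[0]        # (d, T*S)
        A = np.linalg.lstsq(K, M.T, rcond=None)[0].T     # (d, rank)
        Z = X @ A                                        # per-sample component scores
        # Update B: Y1 ≈ B @ khatri_rao(Z, C).T
        B = np.linalg.lstsq(khatri_rao(Z, C), Y1.T, rcond=None)[0].T
        # Update C: Y2 ≈ C @ khatri_rao(Z, B).T
        C = np.linalg.lstsq(khatri_rao(Z, B), Y2.T, rcond=None)[0].T
    return A, B, C
```

With only a few rank-1 components, each component pairs a direction in feature space (a column of A) with a temporal response profile (a column of B) and a sensor topography (a column of C), which is what makes the decomposition interpretable relative to a full ridge weight tensor.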
