AI-discovered tuning laws explain neuronal population code geometry


Abstract

The activity of visual cortical neurons forms a population code representing image stimuli. There is, however, a discrepancy between our understanding of this code at the single-cell and population levels: direct measurements indicate the population code is high-dimensional, but established models of single-cell tuning give rise to low-dimensional codes. We reconciled this discrepancy by developing an AI science system to find a new parsimonious, interpretable equation for visual cortical orientation tuning. Candidate equations were expressed as short computer programs and evolved by Large Language Models (LLMs) using graphical diagnostics. The resulting equation not only improved single-cell fits, but also accurately modelled the population code’s high-dimensional geometry. A novel parameter of the AI-discovered equation, which controls single-cell tuning smoothness, gives rise to high-dimensional population codes. The same parameter drives high-dimensional coding in head-direction cells, suggesting a common coding strategy across brain regions. We used this equation to hypothesize a circuit mechanism generating high-dimensional population codes, and to demonstrate the advantages of these codes in a simulated hyperacuity task. These results show that tuning smoothness has a key role in controlling population code geometry, and demonstrate how AI equation discovery can deliver interpretable models accelerating scientific understanding in neuroscience and beyond.
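The abstract's central claim — that single-cell tuning smoothness controls the dimensionality of the population code — can be illustrated with a toy simulation. The sketch below uses generic von Mises tuning curves as a stand-in (the paper's AI-discovered equation is not given in the abstract, so this is an illustrative assumption, not the authors' model) and measures effective dimensionality with the participation ratio of the population response covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

def population_responses(kappa, n_neurons=200, n_stimuli=360):
    """Orientation tuning curves for a simulated population.

    Von Mises curves are a generic stand-in, NOT the paper's discovered
    equation; larger kappa means sharper (less smooth) tuning.
    Returns an (n_stimuli, n_neurons) response matrix.
    """
    theta = np.linspace(0.0, 2 * np.pi, n_stimuli, endpoint=False)
    prefs = rng.uniform(0.0, 2 * np.pi, n_neurons)  # preferred orientations
    return np.exp(kappa * (np.cos(theta[:, None] - prefs[None, :]) - 1.0))

def participation_ratio(responses):
    """Effective dimensionality: (sum of eigenvalues)^2 / sum of squares."""
    centered = responses - responses.mean(axis=0)
    eigvals = np.linalg.eigvalsh(centered.T @ centered)
    eigvals = np.clip(eigvals, 0.0, None)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

d_smooth = participation_ratio(population_responses(kappa=1.0))   # broad tuning
d_sharp = participation_ratio(population_responses(kappa=20.0))   # sharp tuning

print(f"smooth tuning dim ≈ {d_smooth:.1f}, sharp tuning dim ≈ {d_sharp:.1f}")
```

In this toy model, sharpening the tuning curves spreads response variance across many more Fourier modes of orientation, so the participation ratio rises — qualitatively matching the abstract's account of how a smoothness parameter can drive high-dimensional codes.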
