Concept2Brain: An AI model for predicting subject-level neurophysiological responses to text and pictures


Abstract

The current growth of artificial intelligence (AI) tools provides an unprecedented opportunity to extract deeper insights from neurophysiological data while also enabling the reproduction and prediction of brain responses to a wide range of events and situations. Here, we introduce the Concept2Brain model, a deep network architecture designed to generate synthetic electrophysiological responses to semantic/emotional information conveyed through pictures or text. Leveraging AI models such as OpenAI's CLIP, the model generates a representation of pictorial or language input and maps it into an electrophysiological latent space. We demonstrate that this openly available resource generates synthetic neural responses that closely resemble those observed in studies of naturalistic scene perception. The Concept2Brain model is provided as a web service tool for creating open and reproducible EEG datasets, allowing users to predict brain responses to any semantic concept or picture. Beyond its applied functionality, it also paves the way for AI-driven modeling of brain activity, offering new possibilities for studying how the brain represents the world.
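The pipeline the abstract describes, a semantic embedding of the input mapped into an electrophysiological latent space and decoded into a synthetic EEG response, can be sketched as follows. All dimensions, function names, and the linear maps below are illustrative assumptions for exposition; they are not the published Concept2Brain architecture or weights.

```python
# Hypothetical sketch: concept -> semantic embedding (CLIP-like) ->
# electrophysiological latent code -> synthetic EEG (channels x time).
# Every dimension and mapping here is an assumption, not the real model.
import numpy as np

rng = np.random.default_rng(0)

CLIP_DIM = 512      # typical CLIP embedding size (assumption)
LATENT_DIM = 32     # hypothetical EEG latent dimensionality
N_CHANNELS = 64     # hypothetical EEG montage size
N_SAMPLES = 256     # hypothetical time points per epoch

def encode_concept(concept: str) -> np.ndarray:
    """Stand-in for a CLIP text encoder: returns a unit-norm embedding."""
    seed = abs(hash(concept)) % (2**32)
    v = np.random.default_rng(seed).standard_normal(CLIP_DIM)
    return v / np.linalg.norm(v)

# Stand-ins for learned mappings (random here; trained in the real model).
W_sem2lat = rng.standard_normal((LATENT_DIM, CLIP_DIM)) / np.sqrt(CLIP_DIM)
W_lat2eeg = rng.standard_normal((N_CHANNELS * N_SAMPLES, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def concept_to_eeg(concept: str) -> np.ndarray:
    """Map a concept to a synthetic EEG epoch of shape (channels, samples)."""
    z_sem = encode_concept(concept)   # semantic embedding
    z_lat = W_sem2lat @ z_sem         # electrophysiological latent code
    eeg = W_lat2eeg @ z_lat           # decode to a flat channels*samples vector
    return eeg.reshape(N_CHANNELS, N_SAMPLES)

response = concept_to_eeg("a snake coiled on a rock")
print(response.shape)  # (64, 256)
```

In the actual service, `encode_concept` would be a real CLIP image or text encoder and the two mappings would be learned from paired stimulus/EEG data; the sketch only conveys the shape of the computation.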
