Multimodal Survival Analysis of Glioblastoma Using Novel Deep Learning-Based Feature Extraction on H&E Slides and RNA Sequences


Abstract

Multimodal survival analysis is essential for tackling the complexity of diseases such as glioblastoma (GBM), a highly aggressive brain cancer with a dismal prognosis. While single-modality approaches exist, most fail to consider the combinatorial effects of multiple genetic variants or focus solely on aggregated gene expression data, neglecting critical nucleotide-level alterations. Current multimodal survival frameworks also exhibit significant limitations, such as disregarding sequence-level RNA information and failing to retain spatial context in histopathological Hematoxylin and Eosin (H&E) slides. To address these challenges, we developed MUSA, a novel deep learning–based survival model that integrates four complementary data streams: (1) a hierarchical transformer module for extracting base-level RNA sequence alterations, (2) an unsupervised convolutional neural network pipeline for preserving spatial features in H&E whole-slide images, (3) gene expression profiling, and (4) clinical records. By unifying these modalities through a late-fusion mechanism, MUSA significantly outperforms single-modality and existing multimodal approaches, achieving higher AUC scores and more robust patient stratification on a GBM dataset. This innovative multimodal framework, along with its specialized feature extraction techniques, provides a foundation for more precise and clinically actionable survival predictions in glioblastoma.
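The late-fusion design described in the abstract can be sketched in a few lines: each modality-specific encoder produces a patient embedding, and the embeddings are concatenated before a linear survival-risk head. The modality names, embedding sizes, and the linear head below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality embedding sizes (illustrative, not from the paper)
dims = {"rna_seq": 128, "histology": 256, "gene_expr": 64, "clinical": 16}

# Mock patient embeddings standing in for the four modality-specific encoders
embeddings = {name: rng.standard_normal(d) for name, d in dims.items()}

def late_fusion_risk(embeddings, weights, bias=0.0):
    """Concatenate modality embeddings and apply a linear survival-risk head."""
    fused = np.concatenate([embeddings[k] for k in sorted(embeddings)])
    return float(fused @ weights + bias)

total_dim = sum(dims.values())          # 128 + 256 + 64 + 16 = 464
w = rng.standard_normal(total_dim) / np.sqrt(total_dim)
risk = late_fusion_risk(embeddings, w)
```

Because fusion happens after each encoder, any single modality can be swapped or ablated without retraining the others, which is one common motivation for late fusion over early (input-level) fusion.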
