Graph-based Contrastive Learning Enables Unified Integration and Niche Transfer Across Single-Cell and Spatial Multi-Omics
Abstract
The rapid growth of single-cell and spatial omics has outpaced computational methods capable of unifying these data into a cohesive framework for tissue atlas construction and cross-sample analysis. A critical bottleneck lies in the inability of existing tools to co-embed cells from diverse technologies, spanning transcriptomics, epigenomics, and proteomics, into a shared reference space while preserving spatial architecture and molecular specificity. Here, we present Garfield (Graph-based Contrastive Learning Enables Fast Single-Cell Embedding), a geometric deep-learning framework that addresses these challenges through spatially or molecularly aware cell embedding. Leveraging a graph contrastive learning framework, Garfield learns a shared embedding space for data generated by diverse technologies, enabling seamless construction and querying of spatial reference atlases. Our results show that Garfield consistently outperforms state-of-the-art benchmark models in identifying spatial niches across multiple datasets. We further demonstrate Garfield's versatility by applying it to multimodal spatial data, including gene expression and chromatin accessibility, where it successfully identifies distinct niches in the mouse brain. Notably, Garfield reveals tumor microenvironment heterogeneity in non-small cell lung cancer and breast cancer, uncovering conserved, barrier-like immune niches at tumor margins that orchestrate CD80-mediated T cell-B cell-dendritic cell interactions and IFN-γ/B cell activation pathways, forming spatially coordinated immune surveillance hubs. These findings underscore Garfield's potential to advance spatial omics research by offering a robust, scalable solution for integrating and interpreting complex spatial data across diverse tissue types and modalities.
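To give an intuition for the graph contrastive learning objective the abstract refers to: in this family of methods, two augmented views of the cell graph are encoded, matched cells across views are pulled together, and mismatched cells are pushed apart via an InfoNCE-style (NT-Xent) loss. The sketch below is a minimal, generic illustration of that loss, not Garfield's actual implementation; the function name, temperature value, and toy data are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Generic NT-Xent (InfoNCE) loss between two views of cell embeddings.

    z1, z2: (n_cells, dim) embeddings from two augmented graph views.
    Matched rows are positive pairs; all other rows act as negatives.
    NOTE: an illustrative sketch, not Garfield's published objective.
    """
    # L2-normalize so row dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # (n, n) similarity matrix
    # Row-wise log-softmax; diagonal entries are the positive pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy check: a lightly perturbed view of the same cells scores a lower
# loss than an unrelated set of embeddings.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
noisy = base + 0.05 * rng.normal(size=(8, 16))   # aligned "view"
unrelated = rng.normal(size=(8, 16))             # mismatched "view"
assert nt_xent_loss(base, noisy) < nt_xent_loss(base, unrelated)
```

Minimizing such a loss over views generated from the spatial or molecular neighbor graph is what yields an embedding in which cells sharing a niche or modality-consistent profile co-locate, enabling the cross-technology co-embedding described above.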