
GR2ST: Spatial Transcriptomics Prediction based on Graph-Enhanced Multimodal Contrastive Learning.

Bioinformatics (Oxford, England), 2026. Open Access.

Zhou J, Li S, Han R, Wang X, Wang Y, Li J


Citation

Zhou J, Li S, Han R, Wang X, Wang Y, Li J (2026). GR2ST: Spatial Transcriptomics Prediction based on Graph-Enhanced Multimodal Contrastive Learning. Bioinformatics (Oxford, England).
DOI: https://doi.org/10.1093/bioinformatics/btag209
PMID: 42036805

Abstract

[MOTIVATION] Spatial transcriptomics techniques capture gene expression data and spatial coordinates while simultaneously correlating them with tissue section images. This makes spatial transcriptomics data highly valuable for research such as investigating disease mechanisms and cancer prognosis. However, the long turnaround time and high cost of spatial transcriptomic sequencing currently limit further advancement of the field. Numerous deep learning methods have been developed to predict spatial transcriptomics from histology images, but these approaches often fail to effectively integrate histology images with spatial transcriptomic data. Here, we propose GR2ST, a deep learning model that learns the underlying connections between image features and gene expression to predict spatial transcriptomics.

[RESULTS] GR2ST leverages a large pre-trained pathology model to extract high-level histological features. We designed a dual-branch graph architecture, consisting of a dynamic threshold-based functional graph and a radius-constrained spatial graph, to capture complex spot interactions within heterogeneous tissues. The model aligns histology images with gene expression representations through a multimodal contrastive learning framework. It achieves adaptive gene expression generation via a Cell-Type Guided Multi-Branch Regression Head supervised by a context-aware weighting network, which is further integrated with cross-sample retrieval to construct an ensemble prediction. The performance of the model is evaluated on three cancer-related spatial transcriptomics datasets, including cutaneous squamous cell carcinoma and two human breast cancer cohorts, to demonstrate its effectiveness and robustness.
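The dual-branch graph construction described above can be illustrated with a minimal sketch. Assuming spot coordinates and a spot-by-gene expression matrix as inputs, the snippet below builds a radius-constrained spatial adjacency and a correlation-based functional adjacency. The function names, the use of Pearson correlation, and the fixed (rather than dynamic) threshold are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def spatial_graph(coords, radius):
    """Spatial branch: connect spots whose Euclidean distance is within `radius`."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    adj = (d <= radius) & (d > 0)   # exclude self-loops (distance 0)
    return adj.astype(float)

def functional_graph(expr, threshold):
    """Functional branch: connect spots whose expression profiles correlate above `threshold`."""
    corr = np.corrcoef(expr)        # spot-by-spot Pearson correlation
    np.fill_diagonal(corr, 0.0)     # no self-loops
    return (corr >= threshold).astype(float)

# Toy example: three spots, three genes.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
expr = np.array([[1.0, 2.0, 3.0],
                 [1.1, 2.1, 2.9],
                 [3.0, 1.0, 0.5]])
A_spat = spatial_graph(coords, radius=2.0)
A_func = functional_graph(expr, threshold=0.9)
```

In this toy example the first two spots are linked in both branches (close in space, highly correlated expression), while the third spot is isolated in both, mimicking how the two adjacencies capture complementary spot interactions.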

[AVAILABILITY] https://github.com/zjl1109294570/GR2ST.

[SUPPLEMENTARY INFORMATION] Supplementary data are available at Bioinformatics online.

