Anatomy-guided prompting with cross-modal self-alignment for whole-body PET-CT breast cancer segmentation.

Medical Image Analysis · 2026 · Vol. 110 · p. 103956
TL;DR This paper proposes a novel anatomy-guided cross-modal learning framework that outperforms eight state-of-the-art methods, including CNN-based, transformer-based, and Mamba-based approaches, on two datasets encompassing primary breast cancer, metastatic breast cancer, and other types of cancer segmentation tasks.
OpenAlex topics · Advanced Neural Network Applications · AI in cancer detection · Medical Imaging Techniques and Applications

Huang J, Yang X, Liang X, Chen S, Sun Y, Mok GS

Cite this paper

APA Huang, J., Yang, X., Liang, X., Chen, S., Sun, Y., & Mok, G. S. (2026). Anatomy-guided prompting with cross-modal self-alignment for whole-body PET-CT breast cancer segmentation. Medical Image Analysis, 110, 103956. https://doi.org/10.1016/j.media.2026.103956
MLA Huang, Jiaju, et al. "Anatomy-Guided Prompting with Cross-Modal Self-Alignment for Whole-Body PET-CT Breast Cancer Segmentation." Medical Image Analysis, vol. 110, 2026, p. 103956.
PMID: 41616643

Abstract

Accurate segmentation of breast cancer in PET-CT images is crucial for precise staging, monitoring treatment response, and guiding personalized therapy. However, the small size and dispersed nature of metastatic lesions, coupled with the scarcity of annotated data and heterogeneity between modalities that hinders effective information fusion, make this task challenging. This paper proposes a novel anatomy-guided cross-modal learning framework to address these issues. Our approach first generates organ pseudo-labels through a teacher-student learning paradigm, which serve as anatomical prompts to guide cancer segmentation. We then introduce a self-aligning cross-modal pre-training method that aligns PET and CT features in a shared latent space through masked 3D patch reconstruction, enabling effective cross-modal feature fusion. Finally, we initialize the segmentation network's encoder with the pre-trained encoder weights, and incorporate organ labels through a Mamba-based prompt encoder and Hypernet-Controlled Cross-Attention mechanism for dynamic anatomical feature extraction and fusion. Notably, our method outperforms eight state-of-the-art methods, including CNN-based, transformer-based, and Mamba-based approaches, on two datasets encompassing primary breast cancer, metastatic breast cancer, and other types of cancer segmentation tasks.
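The self-aligning cross-modal pre-training step can be pictured concretely. Below is a minimal sketch (not the authors' code) of masked 3D patch reconstruction with a shared encoder: PET and CT volumes are split into 3D patches, a random subset of PET patches is masked, and the masked patches must be reconstructed from the joint PET-CT token sequence, which pushes both modalities into a shared latent space. All module names, dimensions, the masking ratio, and the single-direction (PET-from-CT) setup are illustrative assumptions.

```python
# Minimal sketch of self-aligning cross-modal pre-training via masked 3D patch
# reconstruction (illustrative assumptions throughout; not the authors' code).
import torch
import torch.nn as nn

class CrossModalMAE3D(nn.Module):
    def __init__(self, patch=8, dim=256, mask_ratio=0.6):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        p3 = patch ** 3
        self.embed_pet = nn.Linear(p3, dim)       # per-modality patch embeddings
        self.embed_ct = nn.Linear(p3, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)  # shared latent space
        self.decode = nn.Linear(dim, p3)          # reconstruct raw patch voxels

    def patchify(self, vol):
        # vol: (B, 1, D, H, W) with D, H, W divisible by self.patch
        p = self.patch
        B, _, D, H, W = vol.shape
        x = vol.reshape(B, D // p, p, H // p, p, W // p, p)
        x = x.permute(0, 1, 3, 5, 2, 4, 6).reshape(B, -1, p ** 3)
        return x                                  # (B, N, p^3)

    def forward(self, pet, ct):
        pet_p, ct_p = self.patchify(pet), self.patchify(ct)
        B, N, _ = pet_p.shape
        keep = torch.rand(B, N, device=pet.device) > self.mask_ratio
        pet_tok = self.embed_pet(pet_p) * keep.unsqueeze(-1)  # zero masked PET tokens
        ct_tok = self.embed_ct(ct_p)              # CT stays fully visible
        # Joint encoding: masked PET patches can only be recovered by borrowing
        # information from the CT tokens, which aligns the two modalities.
        z = self.encoder(torch.cat([pet_tok, ct_tok], dim=1))
        pet_rec = self.decode(z[:, :N])
        return ((pet_rec - pet_p) ** 2)[~keep].mean()  # MSE on masked patches only

# Usage on dummy volumes:
# loss = CrossModalMAE3D()(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 32, 32, 32))
```

A symmetric CT-reconstruction term, positional embeddings, and the paper's actual encoder architecture are omitted; the sketch only shows how masked reconstruction couples the two modalities.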
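The Hypernet-Controlled Cross-Attention fusion can be read as a hypernetwork that generates the attention projections conditioned on the anatomical prompt, so the cross-attention adapts per input rather than using fixed weights. The sketch below is a hedged interpretation under that reading; the single head, the mean-pooled prompt summary, and all shapes are assumptions, not the paper's specification.

```python
# Hedged sketch of a hypernet-controlled cross-attention block: the organ-prompt
# embedding parameterizes the Q/K projections (hypothetical reading of the paper).
import torch
import torch.nn as nn

class HypernetCrossAttention(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.dim = dim
        # Hypernetwork: pooled prompt vector -> flattened Q and K weight matrices.
        self.hyper = nn.Linear(dim, 2 * dim * dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, img_feat, prompt_feat):
        # img_feat:    (B, N, dim) flattened image tokens from the encoder
        # prompt_feat: (B, M, dim) tokens encoding the organ pseudo-label prompt
        B, N, d = img_feat.shape
        cond = prompt_feat.mean(dim=1)                 # (B, dim) prompt summary
        w = self.hyper(cond).view(B, 2, d, d)          # per-sample projections
        q = torch.einsum('bnd,bde->bne', img_feat, w[:, 0])
        k = torch.einsum('bmd,bde->bme', prompt_feat, w[:, 1])
        attn = torch.softmax(q @ k.transpose(1, 2) / d ** 0.5, dim=-1)  # (B, N, M)
        out = attn @ self.v_proj(prompt_feat)          # pull in anatomical context
        return self.out(out) + img_feat                # residual fusion
```

The design point this illustrates is that the organ pseudo-label controls not just what is attended to (the keys and values) but how, since the projection weights themselves are functions of the prompt.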
