Anatomy-guided prompting with cross-modal self-alignment for whole-body PET-CT breast cancer segmentation.
TL;DR
This paper proposes a novel anatomy-guided cross-modal learning framework that outperforms eight state-of-the-art methods, including CNN-based, transformer-based, and Mamba-based approaches, on two datasets encompassing primary breast cancer, metastatic breast cancer, and other types of cancer segmentation tasks.
OpenAlex Topics
Advanced Neural Network Applications
AI in cancer detection
Medical Imaging Techniques and Applications
APA
Jiaju Huang, Xiao Yang, et al. (2026). Anatomy-guided prompting with cross-modal self-alignment for whole-body PET-CT breast cancer segmentation. Medical Image Analysis, 110, 103956. https://doi.org/10.1016/j.media.2026.103956
MLA
Jiaju Huang, et al.. "Anatomy-guided prompting with cross-modal self-alignment for whole-body PET-CT breast cancer segmentation.." Medical image analysis, vol. 110, 2026, pp. 103956.
PMID
41616643
Abstract
Accurate segmentation of breast cancer in PET-CT images is crucial for precise staging, monitoring treatment response, and guiding personalized therapy. However, the small size and dispersed nature of metastatic lesions, coupled with the scarcity of annotated data and heterogeneity between modalities that hinders effective information fusion, make this task challenging. This paper proposes a novel anatomy-guided cross-modal learning framework to address these issues. Our approach first generates organ pseudo-labels through a teacher-student learning paradigm, which serve as anatomical prompts to guide cancer segmentation. We then introduce a self-aligning cross-modal pre-training method that aligns PET and CT features in a shared latent space through masked 3D patch reconstruction, enabling effective cross-modal feature fusion. Finally, we initialize the segmentation network's encoder with the pre-trained encoder weights, and incorporate organ labels through a Mamba-based prompt encoder and Hypernet-Controlled Cross-Attention mechanism for dynamic anatomical feature extraction and fusion. Notably, our method outperforms eight state-of-the-art methods, including CNN-based, transformer-based, and Mamba-based approaches, on two datasets encompassing primary breast cancer, metastatic breast cancer, and other types of cancer segmentation tasks.
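The self-aligning pre-training step described above masks 3D patches and reconstructs them so that PET and CT features land in a shared latent space. A minimal NumPy sketch of that masking-and-reconstruction objective is shown below; the patch size, mask ratio, and the identity "predictor" (CT patches standing in for the real cross-modal network's output) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def extract_patches(vol, p):
    """Split a cubic volume into non-overlapping p x p x p patches,
    flattened to rows of length p**3 (one token per patch)."""
    n = vol.shape[0] // p
    patches = vol[:n * p, :n * p, :n * p].reshape(n, p, n, p, n, p)
    return patches.transpose(0, 2, 4, 1, 3, 5).reshape(-1, p ** 3)

def masked_alignment_loss(pet, ct, patch=4, mask_ratio=0.5, seed=0):
    """Mask a fraction of PET patch tokens and score how well a
    cross-modal 'reconstruction' recovers them (MSE).

    A trained model would predict the masked PET patches from the
    visible tokens of both modalities; here the corresponding CT
    patches stand in for that prediction, purely for illustration."""
    rng = np.random.default_rng(seed)
    pet_tokens = extract_patches(pet, patch)
    ct_tokens = extract_patches(ct, patch)
    n_tokens = pet_tokens.shape[0]
    masked = rng.choice(n_tokens, size=int(mask_ratio * n_tokens),
                        replace=False)
    recon = ct_tokens[masked]  # placeholder for the model's output
    return float(np.mean((recon - pet_tokens[masked]) ** 2))
```

Driving the loss toward zero pushes the two modalities' patch representations to agree on the masked regions, which is the intuition behind aligning PET and CT in one latent space before fine-tuning the segmentation encoder.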
Highly cited papers by the same first author (5)
- Secondary Upper Blepharoplasty: Converting Static Folds Into Dynamic Folds.
- Performance of contrast-enhanced cone-beam breast CT to predict nipple-areolar complex involvement in early-stage breast cancer.
- The Effect of Standardized Postoperative Neck and Orofacial Rehabilitation Exercise on Quality of Life in Post-Thyroidectomy Patients: A Randomized Controlled Trial.
- A novel whole cancer cell vaccine based on modified β-glucan elicits robust anti-tumor immunity.
- Increased IL4I1 expression predicts poor survival and modulates the immune microenvironment in acute myeloid leukemia.
🏷️ Same keywords · free full text (based on this paper's MeSH/keywords)
- A Phase I Study of Hydroxychloroquine and Suba-Itraconazole in Men with Biochemical Relapse of Prostate Cancer (HITMAN-PC): Dose Escalation Results.
- Self-management of male urinary symptoms: qualitative findings from a primary care trial.
- Clinical and Liquid Biomarkers of 20-Year Prostate Cancer Risk in Men Aged 45 to 70 Years.
- Diagnostic accuracy of Ga-PSMA PET/CT versus multiparametric MRI for preoperative pelvic invasion in the patients with prostate cancer.
- Comprehensive analysis of androgen receptor splice variant target gene expression in prostate cancer.
- Clinical Presentation and Outcomes of Patients Undergoing Surgery for Thyroid Cancer.