
A multimodal data-based model for breast cancer diagnosis.

Computer Methods and Programs in Biomedicine, 2026, Vol. 279, p. 109288 (Open Access)
TL;DR A web-based breast cancer pathological staging diagnosis system is developed to visualize and deploy the FESCA model, demonstrating a step toward clinical application and providing a benchmark for other research methods.
OpenAlex topics · AI in Cancer Detection · Infrared Thermography in Medicine · MRI in Cancer Diagnosis

Wang H, Wei L, Li J, Liu B, Fang J, Mooney C


Cite this paper

APA Huina Wang, Lan Wei, et al. (2026). A multimodal data-based model for breast cancer diagnosis. Computer Methods and Programs in Biomedicine, 279, 109288. https://doi.org/10.1016/j.cmpb.2026.109288
MLA Huina Wang, et al. "A multimodal data-based model for breast cancer diagnosis." Computer Methods and Programs in Biomedicine, vol. 279, 2026, p. 109288.
PMID 41759488

Abstract

[BACKGROUND AND OBJECTIVE] Developing multimodal data-driven diagnostic systems has become a key clinical strategy for improving breast cancer outcomes. However, effectively modeling multimodal features remains challenging due to substantial semantic heterogeneity, scale discrepancies, and the inherent difficulty of cross-modal alignment. Although existing studies have proposed various multimodal fusion methods, most rely on direct feature concatenation or shallow integration, which fail to capture fine-grained intra-modality semantics as well as the complex interactions between histopathological and genomic modalities.

[METHODS] In this study, we propose a multimodal diagnostic framework based on Feature Enhancement and Semantic Collaborative Alignment (FESCA). The method incorporates a semantic-guided modality feature enhancement mechanism that effectively extracts and strengthens diagnostic cues from both pathological images and genomic data. In addition, a contrastive-learning-based cross-modal alignment strategy is introduced to map heterogeneous modalities into a unified semantic space and achieve deep semantic collaboration through contrastive optimization. To ensure robust breast cancer classification under varying modality availability, a multimodal collaborative diagnostic strategy is employed to dynamically adapt the feature representations.
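The contrastive cross-modal alignment described above can be illustrated with a minimal sketch. The paper's actual architecture and loss are not specified in the abstract, so the code below assumes a standard symmetric InfoNCE objective over paired pathology-image and genomic embeddings; the function names, batch shapes, and temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Normalize each row to unit length so dot products are cosine similarities."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def infonce_loss(img_emb, gen_emb, temperature=0.07):
    """Symmetric InfoNCE loss for a batch of paired embeddings.

    img_emb, gen_emb: (B, D) arrays of projected pathology-image and genomic
    features. Row i of each array is assumed to come from the same patient,
    so matching pairs sit on the diagonal of the similarity matrix.
    """
    z_i = l2_normalize(img_emb)
    z_g = l2_normalize(gen_emb)
    logits = z_i @ z_g.T / temperature          # (B, B) cosine similarities
    labels = np.arange(logits.shape[0])         # positives on the diagonal

    def cross_entropy(lg):
        # numerically stable log-softmax over each row
        lg = lg - lg.max(axis=1, keepdims=True)
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the image-to-genomic and genomic-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy usage: aligned pairs should score a lower loss than mismatched ones.
rng = np.random.default_rng(0)
shared = rng.normal(size=(8, 32))
loss_aligned = infonce_loss(shared, shared + 0.01 * rng.normal(size=(8, 32)))
loss_random = infonce_loss(shared, rng.normal(size=(8, 32)))
```

Minimizing this objective pulls each patient's two modality embeddings together while pushing apart embeddings from different patients, which is one common way to realize the "unified semantic space" the abstract describes.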

[RESULTS] We evaluate FESCA on the TCGA-BRCA dataset, and the experimental results demonstrate that it outperforms state-of-the-art methods in breast cancer classification while significantly improving both intra-modality representation quality and cross-modal semantic alignment.

[CONCLUSION] To enhance accessibility and practical application, we developed a web-based breast cancer pathological staging diagnosis system to visualize and deploy the FESCA model, demonstrating a step toward clinical application and providing a benchmark for other research methods.

MeSH Terms

Humans; Breast Neoplasms; Female; Semantics; Algorithms; Diagnosis, Computer-Assisted
