
Development and Validation of an AI-Based Multimodal Model for Pathological Staging of Gastric Cancer Using CT and Endoscopic Images.

Academic Radiology, 2025, Vol. 32(5), pp. 2604-2617

Authors

Zhang C, Li S, Huang D, Wen B, Wei S, Song Y


Cite this paper

APA: Zhang C, Li S, et al. (2025). Development and Validation of an AI-Based Multimodal Model for Pathological Staging of Gastric Cancer Using CT and Endoscopic Images. Academic Radiology, 32(5), 2604-2617. https://doi.org/10.1016/j.acra.2024.12.029
MLA: Zhang C, et al. "Development and Validation of an AI-Based Multimodal Model for Pathological Staging of Gastric Cancer Using CT and Endoscopic Images." Academic Radiology, vol. 32, no. 5, 2025, pp. 2604-2617.
PMID: 39753481

Abstract

[RATIONALE AND OBJECTIVES] Accurate preoperative pathological staging of gastric cancer is crucial for optimal treatment selection and improved patient outcomes. Traditional imaging methods such as CT and endoscopy have limitations in staging accuracy.

[METHODS] This retrospective study included 691 gastric cancer patients treated from March 2017 to March 2024. Enhanced venous-phase CT and endoscopic images, along with postoperative pathological results, were collected. We developed three modeling approaches: (1) nine deep learning models applied to CT images (DeepCT), (2) 11 machine learning algorithms using handcrafted radiomic features from CT images (HandcraftedCT), and (3) ResNet-50-extracted deep features from endoscopic images followed by 11 machine learning algorithms (DeepEndo). The two top-performing models from each approach were combined into the Integrated Multi-Modal Model using a stacking ensemble method. Performance was assessed using ROC-AUC, sensitivity, and specificity.
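The stacking approach described above can be sketched in scikit-learn. Note that the authors' actual sub-models, features, and hyperparameters are not reproduced here; the synthetic features and the two base learners below are purely illustrative stand-ins for the top-performing sub-models, and the logistic-regression meta-learner is a common default choice.

```python
# Minimal sketch of a stacking ensemble, analogous in structure to the
# paper's Integrated Multi-Modal Model. All data and model choices here
# are hypothetical, not the authors' configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in for fused CT radiomic features and endoscopic deep features.
X, y = make_classification(n_samples=500, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Two base learners stand in for the top-performing sub-models; the
# meta-learner combines their cross-validated out-of-fold predictions.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_train, y_train)
auc = roc_auc_score(y_test, stack.predict_proba(X_test)[:, 1])
print(f"test ROC-AUC: {auc:.3f}")
```

Stacking fits the meta-learner on out-of-fold predictions of the base models (the `cv=5` argument), which is what lets the ensemble weight each sub-model by its generalization performance rather than its training fit.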

[RESULTS] The Integrated Multi-Modal Model achieved an ROC-AUC of 0.933 (95% CI, 0.887-0.979) on the test set, outperforming individual models. Sensitivity and specificity were 0.869 and 0.840, respectively. Various evaluation metrics demonstrated that the final fusion model effectively integrated the strengths of each sub-model, resulting in a balanced and robust performance with reduced false-positive and false-negative rates.
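The metrics reported above (ROC-AUC with a 95% CI, sensitivity, specificity) can be computed as follows. The predictions here are synthetic, and the bootstrap percentile interval is one common way to obtain a CI for the AUC; the paper does not state which CI method the authors used.

```python
# Illustrative computation of ROC-AUC with a bootstrap 95% CI, plus
# sensitivity and specificity from a confusion matrix. Synthetic data;
# the resampling scheme is an assumption, not the authors' procedure.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
# Synthetic scores correlated with the labels.
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=200), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

auc = roc_auc_score(y_true, y_score)

# Percentile bootstrap over resampled cases.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), len(y_true))
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f}), "
      f"sens {sensitivity:.3f}, spec {specificity:.3f}")
```

Sensitivity and specificity depend on the chosen decision threshold (0.5 here), whereas ROC-AUC summarizes performance across all thresholds, which is why the paper reports both.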

[CONCLUSION] The Integrated Multi-Modal Model effectively integrates radiomic and deep learning features from CT and endoscopic images, demonstrating superior performance in preoperative pathological staging of gastric cancer. This multimodal approach enhances predictive accuracy and provides a reliable tool for clinicians to develop individualized treatment plans, thereby improving patient outcomes.

[DATA AVAILABILITY] The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical reasons. All code used in this study is based on third-party libraries and all custom code developed for this study is available upon reasonable request from the corresponding author.
