
Evaluation of Artificial Intelligence as a Decision-Support Tool in Urological Tumor Boards: A Study in Real Clinical Practice.

Journal of clinical medicine, 2026, Vol. 15(6) (open access)
Authors

De la Torre-Trillo J, Yáñez Castillo Y, Melgarejo Segura MT, Carmona Sánchez E, Zambudio Munuera A, Mora-Delgado J, López Luque A


Cite this article

APA De la Torre-Trillo J, Yáñez Castillo Y, et al. (2026). Evaluation of Artificial Intelligence as a Decision-Support Tool in Urological Tumor Boards: A Study in Real Clinical Practice. Journal of clinical medicine, 15(6). https://doi.org/10.3390/jcm15062130
MLA De la Torre-Trillo J, et al. "Evaluation of Artificial Intelligence as a Decision-Support Tool in Urological Tumor Boards: A Study in Real Clinical Practice." Journal of clinical medicine, vol. 15, no. 6, 2026.
PMID: 41899054
DOI: 10.3390/jcm15062130

Abstract

Background: Artificial intelligence (AI) tools, particularly large language models (LLMs) such as ChatGPT-4o, are gaining prominence in medicine. While their diagnostic capabilities have been explored across various oncologic domains, their role in clinical decision-making within multidisciplinary tumor boards (MTBs) remains largely unexamined in urologic oncology. This study evaluates the performance of ChatGPT-4o as a decision-support tool in a real-world MTB setting by comparing its recommendations with those of expert clinicians.

Methods: A retrospective study was conducted using 98 anonymized clinical cases discussed by a urologic MTB between June 2024 and February 2025. An independent urologist entered the same cases into ChatGPT-4o using a standardized prompt replicating real-world presentation. Two certified urologists independently assessed the model's responses. Agreement was analyzed overall and by tumor type, disease stage, clinical context, and treatment strategy.

Results: ChatGPT-4o fully agreed with the MTB in 56.1% of cases, was correct but incomplete in 23.5%, and provided partially accurate but flawed recommendations in 18.4%. Overall concordance between ChatGPT-4o and the MTB yielded a Cohen's kappa of 0.61, indicating moderate-to-good agreement. Discrepancies were most common in metastatic prostate cancer, often due to misclassification of tumor burden or errors in treatment sequencing. The highest agreement rates were observed in bladder and renal tumors, and in standardized therapeutic scenarios such as radiotherapy.

Conclusions: ChatGPT-4o demonstrated moderate alignment with expert MTB decisions and performed best in well-defined clinical contexts. While it cannot replace multidisciplinary expertise, it may serve as a supportive tool to enhance access to standardized oncologic care.
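The concordance metric reported above, Cohen's kappa, measures agreement between two raters beyond what chance alone would produce. As a minimal sketch of how it is computed, the snippet below uses entirely hypothetical case labels, not the study's data; the label categories and counts are assumptions for illustration only.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases where both raters gave the same label.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of each rater's marginal frequency.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical per-case recommendations: tumor board vs. model (NOT study data).
mtb   = ["surgery", "surgery", "radio", "systemic", "radio", "surgery"]
model = ["surgery", "radio",   "radio", "systemic", "radio", "surgery"]
print(round(cohens_kappa(mtb, model), 3))
```

A kappa of 0.61, as reported, falls in the range conventionally read as moderate-to-good agreement, which is why the abstract frames the model as a supportive rather than standalone tool.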


🟢 Open full text in PMC