Adaptive diagnostic reasoning framework for pathology with multimodal large language models.

Communications Medicine, 2026, Vol. 6(1)
Authors

Hong Y, Kao KC, Edwards L, Liu NT, Huang CY, Oliveira-Kowaleski A, Hsieh CJ, Lin NYC

Cite this paper

APA Hong Y, Kao KC, et al. (2026). Adaptive diagnostic reasoning framework for pathology with multimodal large language models. Communications Medicine, 6(1). https://doi.org/10.1038/s43856-026-01491-z
MLA Hong Y, et al. "Adaptive diagnostic reasoning framework for pathology with multimodal large language models." Communications Medicine, vol. 6, no. 1, 2026.
PMID 41794962

Abstract

[BACKGROUND] Artificial intelligence enhances pathology screening efficiency, yet clinical adoption remains limited because most systems operate as opaque black boxes. We aim to resolve this opacity by establishing a framework that generates transparent, evidence-linked reasoning to support diagnostic auditing.

[METHODS] We present a framework that shifts off-the-shelf multimodal large language models from passive pattern recognition to active diagnostic reasoning. Using small labeled subsets from breast and prostate cancer datasets, we employ a two-phase self-learning process to derive diagnostic criteria without updating model weights. We integrate expert feedback from board-certified pathologists to ensure the generated descriptions align with established medical standards.
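The abstract only summarizes the two-phase process. As a rough, hypothetical illustration of how diagnostic criteria can be derived and refined purely through prompting, with no weight updates, the loop might look like the sketch below (all function names, prompt wording, and the toy stand-in model are our own assumptions, not the authors' code):

```python
from typing import Callable, List, Tuple

# Any off-the-shelf (multimodal) LLM behind a text interface; in the paper this
# would receive image inputs, here we stand in image findings as text.
LLM = Callable[[str], str]

def derive_criteria(labeled: List[Tuple[str, str]], llm: LLM) -> str:
    """Phase 1 (assumed): propose diagnostic criteria from a small labeled subset.

    Each pair is (tissue findings, ground-truth label). The model's weights are
    never touched; only the prompt carries the labeled examples.
    """
    examples = "\n".join(f"- findings: {x} -> label: {y}" for x, y in labeled)
    return llm(
        "From these labeled histology examples, state the visual criteria "
        f"that separate the classes:\n{examples}"
    )

def refine_criteria(criteria: str, errors: List[str], llm: LLM) -> str:
    """Phase 2 (assumed): revise the criteria using misclassified cases
    (and, in the paper, feedback from board-certified pathologists)."""
    if not errors:
        return criteria  # nothing to fix
    return llm(
        f"Current criteria:\n{criteria}\n"
        "Misclassified cases:\n" + "\n".join(errors) +
        "\nRewrite the criteria to handle these cases."
    )

def toy_llm(prompt: str) -> str:
    """Deterministic stand-in so the sketch runs end to end; a real system
    would call an actual MLLM API here."""
    if "Misclassified" in prompt:
        return ("Enlarged irregular nuclei OR disrupted glandular "
                "architecture -> carcinoma")
    return "Enlarged irregular nuclei -> carcinoma"

criteria = derive_criteria(
    [("uniform small nuclei", "normal"),
     ("enlarged irregular nuclei", "carcinoma")],
    toy_llm,
)
criteria = refine_criteria(criteria, ["disrupted glands missed"], toy_llm)
print(criteria)
```

The key design point the abstract emphasizes is that the learned artifact is human-readable text (the criteria), not opaque weights, which is what makes the resulting rationales auditable.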

[RESULTS] Here we show that our framework produces audit-ready rationales while achieving over 90% accuracy in distinguishing normal tissue from invasive carcinoma. Beyond binary classification, the model effectively differentiates complex subtypes such as ductal carcinoma in situ by autonomously identifying hallmark histological features, including nuclear irregularities and structural disruption. These computer-generated descriptions closely match expert assessments. Our approach delivers substantial performance gains over conventional baselines and adapts effectively across diverse tissue types and independent foundation models.

[CONCLUSIONS] By uniting visual understanding with reasoning, our framework provides a promising approach for clinically trustworthy artificial intelligence. This framework helps bridge the gap between opaque classifiers and auditable systems, suggesting a viable path toward evidence-linked interpretation in medical workflows.


Full text available via PMC.