Scoring Physician Risk Communication in Prostate Cancer Using Large Language Models.

Pacific Symposium on Biocomputing 2026, Vol. 31, pp. 71-84

Lopez-Garcia G, Xu D, Luu M, Zheng R, Daskivich TJ, Gonzalez-Hernandez G



Cite this paper

APA: Lopez-Garcia G, Xu D, et al. (2026). Scoring Physician Risk Communication in Prostate Cancer Using Large Language Models. Pacific Symposium on Biocomputing, 31, 71-84. https://doi.org/10.1142/9789819824755_0006
MLA: Lopez-Garcia G, et al. "Scoring Physician Risk Communication in Prostate Cancer Using Large Language Models." Pacific Symposium on Biocomputing, vol. 31, 2026, pp. 71-84.
PMID: 41758134

Abstract

Effective risk communication is essential to shared decision-making in prostate cancer care. However, the quality of physician communication of key concepts varies widely in real-world consultations. Manual evaluation of communication is labor-intensive and not scalable. We present a structured, rubric-based framework that uses large language models (LLMs) to automatically score the quality of risk communication in prostate cancer consultations. Using transcripts from 20 clinical visits, we curated and annotated 487 physician-spoken sentences that referenced five key concepts for shared decision-making: cancer prognosis, life expectancy, and three treatment side effects (erectile dysfunction, incontinence, and irritative urinary symptoms). Each sentence was assigned a score from 0 to 5 based on the precision and patient-specificity of communicated risk, using a validated scoring rubric. We modeled this task as five multiclass classification problems and evaluated both fine-tuned transformer baselines and GPT-4o with rubric-based and chain-of-thought (CoT) prompting. Our best-performing approach, which combined rubric-based CoT prompting with few-shot learning, achieved micro-averaged F1 scores between 85.0 and 92.0 across domains, outperforming supervised baselines and matching inter-annotator agreement. These findings establish a scalable foundation for AI-driven evaluation of physician-patient communication in oncology and beyond.
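The core technique described above (rubric-based CoT prompting with few-shot examples, followed by extracting a 0-5 score from the model's reply) can be sketched as below. This is a minimal illustration only: the paper's validated rubric and its few-shot examples are not reproduced here, so the rubric text, example sentences, and the `build_prompt`/`parse_score` helpers are hypothetical placeholders.

```python
import re

# Placeholder rubric anchors; the paper's validated 0-5 rubric is not public here.
RUBRIC = "\n".join([
    "Score the physician's sentence from 0 to 5:",
    "0 = concept mentioned with no risk information",
    "5 = precise, patient-specific quantitative risk estimate",
])

# Hypothetical few-shot examples: (sentence, chain-of-thought reasoning, score).
FEW_SHOT = [
    ("There can be some side effects.",
     "Vague mention of risk, no quantity, not patient-specific.", 1),
    ("Given your nerve-sparing surgery, your risk is roughly 30 percent.",
     "Quantitative estimate tailored to this patient's situation.", 5),
]

def build_prompt(sentence: str, domain: str) -> str:
    """Assemble a rubric-based CoT prompt with few-shot exemplars for one domain."""
    parts = [f"Domain: {domain}", RUBRIC]
    for text, reasoning, score in FEW_SHOT:
        parts.append(f"Sentence: {text}\nReasoning: {reasoning}\nScore: {score}")
    # The model is asked to reason first, then emit its own "Score: N" line.
    parts.append(f"Sentence: {sentence}\nReasoning:")
    return "\n\n".join(parts)

def parse_score(reply: str) -> int:
    """Extract the final 0-5 score from the model's chain-of-thought reply."""
    m = re.search(r"Score:\s*([0-5])", reply)
    if m is None:
        raise ValueError("no score found in model reply")
    return int(m.group(1))
```

Treating each of the five concepts as its own multiclass problem, as the paper does, would mean calling `build_prompt` once per domain with that domain's rubric and exemplars, then scoring each sentence independently.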
