Large language models versus traditional textbooks: optimizing learning for plastic surgery case preparation.
Abstract
[BACKGROUND] Large language models (LLMs), such as ChatGPT-4 and Gemini, represent a new frontier in surgical education by offering dynamic, interactive learning experiences. Despite their potential, concerns about the accuracy, depth of knowledge, and bias in LLM responses persist. This study evaluates the effectiveness of LLMs in aiding surgical trainees in plastic and reconstructive surgery through comparison with traditional case-preparation textbooks.
[METHODS] Six representative cases from key areas of plastic and reconstructive surgery (craniofacial, hand, microsurgery, burn, gender-affirming, and aesthetic) were selected. Four types of questions were developed for each case, covering clinical anatomy, indications, contraindications, and complications. Responses from LLMs (ChatGPT-4 and Gemini) and textbooks were compared using surveys distributed to medical students, research fellows, residents, and attending surgeons. Reviewers rated each response on accuracy, thoroughness, usefulness for case preparation, brevity, and overall quality using a 5-point Likert scale. Statistical analyses, including ANOVA and unpaired t-tests, were conducted to assess differences between LLM and textbook responses.
[RESULTS] A total of 90 surveys were completed. LLM responses were rated as more thorough (p < 0.001) but less concise (p < 0.001) than textbook responses. Textbooks were rated superior for questions on contraindications (p = 0.027) and complications (p = 0.014). ChatGPT was perceived as more accurate (p = 0.018), thorough (p = 0.002), and useful (p = 0.026) than Gemini. Gemini was also rated lower in overall quality than ChatGPT (p = 0.30) and inferior to textbook answers for burn-related questions (p = 0.017) and anatomical questions (p = 0.013).
[CONCLUSION] While LLMs show promise in generating thorough educational content, they require improvement in conciseness, accuracy, and utility for practical case preparation. ChatGPT generally outperforms Gemini, indicating variability in LLM capabilities. Further development should focus on enhancing accuracy and consistency to establish LLMs as reliable tools in medical education and practice.
MeSH Terms
Humans; Textbooks as Topic; Surgery, Plastic; Language; Female; Male; Clinical Competence; Surveys and Questionnaires; Students, Medical; Large Language Models
Related articles
- Endodontic implications of hypercementosis: A systematic review of anatomical challenges and therapeutic strategies.
- Breast plastic surgery in perimenopausal and postmenopausal women: Menopause-informed counseling on screening, safety, and long-term breast health.
- Application of the SCIA-Pure Skin Perforator Flap in Bilateral Upper Eyelid Reconstruction: A Case Report and Review of the Literature.
- Free flap reconstruction of a cast-related pressure ulcer in a pediatric patient with spinal muscular atrophy.
- Characterization of Trimmed Nerve Morphology Using High-Resolution Imaging: Comparison of Three Surgical Instruments.