
ChatGPT Is Equivalent to First-Year Plastic Surgery Residents: Evaluation of ChatGPT on the Plastic Surgery In-Service Examination.

Aesthetic Surgery Journal, 2023, Vol. 43(12), pp. NP1085–NP1089 · cited 1 · 🌐 cited 157 · 🔓 OA
📈 Citations by year (2022–2026) · total 157
OpenAlex topics · Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging · Surgical Simulation and Training

Humar P, Asaad M, Bengur FB, Nguyen V

📝 One-line summary for patients

【Study aim】 To evaluate how large language models such as ChatGPT perform on the Plastic Surgery In-Service Examination and, by comparing the results against the national performance of plastic surgery residents, to assess AI's ability to apply medical knowledge in education and clinical settings.

Cite this paper

APA Humar, P., Asaad, M., Bengur, F. B., & Nguyen, V. (2023). ChatGPT is equivalent to first-year plastic surgery residents: Evaluation of ChatGPT on the Plastic Surgery In-Service Examination. Aesthetic Surgery Journal, 43(12), NP1085–NP1089. https://doi.org/10.1093/asj/sjad130
MLA Humar, Pooja, et al. "ChatGPT Is Equivalent to First-Year Plastic Surgery Residents: Evaluation of ChatGPT on the Plastic Surgery In-Service Examination." Aesthetic Surgery Journal, vol. 43, no. 12, 2023, pp. NP1085–NP1089.
PMID 37140001
DOI 10.1093/asj/sjad130
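The page's BibTeX download itself was not captured; a plausible entry assembled from the metadata shown above (the citation key is arbitrary, and author names beyond the first two are expanded only to the initials given) would be:

```bibtex
@article{humar2023chatgpt,
  author  = {Humar, Pooja and Asaad, Malke and Bengur, F. B. and Nguyen, V.},
  title   = {ChatGPT Is Equivalent to First-Year Plastic Surgery Residents:
             Evaluation of ChatGPT on the Plastic Surgery In-Service Examination},
  journal = {Aesthetic Surgery Journal},
  year    = {2023},
  volume  = {43},
  number  = {12},
  pages   = {NP1085--NP1089},
  doi     = {10.1093/asj/sjad130}
}
```

The PMID (37140001) can be added via a nonstandard `pmid` field if your bibliography style supports it.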

Abstract

[BACKGROUND] ChatGPT is an artificial intelligence language model developed and released by OpenAI (San Francisco, CA) in late 2022.

[OBJECTIVES] The aim of this study was to evaluate the performance of ChatGPT on the Plastic Surgery In-Service Examination and to compare it to residents' performance nationally.

[METHODS] The Plastic Surgery In-Service Examinations from 2018 to 2022 were used as a question source. For each question, the stem and all multiple-choice options were imported into ChatGPT. The 2022 examination was used to compare the performance of ChatGPT to plastic surgery residents nationally.

[RESULTS] In total, 1129 questions were included in the final analysis and ChatGPT answered 630 (55.8%) of these correctly. ChatGPT scored the highest on the 2021 exam (60.1%) and on the comprehensive section (58.7%). There were no significant differences regarding questions answered correctly among exam years or among the different exam sections. ChatGPT answered 57% of questions correctly on the 2022 exam. When compared to the performance of plastic surgery residents in 2022, ChatGPT would rank in the 49th percentile for first-year integrated plastic surgery residents, 13th percentile for second-year residents, 5th percentile for third- and fourth-year residents, and 0th percentile for fifth- and sixth-year residents.
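As a quick sanity check (not part of the paper), the reported 55.8% accuracy follows directly from 630 correct answers out of 1129 questions:

```python
# Verify the reported overall accuracy: 630 correct of 1129 questions
correct, total = 630, 1129
accuracy = round(correct / total * 100, 1)
print(accuracy)  # 55.8
```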

[CONCLUSIONS] ChatGPT performs at the level of a first-year resident on the Plastic Surgery In-Service Examination. However, it performed poorly when compared with residents in more advanced years of training. Although ChatGPT has many undeniable benefits and potential uses in the field of healthcare and medical education, it will require additional research to assess its efficacy.

Extracted Medical Entities (NER)

Type    | English expression    | Korean / gloss | UMLS CUI | Source   | Count
Anatomy | stem                  |                |          | scispacy | 1
Drug    | third-                |                |          | scispacy | 1
Drug    | ChatGPT               |                |          | scispacy | 1
Drug    | [BACKGROUND] ChatGPT  |                |          | scispacy | 1
Drug    | 55.8                  |                |          | scispacy | 1
Drug    | [CONCLUSIONS] ChatGPT |                |          | scispacy | 1
Other   | ChatGPT               |                |          | scispacy | 1

MeSH Terms

Humans; Artificial Intelligence; Surgery, Plastic; Physical Examination

📑 Citation Relations

Highly cited papers by the same first author (1)