Using ChatGPT to write a literature review on autologous fat grafting.
TL;DR
The strengths and weaknesses of ChatGPT in writing a plastic surgery literature review are defined and proper methodologies for optimizing GPT-generated output are described.
OpenAlex Topics
Artificial Intelligence in Healthcare and Education
Body Contouring and Surgery
Mesenchymal Stem Cell Research
[PURPOSE] This study aims to identify the strengths and limitations of ChatGPT, a large language model, when used as a tool for writing plastic surgery literature reviews, and to present appropriate methodologies for optimizing the accuracy of its generated output.
APA
Manley, K., Salingaros, S., et al. (2025). Using ChatGPT to write a literature review on autologous fat grafting. Journal of Plastic, Reconstructive & Aesthetic Surgery: JPRAS, 105, 292-304. https://doi.org/10.1016/j.bjps.2025.04.015
MLA
Manley, Kate, et al. "Using ChatGPT to write a literature review on autologous fat grafting." Journal of Plastic, Reconstructive & Aesthetic Surgery: JPRAS, vol. 105, 2025, pp. 292-304.
PMID
40339455
Abstract
[BACKGROUND] ChatGPT is a large language model (LLM) that has been proposed as a scientific writing tool, though its ethical use remains a highly debated topic within the academic community. This article defines the strengths and weaknesses of ChatGPT in writing a plastic surgery literature review and describes proper methodologies for optimizing GPT-generated output.
[METHODS] ChatGPT-4o was prompted to brainstorm topics for a literature review on plastic surgery. Autologous fat grafting was chosen and ChatGPT generated each section of the literature review with citations, which were subsequently evaluated for accuracy. The ability of medical professionals to discriminate between a ChatGPT-generated and published fat grafting abstract was assessed.
[RESULTS] ChatGPT successfully conceived and performed a literature review on autologous fat grafting. The model performed well in outline creation, article summarization, and editing content. It generated a professional review of fat grafting, though its claims were generalized, not completely factual, and lacked accurate citations. ChatGPT provided 21 citations, 5 of which correctly referenced a real article. Eight contained errors in their publication details, such as publication dates and author lists. The remaining 8 were unable to be found in PubMed (hallucinated). Medical professionals were unable to distinguish ChatGPT-generated material from a published abstract.
[CONCLUSIONS] With appropriate vigilance, ChatGPT may be cautiously used as a writing assistant throughout the literature review process; however, authors must verify all scientific claims and citations. ChatGPT's greatest limitation remains its tendency to hallucinate, which undermines the reliability of a generated manuscript and perpetuates inaccurate information.
Extracted Medical Entities (NER)
| Type | English Term | Korean / Gloss | UMLS CUI | Source | Count |
|---|---|---|---|---|---|
| Anatomy | fat | | | scispacy | 1 |
| Drug | ChatGPT | | | scispacy | 1 |
| Drug | [BACKGROUND] ChatGPT | | | scispacy | 1 |
MeSH Terms
Humans; Adipose Tissue; Generative Artificial Intelligence; Medical Writing; Plastic Surgery Procedures; Review Literature as Topic; Surgery, Plastic; Transplantation, Autologous