
A segmentation method with a large vision model for magnetic resonance imaging-guided adaptive radiotherapy.

Medical Physics, 2026, Vol. 53(2), p. e70257

Men K, Yang B, Liu Y, Tang Y, Lu N, Dai J


PMID 41615083
DOI 10.1002/mp.70257

Abstract

[BACKGROUND] Segmentation is the most effort-consuming step in magnetic resonance imaging-guided adaptive radiotherapy (MRIgART). Although the Segment Anything Model (SAM) exhibits impressive capabilities, applying it to medical imaging requires clicks, bounding boxes, or mask prompts on each target image, which still demands complex human interaction.

[PURPOSE] This study introduces SAM-ART, a large vision model that integrates personalized information to enhance the segmentation accuracy of MRIgART.

[METHODS] This study utilized planning computed tomography (pCT), approved contours, and daily MRI (dMRI) from 38 patients with prostate cancer and 10 patients with rectal cancer. SAM-ART comprises an image encoder, a prompt encoder, and a mask decoder. It propagates contours from pCT to dMRI using deformable image registration (DIR) and employs them as mask prompts, providing patient-specific information; box prompts are added in slices prone to false negative (FN) predictions. A 5-fold cross-validation was then conducted, comparing SAM-ART against DIR, traditional deep learning (tDL), and SAM-ART with other manual prompt modes (point or box).
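As a rough illustration of the prompting scheme described above (not the authors' implementation), a DIR-propagated 2D mask can be reduced to the kind of (x_min, y_min, x_max, y_max) box prompt that SAM-style models accept; the `mask_to_box` helper and its `pad` parameter are hypothetical:

```python
import numpy as np

def mask_to_box(mask: np.ndarray, pad: int = 0):
    """Derive an (x_min, y_min, x_max, y_max) box prompt from a 2D binary mask.

    Returns None for an empty slice, where no box prompt can be formed
    (e.g., a slice the target structure does not reach).
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(xs.min()) - pad, int(ys.min()) - pad,
            int(xs.max()) + pad, int(ys.max()) + pad)

# Example: a propagated contour filled into a 2D mask
mask = np.zeros((256, 256), dtype=bool)
mask[100:150, 80:140] = True
print(mask_to_box(mask, pad=2))  # (78, 98, 141, 151)
```

In a pipeline like the one described, such a box could supplement the mask prompt on FN-prone slices while leaving the remaining slices fully automatic.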

[RESULTS] The proposed SAM-ART achieved a mean Dice similarity coefficient (DSC) of 0.934 ± 0.023 for the regions of interest, surpassing DIR (0.873 ± 0.063) and tDL (0.887 ± 0.056). The proposed mask/box prompts also outperformed the other prompt modes (point: 0.910 ± 0.027; box: 0.921 ± 0.025) and effectively mitigated FN predictions with minimal manual intervention. The ratio of acceptable slices (criteria: DSC ≥ 0.85, 95th-percentile Hausdorff distance ≤ 5 mm, and mean distance to agreement ≤ 1.5 mm) was 89.38%; that is, segmentations on about 90% of slices required no manual modification.
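For concreteness, the three acceptance criteria can be sketched on 2D boolean masks. This is a generic re-implementation of the standard metrics (Dice, 95th-percentile Hausdorff distance, mean distance to agreement), not the authors' evaluation code; the 4-neighbor boundary extraction and brute-force nearest-surface search are one simple convention among several:

```python
import numpy as np

def surface(mask):
    """Boundary pixels of a 2D boolean mask (pixels with a 4-neighbor outside)."""
    p = np.pad(mask, 1)
    core = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return mask & ~core

def surface_distances(a, b, spacing=1.0):
    """Distance from each boundary pixel of `a` to the nearest boundary pixel of `b`."""
    pa = np.argwhere(surface(a)).astype(float) * spacing
    pb = np.argwhere(surface(b)).astype(float) * spacing
    # Brute-force pairwise distances; fine for slice-sized masks.
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1)

def dice(a, b):
    """Dice similarity coefficient of two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def slice_acceptable(pred, ref, spacing=1.0,
                     dice_min=0.85, hd95_max=5.0, mda_max=1.5):
    """Per-slice acceptance: Dice >= 0.85, HD95 <= 5 mm, MDA <= 1.5 mm."""
    d = np.concatenate([surface_distances(pred, ref, spacing),
                        surface_distances(ref, pred, spacing)])
    return (dice(pred, ref) >= dice_min
            and np.percentile(d, 95) <= hd95_max
            and d.mean() <= mda_max)
```

A prediction identical to the reference passes trivially; a prediction shifted by one pixel still passes (Dice ≈ 0.95, all surface distances ≤ 1 px), while a disjoint prediction fails on the Dice criterion.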

[CONCLUSIONS] This study proposed a novel method that integrates personalized information and manual prompts into a SAM-based segmentation model. It outperformed the baseline methods, with only a few contours needing to be revised for clinical use.

MeSH Terms

Radiotherapy, Image-Guided; Humans; Magnetic Resonance Imaging; Image Processing, Computer-Assisted; Prostatic Neoplasms; Male; Radiotherapy Planning, Computer-Assisted; Rectal Neoplasms