
GFANet: Global Feature Attention Network for Polyp Segmentation

Journal of imaging informatics in medicine

Source

Lin L, Huang W, Ouyang N


Cite this paper

APA Lin L, Huang W, Ouyang N (2025). GFANet: Global Feature Attention Network for Polyp Segmentation. Journal of imaging informatics in medicine. https://doi.org/10.1007/s10278-025-01734-w
MLA Lin L, et al. "GFANet: Global Feature Attention Network for Polyp Segmentation." Journal of imaging informatics in medicine, 2025.
PMID 41214244

Abstract

Colorectal polyps are early indicators of colorectal cancer, and their accurate segmentation holds significant clinical value for assisting diagnosis and treatment. However, the automatic segmentation of polyps remains a considerable challenge owing to high variability in polyp size, shape, and color, along with the indistinct boundaries between polyps and surrounding mucosal tissues. Although existing deep learning-based methods have shown encouraging results in addressing these problems, they still encounter the following challenges: (1) the lack of incorporation of geometric orientation information restricts the model's understanding of structural features; (2) the models show low sensitivity to small polyps, often resulting in missed detections; and (3) insufficient multi-scale information fusion affects both the completeness and accuracy of segmentation. To address these challenges, we introduce a global feature attention network (GFANet), which integrates three innovative modules: the global feature direction encoder (GFDE), the feature attention module (FAM), and the multi-scale information aggregation (MIA) module. Specifically, GFDE captures global context from both vertical and horizontal directions, greatly enhancing the model's ability to localize polyps with blurred boundaries or camouflage-like appearance. FAM enhances feature representation within polyp regions while suppressing background noise. MIA progressively aggregates features across multiple scales and semantic levels, boosting the segmentation accuracy for polyps of various sizes. In our experiments, we conducted a systematic comparison between GFANet and ten state-of-the-art polyp segmentation networks. The results demonstrate that GFANet achieved superior performance across five publicly available datasets.
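As a rough illustration of the directional global-context idea behind GFDE (the abstract gives no code, so all function names, shapes, and the multiplicative reweighting here are assumptions, not the paper's actual design), a minimal sketch can pool a 2-D feature map separately along its vertical and horizontal axes and use the pooled vectors to modulate each position:

```python
def directional_context(feat):
    """Pool a 2-D feature map (list of H rows, each W floats) along both axes.

    Returns (row_ctx, col_ctx): row_ctx[i] is the mean over row i
    (horizontal direction), col_ctx[j] the mean over column j (vertical).
    """
    h, w = len(feat), len(feat[0])
    row_ctx = [sum(row) / w for row in feat]
    col_ctx = [sum(feat[i][j] for i in range(h)) / h for j in range(w)]
    return row_ctx, col_ctx

def reweight(feat, row_ctx, col_ctx):
    """Toy attention: scale each position by its row and column context."""
    return [[feat[i][j] * row_ctx[i] * col_ctx[j]
             for j in range(len(feat[0]))] for i in range(len(feat))]

fmap = [[1.0, 2.0],
        [3.0, 4.0]]
row_ctx, col_ctx = directional_context(fmap)
print(row_ctx)  # [1.5, 3.5]
print(col_ctx)  # [2.0, 3.0]
```

In a real network these pooled vectors would typically pass through learned transformations before reweighting; the sketch only shows how separate vertical and horizontal pooling exposes each axis's global context.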
On the CVC-300 dataset, the model achieved a mean Dice coefficient (mDice) of 90.2% and a mean intersection over union (mIoU) of 83.5%, the best results among all competing approaches. Furthermore, on the ETIS-LaribPolypDB dataset, GFANet outperformed the classical PraNet model by as much as 17.2% in mDice, highlighting its strong cross-dataset generalization capability.
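For reference, the two reported metrics are standard overlap measures between a predicted mask and the ground truth. A minimal sketch of their per-image computation on flat binary masks (function and variable names are illustrative, not from the paper):

```python
def dice_iou(pred, gt):
    """Dice coefficient and IoU for two flat binary masks (lists of 0/1).

    Dice = 2|P∩G| / (|P| + |G|); IoU = |P∩G| / |P∪G|.
    Both conventionally default to 1.0 when pred and gt are both empty.
    """
    inter = sum(p and g for p, g in zip(pred, gt))
    p_sum, g_sum = sum(pred), sum(gt)
    union = p_sum + g_sum - inter
    dice = 2 * inter / (p_sum + g_sum) if (p_sum + g_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred = [1, 1, 0, 1]
gt   = [1, 0, 0, 1]
d, i = dice_iou(pred, gt)
print(round(d, 3), round(i, 3))  # 0.8 0.667
```

The dataset-level mDice and mIoU quoted above are simply these values averaged over all test images.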

