
LPA-Tuning CLIP: An Improved CLIP-Based Classification Model for Intestinal Polyps.

Sensors (Basel, Switzerland), 2026, Vol. 26(6)
Authors: Wang Z, Gao J, Ping W, Qin J, Ji C


Cite this paper

APA: Wang Z, Gao J, et al. (2026). LPA-Tuning CLIP: An Improved CLIP-Based Classification Model for Intestinal Polyps. Sensors (Basel, Switzerland), 26(6). https://doi.org/10.3390/s26061764
MLA: Wang Z, et al. "LPA-Tuning CLIP: An Improved CLIP-Based Classification Model for Intestinal Polyps." Sensors (Basel, Switzerland), vol. 26, no. 6, 2026.
PMID 41901934
DOI 10.3390/s26061764

Abstract

[BACKGROUND AND OBJECTIVE] Accurate classification of intestinal polyps is crucial for preventing colorectal cancer but is hindered by visual similarity among subtypes and endoscopic variability. While deep learning aids in diagnosis, single-modal models face efficiency-accuracy trade-offs and ignore pathological semantics. We propose a multimodal framework that integrates endoscopic images with structured pathological descriptions to bridge this gap.

[METHODS] We propose LPA-Tuning CLIP, which incorporates three key innovations: replacing CLIP's instance-level contrastive loss with cross-modal projection matching (CMPM) with ID loss to explicitly optimize intraclass compactness and interclass separation through label-aware image-text similarity matrices; introducing structured clinical semantic templates that encode WHO diagnostic criteria into hierarchical text prompts for consistent pathology annotations; and developing medical-aware augmentation that preserves lesion features while reducing domain shifts.
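The label-aware cross-modal projection matching (CMPM) objective described above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions and follows the standard CMPM formulation (projecting image embeddings onto normalized text embeddings and matching the resulting softmax distribution to a label-based target via KL divergence); the paper's actual implementation details, hyperparameters, and the accompanying ID-loss term are not reproduced here.

```python
import numpy as np

def cmpm_loss(img_emb, txt_emb, labels, eps=1e-8):
    """Cross-modal projection matching loss (image-to-text direction).

    img_emb: (N, D) image embeddings
    txt_emb: (N, D) text embeddings
    labels:  (N,) class labels used to build the label-aware target matrix
    """
    # Project each image embedding onto the normalized text embeddings.
    t_norm = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    scores = img_emb @ t_norm.T

    # Predicted matching distribution over texts (row-wise softmax).
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p = p / p.sum(axis=1, keepdims=True)

    # Label-aware target: pairs sharing a class label are positive matches,
    # normalized so each row is a probability distribution.
    y = (labels[:, None] == labels[None, :]).astype(float)
    q = y / y.sum(axis=1, keepdims=True)

    # KL divergence between predicted and target match distributions.
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)
    return float(np.mean(kl))
```

Because the target distribution is built from class labels rather than instance identity, all image-text pairs of the same polyp subtype are pulled together, which is the intraclass-compactness effect the method relies on. A symmetric text-to-image term and the ID loss would be added on top in a full implementation.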

[RESULTS] The experimental results demonstrate that our proposed method achieves an accuracy of 85.8% and an F1 score of 0.862 on the internal test set, establishing a new state-of-the-art performance for intestinal polyp classification.

[CONCLUSIONS] This study proposes a multimodal polyp classification paradigm that achieves 85.8% accuracy on three-subtype classification via endoscopic image-pathology text joint representation learning, outperforming unimodal baselines by 8.7% and a multimodal baseline by 4.3%.
