
Deep learning for fluorescence confocal microscopy image interpretation in radical prostatectomy.

BJU International, 2026

Fang L, Mayor N, Light A, Silvanto A, Haider A, Ng C, Gopalakrishnan A, Boaz RJ, Tanaka MB, Khoubehi B, Hellawell G, Almeida-Magana R, Mendes L, Dinneen E, Shaw G, Challacombe B, Cathcart P, Connor MJ, Shah TT, Ahmed HU, Fiorentino F, Giannarou S, Winkler M


🔬 Key clinical statistics (auto-extracted from the abstract; verification against the original text is recommended)
  • Internal test set size (n): 57
  • Sensitivity: 87.5%
  • Specificity: 97.9%

Cite this paper

Fang L, Mayor N, et al. (2026). Deep learning for fluorescence confocal microscopy image interpretation in radical prostatectomy. BJU International. https://doi.org/10.1111/bju.70273
PMID 42001901
DOI 10.1111/bju.70273

Abstract

[OBJECTIVE] To develop and validate a deep learning model for interpretation of fluorescence confocal microscopy (FCM) images for intraoperative surgical margin assessment during radical prostatectomy (RP).

[PATIENTS AND METHODS] Fluorescence confocal microscopy images from the multicentre Imperial Prostate 8-Fluorescence Confocal Microscopy for Rapid Evaluation of Surgical Cancer Excision (IP8-FLUORESCE) study were used to train and test a convolutional neural network model. The modified model incorporated focal loss with label smoothing, dropout regularisation, adaptive class weighting, and weighted sampling to address pronounced class imbalance. Images were pre-processed by extracting regions of interest at a defined digital zoom level and normalised to 896 × 896 pixels. The reference standard was surgical margin status on conventional histopathology assessed by an expert histopathologist. Diagnostic performance was assessed using sensitivity, specificity, positive and negative predictive value, area under the receiver-operating-characteristic curve (AUC), and calibration via Brier scores. External validation was conducted using an independent dataset from the LaserSAFE feasibility trial. Model explainability was evaluated using Gradient-weighted Class Activation Mapping (Grad-CAM) and a custom graphical user interface (GUI) was developed to support real-time deployment.
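The abstract names the key training-side techniques (focal loss with label smoothing, adaptive class weighting, and weighted sampling against class imbalance) but does not publish the implementation. Below is a minimal PyTorch sketch of how such a loss and sampler could be assembled; the class name `SmoothedFocalLoss`, the gamma and smoothing values, and the weighting scheme are illustrative assumptions, not the authors' settings.

```python
# Sketch only: focal loss combined with label smoothing and per-class
# weighting, plus a weighted sampler. Hyperparameters are assumed, not
# taken from the paper.
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import WeightedRandomSampler


class SmoothedFocalLoss(nn.Module):
    """Cross-entropy with label smoothing, focal modulation, and class weights."""

    def __init__(self, gamma: float = 2.0, smoothing: float = 0.1,
                 class_weights: Optional[torch.Tensor] = None):
        super().__init__()
        self.gamma = gamma                  # focuses training on hard examples
        self.smoothing = smoothing          # softens the hard 0/1 targets
        self.class_weights = class_weights  # e.g. inverse class frequencies

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        num_classes = logits.size(-1)
        log_probs = F.log_softmax(logits, dim=-1)
        probs = log_probs.exp()
        # Smoothed one-hot targets: (1 - smoothing) on the true class,
        # smoothing / (C - 1) spread across the others.
        with torch.no_grad():
            true_dist = torch.full_like(log_probs, self.smoothing / (num_classes - 1))
            true_dist.scatter_(1, targets.unsqueeze(1), 1.0 - self.smoothing)
        # Focal term (1 - p)^gamma down-weights easy, confident predictions.
        loss = -(true_dist * (1.0 - probs) ** self.gamma * log_probs)
        if self.class_weights is not None:
            loss = loss * self.class_weights.to(logits.device)
        return loss.sum(dim=-1).mean()


def make_sampler(labels: torch.Tensor) -> WeightedRandomSampler:
    """Oversample the minority (tumour) class; `labels` is a LongTensor of 0/1."""
    counts = torch.bincount(labels)
    sample_weights = (1.0 / counts.float())[labels]
    return WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                 replacement=True)
```

In practice the sampler would be passed to the training loader, e.g. `DataLoader(train_set, batch_size=16, sampler=make_sampler(labels))`, so that the rare tumour images (37 of 275 here) appear in every epoch at a useful rate.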

[RESULTS] A total of 275 images (37 tumour and 238 benign from 24 patients) were included for model development and internal testing. On the internal test set (n = 57), the model achieved a sensitivity of 87.5%, specificity of 97.9%, and an AUC of 0.93, with good calibration (Brier score 0.16). External validation using 46 independent images yielded a sensitivity of 91.3%, specificity of 73.9%, and an AUC of 0.83, with acceptable calibration (Brier score 0.20). Grad-CAM visualisations aligned with malignant structures on FCM images, and the GUI enabled rapid, interpretable predictions in <2 s.
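For readers reproducing these figures, the reported metrics follow directly from per-image tumour probabilities and the histopathology reference labels. A minimal scikit-learn sketch, assuming a 0.5 decision threshold (the abstract does not state the model's operating point):

```python
# Sketch only: computes the metric set reported above from predicted
# tumour probabilities. The 0.5 threshold is an assumption.
import numpy as np
from sklearn.metrics import brier_score_loss, confusion_matrix, roc_auc_score


def margin_metrics(y_true: np.ndarray, p_tumour: np.ndarray,
                   threshold: float = 0.5) -> dict:
    y_pred = (p_tumour >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),   # tumour margins correctly flagged
        "specificity": tn / (tn + fp),   # benign margins correctly cleared
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "auc": roc_auc_score(y_true, p_tumour),       # threshold-free discrimination
        "brier": brier_score_loss(y_true, p_tumour),  # calibration; lower is better
    }
```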

[CONCLUSIONS] We developed and validated a deep learning model for automated interpretation of FCM images from RP specimens, which demonstrated strong discriminative performance and generalisability. This approach represents a scalable solution for real-time intraoperative margin assessment and may reduce reliance on intraoperative pathology support.
