Deep learning for fluorescence confocal microscopy image interpretation in radical prostatectomy.
[OBJECTIVE] To develop and validate a deep learning model for interpretation of fluorescence confocal microscopy (FCM) images for intraoperative surgical margin assessment during radical prostatectomy.
- Sample size (n) 57
- Sensitivity 87.5%
- Specificity 97.9%
APA
Fang L, Mayor N, et al. (2026). Deep learning for fluorescence confocal microscopy image interpretation in radical prostatectomy. BJU International. https://doi.org/10.1111/bju.70273
MLA
Fang L, et al. "Deep learning for fluorescence confocal microscopy image interpretation in radical prostatectomy." BJU International, 2026.
PMID
42001901
Abstract
[OBJECTIVE] To develop and validate a deep learning model for interpretation of fluorescence confocal microscopy (FCM) images for intraoperative surgical margin assessment during radical prostatectomy (RP).
[PATIENTS AND METHODS] Fluorescence confocal microscopy images from the multicentre Imperial Prostate 8-Fluorescence Confocal Microscopy for Rapid Evaluation of Surgical Cancer Excision (IP8-FLUORESCE) study were used to train and test a convolutional neural network model. The modified model incorporated focal loss with label smoothing, dropout regularisation, adaptive class weighting, and weighted sampling to address pronounced class imbalance. Images were pre-processed by extracting regions of interest at a defined digital zoom level and normalised to 896 × 896 pixels. The reference standard was surgical margin status on conventional histopathology assessed by an expert histopathologist. Diagnostic performance was assessed using sensitivity, specificity, positive and negative predictive value, area under the receiver-operating-characteristic curve (AUC), and calibration via Brier scores. External validation was conducted using an independent dataset from the LaserSAFE feasibility trial. Model explainability was evaluated using Gradient-weighted Class Activation Mapping (Grad-CAM) and a custom graphical user interface (GUI) was developed to support real-time deployment.
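The abstract does not publish the training code, but the imbalance-handling techniques it names map onto standard PyTorch components. The sketch below is a minimal, hypothetical illustration of focal loss with label smoothing, class weighting, weighted sampling, and resizing to 896 × 896; the hyperparameter values (gamma, smoothing factor) and variable names are assumptions, not the authors' settings.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import WeightedRandomSampler
from torchvision import transforms

class FocalLossWithSmoothing(torch.nn.Module):
    """Binary focal loss with label smoothing; gamma/smoothing values are assumed."""
    def __init__(self, gamma=2.0, smoothing=0.1, pos_weight=None):
        super().__init__()
        self.gamma = gamma            # focusing parameter: down-weights easy examples
        self.smoothing = smoothing    # pulls hard 0/1 targets towards 0.5
        self.pos_weight = pos_weight  # class weight, e.g. n_benign / n_tumour

    def forward(self, logits, targets):
        targets = targets * (1.0 - self.smoothing) + 0.5 * self.smoothing
        bce = F.binary_cross_entropy_with_logits(
            logits, targets, pos_weight=self.pos_weight, reduction="none")
        p = torch.sigmoid(logits)
        p_t = p * targets + (1.0 - p) * (1.0 - targets)  # prob. of the (smoothed) target
        return ((1.0 - p_t) ** self.gamma * bce).mean()

# Pre-processing: resize extracted regions of interest to 896 x 896 pixels.
preprocess = transforms.Compose([
    transforms.Resize((896, 896)),
    transforms.ToTensor(),
])

# Weighted sampling so the minority tumour class is drawn more often.
# The class mix (238 benign, 37 tumour) is the reported dataset composition.
labels = torch.tensor([0] * 238 + [1] * 37)
class_counts = torch.bincount(labels).float()
sample_weights = (1.0 / class_counts)[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
```

The focal term (1 - p_t)^gamma suppresses the loss contribution of confidently classified benign images, so the rare tumour class dominates the gradient signal during training.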
[RESULTS] A total of 275 images (37 tumour and 238 benign from 24 patients) were included for model development and internal testing. On the internal test set (n = 57), the model achieved a sensitivity of 87.5%, specificity of 97.9%, and an AUC of 0.93, with good calibration (Brier score 0.16). External validation using 46 independent images yielded a sensitivity of 91.3%, specificity of 73.9%, and an AUC of 0.83, with acceptable calibration (Brier score 0.20). Grad-CAM visualisations aligned with malignant structures on FCM images, and the GUI enabled rapid, interpretable predictions in <2 s.
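As a point of reference, the reported discrimination and calibration metrics can be computed from per-image predictions with scikit-learn. The arrays below are placeholders for illustration, not study data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score, brier_score_loss

# y_true: margin status on histopathology (1 = tumour); y_prob: model probabilities.
# Placeholder values only, not the IP8-FLUORESCE data.
y_true = np.array([1, 0, 1, 0, 0, 1, 0, 0])
y_prob = np.array([0.91, 0.12, 0.78, 0.40, 0.05, 0.66, 0.21, 0.03])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)               # true-positive rate
specificity = tn / (tn + fp)               # true-negative rate
ppv = tp / (tp + fp)                       # positive predictive value
npv = tn / (tn + fn)                       # negative predictive value
auc = roc_auc_score(y_true, y_prob)        # area under the ROC curve
brier = brier_score_loss(y_true, y_prob)   # calibration: mean squared probability error
```

The Brier score is the mean squared difference between predicted probabilities and observed outcomes, so lower values indicate better calibration.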
[CONCLUSIONS] We developed and validated a deep learning model for interpretation of FCM images from RP specimens, which demonstrated strong discriminative performance and generalisability for automated FCM interpretation. This approach represents a scalable solution for real-time intraoperative margin assessment and may reduce reliance on intraoperative pathology support.
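The Grad-CAM explainability step described in the methods and results can also be sketched. This is an assumed minimal implementation against a generic PyTorch CNN, not the authors' pipeline; `target_layer` would typically be the final convolutional block.

```python
import torch

def grad_cam(model, image, target_layer):
    """Minimal Grad-CAM sketch: heatmap of where the model 'looks' on an FCM image."""
    activations, gradients = {}, {}

    def fwd_hook(module, inputs, output):
        activations["value"] = output.detach()

    def bwd_hook(module, grad_input, grad_output):
        gradients["value"] = grad_output[0].detach()

    h_fwd = target_layer.register_forward_hook(fwd_hook)
    h_bwd = target_layer.register_full_backward_hook(bwd_hook)

    logit = model(image)          # image: (1, C, 896, 896) tensor, single logit out
    model.zero_grad()
    logit.sum().backward()        # gradients of the tumour logit w.r.t. feature maps

    h_fwd.remove()
    h_bwd.remove()

    # Global-average-pool the gradients to weight each feature map, then ReLU.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = torch.relu((weights * activations["value"]).sum(dim=1))
    return cam / (cam.max() + 1e-8)  # normalised map; upsample and overlay for display
```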
Highly cited papers by the same first author (5)
- A Cascade-Responsive AND-Logic-Activatable Nanoprobe for Intraoperative Fluorescence Imaging of Colorectal Cancer.
- HMMR is a novel prognostic marker and a potential therapeutic target for colon cancer.
- Glutamine's double-edged sword: fueling tumor growth and offering therapeutic hope.
- Ultrasound-Responsive Lipid Nanosonosensitizers with Size Reduction and NO Release: Synergistic Sonodynamic-Chemo-Immunotherapy for Pancreatic Tumors.
- Ultrahigh Sensing Performance: Coresponse and Differentiation of Ethyl Acetate and Its Byproducts in Fe-Ce-O Interfacial Sensor.