Finding Holes: Pathologist-Level Performance Using AI for Cribriform Morphology Detection in Prostate Cancer.
OpenAlex topics:
AI in cancer detection
Digital Imaging for Blood Diseases
Medical Image Segmentation Techniques
APA
Szolnoky, K., Blilie, A., et al. (2026). Finding Holes: Pathologist-Level Performance Using AI for Cribriform Morphology Detection in Prostate Cancer. European Urology Open Science, 87, 31-39. https://doi.org/10.1016/j.euros.2026.03.016
MLA
Szolnoky, Kelvin, et al. "Finding Holes: Pathologist-Level Performance Using AI for Cribriform Morphology Detection in Prostate Cancer." European Urology Open Science, vol. 87, 2026, pp. 31-39.
PMID
42004835
Abstract
[BACKGROUND] Cribriform morphology in prostate cancer is a histological feature that indicates poor prognosis and contraindicates active surveillance. However, it remains underreported and subject to significant interobserver variability among pathologists.
[OBJECTIVE] We aimed to develop and validate an artificial intelligence (AI)-based system to improve cribriform pattern detection.
[DESIGN SETTING AND PARTICIPANTS] We created a deep learning model using an EfficientNetV2-S encoder with multiple instance learning for end-to-end whole-slide classification. The model was trained on 640 digitised prostate core needle biopsies from 430 patients, collected across three cohorts. It was validated internally (261 slides from 171 patients) and externally (266 slides, 104 patients from three independent cohorts). Internal validation cohorts included laboratories or scanners from the development set, while external cohorts used completely independent instruments and laboratories. Annotations were provided by three expert uropathologists with known high concordance.
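The multiple instance learning setup described above can be illustrated with a minimal attention-pooling sketch: a whole slide is treated as a bag of tile embeddings, per-tile attention weights are learned, and the weighted pooled embedding is classified at the slide level. This is a generic illustration with random placeholder weights, not the authors' trained EfficientNetV2-S model; all variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_score(tile_embeddings, w_attn, w_clf):
    """Attention-based MIL: weight tiles, pool, classify the slide.

    tile_embeddings: (n_tiles, d) array of features from a CNN
    encoder (EfficientNetV2-S in the paper; here just random
    stand-in vectors, since the real encoder is not available).
    """
    # Per-tile attention logits normalised to weights summing to 1
    attn = softmax(tile_embeddings @ w_attn)       # shape (n_tiles,)
    # Attention-weighted pooled slide-level embedding
    slide_emb = attn @ tile_embeddings             # shape (d,)
    # Linear head + sigmoid -> probability the slide is cribriform
    logit = slide_emb @ w_clf
    return 1.0 / (1.0 + np.exp(-logit)), attn

d = 8
tiles = rng.normal(size=(50, d))   # stand-in for 50 tile embeddings
w_attn = rng.normal(size=d)        # placeholder attention weights
w_clf = rng.normal(size=d)         # placeholder classifier weights
prob, attn = attention_mil_score(tiles, w_attn, w_clf)
```

Because the attention weights are interpretable per tile, the same mechanism that produces the slide-level prediction can also highlight which regions drove it.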
[OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS] We assessed model performance using the area under the receiver operating characteristic curve (AUC) and Cohen's κ with 95% confidence intervals calculated through bootstrapping. Additionally, we conducted an inter-rater analysis and compared the model's performance against nine expert uropathologists on 88 slides from the internal validation cohort.
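The evaluation metrics above (AUC and Cohen's κ with bootstrapped 95% confidence intervals) can be sketched with simulated slide-level data; the functions and toy labels below are illustrative, not the paper's actual evaluation code or data.

```python
import numpy as np

def auc(y_true, scores):
    """Probability a random positive scores above a random negative
    (equivalent to the area under the ROC curve)."""
    pos = scores[y_true == 1][:, None]
    neg = scores[y_true == 0][None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters."""
    po = (a == b).mean()                        # observed agreement
    pe = (a.mean() * b.mean()
          + (1 - a.mean()) * (1 - b.mean()))   # chance agreement
    return (po - pe) / (1 - pe)

def bootstrap_ci(stat, *arrays, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample cases with replacement."""
    rng = np.random.default_rng(seed)
    n = len(arrays[0])
    vals = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        vals.append(stat(*(a[idx] for a in arrays)))
    lo, hi = np.quantile(vals, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy example: simulated slide labels, model scores, and predictions
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
scores = y * 0.6 + rng.normal(0, 0.4, 200)     # informative scores
preds = (scores > 0.3).astype(int)

point_auc = auc(y, scores)
lo_a, hi_a = bootstrap_ci(auc, y, scores)
point_k = cohens_kappa(y, preds)
lo_k, hi_k = bootstrap_ci(cohens_kappa, y, preds)
```

Resampling whole cases (rather than assuming a parametric distribution) is what makes the bootstrap intervals valid for both metrics at once.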
[RESULTS AND LIMITATIONS] The model showed strong internal validation performance (AUC: 0.97, 95% CI: 0.95, 0.99; Cohen's κ: 0.81, 95% CI: 0.72, 0.89) and robust external validation (AUC: 0.90, 95% CI: 0.86, 0.93; Cohen's κ: 0.55, 95% CI: 0.45, 0.64). In our inter-rater analysis, the model achieved the highest average agreement (Cohen's κ: 0.66, 95% CI: 0.57, 0.74), outperforming all nine pathologists whose Cohen's κ ranged from 0.35 to 0.62. Limitations include the retrospective design and that the cross-scanner reproducibility and inter-rater analyses were conducted exclusively on internal validation data, potentially overestimating performance in these analyses.
[CONCLUSIONS] Our AI model demonstrates pathologist-level performance for cribriform morphology detection in prostate cancer. This approach could enhance diagnostic reliability, standardise reporting, and improve treatment decisions for patients with prostate cancer.