Human-in-the-loop validation of an artificial intelligence-driven real-time prostatic capsule recognition model for nerve-sparing robot-assisted radical prostatectomy.
Automatic PICO extraction (heuristic, confidence 2/4)
P · Population (target patients / population)
Not extracted
I · Intervention (intervention / procedure)
AI-driven real-time prostatic capsule recognition (convolutional neural network-based fluorescence-like navigation, FLN) during NS-RARP
C · Comparison (control / comparison)
Not extracted
O · Outcome (results / conclusions)
[CONCLUSIONS] AI-driven real-time prostatic capsule navigation demonstrates clinical feasibility for pilot trials despite moderate technical performance. Surgeons exhibited well-calibrated trust across varying conditions, prioritizing anatomically correct positioning over pixel-perfect segmentation.
- p-value: 0.048
APA
Chen, W., Fukuda, S., et al. (2026). Human-in-the-loop validation of an artificial intelligence-driven real-time prostatic capsule recognition model for nerve-sparing robot-assisted radical prostatectomy. Surgical Endoscopy. https://doi.org/10.1007/s00464-026-12681-0
MLA
Chen, W., et al. "Human-in-the-loop Validation of an Artificial Intelligence-driven Real-time Prostatic Capsule Recognition Model for Nerve-sparing Robot-assisted Radical Prostatectomy." Surgical Endoscopy, 2026.
PMID
41787080
Abstract
[BACKGROUND] Nerve-sparing robot-assisted radical prostatectomy (NS-RARP) requires precise prostatic capsule identification to balance oncological control with functional preservation. Artificial intelligence (AI)-based intraoperative navigation could provide real-time anatomical guidance, but clinical feasibility remains unvalidated.
[METHODS] We conducted a human-in-the-loop feasibility study evaluating a convolutional neural network system for real-time fluorescence-like navigation (FLN) prostatic capsule identification. A total of 285 frames from 12 NS-RARP procedures were used to train the AI model. Validation and clinical acceptance assessment were performed on 23 representative frames from 3 NS-RARP procedures across four surgical scenarios: initial lateral dissection (Phase I), visual interference from bleeding/smoke (Phase II), limited capsule exposure (Phase III), and distant field view (Phase IV). Six experienced urologic surgeons independently evaluated AI output using structured 10-point scales across six clinical domains. Quantitative assessment employed standard segmentation metrics comparing AI output to surgeon-annotated ground truth.
[RESULTS] The AI system achieved consistently high clinical acceptance (surgeon ratings 8.21-9.01/10) despite moderate segmentation performance (Dice 0.533 ± 0.098). Boundary positioning accuracy received the highest rating, significantly exceeding other domains (8.63 ± 0.61). Performance varied across surgical scenarios, with mean Dice scores of 0.622 in Phase I, 0.489 in Phase II, 0.616 in Phase III, and 0.390 in Phase IV, yet surgeon reliability ratings remained high across all phases (8.39-9.10/10). A weak positive correlation between Dice scores and clinical ratings was observed (r = 0.42, p = 0.048). The system demonstrated high recall of 79.7% with moderate precision of 40.9%, a pattern preferred by surgeons for safety.
[CONCLUSIONS] AI-driven real-time prostatic capsule navigation demonstrates clinical feasibility for pilot trials despite moderate technical performance. Surgeons exhibited well-calibrated trust across varying conditions, prioritizing anatomically correct positioning over pixel-perfect segmentation.
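The abstract reports Dice, recall, and precision as its "standard segmentation metrics" but does not define them. A minimal sketch of the usual definitions on binary masks (the function name and toy masks below are illustrative, not taken from the paper):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Dice, precision, and recall for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()  # true-positive pixels
    return {
        "dice": 2 * tp / (pred.sum() + truth.sum()),  # overlap between prediction and truth
        "precision": tp / pred.sum(),                 # fraction of predicted capsule that is correct
        "recall": tp / truth.sum(),                   # fraction of true capsule that is detected
    }

# Toy 1-D "masks": the prediction over-covers the truth, giving high recall
# with lower precision — the safety-oriented pattern the abstract describes.
ai_mask = np.array([1, 1, 1, 1, 1, 0, 0, 0])
gt_mask = np.array([0, 1, 1, 1, 0, 0, 0, 0])
m = segmentation_metrics(ai_mask, gt_mask)
```

An over-segmenting model of this kind rarely misses true capsule tissue (high recall) at the cost of flagging some non-capsule pixels (lower precision), which is the trade-off the surgeons reportedly preferred.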
Highly cited papers by the same first author (5)
- FcγRIIb deficiency inhibits tumor development by attenuating the immunosuppressive phenotype of MDSCs.
- Effectiveness of KAP-based nursing program in managing digestive symptoms in colorectal cancer patients undergoing chemotherapy: A retrospective controlled study.
- Oligometastatic Prostate and Bladder Cancer: An Integrative Narrative Review.
- Real world deployment of a pancreatic cancer risk model: impact of refitting, imputation, and computational burden.
- IRF1 suppresses gastric tumorigenesis via dual PI3K/AKT-ERK pathway modulation and functional antagonism of oncogenic MX2.