Feature Tracking and Segmentation in Real Time via Deep Learning in Vitreoretinal Surgery: A Platform for Artificial Intelligence-Mediated Surgical Guidance.

Ophthalmology Retina. 2023;7(3):236-242

Nespolo RG, Yi D, Cole E, Wang D, Warren A, Leiderman YI


Abstract

[PURPOSE] This study investigated whether a deep-learning neural network can detect and segment surgical instrumentation and relevant tissue boundaries and landmarks within the retina using imaging acquired from a surgical microscope in real time, with the goal of providing image-guided vitreoretinal (VR) microsurgery.

[DESIGN] Retrospective analysis via a prospective, single-center study.

[PARTICIPANTS] One hundred and one patients undergoing VR surgery, inclusive of core vitrectomy, membrane peeling, and endolaser application, in a university-based ophthalmology department between July 1, 2020, and September 1, 2021.

[METHODS] A dataset composed of 606 surgical image frames was annotated by 3 VR surgeons. Annotation consisted of identifying the location and area of the following features, when present in-frame: vitrector, forceps, and endolaser tooltips; optic disc; fovea; retinal tears; retinal detachment; fibrovascular proliferation; endolaser spots; the area where endolaser was applied; and macular hole. An instance segmentation fully convolutional neural network (YOLACT++) was adapted and trained, and fivefold cross-validation was employed to generate accuracy metrics.
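As a rough illustration of the fivefold cross-validation scheme described above, the sketch below splits an index set of 606 annotated frames into five train/test folds. The frame count matches the abstract, but the splitting code and placeholder training call are assumptions, not the authors' implementation.

    # Minimal sketch: fivefold split of the 606 annotated frame indices.
    # Training/evaluation of the adapted YOLACT++ model is not shown;
    # this only illustrates the cross-validation partitioning.
    import numpy as np
    from sklearn.model_selection import KFold

    frame_indices = np.arange(606)  # one index per annotated surgical frame
    kfold = KFold(n_splits=5, shuffle=True, random_state=0)

    for fold, (train_idx, test_idx) in enumerate(kfold.split(frame_indices)):
        print(f"fold {fold}: {len(train_idx)} training frames, {len(test_idx)} test frames")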

[MAIN OUTCOME MEASURES] Area under the precision-recall curve (AUPR) for the detection of elements tracked and segmented in the final test dataset, and frames per second (FPS) to assess the model's suitability for real-time performance.
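For reference, AUPR for a single class can be computed from ground-truth labels and model confidence scores. The snippet below is a generic scikit-learn sketch with made-up example values; it is not the study's evaluation code.

    # Minimal sketch: area under the precision-recall curve for one class.
    # y_true / y_score are illustrative values, not study data.
    from sklearn.metrics import precision_recall_curve, auc

    y_true = [1, 0, 1, 1, 0, 1]                     # ground truth: class present?
    y_score = [0.90, 0.40, 0.80, 0.65, 0.30, 0.95]  # model confidence per detection

    precision, recall, _ = precision_recall_curve(y_true, y_score)
    print(f"AUPR = {auc(recall, precision):.3f}")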

[RESULTS] The platform detected and classified the vitrector tooltip with a mean AUPR of 0.972 ± 0.009. The segmentation of target tissues, such as the optic disc, fovea, and macular hole reached mean AUPR values of 0.928 ± 0.013, 0.844 ± 0.039, and 0.916 ± 0.021, respectively. The postprocessed image was rendered at a full high-definition resolution of 1920 × 1080 pixels at 38.77 ± 1.52 FPS when attached to a surgical visualization system, reaching up to 87.44 ± 3.8 FPS.
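Throughput figures such as those above are typically obtained by timing the end-to-end inference loop over a stream of frames. The sketch below shows one generic way to measure FPS; `model` and `frames` are hypothetical placeholders rather than the authors' pipeline.

    # Minimal sketch: measuring frames per second over an inference loop.
    # `model` and `frames` are hypothetical placeholders.
    import time

    def measure_fps(model, frames):
        start = time.perf_counter()
        for frame in frames:
            model(frame)  # run detection/segmentation on one 1920 x 1080 frame
        elapsed = time.perf_counter() - start
        return len(frames) / elapsed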

[CONCLUSIONS] Neural networks can localize, classify, and segment tissues and instruments during VR procedures in real time. We propose a framework for developing a surgical guidance and assessment platform that may guide surgical decision-making and help in formulating tools for systematic analyses of VR surgery. Potential applications include collision avoidance to prevent unintended instrument-tissue interactions and the extraction of spatial localization and movement of surgical instruments for surgical data science research.

[FINANCIAL DISCLOSURE(S)] Proprietary or commercial disclosure may be found after the references.

Extracted Medical Entities (NER)

Type      | English expression            | Korean / gloss         | UMLS CUI                        | Source   | Count
Procedure | microsurgery                  | 미세수술 (microsurgery) | -                               | dict     | 1
Anatomy   | tissue                        | -                      | -                               | scispacy | 1
Anatomy   | membrane                      | -                      | -                               | scispacy | 1
Anatomy   | endolaser                     | -                      | -                               | scispacy | 1
Anatomy   | optic                         | -                      | -                               | scispacy | 1
Anatomy   | fovea                         | -                      | -                               | scispacy | 1
Anatomy   | fibrovascular                 | -                      | -                               | scispacy | 1
Anatomy   | macular                       | -                      | -                               | scispacy | 1
Anatomy   | tissues                       | -                      | -                               | scispacy | 1
Drug      | FPS → frames per second       | -                      | C3714799 (Frames Per Second)    | scispacy | 1
Drug      | [DESIGN]                      | -                      | -                               | scispacy | 1
Drug      | [MAIN OUTCOME                 | -                      | -                               | scispacy | 1
Drug      | [CONCLUSIONS] Neural Networks | -                      | -                               | scispacy | 1
Disease   | retinal tears                 | -                      | C0035321 (Retinal Perforations) | scispacy | 1
Disease   | retinal detachment            | -                      | C0035305 (Retinal Detachment)   | scispacy | 1
Other     | neural network                | -                      | -                               | scispacy | 1
Other     | retina                        | -                      | -                               | scispacy | 1
Other     | patients                      | -                      | -                               | scispacy | 1
Other     | retinal                       | -                      | -                               | scispacy | 1
Other     | FPS → frames per second       | -                      | -                               | scispacy | 1
Other     | optic disc                    | -                      | -                               | scispacy | 1

MeSH Terms

Humans; Deep Learning; Artificial Intelligence; Retinal Perforations; Retrospective Studies; Ophthalmology; Vitreoretinal Surgery; Prospective Studies
