Artificial Intelligence in Point-of-Care Imaging for Clinical Decision Support: Systematic Review of Diagnostic Accuracy, Task-Shifting, and Explainability.
- Sample size (n): 5
- Sensitivity: 92%
- Specificity: 90.6%
- Study design: systematic review
APA
Wadie, P., Zakher, B., et al. (2026). Artificial Intelligence in Point-of-Care Imaging for Clinical Decision Support: Systematic Review of Diagnostic Accuracy, Task-Shifting, and Explainability. JMIR AI. https://doi.org/10.2196/80928
MLA
Wadie, P., et al. "Artificial Intelligence in Point-of-Care Imaging for Clinical Decision Support: Systematic Review of Diagnostic Accuracy, Task-Shifting, and Explainability." JMIR AI, 2026.
PMID
41665551
DOI
10.2196/80928
Abstract
[BACKGROUND] Artificial intelligence (AI) integrated with point-of-care (POC) imaging has emerged as a promising approach to expand diagnostic access in settings with limited specialist availability. However, no systematic review has comprehensively evaluated AI-assisted clinical decision support across multiple POC imaging modalities, assessed explainability implementation, or quantified clinical impact evidence gaps.
[OBJECTIVE] To systematically evaluate and synthesize evidence on AI-based clinical decision support systems utilizing point-of-care imaging, with particular attention to task-shifting potential, explainability implementation, and clinical outcome evidence.
[METHODS] We searched PubMed, Scopus, IEEE Xplore, and Web of Science (January 2018 to November 2025). We included research studies evaluating AI/machine learning systems applied to POC-capable imaging modalities in POC clinical settings with clinical decision support outputs. Two reviewers independently screened studies, extracted data across 15 domains, and assessed methodological quality using QUADAS-2. Frameworks were developed to evaluate explainability implementation and clinical impact evidence. Narrative synthesis was performed due to substantial data heterogeneity.
[RESULTS] Of 2,113 records identified, 20 studies met inclusion criteria, encompassing approximately 78,296 patients across 15 countries. Studies evaluated tuberculosis (n=5), breast cancer (n=3), deep vein thrombosis (n=2), and nine other conditions using ultrasound (35%, 7/20), chest X-ray (25%, 5/20), photography-based and colposcopic imaging (15%, 3/20), fundus photography (10%, 2/20), microscopy (10%, 2/20), and dermoscopy (5%, 1/20). Median sensitivity was 92% (IQR 85.7%-98.0%), and median specificity was 90.6% (IQR 70.0%-95.7%). Task-shifting was demonstrated in 65% (13/20) of studies, with nonspecialists achieving specialist-level performance after a median of 1 hour of training. The explainable AI (XAI) implementation cascade revealed critical gaps: 75% (15/20) of studies did not mention explainability, 10% (2/20) provided explanations to users, and none evaluated whether clinicians understood explanations or whether XAI influenced decisions. The clinical impact pyramid showed 15% (3/20) of studies reported technical accuracy only, 65% (13/20) reported process outcomes, 20% (4/20) documented clinical actions, and none measured patient outcomes. Methodological quality was concerning, as 70% (14/20) of studies were at high or very high risk of bias, with verification bias (70%, 14/20) and selection bias (50%, 10/20) being the most common. The overall certainty of evidence was very low (Grading of Recommendations, Assessment, Development, and Evaluation [GRADE] ⊕◯◯◯), primarily due to risk of bias, heterogeneity, and imprecision.
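As a minimal illustration of how the summary statistics above are assembled (using hypothetical placeholder values, not data extracted from the 20 included studies), the sketch below computes per-study sensitivity and specificity from a 2x2 contingency table and then summarizes per-study estimates as a median with interquartile range:

```python
# Illustrative sketch only: all numbers are hypothetical placeholders,
# not values extracted from the included studies.
import statistics

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate (%) from a 2x2 table: TP / (TP + FN)."""
    return 100.0 * tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate (%) from a 2x2 table: TN / (TN + FP)."""
    return 100.0 * tn / (tn + fp)

def median_iqr(values):
    """Median and interquartile range (25th-75th percentiles) across studies."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return statistics.median(values), q1, q3

# Hypothetical per-study sensitivities (%), one value per included study.
per_study_sensitivity = [78.0, 85.7, 88.2, 92.0, 95.5, 98.0, 99.1]
med, q1, q3 = median_iqr(per_study_sensitivity)
print(f"Median sensitivity {med:.1f}% (IQR {q1:.1f}%-{q3:.1f}%)")

# Example single-study calculation from a hypothetical 2x2 table.
print(f"Sensitivity {sensitivity(tp=46, fn=4):.1f}%, "
      f"specificity {specificity(tn=85, fp=9):.1f}%")
```

The same aggregation applies to specificity; in the review itself, these medians were reported narratively because heterogeneity precluded meta-analysis.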
[CONCLUSIONS] AI-assisted POC imaging demonstrates promising diagnostic accuracy and enables meaningful task-shifting with minimal training requirements. However, critical evidence gaps remain, including absent patient outcome measurement, inadequate explainability evaluation, regulatory misalignment, and lack of cross-context validation despite claims of global applicability. Addressing these gaps requires implementation research with patient outcome end points, rigorous XAI evaluation, and multi-context validation before widespread adoption. Limitations include restriction to English-language publications, grey literature exclusion, and heterogeneity precluding meta-analysis.
[CLINICALTRIAL] This review was not prospectively registered due to time constraints.