
Quantification-based explainable artificial intelligence for deep learning decisions: clustering and visualization of quantitative morphometric features in hepatocellular carcinoma discrimination.

Journal of medical imaging (Bellingham, Wash.) 2025 Vol.12(6) p. 061407

Takagi G, Takeyama S, Abe T, Hashiguchi A, Sakamoto M, Suzuki K


Cite this paper

APA Takagi G, Takeyama S, et al. (2025). Quantification-based explainable artificial intelligence for deep learning decisions: clustering and visualization of quantitative morphometric features in hepatocellular carcinoma discrimination. Journal of Medical Imaging (Bellingham, Wash.), 12(6), 061407. https://doi.org/10.1117/1.JMI.12.6.061407
MLA Takagi G, et al. "Quantification-based explainable artificial intelligence for deep learning decisions: clustering and visualization of quantitative morphometric features in hepatocellular carcinoma discrimination." Journal of Medical Imaging (Bellingham, Wash.), vol. 12, no. 6, 2025, p. 061407.
PMID 41079974

Abstract

[PURPOSE] Deep learning (DL) is rapidly advancing in computational pathology, offering high diagnostic accuracy but often functioning as a "black box" with limited interpretability. This lack of transparency hinders its clinical adoption, emphasizing the need for quantitative explainable artificial intelligence (QXAI) methods. We propose a QXAI approach to objectively and quantitatively elucidate the reasoning behind DL model decisions in hepatocellular carcinoma (HCC) pathological image analysis.

[APPROACH] The proposed method utilizes clustering in the latent space of embeddings generated by a DL model to identify regions that contribute to the model's discrimination. Each cluster is then quantitatively characterized by morphometric features obtained through nuclear segmentation using HoverNet and key feature selection with LightGBM. Statistical analysis is performed to assess the importance of selected features, ensuring an interpretable relationship between morphological characteristics and classification outcomes. This approach enables the quantitative interpretation of which regions and features are critical for the model's decision-making, without sacrificing accuracy.
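The pipeline described above (cluster the latent space, then rank morphometric features per cluster) can be sketched roughly as follows. This is a toy illustration, not the authors' implementation: random vectors stand in for DL patch embeddings, a from-scratch k-means replaces their latent-space clustering, and a one-way ANOVA F-score substitutes for the HoverNet-derived morphometry and LightGBM feature selection used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for patch embeddings from a DL backbone: 100 patches, 8 dims,
# drawn from two separated Gaussians (two "tissue pattern" groups).
emb = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])

def kmeans(x, k, iters=50):
    """Minimal k-means over the embedding (latent) space."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(emb, k=2)

# Hypothetical per-patch morphometric features; in the paper these come from
# HoverNet nuclear segmentation. Column 0 varies with the cluster, column 1
# does not.
feats = np.column_stack([
    rng.normal(10 + 4 * labels, 1),   # e.g. "nuclear size" (informative)
    rng.normal(5, 1, len(labels)),    # e.g. "chromatin density" (noise here)
])

def f_score(f, y):
    """One-way ANOVA F-statistic: between-cluster vs. within-cluster variance."""
    groups = [f[y == j] for j in np.unique(y)]
    grand = f.mean()
    between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (len(groups) - 1)
    within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(f) - len(groups))
    return between / within

scores = [f_score(feats[:, i], labels) for i in range(feats.shape[1])]
print(scores)  # the informative column should score far higher than the noise column
```

The F-score here is only a simple statistical proxy for "which morphometric feature distinguishes this cluster"; the paper's actual selection step uses LightGBM importance followed by statistical testing.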

[RESULTS] Experiments on pathology images of hematoxylin-and-eosin-stained HCC tissue sections showed that the proposed method effectively identified key discriminatory regions and features, such as nuclear size, chromatin density, and shape irregularity. The clustering-based analysis provided structured insights into morphological patterns influencing classification, with explanations evaluated as clinically relevant and interpretable by a pathologist.

[CONCLUSIONS] Our QXAI framework enhances the interpretability of DL-based pathology analysis by linking morphological features to classification decisions. This fosters trust in DL models and facilitates their clinical integration.

