
MOSAIC: Multi-scale orientation-aware segmentation and instance classification network for histopathological image analysis.

Computer Methods and Programs in Biomedicine, 2026, Vol. 280, p. 109334
OpenAlex Topics · AI in cancer detection · Medical Image Segmentation Techniques · Cell Image Analysis Techniques

Wadood AS, Fauzi MFA, Wong LK, Lee JTH, Khor SY, Looi LM


Cite this paper

APA: Arbab Sufyan Wadood, Mohammad Faizal Ahmad Fauzi, et al. (2026). MOSAIC: Multi-scale orientation-aware segmentation and instance classification network for histopathological image analysis. Computer Methods and Programs in Biomedicine, 280, 109334. https://doi.org/10.1016/j.cmpb.2026.109334
MLA: Arbab Sufyan Wadood, et al. "MOSAIC: Multi-scale orientation-aware segmentation and instance classification network for histopathological image analysis." Computer Methods and Programs in Biomedicine, vol. 280, 2026, p. 109334.
PMID 41871485

Abstract

[BACKGROUND AND OBJECTIVE] Accurate nuclei segmentation and instance classification are fundamental tasks in biomedical image analysis; however, many existing computational models exhibit limited robustness when confronted with scale variability, morphological heterogeneity, and arbitrary rotational orientations commonly observed in histopathological images. The objective of this work is to develop a unified computational framework that is robust to effective magnification variability, arbitrary orientations, and long-range contextual dependencies, without relying on multi-magnification supervision or magnification-specific retraining.

[METHODS] We propose a multi-scale orientation-aware segmentation and instance classification (MOSAIC) framework, which integrates hierarchical context extraction, rotation-aware feature fusion, and transformer-based long-range contextual modeling within a single encoder-decoder architecture. The proposed model combines large-, medium-, and small-scale contextual cues derived from a single native training magnification to enable robust learning across effective magnifications. The proposed method is evaluated on an institutional estrogen receptor immunohistochemistry cohort, the multi-organ nuclei segmentation and classification dataset, and the colorectal nuclei segmentation and phenotypes dataset.
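The abstract states that MOSAIC fuses large-, medium-, and small-scale contextual cues derived from a single native training magnification. The paper's actual architecture is not reproduced here; as an illustration only, the following toy NumPy sketch shows the general idea of extracting multi-scale context from one input and stacking the scales as channels (all function names are hypothetical, not from the paper):

```python
import numpy as np

def avg_pool2d(x: np.ndarray, k: int) -> np.ndarray:
    """Non-overlapping k x k average pooling (assumes H, W divisible by k)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multi_scale_context(patch: np.ndarray) -> np.ndarray:
    """Build small-, medium-, and large-scale context maps from one magnification."""
    small = patch                   # fine detail at native resolution
    medium = avg_pool2d(patch, 2)   # medium-range context
    large = avg_pool2d(patch, 4)    # coarse, long-range context
    # Upsample the coarse maps back to native size (nearest neighbour)
    # and stack all three scales as channels for a downstream decoder.
    medium_up = np.repeat(np.repeat(medium, 2, axis=0), 2, axis=1)
    large_up = np.repeat(np.repeat(large, 4, axis=0), 4, axis=1)
    return np.stack([small, medium_up, large_up], axis=0)

patch = np.random.rand(16, 16)
fused = multi_scale_context(patch)
print(fused.shape)  # (3, 16, 16)
```

In the actual model this fusion would operate on learned encoder features rather than raw pixels, and the rotation-aware and transformer components described in the abstract are omitted entirely from this sketch.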

[RESULTS] The proposed model outperforms baseline methods, achieving a mean Dice coefficient of 0.862, an Aggregated Jaccard Index of 0.721, and a Panoptic Quality score of 0.647, with consistent improvements of 3%-7% across datasets. The model also demonstrates favorable computational cost relative to representative baselines, with an inference time of 0.175 s per 512 × 512 image patch and a peak memory footprint of 3.7 GB.
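The Dice coefficient reported above measures overlap between predicted and reference masks as 2|A∩B| / (|A| + |B|). For reference, a minimal NumPy implementation of this standard metric (not code from the paper) is:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray,
                     eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 4x4 squares on an 8x8 grid.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 pixels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 pixels, 9 overlap
print(round(dice_coefficient(a, b), 4))  # 2*9 / (16+16) = 0.5625
```

The Aggregated Jaccard Index and Panoptic Quality additionally require instance-level matching between predicted and ground-truth nuclei, so they are more involved than this per-mask overlap score.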

[CONCLUSIONS] The results demonstrate that orientation-aware multi-scale fusion and long-range contextual modeling improve boundary precision, instance separation, and classification consistency across heterogeneous nuclear morphologies. These improvements indicate that the proposed design generalizes reliably across challenging tissue appearances.

MeSH Terms

Humans; Algorithms; Image Processing, Computer-Assisted; Cell Nucleus