
Building a challenging medical dataset for comparative evaluation of classifier capabilities.

Computers in Biology and Medicine, 2024, Vol. 178, p. 108721

Bozkurt B, Coskun K, Bakal G


Cite this paper

APA Bozkurt B, Coskun K, Bakal G (2024). Building a challenging medical dataset for comparative evaluation of classifier capabilities. Computers in Biology and Medicine, 178, 108721. https://doi.org/10.1016/j.compbiomed.2024.108721
MLA Bozkurt B, et al. "Building a challenging medical dataset for comparative evaluation of classifier capabilities." Computers in Biology and Medicine, vol. 178, 2024, pp. 108721.
PMID 38901188

Abstract

Since the 2000s, digitalization has transformed daily life, but it also produces large volumes of unstructured text to be processed, including articles, clinical records, web pages, and social media posts. Text classification, a central analysis task, assigns textual entities to their correct categories. Categorizing documents drawn from different domains is straightforward, since the instances are unlikely to share similar contexts; classification within a single domain is harder precisely because the documents do share a common context. We therefore classify medical articles about four common cancer types (Leukemia, Non-Hodgkin Lymphoma, Bladder Cancer, and Thyroid Cancer) using machine learning and deep learning models. We collected 383,914 medical articles on these four cancer types via the PubMed API and split the dataset into 70% training, 20% testing, and 10% validation. We built widely used machine learning models (Logistic Regression, XGBoost, CatBoost, and Random Forest classifiers) and modern deep learning models (convolutional neural networks, CNN; long short-term memory, LSTM; and gated recurrent units, GRU). To evaluate the models, we computed average classification performance (precision, recall, F-score) over ten distinct dataset splits. The best-performing deep learning models yielded a superior F1 score of 98%, while the traditional machine learning models also achieved reasonably high F1 scores, 95% in the worst case. Ultimately, we constructed multiple models to classify articles that compose a hard-to-classify dataset in the medical domain.
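The evaluation protocol described in the abstract (a 70/20/10 train/test/validation split, with performance averaged over ten distinct dataset shuffles) can be sketched in plain Python. This is an illustrative sketch only: the function names, the synthetic document list, and the seed scheme below are assumptions, not the authors' actual code or data.

```python
import random

def split_70_20_10(items, seed):
    """Shuffle and partition items into 70% train, 20% test, 10% validation,
    mirroring the split described in the abstract (illustrative sketch)."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.7)
    n_test = int(n * 0.2)
    train = shuffled[:n_train]
    test = shuffled[n_train:n_train + n_test]
    val = shuffled[n_train + n_test:]
    return train, test, val

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    total = precision + recall
    return 2 * precision * recall / total if total else 0.0

# Repeat the split over ten distinct seeds, as in the paper's protocol;
# in the real experiment a classifier would be fit on each train split
# and its precision/recall/F-score averaged across the ten runs.
docs = list(range(1000))  # stand-in for the 383,914 PubMed articles
sizes = [tuple(len(part) for part in split_70_20_10(docs, seed))
         for seed in range(10)]
```

With 1,000 stand-in documents each run yields partitions of 700/200/100; averaging the per-run precision, recall, and F-score over the ten runs gives the reported aggregate figures.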

MeSH Terms

Humans; Deep Learning; Machine Learning; Neural Networks, Computer; Neoplasms; Databases, Factual
