Detailed Information

Cited 73 times in Web of Science; cited 94 times in Scopus

COVID-Transformer: Interpretable COVID-19 Detection Using Vision Transformer for Healthcare (Open Access)

Authors
Shome, Debaditya; Kar, T.; Mohanty, Sachi Nandan; Tiwari, Prayag; Muhammad, Khan; AlTameem, Abdullah; Zhang, Yazhou; Saudagar, Abdul Khader Jilani
Issue Date
Nov-2021
Publisher
MDPI
Keywords
vision transformer; COVID-19; deep learning; data science; healthcare; interpretability; transfer learning; grad-CAM
Citation
INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH, v.18, no.21
Indexed
SCIE
SSCI
SCOPUS
Journal Title
INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH
Volume
18
Number
21
URI
https://scholarx.skku.edu/handle/2021.sw.skku/94155
DOI
10.3390/ijerph182111086
ISSN
1661-7827
Abstract
In the recent pandemic, accurate and rapid testing of patients remained a critical task in diagnosing COVID-19 and controlling its spread in the healthcare industry. Because of the sudden surge in cases, most countries faced test scarcity and low testing rates. Chest X-rays have been shown in the literature to be a potential source of testing for COVID-19 patients, but manually checking X-ray reports is time-consuming and error-prone. Considering these limitations and the advancements in data science, we propose a Vision Transformer-based deep learning pipeline for COVID-19 detection from chest X-ray imaging. Because large data sets are lacking, we collected data from three open-source data sets of chest X-ray images and aggregated them into a 30 K image data set, which is, to our knowledge, the largest publicly available collection of chest X-ray images in this domain. Our proposed transformer model effectively differentiates COVID-19 from normal chest X-rays with an accuracy of 98% and an AUC score of 99% in the binary classification task, and it distinguishes COVID-19, normal, and pneumonia patients' X-rays with an accuracy of 92% and an AUC score of 98% in the multi-class classification task. For evaluation on our data set, we fine-tuned several of the most widely used models in the literature as baselines, namely EfficientNetB0, InceptionV3, ResNet50, MobileNetV3, Xception, and DenseNet-121; our proposed transformer model outperformed all of them on every metric. In addition, we created a Grad-CAM-based visualization that makes our approach interpretable for radiologists and can be used to monitor the progression of the disease in the affected lungs, assisting healthcare.
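
The pipeline the abstract describes (fine-tune a pretrained Vision Transformer on chest X-rays, then apply Grad-CAM for interpretability) can be illustrated with a short sketch. The code below is not the authors' released implementation: it assumes PyTorch with the timm library, an illustrative "vit_base_patch16_224" backbone, a hypothetical chest_xrays/train directory laid out for ImageFolder, and placeholder hyperparameters. The Grad-CAM adaptation shown (pooling patch-token gradients at the last transformer block) is one common variant for ViTs, not necessarily the paper's exact formulation.

    # Sketch of ViT fine-tuning + a Grad-CAM-style map for chest X-rays.
    # Paths, epoch count, learning rate, and normalization are assumptions.
    import torch
    import torch.nn as nn
    import timm
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    # X-rays are single-channel; replicate to 3 channels for the pretrained ViT.
    tfm = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
    ])

    # Assumed layout: chest_xrays/train/{covid,normal,pneumonia}/*.png
    train_set = datasets.ImageFolder("chest_xrays/train", transform=tfm)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = timm.create_model("vit_base_patch16_224", pretrained=True,
                              num_classes=3).to(device)  # 3-way: COVID/normal/pneumonia
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):  # epoch count is an assumption
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

    # Grad-CAM-style map for a ViT: hook the last block, weight patch-token
    # activations by their pooled gradients, and reshape to the 14x14 patch grid.
    acts, grads = {}, {}
    layer = model.blocks[-1].norm1
    layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

    model.eval()
    x, _ = train_set[0]
    x = x.unsqueeze(0).to(device)
    logits = model(x)
    logits[0, logits.argmax()].backward()  # gradient w.r.t. the predicted class

    a = acts["v"][0, 1:, :]   # drop the CLS token -> [196, 768] patch activations
    g = grads["v"][0, 1:, :]  # matching gradients
    w = g.mean(dim=0)         # channel weights, pooled over patches
    cam = torch.relu((a * w).sum(-1)).reshape(14, 14)
    cam = (cam / cam.max()).detach().cpu()

For display, cam would typically be upsampled to the 224 x 224 input resolution and overlaid on the X-ray, which is how heat maps of the affected lung regions are usually rendered for radiologists.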
Files in This Item
There are no files associated with this item.
Appears in Collections
Computing and Informatics > Convergence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Muhammad, Khan
Computing and Informatics (Convergence)
