Abstract

Pneumonia is a notorious, life-threatening bacterial or viral infection of the lungs. The latest viral infection endangering lives worldwide is severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes COVID-19. This paper is aimed at detecting and differentiating viral pneumonia and COVID-19 using digital X-ray images. Current practice relies on tedious conventional processes and solely on the radiologist's or medical consultant's technical expertise, which is limited, time-consuming, inefficient, and outdated; it is also prone to human error and misdiagnosis. Advances in deep learning and improvements in technology allow medical scientists and researchers to explore various neural networks and algorithms to develop applications, tools, and instruments that can further support medical radiologists. This paper presents an overview of deep learning techniques applied to chest radiography for COVID-19 and pneumonia cases.

1. Introduction

Pneumonia is life-threatening and among the leading causes of death worldwide. An estimated 1.4 million children die of pneumonia every year, and 18% of the children who die are below five years of age. In December 2019, a novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes COVID-19, emerged at the epicentre in Wuhan, China, and is now a worldwide pandemic. As of 29th September 2020, COVID-19 had been confirmed in 215 countries and territories, involving 33,558,131 cases with 1,006,471 deaths globally, a 3% mortality rate [1]. Most reported infections were in the USA, Brazil, India, Russia, South Africa, Mexico, Peru, Colombia, Chile, Spain, and many other countries [1]. Countries have declared emergencies and national lockdowns while cases have continued to increase at an alarming rate [2].

Pneumonia is the inflammation of the alveoli inside the lungs [3]. The inflammation causes fluid and pus to build up, which subsequently causes breathing difficulties. The patient shows symptoms such as shortness of breath, cough, fever, chest pain, chills, or fatigue. Antibiotics and antiviral drugs can treat bacterial and viral pneumonia, respectively. COVID-19 was originally called novel coronavirus-infected pneumonia (NCIP) [3]. Its symptoms are similar to those of other viral pneumonias, with additional manifestations [4] including rapid heartbeat, breathlessness, rapid breathing (also known as acute respiratory distress syndrome, ARDS), dizziness, and heavy perspiration [3]. COVID-19 damages the cells and tissues that line the air sacs in the lungs [3]. The damaged cells and tissues can disintegrate and clog the lungs, causing difficulty in breathing [3]. Nevertheless, an immediate diagnosis of COVID-19 and the consequent application of medication and treatment can significantly prevent the deterioration of the patient's condition, which could otherwise lead to death [5].

Diagnosing a patient with COVID-19 via medical imaging is therefore a challenge. Deep learning models can approach human-level accuracy and precision in analysing and segmenting a medical image while avoiding human error [6]. Deep learning cannot substitute for medical professionals such as physicians, clinicians, and radiologists in medical diagnosis [6], but it can assist them with time-consuming work, such as examining chest radiographs for signs of pneumonia and distinguishing the types of pneumonia and their severity [6].

2. Background of COVID-19

Coronaviruses are single-stranded ribonucleic acid (RNA) viruses with genomes of approximately 26 to 32 kilobases. In late December 2019, a new (novel) coronavirus was identified in China, causing severe respiratory disease, including pneumonia. The US Department of Health and Human Services/Centers for Disease Control and Prevention (CDC) reported that Chinese authorities had declared an outbreak caused by a novel coronavirus, SARS-CoV-2 [7]. The coronavirus can cause mild to severe respiratory illness, known as Coronavirus Disease 2019 (COVID-19). The outbreak began in Wuhan, Hubei Province, China, and has spread to many countries worldwide, including Malaysia. The World Health Organisation (WHO) declared COVID-19 a pandemic on 11 March 2020. The CDC also stated that the coronavirus spreads mainly through close person-to-person contact, face to face, within 6 feet of each other [8].

SARS-CoV-2 spreads more efficiently than influenza but not as efficiently as measles, one of the most contagious viruses. The respiratory ailment spreads through airborne droplets. The infection is transmitted primarily via close contact, through respiratory droplets released into the air when a person coughs or sneezes. When a person infected with SARS-CoV-2 coughs, sneezes, sings, talks, or breathes, he or she produces respiratory droplets ranging in size from large droplets visible to the human eye to smaller droplets. The tiny droplets can also form particles as they dry very quickly in the airstream [8]. Breathing difficulty is an indication of possible pneumonia and requires prompt clinical attention and care. Research indicates that people suffering from COVID-19 often show hyperthermia and breathing problems [9]. For most of 2020, no antibodies or definitive treatment for COVID-19 were available to the public, and the US Food and Drug Administration (FDA) had no authorised or approved vaccine to prevent COVID-19 [8] until 12 December 2020, when the Pfizer-BioNTech coronavirus vaccine, which offers up to 95% protection against COVID-19, was authorised as safe and effective for emergency use only [10]. However, the World Health Organisation (WHO) encouraged the uptake of vaccines through public persuasion rather than making the injections mandatory [11].

Early diagnosis of COVID-19 is critical to prevent human transmission of the virus and maintain a healthy population. The reverse transcription-polymerase chain reaction (RT-PCR) test is used to detect COVID-19. It shows high specificity but inconsistent sensitivity in detecting the presence of the disease [12], yielding a certain proportion of false-negative results. However, when the pathogen load is high during the symptomatic phase, the test is more accurate. RT-PCR test kits are also limited in some geographical regions, especially in developing countries [13]. The turnaround time is 24 hours in major cities and even longer in rural regions [9]. There is an urgency to explore other possibilities to identify the ailment and enable immediate referrals for SARS-CoV-2-infected patients [9]. The chest X-ray plays a crucial role and is the first imaging technique used to diagnose COVID-19 [14]. The virus presents on the chest X-ray as ground-glass opacities with a peripheral, bilateral, and predominantly basal distribution [12]. These presentations appear comparable to those resulting from non-SARS-CoV-2 viral, bacterial, and fungal pneumonia [9, 12].

Furthermore, researchers found it problematic to differentiate viral pneumonia from pneumonia caused by bacterial and fungal pathogens [15]. Neither chest X-ray nor CT is encouraged as the primary diagnostic tool to screen, confirm, and evaluate respiratory damage in COVID-19 because of the high risk and rapid increase in disease transmission [9, 13]. CT scans have been found to be less specific than RT-PCR but highly sensitive in detecting COVID-19 and can play a fundamental role in disease analysis and treatment [13]. Nevertheless, the American College of Radiology has not endorsed the use of CT as a first-line assessment [16]. Further concerns about using CT as a first-line test, namely the increased risk of transmission, access, and cost, contributed to this recommendation [9]. As the pandemic became calamitous, radiological imaging came to be considered indispensable, and portable chest X-rays are a useful and practical alternative [12]. However, evaluating the images places a heavy demand on radiological expertise, which is frequently lacking in regions with limited resources. Therefore, automated decision-making tools could be essential to alleviate some of this burden and to quantify and identify disease progression [9].

2.1. Background on Deep Learning (DL)

Artificial intelligence (AI) is a branch of computer science that allows machines to execute tasks requiring human intelligence. With the evolution of AI and the Internet of Things, medical equipment has changed rapidly, opening many possibilities in medical radiology. Machine learning (ML) techniques can achieve the objectives of AI: ML is the subset of AI that gives computer systems the ability to learn and perform tasks from data automatically, without manual programming. Deep learning (DL) is a subset of machine learning whose methods simulate the neurons of the human brain [17, 18]. Its algorithms process information in patterns imitating the human neural system, and DL is currently an essential technology in the classification, recognition, and identification of images and videos. DL functions on algorithms for cognitive process simulation and on concepts developed in data mining [19]. DL maps input data through hidden deep layers to label and analyse concealed patterns within complex data [20]. Compared with ML, DL can automatically extract features and provide accurate results with the help of high-end GPUs, whereas ML requires more extensive data preprocessing because features must be extracted manually. ML integrates various computational models and algorithms to mimic the human neural system, whereas a DL-based network is deeper and built with many more hidden layers than a conventional ANN. DL algorithms do not require manual feature engineering; they learn directly from the data, displaying higher problem-solving aptitude. DL can interpret data and extract a wide range of dimensional features, whether or not those features are visible to the naked human eye.
This diminishes manual data preprocessing such as segmentation. DL can handle complex data representations and mimic trained physicians by identifying and detecting features to make clinical decisions. DL architectures are applied in medical X-ray detection and various areas such as image processing and computer vision in medicine [17]. DL is progressing in the medical sector to achieve better results, expand disease prediction, and execute real-time medical image analysis [21, 22] in disease recognition systems [23]. Table 1 shows the neural network's significant contributions to deep learning [23, 24].

Figure 1 below shows a mind map of the types of machine learning and deep learning techniques [25].

Convolutional neural networks (CNNs) are most often applied to image processing problems in which a computer identifies the object in an image, although they can also be used in natural language processing projects. CNN modelling is well suited to processing and classifying images. A regular neural network has three kinds of layer: an input layer, hidden layers, and an output layer. The input layer accepts data in different forms, the hidden layers perform calculations on these inputs, and the output layer delivers the outcome of the calculations and extractions. Each layer contains neurons, each with its own weights connecting it to neurons in the previous layer. A regular neural network of this kind, however, is impractical when the data consists of images or language; this is where the convolutional neural network (CNN) comes in. A CNN treats data as spatial data. Unlike in a regular neural network, CNN neurons are not connected to every neuron in the adjacent layers; each neuron connects only to the nearby region of the previous layer, with weights shared across locations. The CNN upholds the spatial structure of the dataset, applying a filtering process that simplifies complex images into processed representations that are better understood. A CNN is made up of several kinds of individual layers, known as convolutional layers, pooling layers, and fully connected layers. The CNN also contains rectified linear unit (ReLU) activations. The ReLU activation ensures nonlinearity as the data progresses through each layer of the network; without it, the stack of layers would collapse into a single linear mapping and lose the representational capacity the network requires. The fully connected layer performs classification on the datasets. The CNN works by sliding a filter over an array of image pixels and creating a convolved feature map.
The analogy is like looking at an image through a window that allows specific features within the image to be seen. This is the typical 2D convolutional neural network. The pooling layer reduces the sample size of the feature map, which speeds up processing by reducing the number of parameters the network needs. The output is the pooled feature map, produced by one of two methods, i.e., max pooling and average pooling: max pooling takes the maximum value of each region of the convolved feature, whereas average pooling takes the region's average. The next step is feature extraction, whereby the network builds up a representation of the image data according to its learned weights. To classify the images, the network moves into the fully connected layer after flattening the feature maps into a one-dimensional vector that the fully connected layers can process. If the data is unlabelled, unsupervised learning methods can be applied using autoencoders, which compress the data into a low-dimensional space, perform calculations there, and then reconstruct it through additional layers that upsample the existing data.
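The three core operations described above (convolution, ReLU, and max pooling) can be sketched in a few lines of NumPy. This is an illustrative toy only: the 6x6 "image", the diagonal-difference kernel, and the function names are our own assumptions, not taken from any model cited in this review.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image and sum elementwise products
    (a 'valid' cross-correlation, the convolution used in CNNs)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: keeps the mapping nonlinear between layers."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Keep the maximum of each size-by-size region, shrinking the feature map."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
kernel = np.array([[-1.0, 0.0],
                   [0.0, 1.0]])                    # toy diagonal-difference filter
feature_map = max_pool(relu(conv2d(image, kernel)))
```

Note how the 6x6 input shrinks to a 5x5 convolved map and then to a 2x2 pooled map: this shrinkage is exactly the parameter reduction, and the loss of spatial resolution, discussed in the next paragraph.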

CNNs are the reason DL is so well known, but they have limitations and fundamental drawbacks. Max pooling and successive convolutional layers lose valuable information. A CNN needs a large amount of data to work, and the information lost in the pooling layers reduces spatial resolution, making the outputs invariant to small changes in the inputs. Currently, this issue is addressed by building complex architectures around CNNs to recover the lost information.

A generative adversarial network (GAN) trains two networks: a generative network that produces artificial data samples resembling the data in the training set and a discriminative network that distinguishes the artificial samples from the original ones. In simple terms, a GAN has a generator and a discriminator. The generator is a counterfeiter that consistently produces artificial data, and the discriminator tries to expose the counterfeits. Each time the discriminator manages to identify a sample as counterfeit, the generator keeps improving until its output is as realistic as possible.
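The counterfeiter-versus-detective loop can be sketched on a toy one-dimensional problem. Everything below is an illustrative assumption rather than any published GAN: the generator is a single linear map, the discriminator a logistic unit, and the "real" data a Gaussian whose mean the generator must learn to imitate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

REAL_MEAN, REAL_STD = 4.0, 0.5            # the "original" data distribution

# Generator g(z) = wg*z + bg (the counterfeiter) and discriminator
# D(x) = sigmoid(wd*x + bd) (the detective), kept tiny for readability.
wg, bg = 1.0, 0.0
wd, bd = 0.1, 0.0
lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = wg * z + bg

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(wd * real + bd)
    d_fake = sigmoid(wd * fake + bd)
    wd -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    bd -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: adjust wg, bg so the updated D labels fakes as real.
    d_fake = sigmoid(wd * fake + bd)
    dloss_dx = (d_fake - 1.0) * wd        # grad of -log D(fake) w.r.t. each fake
    wg -= lr * np.mean(dloss_dx * z)
    bg -= lr * np.mean(dloss_dx)

gen_mean = float(np.mean(wg * rng.normal(0.0, 1.0, 10000) + bg))
```

After training, the generator's samples drift from a mean of 0 toward the real mean of 4: the adversarial pressure alone, with no direct access to the real samples' labels, pulls the counterfeit distribution toward the original.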

The capsule network is a relatively new type of artificial neural network. It applies local capsules, groups of neurons that perform complicated internal computations on their inputs and encapsulate the results into a small vector of highly informative outputs. The CapsNet architecture reached state-of-the-art performance on MNIST and outperformed CNNs on MultiMNIST [26].

3. Radiology Perspective of Coronavirus Disease 2019 (COVID-19)

In December 2019, a febrile lower respiratory tract illness of unknown origin was reported in a cluster of patients in Wuhan City, Hubei Province, China. Coronavirus disease 2019 (COVID-19) is responsible for this epidemic. Other corresponding pulmonary conditions have been documented as being triggered by other strains of the coronavirus family; the most notable instances are severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS). The SARS epidemic was brought under control, with no human infections reported since 2003, whereas minor MERS occurrences continue to be reported. Hence, imaging is an essential diagnostic tool for observing disease development in coronavirus-related pulmonary syndromes [27]. Imaging features in the acute and chronic phases of SARS and MERS are inconsistent and nonspecific [28]. The first accounts of imaging findings in COVID-19 have also been described as inconclusive [29–31]. Researchers are conducting various studies to further distinguish and identify the imaging features of this new coronavirus syndrome, but the information is still inadequate.

The COVID-19 outbreak has intensified beyond comprehension; new clusters and cases are reported daily by the tens of thousands in some parts of the world. The disorder's etiologic and clinical features are comparable to those of SARS and MERS, so the knowledge and experience gained from those pulmonary syndromes can help in handling the sharp increase of the COVID-19 outbreak. This review segment familiarises the reader with the imaging spectrum of coronavirus syndromes and discusses the reported imaging features of COVID-19.

SARS was discovered in 2003 in Guangdong Province, China, as the first epidemic of the new era, presenting clinically as a novel viral pneumonia. The disease infected 8,422 individuals and claimed 916 lives before it was contained, and no occurrence has been reported since [32]. MERS was revealed in Saudi Arabia in 2012, when a patient's sputum was found to contain the novel coronavirus [32]. The disease has infected 2,492 individuals worldwide and cost 858 human lives, with the latest case reported in December 2019 [32].

Various imaging features of SARS and MERS share similarities with one another, but some differences are shown in Table 2. COVID-19 is suspected on the basis of indications of pneumonia (e.g., dry cough, lethargy, myalgia, malaise, and dyspnea, similar to symptoms of SARS and MERS) together with recent travel to China or contact with a COVID-19 patient. Assessing, discovering, and identifying the development of the disease and its severity rely on chest imaging. A portable chest X-ray (CXR) is used as the first-line modality for COVID-19 patients instead of CT scans, with CT applied only in specific situations. The portable chest X-ray has the benefit of removing the need for patients to travel from one location to another and diminishes the use of personal protective equipment (PPE); the arrangement avoids nonessential imaging and transport to the radiology department. Czawlytco et al. found that the chest X-ray is insensitive in the early detection of COVID-19, with a sensitivity of only 59% [33]. The chest X-ray is not recommended for patients with flu- or influenza-like symptoms, nor for confirmed COVID-19 patients with mild symptoms. Therefore, the chest X-ray is designated for COVID-19 patients with acute respiratory status or for COVID-19 patients with mild symptoms but high-risk factors for developing severe disease. Chest radiography and tomography cannot be used for first-line screening or diagnosis of COVID-19; even with normal chest X-ray and CT images, the possibility of COVID-19 cannot be ruled out, as a patient might be asymptomatic while the lungs still appear normal. Conversely, some COVID-19 patients initially declared negative for the virus by the real-time reverse transcriptase-polymerase chain reaction (RT-PCR) test were discovered to have COVID-19 via early CT findings [32].
In the meantime, initial imaging findings may show normal lungs. Hence, normal chest imaging does not rule out the possibility of infection with SARS-CoV-2 [32].

3.1. Artificial Intelligence on Chest X-Ray (CXR) and CT Scans

In the struggle against the rapid spread of SARS-CoV-2, active screening and immediate medical response for infected patients are desperately needed. RT-PCR is the common screening method, but it is manual, time-consuming, intricate, and arduous, with only a 63% positivity rate [34, 35]. Research on early identification of COVID-19 using CXR and other imaging modalities is still in development. The Guardian reported information shared by a respiratory physician that SARS-CoV-2 pneumonia differs from common viral pneumonia cases [36]. However, the images of several viral pneumonia cases are comparable with those of other infectious and inflammatory lung diseases [34]. Because COVID-19 symptoms are similar to those of other viral pneumonias, wrong diagnoses and prognoses can result in many hospitals, especially in emergency departments that are overloaded and understaffed [34].

Today, many biomedical problems and complications, such as brain tumour detection, lung disease detection, breast cancer detection, and other oncological emergencies, are addressed with artificial intelligence (AI) solutions [34]. The convolutional neural network (CNN), a deep learning technique, has been advantageous in revealing image features that are not obvious in the original image [34]. The accuracy of a deep learning algorithm relies on imaging quality, and CNNs can improve imaging quality in low-light images from high-speed video endoscopy, discover pulmonary nodules in CT images, identify paediatric pneumonia from CXR images, and automatically label polyps in colonoscopy and cystoscopic image analysis from videos [34]. Hence, in many studies only images from confirmed COVID-19-positive patients were selected. Wang et al. (2017) accumulated datasets that allow significant developments in medical imaging tools for predicting various pneumonias and the outcomes of infected patients [37, 38]. Rajpurkar et al. (2017) and Cohen et al. (2019) both developed organised models to predict various pneumonias [37, 39, 40]. Deep learning models and algorithms are tools that can be developed for triaging cases during shortages of physical tests, particularly RT-PCR [37, 41, 42]. The American College of Radiology (ACR) recommended portable CXR in ambulant care facilities only when required and strongly discouraged using CT to inform decisions on whether to test a suspected COVID-19 patient with RT-PCR, admit the patient, provide other treatment, or quarantine the patient [33]. Deep learning models and algorithms can, however, predict patient outcomes, permitting the physician to facilitate care and management immediately [37, 43].
COVID-19 can present extraordinary, extreme situations in which physicians face decisions about which patient to assign which healthcare resources based on severity level [43]. The tools would serve to monitor the disease evolution of SARS-CoV-2-positive patients [37].

3.2. Approached Techniques and Convolutional Neural Network Architecture

Deep learning (DL) is a subset of machine learning, and the convolutional neural network is a type of deep learning commonly applied in the computer vision domain. Examples of CNN architectures are LeNet, AlexNet, GoogLeNet, Visual Geometry Group (VGG) Net, ResNet, and others [44]. The goal is to apply deep learning neural network architectures to create practical applications that improve diagnosis and prognosis performance [44].

Deep CNNs began with LeNet, designed to recognise handwritten digits. LeNet had limitations, and its successor AlexNet was the first deep CNN to accomplish outstanding results in image classification and recognition tasks. Due to hardware limitations in the early 2000s, deep CNN architectures' learning capacity was restricted to small image sizes. AlexNet was made applicable to all types of images: its depth was extended from LeNet's five layers to eight layers (five convolutional layers, two fully connected hidden layers, and one fully connected output layer), generalised for different image resolutions. The added capacity, however, caused overfitting, which was mitigated with the dropout algorithm, which randomly eliminates some units during the training process. DenseNet is a modern CNN architecture that requires fewer parameters for visual object recognition: each layer's output is combined with the inputs of later layers, the objective being to recognise visual objects by densely connecting all the layers. ResNet, known as the residual network, divides a layer into two branches, one of which passes the signal unchanged while the other processes it; ResNet then adds the earlier layer's output to the later layer's. Ordinarily, a very deep neural network tends to overfit and can sometimes produce poorer results than a network with only a few layers.
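The dropout algorithm mentioned above is simple enough to sketch directly. This is the standard "inverted dropout" formulation with an assumed drop probability of 0.5, written as a stand-alone NumPy function rather than AlexNet's exact recipe: during training, each unit is zeroed at random and the survivors are rescaled so that the expected activation matches inference, where dropout is disabled.

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(activations, p_drop=0.5, train=True):
    """Inverted dropout: randomly zero units during training and scale the
    survivors by 1/(1 - p_drop); at inference, pass activations through."""
    if not train:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

x = np.ones((4, 8))                       # a toy batch of layer activations
train_out = dropout(x, p_drop=0.5, train=True)   # roughly half become 0, rest 2.0
eval_out = dropout(x, train=False)               # unchanged at inference
```

Because a different random subset of units is silenced on every training step, no single unit can be relied upon, which is what discourages the co-adaptation behind overfitting.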

The CNN is based on biological processes of the visual cortex of the human and animal brain. A CNN consists of multiple layers in which a higher layer is connected to a lower layer to learn abstract features of the images, considering the spatial relationships between the receptive fields. This allows the CNN to recognise patterns and identify images within the layers of images. Various CNN models apply different layers, numbers of neurons, receptive fields, and algorithms [44]. Integrating transfer learning into the technique modifies CNN models pretrained on many radiology image datasets to diagnose COVID-19 problems [44]. This technique bypasses the need to train on all images from scratch every time new cases or images are identified. However, this method is constrained by the amount of radiology image data publicly available.
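The transfer-learning pattern (freeze the pretrained layers, retrain only a new classification head) can be illustrated with a small NumPy sketch. The "pretrained" weights here are random stand-ins and the labels are synthetic, so this is a sketch of the pattern under toy assumptions, not a real radiology model:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_head(features, labels, epochs=1000, lr=0.05):
    """Fit a logistic-regression 'head' on frozen features by gradient descent."""
    w, b = np.zeros(features.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
        grad = (p - labels) / len(labels)
        w -= lr * features.T @ grad
        b -= lr * grad.sum()
    return w, b

# Stand-in for pretrained convolutional layers: these weights stay frozen,
# and only the new head is trained on the (toy) target task.
W_frozen = rng.normal(size=(10, 8))
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)        # hypothetical target labels

features = np.maximum(X @ W_frozen, 0.0)         # frozen forward pass + ReLU
w, b = train_head(features, y)
pred = 1.0 / (1.0 + np.exp(-(features @ w + b))) > 0.5
accuracy = float(np.mean(pred == (y > 0.5)))
```

Only `w` and `b` are updated; `W_frozen` never changes, which is exactly why the approach is cheap when new cases arrive but also why it depends on the pretrained features already being useful for the new task.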

As shown in Table 3, several studies use deep learning for COVID-19 diagnosis using radiology images.

Table 3 includes research conducted with deep learning models using two types of medical images, i.e., chest X-ray (CXR) and CT images. Based on the table, the majority of the researchers used CXR images because of their availability. CXR images require little memory space and yield high performance, which encourages researchers to apply these images to the respective deep learning models. A total of 52 studies used various deep learning methods to achieve results: 34 of the studies used CXR images, 17 used CT images, and 4 used both CT and CXR images. The greater number of CXR images from COVID-19 patients found in public databases encouraged researchers to study deep learning using these images. Journals from the medical field often mention that CT images show higher accuracy, but these accuracies were not borne out in the deep learning-based CAD systems. The nature of CT imaging, which produces many cross-sections for a single patient, results in high memory usage for the facility to handle. In general, CT images were previously deemed more accurate than CXR images because the CT cross-section images are individually labelled. Studies that utilised the combination of CT and CXR images show promising results. However, studies with 3D data show lower performance than those with 2D data, mainly because the publicly available data are primarily 2D. The table also shows that deep learning models produce more stable results with more data.

3.3. COVID-19 Radiology Data Sources for Potential Modelling

This section describes the radiology imaging data sources available for researchers to exploit the capabilities of deep learning techniques using CNN architectures to overcome COVID-19. The variability of the data requires different AI methods of study. Radiology images such as CXR and CT images are high-dimensional data requiring CNN-based models to process, such as LeNet, AlexNet, GoogLeNet, VGGNet, and ResNet [44]. AlexNet is a CNN designed by Alex Krizhevsky in 2012. It is a popular CNN that set essential milestones for its successors, such as Network-in-Network [89] by Lin et al. [90], VGGNet [91] by Simonyan et al., and GoogLeNet (Inception v1) by Szegedy et al.

CNN architecture application requires a large dataset for training, testing, and validating. Table 4 describes the available data sources for COVID-19 radiology images, mainly CXR and CT images.

The data sources depicted in Table 4 are the standard open-source radiology images available for the public to access, study, and characterise using CNN architectures. However, as the table shows, there are minimal COVID-19 data with which to utilise AI techniques comprehensively in an intensive study. This creates concerns and difficulties when applying these techniques in real-world practice with the limited number of datasets available.

4. Challenges in the Interpretation and Application of Imaging Features of COVID-19 and Suggestions to Overcome

In theory, utilising AI can help eliminate the fake news found on the web and various social media platforms and ensure authentic, responsible, and dependable information about the pandemic. However, scientists face many challenges and limitations, shown in Table 5 below, in producing ethical and reliable results for the public.

When implementing a DL model, the test and training images are assumed to be drawn from the same distribution so that the model can classify the medical images into their respective categories. This assumption is hard to satisfy in practice due to limited data availability or weak labels [9]. Despite the many cases occurring worldwide, very limited COVID-19 CXR or CT image data are publicly available. Therefore, it is difficult to train DL models to distinguish COVID-19-related CXR and CT images from non-SARS-CoV-2 viral, bacterial, and other pathogen-related CXR and CT images. The Radiological Society of North America (RSNA) [97] and the Imaging COVID-19 AI Initiative in Europe [98] aim to provide easily accessible data to the public. These data supply varied features across categories to enhance interclass variance, leading to better DL performance. With a lack of data, the model will overfit and produce weakly generalised results [99]. Data augmentation has therefore proven effective in training discriminative DL models. Examples of data augmentation techniques are flipping, rotating, colour jittering, random cropping, elastic distortions, and generative adversarial network- (GAN-) based synthetic data generation [100]. Medical images, unlike those found in ImageNet, have distinctive visual characteristics showing high interclass similarities [101]. Thus, traditional augmentation methods that perform simple image alterations are less effective [102]. GAN refers to specialised algorithms and deep learning systems for compelling prediction and transformation of data from one form to another, producing dynamic data and images so that better recognition and analysis can be done. GAN-based DL models are applied to generate data artificially. Therefore, to overcome the data-scarce situation, GANs are used to develop effective data augmentation strategies for medical visual recognition.
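The simple geometric augmentations listed above (flipping, rotation, random cropping) can be sketched as a single NumPy function. A random array stands in for an actual CXR image, and the pad width and transform choices are illustrative assumptions, not a recipe from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(3)

def augment(image, pad=2):
    """Produce one random, label-preserving variant of an image: an optional
    horizontal flip, a random 90-degree rotation, and a random crop taken
    from a zero-padded copy so the output keeps the original size."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    image = np.rot90(image, k=int(rng.integers(0, 4)))
    padded = np.pad(image, pad)
    top = int(rng.integers(0, 2 * pad + 1))
    left = int(rng.integers(0, 2 * pad + 1))
    return padded[top:top + image.shape[0], left:left + image.shape[1]]

xray = rng.random((64, 64))                           # stand-in for one CXR image
batch = np.stack([augment(xray) for _ in range(8)])   # 8 distinct variants of it
```

A single scarce image thus yields many slightly different training examples, which is the mechanism by which augmentation counters the overfitting described above; elastic distortions and GAN-based synthesis extend the same idea beyond rigid transforms.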

According to Afshar et al., CNNs applied to identify positive COVID-19 CXR images are prone to losing spatial information between image instances and require a large dataset to compensate for the loss. The capsule network, as used in their COVID-CAPS framework, is an alternative modelling approach capable of handling small datasets. A capsule network consists of capsules in the convolutional layers and has the potential to further improve diagnostic capability. Using a capsule network to pretrain on the images is expected to improve accuracy, as each capsule in the convolutional layers represents a specific image instance at a specific location through several neurons. The routing-by-agreement mechanism in the capsule network helps the model identify spatial relations.

5. Conclusion and Future Works

COVID-19 has disrupted the lives of people worldwide. The number of casualties related to the disease has not been contained and has increased by the thousands daily. AI technologies exist to help us live comfortably and have had many successes and contributions in streamlining processes and procedures. However, the spread of COVID-19 is exceptionally lethal, as it transmits faster and more broadly than ever. The coronavirus is also continuously evolving, and new spike protein mutations have been reported in countries and regions such as Malaysia, the United Kingdom, South America, Australia, the Netherlands, and Singapore. The clinical impact of this discovery and its infectivity or aggressiveness is still unknown. Whether the mutations will affect the appearance of radiography imaging is also still a mystery.

Based on the Worldometer website (https://www.worldometers.info/coronavirus/), some countries failed to respond to the disease, some are barely tackling the situation, and some are handling it much more successfully. Hence, a country that has the situation under control might experience a spike in cases overnight if society becomes lenient about taking proper measures.

Although many researchers have published their works, the number of contributions and AI applications towards tackling COVID-19 is rudimentary. With the terrifying number of deaths and infected patients discovered daily and the virus mutating rapidly and unpredictably, we are nowhere near applying AI to radiography imaging to identify whether a patient is infected with SARS-CoV-2. The development of AI for radiography imaging is slow due to the limited availability of COVID-19 datasets. Given the number of people affected worldwide, AI methods require massive data and several computational models and CNN architectures to learn and acquire knowledge. The data that most researchers have acquired from open-source websites are insufficient, and even the best available data are far from perfect, as the data alone cannot explain the pandemic's whole situation. Therefore, for future research and development, the best way to acquire radiography imaging data is to provide reliable, global, open data and to build an infrastructure that allows researchers who are experts in radiology, artificial intelligence, deep learning, and imaging to navigate and understand these data and their development.

Most COVID-19 radiography image datasets are stored in different formats, standards, sizes, and qualities, which is an obstacle for scientists seeking to speed up COVID-19-related AI research. Therefore, in future development, COVID-19 radiography images should follow standard operating procedures so that researchers, scientists, and anyone interested can contribute to and utilise the information freely. A future study on deep learning models identifying and distinguishing COVID-19 images from viral pneumonia is essential. Such a study would help radiologists and physicians understand the virus and evaluate future coronaviruses using CT and CXR images more efficiently and effectively.

Data Availability

Data analyzed in this study were a reanalysis of existing data, which are openly available at locations cited in the reference section.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by RU Geran University of Malaya (ST014-2019).