Article

A Comparative Performance Evaluation of Classification Algorithms for Clinical Decision Support Systems

1 Data Science Group, Center for Mathematical and Computational Sciences, Institute for Basic Science (IBS), Daejeon 34141, Korea
2 Department of Industrial Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(10), 1814; https://doi.org/10.3390/math8101814
Submission received: 8 September 2020 / Revised: 1 October 2020 / Accepted: 6 October 2020 / Published: 16 October 2020
(This article belongs to the Special Issue Recent Advances in Data Mining and Their Applications)

Abstract: Classification algorithms are widely used in clinical decision support systems. However, it is not always straightforward to understand the behavior of such algorithms on a multiple disease prediction task. When a new classifier is introduced, we, in most cases, ask ourselves whether it performs well on a particular clinical dataset. The decision to utilize a classifier mostly relies upon the type of data and classification task, and is therefore often made arbitrarily. In this study, we present a comparative evaluation of a wide array of classifiers pertaining to six different families, i.e., tree, ensemble, neural, probability, discriminant, and rule-based classifiers. A number of real-world, publicly available datasets covering different diseases are included in the experiment in order to demonstrate the generalizability of the classifiers in multiple disease prediction. In total, 25 classifiers, 14 datasets, and three different resampling techniques are explored. This study reveals that the classifier most likely to be the best performer is the conditional inference tree forest (cforest), followed by linear discriminant analysis, the generalized linear model, random forest, and the Gaussian process classifier. This work contributes to the existing literature by providing a thorough benchmark of classification algorithms for multiple disease prediction.

1. Introduction

Artificial intelligence (AI) has dramatically changed almost all aspects of our lives. Given its rapid development, it would not be surprising if much human labor were soon replaced by AI. Some AI techniques, e.g., deep learning and other machine learning (ML) algorithms, have been employed in clinical applications to support intelligent systems for the early detection and diagnosis of disease [1]. Furthermore, they assist physicians by providing a second opinion on clinical diagnosis, prognosis, and other clinically related decision tasks in order to avoid potential human errors that might put patients' lives at risk [2,3]. Figure 1 illustrates how ML algorithms are employed in a clinical decision support system (CDSS).
With the emergence of new technological advancements, large amounts of clinical data have been stored and are ready to be analyzed by clinical researchers. For instance, a large-scale clinical database about patients admitted to critical care units at a large tertiary care hospital is publicly available [4]. Nevertheless, many physicians still suffer from inaccurate predictions of disease outcomes due to a lack of knowledge about the available data analytics approaches. This situation calls for a significant improvement of disease prediction using advanced ML techniques. For this reason, ML techniques have become well-known tools for discovering and characterizing complex patterns and relationships in large datasets [5].
A prediction task can be either a classification or a regression, depending on the type of the target variable, i.e., categorical or numerical. More broadly, learning tasks fall into two main categories, i.e., predictive and descriptive [6,7,8]. A predictive task deals with inferring a function from a set of labeled training samples by mapping data samples based on input-output pairs. Such a task is also known as learning from data in a supervised manner. Neural networks, classification/regression trees, and support vector classifiers/regressors are examples of supervised learning algorithms. In contrast, descriptive task algorithms, e.g., clustering and association techniques, attempt to draw inferences from unlabeled input data. The goal of these approaches is to group objects into clusters or to discover interesting patterns between variables in databases. Examples of these techniques include K-means clustering, hierarchical clustering, and frequent itemset rule extractors.
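As a toy illustration of the two task natures, consider the following R sketch; the built-in iris data is used purely as a stand-in for a labeled clinical dataset:

# Predictive (supervised) vs. descriptive (unsupervised) learning in R.
# The iris data set is only a stand-in for a labeled clinical dataset.
library(rpart)                                     # classification tree learner
supervised <- rpart(Species ~ ., data = iris)      # learn from labeled samples
unsupervised <- kmeans(iris[, 1:4], centers = 3)   # group unlabeled samples
# Compare predictions/clusters against the true classes:
table(predict(supervised, iris, type = "class"), iris$Species)
table(unsupervised$cluster, iris$Species)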
Among the approaches mentioned above, the classification of clinical data is a particularly problematic task, since choosing the best-performing classifier among those available in the wild can be confusing. One of the causes is that classifiers come from different families, i.e., ensembles, decision trees, rule-based, Bayesian, and neural networks, to name a few. A researcher might choose a classifier erratically due to limited knowledge within his/her competence or point of interest. Moreover, it is a very challenging effort, as datasets are unlikely to be uniform, considering that disease types and other clinical context domains can vary immeasurably in practice. Hitherto, a particular prediction method does not always yield a significant level of accuracy under all clinical application domains, because its performance mainly relies on the context in which it is used. To be more precise, no one can guarantee that a proposed classifier will perform well on all clinical datasets unless an empirical benchmark is conducted [9].
Instead of simply conducting a qualitative analysis via a systematic mapping study of previously published works [2,6,10,11,12], this study focuses on a quantitative analysis of classification techniques for disease prediction. This empirical study helps researchers and practitioners decide on the best classification algorithms in a clinical application setting. Most researchers in the purview of medical informatics are familiar with only specific ML techniques; thus, picking the best-performing classifier is, in many cases, a resource-intensive task. In addition, the performance of a newly proposed classifier for clinical data analysis is often justified only against classifiers within a restricted group, excluding classifiers belonging to other groups. Hence, a cross-domain comparison of ML algorithms from different groups for different diseases is currently unexplored. To the best of our knowledge, no other quantitative benchmark of ML algorithms focusing on clinical data has been conducted to date.
While some classification algorithms achieve superior results on a given dataset, their performance might be contrasting on other datasets. This behavior is consistent with the no-free-lunch theorem [9]: there exists no single classifier or individual method that is a panacea for all problems. By providing a classifier benchmark for multiple diseases, the objective of this empirical study is to find the best-performing classifiers across several clinical datasets. It is meant to assist researchers/practitioners in making a reasonable decision when picking among the available classifiers for clinical prediction, enabling them to determine a likely well-performing classifier.
In light of the issues mentioned above, this paper attempts to address the following two research questions (RQs):
  • RQ1: What is the relative performance of classification algorithms with respect to different resampling strategies?
  • RQ2: Among the various families, is there a best choice when selecting a classification algorithm for clinical decision support systems?

2. Related Work

There is currently huge research interest in systematic mapping studies [2,13,14], which aim at identifying and categorizing prior published works in order to give a visual summary of their results. However, this approach is an unreliable barometer when taken as a guideline for finding the best-performing classifiers, because it only provides a literature review of the most frequently used ML techniques for clinical data analytics. For instance, Idri et al. [11] provided a systematic map of studies on the application of data preprocessing in the healthcare domain, while a similar literature review of data mining methods for traditional medicine was reported in [15]. They recognized that support vector machines and neural networks have frequently been used to solve disease prediction tasks.
Jothi et al. [16] reviewed various papers in the healthcare field with respect to methods, algorithms, and results. Garciarena and Santana [17] investigated the relationship between the type of missing data, the choice of imputation method, and the effectiveness of classification algorithms that employed the imputed data. The performance of several classification algorithms for the early detection of liver disease was explored in [18]; the assessment showed that the C5.0 and CHAID algorithms were able to produce rules for liver disease prediction. Kadi et al. [6] presented a systematic literature review of data mining techniques in cardiology, while Jain and Singh [19] focused their survey on the utilization of feature selection and classification techniques for the diagnosis and prediction of chronic diseases.
More recently, Moreira et al. [20] analyzed and summarized the current literature on smart decision support systems for healthcare according to taxonomy, application area, year of publication, and the approaches and technologies used. Sohail et al. [21], who overviewed previous research applied in the healthcare industry, concluded that there is no exclusive classifier or technique available to predict all kinds of diseases. A particular ML technique, e.g., ensemble (meta) learning for breast cancer diagnosis, was discussed in [10]; that work emphasized ensemble techniques applied to breast cancer data using a systematic mapping study. Lastly, Nayar et al. [22] explored various applications combining swarm intelligence with data mining in healthcare with respect to methods and results obtained.
It is worth mentioning that some existing works have performed comparative studies of classification techniques for the effective diagnosis of diseases. However, those studies are limited to either a particular disease or a particular ML technique, which leaves the generalizability of the proposed methods to other clinical data with different contexts questionable. For instance, Das [23] employed multiple classification techniques, i.e., neural networks, regression, and decision trees, for the diagnosis of Parkinson's disease; a neural network classifier was found to be the best performer with an accuracy of 92%. A disease detection model called 'HMV' was proposed to overcome the drawbacks of traditional heterogeneous ensembles [24]. A similar approach, called 'HM-BagMoov,' was proposed to solve the limitations of conventional heterogeneous ensembles [25]. These two approaches were evaluated on a number of clinical datasets, i.e., five heart datasets, four breast cancer datasets, two diabetes datasets, two liver disease datasets, and a hepatitis dataset.
To sum up, the aforementioned existing studies possess the following limitations:
  • Most studies simply reviewed previous publications using a mapping study; thus, they cannot serve as a support tool for making a more informed choice of the best-performing classifier for disease prediction.
  • When comparing multiple algorithms over multiple datasets, statistical significance tests were lacking. Hence, the performance differences among the classification algorithms remained unrevealed.
In this study, a broad spectrum of classification algorithms covering different groups, i.e., meta, tree, rule-based, neural, and Bayesian, is taken into consideration. A total of 25 classifiers are involved in the comparison. In addition, 14 real-world clinical datasets with different peculiarities are included in the benchmark. In order to see the impact of different sampling strategies on classifier performance, we incorporate three different resampling techniques, i.e., subsampling, cross-validation, and multiple rounds of cross-validation. Finally, the results of statistical significance tests are reported in order to show that the performances of the classification algorithms are significantly different, i.e., that at least one algorithm does not perform equally to the others.

3. Materials and Methods

In this section, the datasets employed in the experiment are presented, followed by a brief review of the classification algorithms. Finally, significance tests are discussed in the last subsection.

3.1. Datasets

We obtained most of the datasets from the UCI machine learning repository [26]. Twelve real-world datasets were downloaded from the UCI website, while two datasets, i.e., RSMH [8] and Tabriz Iran [27], are privately available (they can be obtained upon request). The datasets cover seven different diseases: diabetes (4 datasets), breast cancer (3 datasets), heart disease (3 datasets), and one dataset each for thoracic surgery, seizure, liver disease, and chronic kidney disease. Furthermore, some datasets hold several classes in their class label attributes and thus pose a multi-class classification problem. Such is the case for Tabriz Iran [27], where the input variables are classified into three classes, i.e., negative, positive, and old patient; Z-Alizadeh Sani [28], which possesses four class attributes, i.e., normal, stenosis of the left anterior descending (LAD) artery, left circumflex (LCX) artery, and right coronary artery (RCA); Cleveland, which labels each sample with an integer value ranging from 0 (no heart disease) to 4 (severe heart disease); and Epileptic Seizure Recognition, where instances labeled class 1 are diagnosed with seizure disease and instances of classes 2, 3, 4, and 5 are non-seizure subjects. The remaining datasets have a binary class in their response attribute. In this study, all datasets with multi-class targets were transformed into binary-class targets using a specific criterion, i.e., subjects labeled class 1 are diagnosed with the disease and 0 otherwise.
In the experiment, each dataset undergoes a simple pre-processing step, ensuring that the response attribute of each dataset is a categorical variable with two categories. Other pre-processing steps, e.g., feature selection, are not carried out, for the following reasons: (i) our aim is not to achieve the best possible performance of each classifier on each dataset, but to benchmark the algorithms on each dataset; (ii) the performance of a classifier on a subset of features might be random; and (iii) feature selection would usually have to be performed per dataset, significantly increasing the scope of this work. Missing values in the datasets are treated using a "do-nothing" strategy, meaning that we let classification algorithms that handle missing data (e.g., the gradient boosting machine (GBM) and the generalized linear model (GLM)) learn the best imputation values for the missing entries (e.g., mean imputation is typically used). For algorithms that do not tolerate missing values, observations with one or more missing values are simply dropped. Finally, a simple transformation is applied to ensure each dataset is ready to be processed by Weka and R. Table 1 summarizes the collection of 12 datasets from the UCI repository and two datasets from private clinical domains.
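A minimal sketch of these pre-processing steps in R is given below; the column name "class" and the helper names are hypothetical, since each dataset uses its own attribute names:

# Hypothetical pre-processing sketch; assumes the response column is "class".
binarize_target <- function(df, positive = 1) {
  # Multi-class targets: class 1 -> "disease", all other classes -> "healthy"
  df$class <- factor(ifelse(df$class == positive, "disease", "healthy"))
  df
}
# "Do-nothing" strategy: H2O-based learners (e.g., GBM, GLM) impute internally;
# for learners that cannot handle missing values, incomplete rows are dropped:
drop_incomplete <- function(df) df[complete.cases(df), ]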

3.2. Classification Algorithms

Twenty-five classification algorithms implemented in R and Weka are included in this study. These classifiers were chosen with respect to their previously reported performance in the CDSS domain. Note that previous works have used a variety of classifiers, ranging from tree-based learners [18] to ensemble learners [10]. All classifiers implemented in R are accessible through the mlr package [39], while classifiers implemented in Weka [40] are run using the command line of each classifier's Java class; an illustrative invocation is sketched below. We briefly explain all classifiers in the following, grouped according to the family they belong to. Default learning parameters are used in the experiment; for the sake of reproducibility, the learning parameters of each classifier are listed in Appendix A.
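As an illustration, the R-based learners can be benchmarked through mlr roughly as follows; the learner IDs and the pima data frame below are examples, not the exact experimental script:

# Illustrative mlr benchmark; assumes a data frame pima with target "class".
library(mlr)
task <- makeClassifTask(id = "pima", data = pima, target = "class")
lrns <- list(
  makeLearner("classif.randomForest", predict.type = "prob"),  # RF
  makeLearner("classif.cforest", predict.type = "prob"),       # CF
  makeLearner("classif.lda", predict.type = "prob")            # LDA
)
rdesc <- makeResampleDesc("CV", iters = 10)            # 10-fold cross-validation
bmr <- benchmark(lrns, task, rdesc, measures = auc)    # AUC, as in this study
getBMRAggrPerformances(bmr, as.df = TRUE)

A Weka learner, in contrast, is invoked through the command line of its Java class, e.g., java weka.classifiers.rules.JRip -t dataset.arff.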
a. Tree-based algorithms: 5 learners
i. C50 decision tree (C50)
The classifier is an extension of the C4.5 algorithm presented in [41], possessing extra improvements such as boosting, the generation of smaller trees, and unequal costs for different types of errors. Tree pruning is performed by a final global pruning strategy, in which costly and complex sub-trees are removed whenever their error rate exceeds the baseline, e.g., the error rate of the decision tree without pruning. C50 can generate a set of rules as well as a classification tree.
ii. Credal decision tree (CDT)
It takes imprecise probabilities and uncertainty measures into account in its split criterion [42]. This procedure differs slightly from C4.5 or C50, where an information gain measure is used as the split criterion to choose the split attribute at each branching node of the tree.
iii. Classification and regression tree (CART)
It is trained in a recursive binary splitting manner to generate the tree. Binary splitting is a numerical procedure in which all the values are ordered and different split points are tested using a cost function; the split with the lowest cost is chosen [43].
iv. Random tree (RT)
It grows a tree using K randomly selected input features at each node, without pruning. The cost function (error) is estimated during training; thus, there is no separate accuracy estimation procedure, e.g., cross-validation or a train-test split, to obtain an estimate of the training error [44].
v. Forest-PA (FPA)
The classifier generates bootstrap samples from the training set and trains a CART classifier on each bootstrap sample using the weights of the attributes. The weights of the attributes present in the latest tree are then updated incrementally, while the weights of the applicable attributes not present in the latest tree are updated using their respective weight increment values [45].
b. Ensemble methods: 7 learners
i. Random forest (RF)
It uses decision trees as base classifiers, where a tree is first grown to a depth of one, and the same procedure is then replicated for all other nodes in the tree until the specified depth is reached [44].
ii. Extra trees (XT)
It works similarly to the random forest classifier, except that both the features and the splits are chosen at random; thus, it is also known as extremely randomized trees. Because splits are chosen at random, the computational cost (variance) of extra trees is lower than that of random forest and decision tree [46].
iii. Rotation forest (RoF)
The classifier randomly generates M feature subsets, and principal component analysis (PCA) is applied to each subset in order to restore a full feature set (e.g., using M axis rotations) for each base learner (e.g., decision tree) in the ensemble [47].
iv. Gradient boosting machine (GBM)
The classifier is proposed to improve the performance of the classification and regression tree (CART). It constructs an ensemble serially, where each new tree in the sequence is in charge of rectifying the prior tree’s prediction error [48].
v. Extreme gradient boosting machine (XGB)
The classifier is a state-of-the-art implementation of the gradient boosting algorithm. It shares a similar principle with GBM; however, less computational complexity is one of its advantages. In addition, XGB utilizes a more regularized model, making it able to reduce the complexity of the model while improving the prediction accuracy [49].
vi. CForest (CF)
The classifier differs from random forest in terms of the base classifier employed and the aggregation scheme implemented. It utilizes conditional inference trees as a base learner. At the same time, the aggregation scheme works by taking the average weights obtained from each tree, not by averaging predictions directly as the random forest does [50].
vii. Adaboost (AB)
The classifier attempts to improve the performance of a weak classifier, e.g., a decision tree. The weak learner is trained sequentially on several bootstrap resamples of the learning set. Such a sequential scheme feeds the results of the previous classifier into the next one to improve the final prediction, with each later classifier emphasizing the mistakes of the earlier ones [51].
c. Neural-based algorithms: 4 learners
i. Deep learning (DL)
It derives from a multilayer feed-forward neural network, which is trained with the stochastic gradient descent back-propagation algorithm. The primary distinction from a conventional neural network is its large number of hidden layers, e.g., greater than or equal to four. In addition, several hyper-parameters need to be set properly, where a grid search is an option for obtaining the best parameter settings [52].
ii. Multilayer perceptron (MLP)
The classifier is a fully connected feed-forward network, where training is performed by the error back-propagation method [53].
iii. Deep neural network with a stacked autoencoder (SAEDNN)
It is a deep learning classifier, where the weights are initialized by a stacked autoencoder [54]. Similar to a deep belief network, it is trained with a greedy layerwise algorithm, while reconstruction error is used as an objective function.
iv. Linear support vector machine (SVM)
A support vector machine works on the principle of a hyperplane that separates the data in a higher-dimensional space [55]. In this study, a linear implementation [56] is used with an L2-regularized, L2-loss descent method, because the linear implementation is computationally efficient compared with, for instance, LibSVM [57].
d. Probability-based classifiers: 3 learners
i. Naive Bayes (NB)
A Naive Bayes classifier performs classification based on the conditional probability of a categorical class variable. It considers each of the variables to contribute independently to the probability [58]. In many application domains, the maximum likelihood method is prevalently considered for parameter estimation. Furthermore, it can be trained very efficiently in a supervised learning task.
ii. Gaussian process (GP)
A Gaussian process is defined by a mean and a covariance function. The function in any data modeling problem is considered to be a single sample from a Gaussian distribution. For the classification task, the Gaussian process uses the Laplace approximation for parameter estimation [59].
iii. Generalized linear model (GLM)
The classifier can be used for both classification and regression tasks. In this study, a multinomial family generalization is used to handle multi-class response variables. It models the probability of an observation belonging to an output category given the data [60].
e. Discriminant methods: 3 learners
i. Linear discriminant analysis (LDA)
The classifier assumes that the data in each class follow a Gaussian distribution and that each attribute has the same variance. It estimates the mean and the variance for each class, and prediction is made by estimating the probability that a test sample belongs to each class; the output class is the one with the highest probability, estimated using Bayes' theorem [61].
ii. Mixture discriminant analysis (MDA)
This classifier is an extension of linear discriminant analysis. It is used for classification based on mixture models, while the mixture of normals is employed to get a density estimation for each class [62].
iii. K-nearest neighbor (K-NN)
The classifier performs prediction on each row of the test set by finding the k nearest (measured by Euclidean distance) training set vectors. The classification is then made by majority voting with ties broken at random [63].
f. Rule-based algorithms: 3 learners
i. Repeated incremental pruning (RIP)
This classifier was originally developed to improve the performance of the IREP algorithm. It constructs a rule using the following two procedures: (1) data samples are randomly divided into two subsets, i.e., a growing set and a pruning set, and (2) a rule is grown using the FOIL algorithm. After a rule is generated, it is immediately pruned by eliminating any final sequence of conditions from the rule [64]. In this study, we employ a Java implementation of the RIP algorithm, so-called JRIP.
ii. Partial decision tree (PART)
This classifier is a rule-induction procedure that avoids global optimization. It combines the two major rule generation techniques, i.e., decision tree (C4.5) and RIP. PART produces rule sets that are as accurate and of a similar size to those produced by C4.5, and more accurate than the RIP classifier [65].
iii. OneR (1-R)
This is a straightforward classifier that produces one rule for each predictor in the training set and selects the rule with a minimum total error as its final rule. A rule for a predictor is produced by generating a frequency table for each predictor feature against the target feature [66].

3.3. Resampling Procedures

Several resampling procedures, i.e., subsampling, cross-validation, and multiple rounds of cross-validation, are included in this study. The objective of using different resampling methods is to ensure that the performance of the classifiers is not obtained by chance. A generic resampling procedure is illustrated in Algorithm 1 [67]. Subsampling (10ho) is a repeated hold-out in which the original dataset D is split into two disjoint parts with a specified proportion, in this study 80% for the training set and 20% for the testing set; the procedure is replicated ten times. Cross-validation (10cv) divides the dataset into k (10 in our case) equally sized disjoint parts (subsets) and employs k − 1 parts to build the model, while the remaining part is used for validation; this step is repeated for k rounds, with a different test subset in each round. Lastly, multiple-round cross-validation (5 × 2cv) is carried out by repeating twofold cross-validation five times. Each procedure thus yields the same number of performance values, and we take the average value for each resampling procedure.
Algorithm 1 General resampling strategy
Input: A dataset $D$ of $n$ observations $d_1$ to $d_n$, the number of subsets $k$, and a loss function $L$.
Process:
1. Generate $k$ subsets of $D$ named $D^{(1)}$ to $D^{(k)}$
2. $S \leftarrow \emptyset$
3. for $i \leftarrow 1$ to $k$ do
4.   $\bar{D}^{(i)} \leftarrow D \setminus D^{(i)}$
5.   $\hat{f} \leftarrow \mathrm{FitModel}(D^{(i)})$
6.   $s_i \leftarrow \sum_{(x,y) \in \bar{D}^{(i)}} L(y, \hat{f}(x))$
7.   $S \leftarrow S \cup \{s_i\}$
8. end
9. Aggregate $S$, i.e., $\mathrm{mean}(S)$
Output: Summary of validation statistics.
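In mlr, these three procedures correspond to resample descriptions; the following sketch shows one way to instantiate them with the proportions stated above:

# The three resampling procedures as mlr resample descriptions (a sketch).
library(mlr)
ss <- makeResampleDesc("Subsample", iters = 10, split = 0.8)  # 10 x (80/20) hold-out
cv10 <- makeResampleDesc("CV", iters = 10)                    # 10-fold cross-validation
rcv <- makeResampleDesc("RepCV", reps = 5, folds = 2)         # 5 x 2cv
# Each description yields 10 performance values per learner, aggregated
# (averaged) as in step 9 of Algorithm 1, e.g.:
# res <- resample(learner, task, cv10, measures = auc); res$aggr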

3.4. Significance Tests

In order to demonstrate an extensive empirical evaluation of classifiers, it is essential to utilize statistical tests that verify whether the performance differences between classifiers are significant [68]. Several tests are briefly discussed as follows.
  • A non-parametric Friedman test [69] is exploited to inspect whether there exist significant differences between the classifiers with respect to the performance metric mentioned earlier [70]. The null hypothesis ($H_0$) is that no performance differences between classifiers exist, i.e., the expected difference $\mu_d$ equals zero; it is tested against the alternative hypothesis ($H_A$) that at least one classifier does not have the same performance, i.e., $\mu_d$ is not equal to zero. The statistic of the Friedman test is calculated according to Equation (1):
    $$\chi_R^2 = \frac{12v}{w(w+1)} \sum_{i=1}^{w} R_i^2 - 3v(w+1) \qquad (1)$$
    where $v$ denotes the number of datasets (14 in our case), $w$ denotes the number of classifiers (25 in our case) to be compared, and $R_i = \frac{1}{v} \sum_{j=1}^{v} r_i^j$ is the average rank of classifier $i$ over the datasets.
  • Finner test [71] is a p-value adjustment in a step-down manner. Let $p_1, p_2, \ldots, p_{w-1}$ be the p-values ordered so that $p_1 \le p_2 \le \cdots \le p_{w-1}$, and let $H_1, H_2, \ldots, H_{w-1}$ be the corresponding hypotheses. The procedure rejects $H_1$ to $H_{i-1}$, where $i$ is the smallest integer such that $p_i > 1 - (1-\alpha)^{(w-1)/i}$. Due to its simplicity and power, the Finner test is a good choice in general.
  • Nemenyi test [72] works by calculating the average rank of each benchmarked algorithm and taking their differences. If such average differences are larger than or equal to a critical difference (CD), the performances of the algorithms are significantly different. The CD can be obtained using the following formula:
    $$CD = q^* \sqrt{\frac{w(w+1)}{6v}}$$
    where $q^*$ is the Studentized range statistic divided by $\sqrt{2}$.
More specifically, the significance-testing procedure can be broken down as follows.
  • Calculate each classifier's rank on each dataset using the Friedman rank with respect to the area under the ROC curve (AUC) metric, in increasing order from the best performer to the worst performer.
  • Calculate the average rank of each classifier over all datasets. The best-performing classifier is the one with the lowest average Friedman rank; note that merit is inversely proportional to the numeric rank value.
  • Calculate the p-value using an omnibus test, e.g., the Friedman test.
  • If the Friedman test demonstrates significant results (p-value < 0.05 in our case), run Finner's method. It is carried out as a pairwise comparison in which the best-performing algorithm is used as the control algorithm and compared with the remaining algorithms.
  • Perform a Nemenyi test to compare the performance of the classifiers within each family (a sketch of the full procedure follows this list).
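Assuming the AUC values of Appendix B are collected in a 14 × 25 matrix aucs (rows = datasets, columns = classifiers), the procedure can be sketched in R as follows; the post-hoc call via the PMCMRplus package is an assumption, not necessarily the tooling used in this study:

# Sketch of the significance-testing procedure (matrix aucs: 14 x 25).
ranks <- t(apply(-aucs, 1, rank))  # per-dataset ranks; rank 1 = highest AUC
sort(colMeans(ranks))              # average Friedman rank (lower = better)
friedman.test(aucs)                # omnibus test; rows are blocks (datasets)
# Post-hoc pairwise comparisons, e.g., the Nemenyi test:
# PMCMRplus::frdAllPairsNemenyiTest(aucs)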

4. Results and Discussion

4.1. Overall Analysis

In this research, we evaluate 25 algorithms over 14 datasets and three different validation strategies, yielding 1050 algorithm-dataset-validation combinations. Three different validation procedures are taken into account to guard against poor bias and variance due to the small sample size of some datasets. Furthermore, the different validation procedures ensure that the experimental results were not obtained by chance. The reported results are the average of the 10 values produced by each resampling method. All classifiers' performances are assessed in terms of the AUC metric. Referring to the contingency table shown in Figure 2, the AUC value of a classification algorithm can be calculated as:
$$AUC = \int_0^1 \frac{TP}{TP+FN} \; d\!\left(\frac{FP}{FP+TN}\right) = \int_0^1 \frac{TP}{P} \; d\!\left(\frac{FP}{N}\right)$$
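Empirically, this integral is the area under the ROC step curve and can be computed with the trapezoidal rule. A minimal base-R sketch (independent of the packages used in the experiments) is:

# Trapezoidal AUC from predicted scores and true 0/1 labels (a minimal sketch).
auc_trapezoid <- function(scores, labels) {
  ord <- order(scores, decreasing = TRUE)  # sweep thresholds from high to low
  l <- labels[ord]
  tpr <- cumsum(l) / sum(l)                # TP / P at each threshold
  fpr <- cumsum(1 - l) / sum(1 - l)        # FP / N at each threshold
  sum(diff(c(0, fpr)) * (tpr + c(0, head(tpr, -1))) / 2)
}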
To maintain the readability of this paper, all performance results are provided in Appendix B. Figure 3, Figure 4 and Figure 5 first show the AUC value of each classification algorithm for each validation method. The boxplots show the distributions of the AUC values obtained for each dataset; more specifically, they indicate the performance variability of the classification algorithms relative to each dataset. The mean AUC values are grouped by resampling technique. It can be observed that there is greater variability for FPA, RoF, RF, GBM, SVM, LDA, AB, and 1-R, regardless of the validation method considered. On the other hand, MLP and SAEDNN show less variability, meaning that their performance is consistent regardless of the clinical dataset used. Overall, the six best-performing algorithms (in descending order of mean AUC value) are CF, RF, FPA, RoF, DL, and GBM. Furthermore, since simply taking the average performance value might lead to bias, a Friedman ranking is adopted for assessing classifier performance (see Figure 6). Instead of analyzing raw performance values, the Friedman rank analysis is based on the rank of each classifier on each dataset. To address RQ1, we analyze the relative performance of the classifiers under different resampling strategies using the Friedman rank. With reference to 10cv, the Friedman rank confirms that the six top-performing classifiers, in ascending order from best to worst, are LDA, CF, GLM, RoF, RF, and GBM, with average ranks 5.04, 5.21, 5.32, 6.32, 6.46, and 6.47, respectively.
According to the Friedman rank under 10ho, the top-5 performers are CF, followed by DL, GLM, GP, and LDA, with average ranks 5.61, 5.82, 5.86, 5.93, and 5.96, respectively. Subsequently, with respect to 5 × 2cv, CF achieves the topmost performance with an average rank of 4.82, succeeded by GLM, RF, GP, and LDA, with average ranks 5.11, 5.14, 5.71, and 5.89, respectively. For the sake of an inclusive evaluation, the behavior of the top-performing classifiers can be discussed as follows:
  • Overall, it is worth mentioning that, over the three resampling techniques, CF performed best, with an average AUC of 0.857 and an average rank of 5.16. The result is rather unexpected, since a conditional inference tree ensemble can outperform gradient-based ensemble algorithms, i.e., XGB.
  • CF is as good as RF, since CF works similarly to RF [73]. Therefore, it is not surprising that CF and RF are not significantly different.
  • The ensemble learners RF and RoF performed better than the other ensemble models, i.e., FPA and XGB.
  • Regarding the high performance of RF, recall that RF is built from an ensemble of decision trees; the randomness of each tree split usually provides better prediction performance. In addition, RF is resilient when dealing with imbalanced datasets [74]. Note that several datasets employed in this experiment are highly imbalanced (see Table 1).
  • LDA is listed among the top-5 best-performing classifiers. LDA is known as a simple yet robust predictor when the dataset is linearly separable.
Based on our experimental results, the worst performer over the three resampling techniques is SAEDNN, with an average rank of 22.74 and an average AUC of 0.538. This is not surprising, because a deep neural network typically requires a large number of training samples to build a good model. Moreover, neural-based classifiers are nonlinear classifiers, meaning that they are more affected by hyperparameter tuning, such as the learning rate, number of epochs, and number of hidden layers. The low AUC might also result from an insufficient number of training samples when constructing the classification model. The next worst models are 1-R, SVM, and MLP.
A post-hoc test, i.e., the Finner test, is carried out after the omnibus Friedman test: if the Friedman test rejects the null hypothesis that there is no performance difference among the classifiers, the post-hoc test is applied. The results of the statistical significance tests for each resampling technique are given in Table 2, Table 3 and Table 4. Concerning the post-hoc test, several options are available, such as a pairwise comparison, comparison with a control classifier, and all pairwise comparisons; in this study, all pairwise comparisons are adopted, with p-value < 0.01 set as the significance threshold. According to the Finner test, the low-performing classifiers, i.e., RIP, SAEDNN, SVM, and MLP, are significantly different from the remaining algorithms. In order to inspect how the three resampling procedures impact classifier performance, we extend our comparative analysis in the following section.

4.2. Analysis by Each Family

In this section, in order to answer RQ2, we focus on benchmarking the best-performing classifier of each family, as well as the effect of different resampling strategies on classifier performance. As a result, six top performers, corresponding to the six families, are included in the analysis. Among the tree-based classification algorithms, FPA is the best classifier, whilst CF and DL are the leading classifiers among the ensemble and neural-based algorithms, respectively. GLM performed best among the probability-based classifiers, whilst LDA is superior to the other discriminant methods. Finally, PART is the outstanding classifier in the rule-based family. Figure 7, Figure 8 and Figure 9 show the CD plots of the Nemenyi test for each resampling technique. With respect to 10cv, PART and DL are significantly different from the rest of the algorithms, since their rank difference is greater than the CD. The results of 10ho and 5 × 2cv are quite similar, where PART is the only algorithm with a significant performance difference compared to the remaining algorithms.

5. Conclusions

Rather than simply carrying out a mapping study, this study provides a more informed option for choosing the best classifier for disease prediction. It demonstrated a thorough benchmark of 25 classification techniques from six families over 14 real-world disease datasets and three different resampling procedures. Based on the experimental results, CF showed its superiority in comparison with the other classifiers, achieving an average AUC of 0.857 over all resampling techniques with an average Friedman rank of 5.16. The two worst performers, however, were from the neural-based classifier family, i.e., SAEDNN and MLP. These classifiers were not competitive, since SAEDNN in particular requires a sufficiently large training set to create a good model. In general, the other top-4 classifiers that proved very powerful for clinical decision support systems were LDA, GLM, RF, and GP. The two RQs are revisited below, with answers.
  • RQ1: What is the relative performance of classification algorithms with respect to different resampling strategies? According to the Friedman rank, the different resampling techniques had no significant impact on several of the classifiers.
  • RQ2: Among the various families, is there a best choice when selecting a classification algorithm for a clinical decision support system? This study revealed that the choice of classification algorithm for disease prediction highly depends on the characteristics of the practical problem, i.e., dataset imbalance, linear or nonlinear separability, and expert knowledge regarding the data and domain. That said, it can be concluded that CF, LDA, GLM, RF, and GP have been the best choices so far in the clinical decision support system field, since they are resilient to imbalanced datasets.
Among the potential ways to extend this study, we believe that including more clinical datasets would be the most interesting, since this might help researchers/clinical practitioners select suitable classifiers in different application domains. For future work, it would also be interesting to address the main drawback of this study, i.e., the AUC performance of state-of-the-art classifiers such as XGB and SAEDNN. To this end, when a larger amount of data is taken into consideration, the performance of deep structured learning might improve. Given that deep learning has played a significant role in classifying medical images, acoustic signals, and biosignals detected from medical devices, it would be meaningful to understand the performance of machine learning and deep learning applied to such clinical datasets.

Author Contributions

Conceptualization, B.A.T. and S.L.; methodology, B.A.T.; validation, B.A.T.; investigation, S.L.; writing—original draft preparation, B.A.T. and S.L.; writing—review and editing, B.A.T. and S.L.; visualization, B.A.T.; supervision, S.L.; funding acquisition, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1F1A1059346). This work was supported by the 2020 Research Fund (Project Number 1.180090.01) of UNIST (Ulsan National Institute of Science and Technology).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. List of Learning Parameters Used in the Experiment

In the following, the learning parameters of the 25 classifiers are briefly listed. Their names and implementations are specified using name_implementation, where the implementation is either r (implemented in R using mlr) or w (implemented in Weka).
  • C50_r
    Confidence factor: 0.25; smallest number of samples: 2; fuzzy threshold: No; sample: 0.
  • CDT_w
    k-th root attribute: 1; maximum tree depth: no restriction; minimum variance proportion: 0.001; imprecise Dirichlet model: 1.0.
  • CART_w
    Minimal cost-complexity pruning: yes; number of folds in the internal cross-validation: 5; heuristic search for binary split: yes.
  • RT_w
    Maximum depth of the tree: unlimited; the number of randomly chosen attributes: 0; amount of data used for backfitting: 0.
  • FPA_w
    Number of trees to build the forest: 100; number of pruning folds: 2; minimum number of objects: 2.
  • RF_r
The implementation of H2O is used. Number of trees to build the forest: 100; maximum depth of the tree: unlimited; number of randomly chosen attributes: 0; number of iterations: 100.
  • XT_r
    Minimum number of instances required at a node: 2; number of attributes to randomly choose at a node: −1.
  • RoF_w
Base classifier: C4.5; number of iterations: 10; projection filter: principal component analysis.
  • GBM_r
The implementation of H2O is used. Fold assignment: Auto; number of trees: 100; maximum depth: 5; minimum observations in a leaf: 10; number of bins: 20; number of bins at the root level: 1024; number of bins for categorical columns: 1024; learning rate: 0.1; learning rate annealing: 1; sample rate: 1; column sample rate: 1; column sample rate per tree: 1; minimum split improvement: 1 × 10⁻⁵.
  • XGB_r
    Maximum depth: 6; learning rate: 0.3; minimum sum of instance weight: 1; subsample ratio: 1; number of trees to grow per round: 1; number of trees: 100.
  • CF_r
The mtry argument: 5; number of trees: 100; minimum criterion: 0.
  • AB_w
    Number of instances to process: 100; base classifier: decision stump; number of iterations: 10; resampling is used: no; weight threshold for weight pruning: 100.
  • DL_r
The implementation of H2O is used. L1 and L2 regularization: 0; hidden layer dropout ratio: 0.5; input dropout ratio: 0; number of training samples for which momentum increases: 1,000,000; learning rate decay: 1; learning rate annealing: 1 × 10⁻⁶; learning rate: 0.005; adaptive learning rate smoothing factor: 1 × 10⁻⁸; adaptive learning rate time decay: 0.99; adaptive rate: yes; number of epochs: 100; hidden layer sizes: c(200, 200); activation function: rectifier; maximum relative size of the training data: 5.
  • MLP_r
Maximum iterations: 100; number of units in the hidden layer: 200; learning function: standard backpropagation; parameter for the learning function: 0.2; activation function for hidden layer: logistic; number of hidden layers: 1.
  • SAEDNN_r
Number of units in the hidden layers: 200; activation function: sigmoid; learning rate for gradient descent: 0.8; momentum rate for gradient descent: 0.5; learning rate scale: 1; number of epochs: 100; function of output unit: sigmoid; dropout fraction for hidden layer: 0; number of hidden layers: 2.
  • SVM_w
Cost parameter: 1; bias: 1; ϵ: 0.001; ϵ parameter of the ϵ-insensitive loss function: 0.1; number of iterations: 100.
  • NB_w
    Use kernel estimator: no; use supervised discretization: no.
  • GP_r
    Data transformation: normalize training data; kernel type: polynomial kernel; exponent value: 1; level of Gaussian noise: 1.
  • GLM_r
Family: Gaussian; tweedie variance power: 0; tweedie link power: 1; θ: 1 × 10⁻¹⁰; solver: auto; α: 0; number of lambdas to be used in a search: 100; missing value handling: mean imputation.
  • LDA_r
Prior: proportion; tolerance: 1 × 10⁻⁴; degrees of freedom: t distribution.
  • MDA_r
    Number of iterations: 100; number of sub-classes per class: 3; regression method used in optimal scaling: polynomial regression.
  • K-NN_w
    Number of neighbours to use: 2; distance weighting method: no; nearest neighbours search method: Euclidean distance.
  • RIP_w
    The amount of data used for pruning: 3; minimum total weight: 2; number of optimizations: 2; use pruning: yes.
  • PART_r
    Confidence factor: 0.25; the amount of data used for reduced-error pruning: 3; reduced-error pruning: no; binary splits: no; MDL correction: yes.
  • 1-R_w
    The minimum bucket size used for discretizing numeric attributes: 6.

Appendix B. Performance Results of All Benchmarked Classifiers

Table A1. Performance results of all classification algorithms w.r.t. 10cv.
DATASET | C50 | XT | CDT | CART | RT | FPA | RoF | RF | GBM | XGB | CF | AB | DL | MLP | GP | GLM | k-NN | SVM | LDA | MDA | NB | 1-R | PART | SAEDNN | RIP
Breast cancer (diagnostic) | 0.965 | 0.929 | 0.964 | 0.938 | 0.929 | 0.986 | 0.992 | 0.990 | 0.991 | 0.970 | 0.990 | 0.989 | 0.992 | 0.495 | 0.992 | 0.993 | 0.980 | 0.561 | 0.993 | 0.969 | 0.976 | 0.887 | 0.937 | 0.505 | 0.953
Breast cancer (original) | 0.969 | 0.918 | 0.962 | 0.960 | 0.960 | 0.990 | 0.988 | 0.990 | 0.990 | 0.978 | 0.991 | 0.986 | 0.991 | 0.987 | 0.983 | 0.994 | 0.979 | 0.961 | 0.995 | 0.990 | 0.983 | 0.908 | 0.947 | 0.504 | 0.964
Pima Indian | 0.784 | 0.647 | 0.779 | 0.727 | 0.684 | 0.823 | 0.833 | 0.817 | 0.809 | 0.783 | 0.829 | 0.801 | 0.777 | 0.497 | 0.832 | 0.832 | 0.717 | 0.596 | 0.832 | 0.796 | 0.819 | 0.649 | 0.794 | 0.499 | 0.734
Statlog | 0.816 | 0.744 | 0.800 | 0.797 | 0.787 | 0.900 | 0.885 | 0.897 | 0.883 | 0.831 | 0.898 | 0.878 | 0.862 | 0.500 | 0.905 | 0.900 | 0.818 | 0.778 | 0.903 | 0.882 | 0.898 | 0.706 | 0.736 | 0.498 | 0.778
Wisconsin prognostic | 0.602 | 0.486 | 0.494 | 0.475 | 0.579 | 0.629 | 0.730 | 0.617 | 0.741 | 0.682 | 0.642 | 0.695 | 0.696 | 0.497 | 0.640 | 0.739 | 0.617 | 0.504 | 0.792 | 0.722 | 0.642 | 0.504 | 0.500 | 0.487 | 0.638
RSMH | 0.878 | 0.905 | 0.902 | 0.891 | 0.893 | 0.964 | 0.945 | 0.962 | 0.960 | 0.920 | 0.966 | 0.974 | 0.974 | 0.981 | 0.980 | 0.974 | 0.957 | 0.940 | 0.974 | 0.957 | 0.977 | 0.817 | 0.878 | 0.978 | 0.917
Tabriz Iran | 0.558 | 0.557 | 0.545 | 0.529 | 0.569 | 0.754 | 0.712 | 0.708 | 0.720 | 0.663 | 0.720 | 0.727 | 0.680 | 0.503 | 0.724 | 0.721 | 0.545 | 0.500 | 0.721 | 0.534 | 0.753 | 0.499 | 0.613 | 0.502 | 0.528
Thoracic surgery | 0.488 | 0.567 | 0.528 | 0.500 | 0.519 | 0.582 | 0.567 | 0.657 | 0.606 | 0.551 | 0.655 | 0.476 | 0.541 | 0.521 | 0.609 | 0.620 | 0.548 | 0.503 | 0.645 | 0.527 | 0.642 | 0.496 | 0.499 | 0.500 | 0.496
Diabetic retinopathy | 0.683 | 0.616 | 0.665 | 0.668 | 0.616 | 0.731 | 0.821 | 0.754 | 0.770 | 0.668 | 0.770 | 0.652 | 0.786 | 0.508 | 0.764 | 0.768 | 0.648 | 0.643 | 0.796 | 0.761 | 0.682 | 0.528 | 0.694 | 0.499 | 0.637
ILPD | 0.664 | 0.595 | 0.668 | 0.508 | 0.588 | 0.727 | 0.708 | 0.751 | 0.725 | 0.665 | 0.734 | 0.604 | 0.693 | 0.524 | 0.678 | 0.738 | 0.637 | 0.544 | 0.715 | 0.713 | 0.638 | 0.533 | 0.666 | 0.504 | 0.549
Seizure | 0.966 | 0.903 | 0.955 | 0.942 | 0.897 | 0.990 | 0.995 | 0.995 | 0.995 | 0.890 | 0.992 | 0.898 | 0.979 | 0.505 | 0.485 | 0.521 | 0.900 | 0.609 | 0.529 | 0.528 | 0.984 | 0.746 | 0.950 | 0.505 | 0.937
Chronic kidney | 0.982 | 0.973 | 0.991 | 0.995 | 0.998 | 1.000 | 1.000 | 0.999 | 1.000 | 0.969 | 0.998 | 1.000 | 0.995 | 0.687 | 0.994 | 0.998 | 0.970 | 0.689 | 0.998 | 0.966 | 1.000 | 0.921 | 0.973 | 0.500 | 0.976
Cleveland | 0.806 | 0.752 | 0.809 | 0.810 | 0.710 | 0.898 | 0.900 | 0.904 | 0.892 | 0.823 | 0.906 | 0.894 | 0.882 | 0.501 | 0.904 | 0.902 | 0.841 | 0.739 | 0.904 | 0.896 | 0.894 | 0.724 | 0.773 | 0.507 | 0.810
Z-Alizadeh | 0.764 | 0.748 | 0.838 | 0.799 | 0.638 | 0.914 | 0.893 | 0.919 | 0.900 | 0.823 | 0.924 | 0.898 | 0.892 | 0.506 | 0.924 | 0.927 | 0.828 | 0.526 | 0.898 | 0.867 | 0.877 | 0.677 | 0.756 | 0.497 | 0.728
AVERAGE | 0.780 | 0.739 | 0.779 | 0.753 | 0.741 | 0.849 | 0.855 | 0.854 | 0.856 | 0.801 | 0.858 | 0.819 | 0.839 | 0.587 | 0.815 | 0.831 | 0.785 | 0.650 | 0.835 | 0.793 | 0.840 | 0.685 | 0.765 | 0.535 | 0.760
Table A2. Performance results of all classification algorithms w.r.t. 10ho.
DATASET | C50 | XT | CDT | CART | RT | FPA | RoF | RF | GBM | XGB | CF | AB | DL | MLP | GP | GLM | k-NN | SVM | LDA | MDA | NB | 1-R | PART | SAEDNN | RIP
Breast cancer (diagnostic) | 0.955 | 0.914 | 0.945 | 0.923 | 0.925 | 0.987 | 0.990 | 0.991 | 0.993 | 0.952 | 0.988 | 0.987 | 0.995 | 0.500 | 0.991 | 0.994 | 0.968 | 0.524 | 0.992 | 0.967 | 0.971 | 0.881 | 0.935 | 0.500 | 0.939
Breast cancer (original) | 0.970 | 0.923 | 0.953 | 0.941 | 0.925 | 0.991 | 0.989 | 0.992 | 0.990 | 0.971 | 0.993 | 0.990 | 0.994 | 0.991 | 0.983 | 0.995 | 0.986 | 0.963 | 0.995 | 0.993 | 0.989 | 0.900 | 0.963 | 0.559 | 0.954
Pima Indian | 0.768 | 0.652 | 0.737 | 0.739 | 0.667 | 0.823 | 0.826 | 0.820 | 0.824 | 0.783 | 0.830 | 0.798 | 0.833 | 0.523 | 0.836 | 0.829 | 0.747 | 0.561 | 0.829 | 0.778 | 0.809 | 0.665 | 0.782 | 0.500 | 0.709
Statlog | 0.814 | 0.721 | 0.755 | 0.787 | 0.705 | 0.896 | 0.885 | 0.890 | 0.872 | 0.843 | 0.895 | 0.886 | 0.887 | 0.450 | 0.913 | 0.907 | 0.828 | 0.775 | 0.909 | 0.897 | 0.914 | 0.658 | 0.762 | 0.524 | 0.749
Wisconsin prognostic | 0.628 | 0.528 | 0.604 | 0.514 | 0.557 | 0.645 | 0.734 | 0.648 | 0.759 | 0.631 | 0.671 | 0.720 | 0.742 | 0.515 | 0.652 | 0.742 | 0.610 | 0.480 | 0.831 | 0.699 | 0.636 | 0.508 | 0.592 | 0.500 | 0.613
RSMH | 0.900 | 0.874 | 0.898 | 0.898 | 0.883 | 0.955 | 0.958 | 0.957 | 0.953 | 0.925 | 0.953 | 0.957 | 0.961 | 0.964 | 0.964 | 0.961 | 0.952 | 0.932 | 0.960 | 0.958 | 0.962 | 0.821 | 0.902 | 0.965 | 0.897
Tabriz Iran | 0.536 | 0.608 | 0.533 | 0.500 | 0.562 | 0.753 | 0.711 | 0.699 | 0.650 | 0.631 | 0.709 | 0.709 | 0.695 | 0.518 | 0.747 | 0.717 | 0.588 | 0.500 | 0.730 | 0.576 | 0.740 | 0.499 | 0.679 | 0.492 | 0.529
Thoracic surgery | 0.497 | 0.555 | 0.496 | 0.500 | 0.514 | 0.660 | 0.578 | 0.687 | 0.621 | 0.584 | 0.692 | 0.525 | 0.669 | 0.560 | 0.641 | 0.675 | 0.563 | 0.519 | 0.695 | 0.600 | 0.696 | 0.507 | 0.559 | 0.500 | 0.502
Diabetic retinopathy | 0.681 | 0.605 | 0.655 | 0.647 | 0.602 | 0.714 | 0.732 | 0.747 | 0.760 | 0.670 | 0.748 | 0.652 | 0.797 | 0.498 | 0.761 | 0.767 | 0.640 | 0.676 | 0.793 | 0.762 | 0.677 | 0.484 | 0.680 | 0.500 | 0.633
ILPD | 0.677 | 0.571 | 0.622 | 0.514 | 0.602 | 0.737 | 0.737 | 0.736 | 0.723 | 0.654 | 0.740 | 0.700 | 0.731 | 0.474 | 0.674 | 0.724 | 0.612 | 0.504 | 0.703 | 0.705 | 0.726 | 0.534 | 0.653 | 0.500 | 0.550
Seizure | 0.965 | 0.895 | 0.949 | 0.938 | 0.900 | 0.994 | 0.994 | 0.994 | 0.995 | 0.893 | 0.992 | 0.905 | 0.992 | 0.546 | 0.995 | 0.515 | 0.893 | 0.617 | 0.530 | 0.528 | 0.953 | 0.753 | 0.945 | 0.513 | 0.931
Chronic kidney | 0.971 | 0.968 | 0.991 | 0.989 | 0.992 | 1.000 | 1.000 | 1.000 | 1.000 | 0.965 | 0.999 | 0.999 | 0.998 | 0.679 | 0.998 | 0.999 | 0.973 | 0.694 | 0.998 | 0.984 | 0.996 | 0.921 | 0.978 | 0.511 | 0.954
Cleveland | 0.810 | 0.740 | 0.789 | 0.807 | 0.735 | 0.897 | 0.887 | 0.899 | 0.881 | 0.847 | 0.901 | 0.892 | 0.882 | 0.493 | 0.892 | 0.892 | 0.840 | 0.683 | 0.897 | 0.885 | 0.885 | 0.725 | 0.801 | 0.499 | 0.797
Z-Alizadeh | 0.811 | 0.732 | 0.880 | 0.842 | 0.714 | 0.938 | 0.925 | 0.936 | 0.923 | 0.869 | 0.936 | 0.911 | 0.910 | 0.505 | 0.932 | 0.929 | 0.837 | 0.538 | 0.899 | 0.845 | 0.879 | 0.641 | 0.780 | 0.500 | 0.788
AVERAGE | 0.785 | 0.735 | 0.772 | 0.753 | 0.735 | 0.856 | 0.853 | 0.857 | 0.853 | 0.801 | 0.861 | 0.831 | 0.863 | 0.587 | 0.856 | 0.832 | 0.788 | 0.640 | 0.840 | 0.798 | 0.845 | 0.678 | 0.787 | 0.540 | 0.753
Table A3. Performance results of all classification algorithms w.r.t. 5 × 2cv.
DATASET | C50 | XT | CDT | CART | RT | FPA | RoF | RF | GBM | XGB | CF | AB | DL | MLP | GP | GLM | k-NN | SVM | LDA | MDA | NB | 1-R | PART | SAEDNN | RIP
Breast cancer (diagnostic) | 0.956 | 0.912 | 0.945 | 0.923 | 0.919 | 0.986 | 0.989 | 0.988 | 0.989 | 0.959 | 0.988 | 0.987 | 0.993 | 0.495 | 0.990 | 0.993 | 0.977 | 0.501 | 0.991 | 0.975 | 0.976 | 0.887 | 0.933 | 0.500 | 0.928
Breast cancer (original) | 0.967 | 0.926 | 0.959 | 0.952 | 0.938 | 0.990 | 0.989 | 0.989 | 0.989 | 0.969 | 0.990 | 0.988 | 0.992 | 0.988 | 0.984 | 0.994 | 0.984 | 0.958 | 0.994 | 0.992 | 0.987 | 0.905 | 0.956 | 0.516 | 0.951
Pima Indian | 0.745 | 0.627 | 0.756 | 0.713 | 0.657 | 0.818 | 0.815 | 0.812 | 0.795 | 0.762 | 0.819 | 0.796 | 0.800 | 0.512 | 0.827 | 0.825 | 0.718 | 0.565 | 0.826 | 0.788 | 0.809 | 0.652 | 0.755 | 0.503 | 0.703
Statlog | 0.817 | 0.741 | 0.757 | 0.761 | 0.726 | 0.898 | 0.888 | 0.898 | 0.881 | 0.824 | 0.898 | 0.881 | 0.875 | 0.474 | 0.904 | 0.896 | 0.820 | 0.695 | 0.895 | 0.881 | 0.898 | 0.705 | 0.780 | 0.505 | 0.749
Wisconsin prognostic | 0.577 | 0.529 | 0.523 | 0.521 | 0.533 | 0.625 | 0.664 | 0.604 | 0.658 | 0.603 | 0.650 | 0.650 | 0.703 | 0.479 | 0.600 | 0.718 | 0.557 | 0.501 | 0.762 | 0.706 | 0.625 | 0.497 | 0.569 | 0.493 | 0.591
RSMH | 0.875 | 0.879 | 0.892 | 0.875 | 0.886 | 0.963 | 0.952 | 0.966 | 0.957 | 0.892 | 0.963 | 0.959 | 0.969 | 0.977 | 0.975 | 0.970 | 0.954 | 0.938 | 0.970 | 0.964 | 0.976 | 0.831 | 0.890 | 0.969 | 0.858
Tabriz Iran | 0.562 | 0.570 | 0.576 | 0.500 | 0.565 | 0.708 | 0.695 | 0.684 | 0.671 | 0.606 | 0.716 | 0.704 | 0.681 | 0.546 | 0.716 | 0.705 | 0.566 | 0.500 | 0.700 | 0.644 | 0.727 | 0.504 | 0.602 | 0.492 | 0.538
Thoracic surgery | 0.502 | 0.531 | 0.510 | 0.500 | 0.515 | 0.622 | 0.592 | 0.665 | 0.596 | 0.580 | 0.651 | 0.565 | 0.588 | 0.515 | 0.674 | 0.642 | 0.527 | 0.515 | 0.640 | 0.569 | 0.624 | 0.518 | 0.549 | 0.500 | 0.501
Diabetic retinopathy | 0.669 | 0.605 | 0.670 | 0.667 | 0.615 | 0.713 | 0.767 | 0.749 | 0.762 | 0.671 | 0.743 | 0.660 | 0.739 | 0.497 | 0.743 | 0.763 | 0.634 | 0.670 | 0.783 | 0.730 | 0.678 | 0.504 | 0.667 | 0.508 | 0.634
ILPD | 0.635 | 0.602 | 0.629 | 0.504 | 0.587 | 0.727 | 0.689 | 0.726 | 0.712 | 0.654 | 0.726 | 0.695 | 0.698 | 0.518 | 0.649 | 0.737 | 0.592 | 0.519 | 0.714 | 0.715 | 0.726 | 0.546 | 0.649 | 0.508 | 0.567
Seizure | 0.950 | 0.893 | 0.943 | 0.925 | 0.892 | 0.993 | 0.993 | 0.994 | 0.994 | 0.890 | 0.993 | 0.903 | 0.991 | 0.544 | 0.993 | 0.516 | 0.876 | 0.596 | 0.527 | 0.514 | 0.953 | 0.755 | 0.933 | 0.511 | 0.928
Chronic kidney | 0.970 | 0.946 | 0.953 | 0.955 | 0.965 | 0.996 | 0.994 | 0.998 | 0.997 | 0.965 | 0.997 | 0.996 | 0.996 | 0.659 | 0.995 | 0.995 | 0.974 | 0.709 | 0.996 | 0.985 | 0.993 | 0.920 | 0.960 | 0.525 | 0.953
Cleveland | 0.828 | 0.742 | 0.777 | 0.774 | 0.733 | 0.900 | 0.888 | 0.898 | 0.881 | 0.832 | 0.898 | 0.888 | 0.873 | 0.498 | 0.900 | 0.896 | 0.825 | 0.675 | 0.896 | 0.880 | 0.892 | 0.739 | 0.781 | 0.506 | 0.784
Z-Alizadeh | 0.817 | 0.707 | 0.816 | 0.832 | 0.676 | 0.903 | 0.898 | 0.916 | 0.892 | 0.830 | 0.915 | 0.880 | 0.893 | 0.505 | 0.885 | 0.912 | 0.826 | 0.539 | 0.864 | 0.821 | 0.864 | 0.614 | 0.755 | 0.499 | 0.744
AVERAGE | 0.776 | 0.729 | 0.765 | 0.743 | 0.729 | 0.846 | 0.844 | 0.849 | 0.841 | 0.788 | 0.853 | 0.825 | 0.842 | 0.586 | 0.845 | 0.826 | 0.774 | 0.634 | 0.826 | 0.797 | 0.838 | 0.684 | 0.770 | 0.538 | 0.745

References

  1. Lim, S.; Tucker, C.S.; Kumara, S. An unsupervised machine learning model for discovering latent infectious diseases using social media data. J. Biomed. Inform. 2017, 66, 82–94. [Google Scholar] [CrossRef] [PubMed]
  2. Esfandiari, N.; Babavalian, M.R.; Moghadam, A.M.E.; Tabar, V.K. Knowledge discovery in medicine: Current issue and future trend. Expert Syst. Appl. 2014, 41, 4434–4463. [Google Scholar] [CrossRef]
  3. Abdar, M.; Zomorodi-Moghadam, M.; Zhou, X.; Gururajan, R.; Tao, X.; Barua, P.D.; Gururajan, R. A new nested ensemble technique for automated diagnosis of breast cancer. Pattern Recognit. Lett. 2018, 132, 123–131. [Google Scholar] [CrossRef]
  4. Johnson, A.E.; Pollard, T.J.; Shen, L.; Li-Wei, H.L.; Feng, M.; Ghassemi, M.; Moody, B.; Szolovits, P.; Celi, L.A.; Mark, R.G. MIMIC-III, a freely accessible critical care database. Sci. Data 2016, 3, 160035. [Google Scholar] [CrossRef] [Green Version]
  5. Firdaus, M.A.; Nadia, R.; Tama, B.A. Detecting major disease in public hospital using ensemble techniques. In Proceedings of the 2014 International Symposium on Technology Management and Emerging Technologies, Bandung, Indonesia, 27–29 May 2014; pp. 149–152. [Google Scholar]
  6. Kadi, I.; Idri, A.; Fernandez-Aleman, J. Knowledge discovery in cardiology: A systematic literature review. Int. J. Med. Inform. 2017, 97, 12–32. [Google Scholar] [CrossRef]
  7. Tama, B.A.; Rhee, K.H. In-depth analysis of neural network ensembles for early detection method of diabetes disease. Int. J. Med. Eng. Inform. 2018, 10, 327–341. [Google Scholar] [CrossRef]
  8. Tama, B.A.; Rhee, K.H. Tree-based classifier ensembles for early detection method of diabetes: An exploratory study. Artif. Intell. Rev. 2019, 51, 355–370. [Google Scholar] [CrossRef]
  9. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  10. Hosni, M.; Abnane, I.; Idri, A.; de Gea, J.M.C.; Alemán, J.L.F. Reviewing Ensemble Classification Methods in Breast Cancer. Comput. Methods Programs Biomed. 2019, 177, 89–112. [Google Scholar] [CrossRef]
  11. Idri, A.; Benhar, H.; Fernández-Alemán, J.; Kadi, I. A systematic map of medical data preprocessing in knowledge discovery. Comput. Methods Programs Biomed. 2018, 162, 69–85. [Google Scholar] [CrossRef]
  12. Idrissi, T.E.; Idri, A.; Bakkoury, Z. Systematic map and review of predictive techniques in diabetes self-management. Int. J. Inf. Manag. 2019, 46, 263–277. [Google Scholar] [CrossRef]
  13. Petersen, K.; Feldt, R.; Mujtaba, S.; Mattsson, M. Systematic Mapping Studies in Software Engineering. In Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering, Bari, Italy, 26–27 June 2008; Volume 8, pp. 68–77. [Google Scholar]
  14. Kitchenham, B.A.; Budgen, D.; Brereton, O.P. Using mapping studies as the basis for further research—A participant-observer case study. Inf. Softw. Technol. 2011, 53, 638–651. [Google Scholar] [CrossRef] [Green Version]
  15. Arji, G.; Safdari, R.; Rezaeizadeh, H.; Abbassian, A.; Mokhtaran, M.; Ayati, M.H. A systematic literature review and classification of knowledge discovery in traditional medicine. Comput. Methods Programs Biomed. 2019, 168, 39–57. [Google Scholar] [CrossRef] [PubMed]
  16. Jothi, N.; Husain, W. Data mining in healthcare—A review. Procedia Comput. Sci. 2015, 72, 306–313. [Google Scholar] [CrossRef] [Green Version]
  17. Garciarena, U.; Santana, R. An extensive analysis of the interaction between missing data types, imputation methods, and supervised classifiers. Expert Syst. Appl. 2017, 89, 52–65. [Google Scholar] [CrossRef]
  18. Abdar, M.; Zomorodi-Moghadam, M.; Das, R.; Ting, I.H. Performance analysis of classification algorithms on early detection of liver disease. Expert Syst. Appl. 2017, 67, 239–251. [Google Scholar] [CrossRef]
  19. Jain, D.; Singh, V. Feature selection and classification systems for chronic disease prediction: A review. Egypt. Inform. J. 2018, 19, 179–189. [Google Scholar] [CrossRef]
  20. Moreira, M.W.; Rodrigues, J.J.; Korotaev, V.; Al-Muhtadi, J.; Kumar, N. A comprehensive review on smart decision support systems for health care. IEEE Syst. J. 2019, 13, 3536–3545. [Google Scholar] [CrossRef]
  21. Sohail, M.N.; Jiadong, R.; Uba, M.M.; Irshad, M. A comprehensive looks at data mining techniques contributing to medical data growth: A survey of researcher reviews. In Recent Developments in Intelligent Computing, Communication and Devices; Springer: Berlin/Heidelberg, Germany, 2019; pp. 21–26. [Google Scholar]
  22. Nayar, N.; Ahuja, S.; Jain, S. Swarm intelligence and data mining: A review of literature and applications in healthcare. In Proceedings of the Third International Conference on Advanced Informatics for Computing Research, Shimla, India, 15–16 June 2019; pp. 1–7. [Google Scholar]
  23. Das, R. A comparison of multiple classification methods for diagnosis of Parkinson disease. Expert Syst. Appl. 2010, 37, 1568–1572. [Google Scholar] [CrossRef]
  24. Bashir, S.; Qamar, U.; Khan, F.H.; Naseem, L. HMV: A medical decision support framework using multi-layer classifiers for disease prediction. J. Comput. Sci. 2016, 13, 10–25. [Google Scholar] [CrossRef]
  25. Bashir, S.; Qamar, U.; Khan, F.H. IntelliHealth: A medical decision support application using a novel weighted multi-layer classifier ensemble framework. J. Biomed. Inform. 2016, 59, 185–200. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Asuncion, A.; Newman, D. UCI Machine Learning Repository. 2007. Available online: http://www.ics.uci.edu/mlearn/MLRepository.html (accessed on 16 October 2020).
  27. Heydari, M.; Teimouri, M.; Heshmati, Z.; Alavinia, S.M. Comparison of various classification algorithms in the diagnosis of type 2 diabetes in Iran. Int. J. Diabetes Dev. Ctries. 2016, 36, 167–173. [Google Scholar] [CrossRef]
  28. Alizadehsani, R.; Zangooei, M.H.; Hosseini, M.J.; Habibi, J.; Khosravi, A.; Roshanzamir, M.; Khozeimeh, F.; Sarrafzadegan, N.; Nahavandi, S. Coronary artery disease detection using computational intelligence methods. Knowl. Based Syst. 2016, 109, 187–197. [Google Scholar] [CrossRef]
  29. Aličković, E.; Subasi, A. Breast cancer diagnosis using GA feature selection and Rotation Forest. Neural Comput. Appl. 2017, 28, 753–763. [Google Scholar] [CrossRef]
  30. Zheng, B.; Yoon, S.W.; Lam, S.S. Breast cancer diagnosis based on feature extraction using a hybrid of K-means and support vector machine algorithms. Expert Syst. Appl. 2014, 41, 1476–1482. [Google Scholar] [CrossRef]
  31. Maglogiannis, I.; Zafiropoulos, E.; Anagnostopoulos, I. An intelligent system for automated breast cancer diagnosis and prognosis using SVM based classifiers. Appl. Intell. 2009, 30, 24–36. [Google Scholar] [CrossRef]
  32. Huang, Y.P.; Basanta, H.; Wang, T.H.; Kuo, H.C.; Wu, W.C. A Fuzzy Approach to Determining Critical Factors of Diabetic Retinopathy and Enhancing Data Classification Accuracy. Int. J. Fuzzy Syst. 2019, 21, 1844–1857. [Google Scholar] [CrossRef]
  33. Raza, K. Improving the prediction accuracy of heart disease with ensemble learning and majority voting rule. In U-Healthcare Monitoring Systems; Elsevier: Amsterdam, The Netherlands, 2019; pp. 179–196. [Google Scholar]
  34. Abdar, M.; Książek, W.; Acharya, U.R.; Tan, R.S.; Makarenkov, V.; Pławiak, P. A new machine learning technique for an accurate diagnosis of coronary artery disease. Comput. Methods Programs Biomed. 2019, 179, 104992. [Google Scholar] [CrossRef]
  35. Amin, M.S.; Chiam, Y.K.; Varathan, K.D. Identification of significant features and data mining techniques in predicting heart disease. Telemat. Inform. 2019, 36, 82–93. [Google Scholar] [CrossRef]
  36. Mangat, V.; Vig, R. Novel associative classifier based on dynamic adaptive PSO: Application to determining candidates for thoracic surgery. Expert Syst. Appl. 2014, 41, 8234–8244. [Google Scholar] [CrossRef]
  37. Andrzejak, R.G.; Lehnertz, K.; Mormann, F.; Rieke, C.; David, P.; Elger, C.E. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Phys. Rev. E 2001, 64, 061907. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Polat, H.; Mehr, H.D.; Cetin, A. Diagnosis of chronic kidney disease based on support vector machine by feature selection methods. J. Med. Syst. 2017, 41, 55. [Google Scholar] [CrossRef] [PubMed]
  39. Bischl, B.; Lang, M.; Kotthoff, L.; Schiffner, J.; Richter, J.; Studerus, E.; Casalicchio, G.; Jones, Z.M. mlr: Machine Learning in R. J. Mach. Learn. Res. 2016, 17, 5938–5942. [Google Scholar]
  40. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: An update. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18. [Google Scholar] [CrossRef]
  41. Quinlan, J.R. C4.5: Programs for Machine Learning; Elsevier: Amsterdam, The Netherlands, 1992. [Google Scholar]
  42. Abellán, J.; Moral, S. Building classification trees using the total uncertainty criterion. Int. J. Intell. Syst. 2003, 18, 1215–1225. [Google Scholar] [CrossRef] [Green Version]
  43. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Chapman and Hall/CRC: Boca Raton, FL, USA, 1984. [Google Scholar]
  44. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  45. Adnan, M.N.; Islam, M.Z. Forest PA: Constructing a decision forest by penalizing attributes used in previous trees. Expert Syst. Appl. 2017, 89, 389–403. [Google Scholar] [CrossRef]
  46. Geurts, P.; Ernst, D.; Wehenkel, L. Extremely randomized trees. Mach. Learn. 2006, 63, 3–42. [Google Scholar] [CrossRef] [Green Version]
  47. Rodriguez, J.J.; Kuncheva, L.I.; Alonso, C.J. Rotation forest: A new classifier ensemble method. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1619–1630. [Google Scholar] [CrossRef]
  48. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  49. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM Sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  50. Hothorn, T.; Lausen, B.; Benner, A.; Radespiel-Tröger, M. Bagging survival trees. Stat. Med. 2004, 23, 77–91. [Google Scholar] [CrossRef] [PubMed]
  51. Freund, Y.; Schapire, R.E. A decision-theoretic generalization of online learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef] [Green Version]
  52. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436. [Google Scholar] [CrossRef] [PubMed]
  53. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386. [Google Scholar] [CrossRef] [Green Version]
  54. Bengio, Y.; Lamblin, P.; Popovici, D.; Larochelle, H. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference; Mit Press: Cambridge, MA, USA, 2007; pp. 153–160. [Google Scholar]
  55. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  56. Fan, R.E.; Chang, K.W.; Hsieh, C.J.; Wang, X.R.; Lin, C.J. LIBLINEAR: A library for large linear classification. J. Mach. Learn. Res. 2008, 9, 1871–1874. [Google Scholar]
  57. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST) 2011, 2, 27. [Google Scholar] [CrossRef]
  58. John, G.H.; Langley, P. Estimating continuous distributions in Bayesian classifiers. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1995; pp. 338–345. [Google Scholar]
  59. Williams, C.K.; Barber, D. Bayesian classification with Gaussian processes. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1342–1351. [Google Scholar] [CrossRef] [Green Version]
  60. Nelder, J.A.; Wedderburn, R.W. Generalized linear models. J. R. Stat. Soc. Ser. A (Gen.) 1972, 135, 370–384. [Google Scholar] [CrossRef]
  61. Mika, S.; Ratsch, G.; Weston, J.; Scholkopf, B.; Mullers, K.R. Fisher discriminant analysis with kernels. In Proceedings of the Neural networks for signal processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (cat. no. 98th8468), Madison, WI, USA, 25 August 1999; pp. 41–48. [Google Scholar]
  62. Hastie, T.; Tibshirani, R. Discriminant analysis by Gaussian mixtures. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 155–176. [Google Scholar] [CrossRef]
  63. Ripley, B.D.; Hjort, N. Pattern Recognition and Neural Networks; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  64. Cohen, W.W. Fast effective rule induction. In Machine Learning Proceedings; Elsevier: Amsterdam, The Netherlands, 1995; pp. 115–123. [Google Scholar]
  65. Frank, E.; Witten, I.H. Generating accurate rule sets without global optimization. In Proceedings of the Fifteenth International Conference on Machine Learning (ICML), Morgan Kaufmann, Madison, WI, USA, 24–27 July 1998; pp. 144–151. [Google Scholar]
  66. Holte, R.C. Very simple classification rules perform well on most commonly used datasets. Mach. Learn. 1993, 11, 63–90. [Google Scholar] [CrossRef]
  67. Bischl, B.; Mersmann, O.; Trautmann, H.; Weihs, C. Resampling methods for meta-model validation with recommendations for evolutionary computation. Evol. Comput. 2012, 20, 249–275. [Google Scholar] [CrossRef] [PubMed]
  68. García, S.; Fernández, A.; Luengo, J.; Herrera, F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064. [Google Scholar] [CrossRef]
  69. Friedman, M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92. [Google Scholar] [CrossRef]
  70. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  71. Finner, H. On a monotonicity problem in step-down multiple test procedures. J. Am. Stat. Assoc. 1993, 88, 920–923. [Google Scholar] [CrossRef]
  72. Nemenyi, P. Distribution-free multiple comparisons. Biometrics 1962, 18, 263. [Google Scholar]
  73. Mogensen, U.B.; Ishwaran, H.; Gerds, T.A. Evaluating random forests for survival analysis using prediction error curves. J. Stat. Softw. 2012, 50, 1. [Google Scholar] [CrossRef]
  74. Khoshgoftaar, T.M.; Golawala, M.; Van Hulse, J. An empirical study of learning from imbalanced data using random forest. In Proceedings of the 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007), Patras, Greece, 29–31 October 2007; Volume 2, pp. 310–317. [Google Scholar]
Figure 1. Machine learning is an integral part of the clinical decision support system.
Figure 2. A contingency table for a binary classification problem.
Figure 3. AUC performance of each classifier w.r.t. the mean of 10cv.
Figure 4. AUC performance of each classifier w.r.t. the mean of 10ho.
Figure 5. AUC performance of each classifier w.r.t. the mean of 5 × 2cv.
Figure 6. Average Friedman rank of all algorithms for each resampling technique.
Figure 7. Critical difference plot of selected classifiers in terms of 10cv.
Figure 8. Critical difference plot of selected classifiers in terms of 10ho.
Figure 9. Critical difference plot of selected classifiers in terms of 5 × 2cv.
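To make the ranking analysis behind Figures 6–9 concrete, the following minimal Python sketch computes average Friedman ranks and the critical difference of Demšar [70], CD = q_alpha * sqrt(k(k + 1)/(6N)), from a matrix of AUC scores. The random AUC values, the q_alpha constant, and all identifiers are illustrative placeholders, not the study's actual numbers.

import numpy as np
from scipy import stats

# Illustrative AUC matrix: 14 datasets (rows) x 25 classifiers (columns).
rng = np.random.default_rng(0)
auc = rng.uniform(0.60, 0.95, size=(14, 25))

# Friedman test across classifiers (one sample per classifier).
stat, p_value = stats.friedmanchisquare(*(auc[:, j] for j in range(auc.shape[1])))

# Average Friedman rank per classifier (rank 1 = highest AUC on a dataset).
ranks = stats.rankdata(-auc, axis=1)
avg_rank = ranks.mean(axis=0)

# Critical difference for a Nemenyi-style plot: CD = q_alpha * sqrt(k(k+1)/(6N)).
k, n_datasets = auc.shape[1], auc.shape[0]
q_alpha = 3.66  # placeholder; the Studentized-range constant depends on k and alpha
cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * n_datasets))
print(f"Friedman p = {p_value:.4f}, CD = {cd:.2f}")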
Table 1. Summary of the 14 datasets used in this benchmark study.

ID | Dataset | Disease | #Instances | #Input Variables | #Class Labels | %Majority Class | Publication
1 | Breast cancer (diagnostic) | Breast cancer | 569 | 31 | 2 | 62.70 | Aličković and Subasi [29]
2 | Breast cancer (original) | Breast cancer | 699 | 10 | 2 | 65.50 | Zheng et al. [30]
3 | Breast cancer (prognostic) | Breast cancer | 198 | 34 | 2 | 76.26 | Maglogiannis et al. [31]
4 | Pima Indian | Diabetes | 768 | 8 | 2 | 65.10 | Tama and Rhee [8]
5 | Tabriz Iran | Diabetes | 2536 | 13 | 3 | 64.87 | Heydari et al. [27]
6 | RSMH | Diabetes | 435 | 11 | 2 | 79.31 | Tama and Rhee [8]
7 | Diabetic Retinopathy | Diabetes | 1151 | 18 | 2 | 53.08 | Huang et al. [32]
8 | Statlog | Heart disease | 261 | 13 | 2 | 56.32 | Raza [33]
9 | Z-Alizadeh Sani | Heart disease | 303 | 54 | 4 | 86.14 | Abdar et al. [34]
10 | Cleveland | Heart disease | 303 | 13 | 2 | 54.13 | Amin et al. [35]
11 | Thoracic Surgery | Lung cancer | 470 | 16 | 2 | 85.11 | Mangat and Vig [36]
12 | Epileptic Seizure Recognition | Seizure disease | 11,500 | 178 | 5 | 80.0 | Andrzejak et al. [37]
13 | ILPD | Liver disease | 583 | 9 | 2 | 71.18 | Abdar et al. [3]
14 | Chronic Kidney | Chronic kidney | 400 | 24 | 2 | 62.5 | Polat et al. [38]
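To illustrate the three resampling techniques compared throughout (10cv, 10ho, and 5 × 2cv), the scikit-learn sketch below evaluates one classifier under each scheme. It assumes that 10ho denotes ten repetitions of a random stratified hold-out split; the 30% test fraction, the bundled breast-cancer data standing in for the datasets of Table 1, and the random forest settings are demonstration-only assumptions.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (RepeatedStratifiedKFold, StratifiedKFold,
                                     StratifiedShuffleSplit, cross_val_score)

X, y = load_breast_cancer(return_X_y=True)  # stand-in for a dataset from Table 1
clf = RandomForestClassifier(n_estimators=100, random_state=0)

resamplers = {
    "10cv": StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    "10ho": StratifiedShuffleSplit(n_splits=10, test_size=0.3, random_state=0),  # assumed hold-out scheme
    "5x2cv": RepeatedStratifiedKFold(n_splits=2, n_repeats=5, random_state=0),
}
for name, cv in resamplers.items():
    scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f} (sd = {scores.std():.3f})")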
Table 2. Results of the post-hoc test using the Finner correction w.r.t. 10cv (values below 0.01, shown in bold in the original, indicate significance at p < 0.01).

Classifier | C50 | XT | CDT | CART | RT | FPA | RoF | RF | GBM | XGB | CF | AB | DL | MLP | GP | GLM | k-NN | SVM | LDA | MDA | NB | 1-R | PART | SAEDNN | RIP
C50 | n/a | 0.363 | 0.943 | 0.512 | 0.438 | 0.002 | 0.002 | 0.002 | 0.002 | 0.594 | 0.001 | 0.123 | 0.027 | 0.146 | 0.006 | 0.001 | 0.784 | 0.149 | 0.000 | 0.279 | 0.013 | 0.047 | 0.684 | 0.035 | 0.581
XT | 0.363 | n/a | 0.330 | 0.791 | 0.890 | 0.000 | 0.000 | 0.000 | 0.000 | 0.137 | 0.000 | 0.009 | 0.001 | 0.617 | 0.000 | 0.000 | 0.239 | 0.621 | 0.000 | 0.034 | 0.000 | 0.336 | 0.610 | 0.283 | 0.713
CDT | 0.943 | 0.330 | n/a | 0.471 | 0.402 | 0.003 | 0.002 | 0.003 | 0.003 | 0.637 | 0.001 | 0.140 | 0.032 | 0.129 | 0.008 | 0.001 | 0.834 | 0.131 | 0.001 | 0.311 | 0.016 | 0.040 | 0.642 | 0.029 | 0.538
CART | 0.512 | 0.791 | 0.471 | n/a | 0.898 | 0.000 | 0.000 | 0.000 | 0.000 | 0.225 | 0.000 | 0.021 | 0.003 | 0.459 | 0.001 | 0.000 | 0.357 | 0.465 | 0.000 | 0.068 | 0.001 | 0.221 | 0.775 | 0.180 | 0.907
RT | 0.438 | 0.890 | 0.402 | 0.898 | n/a | 0.000 | 0.000 | 0.000 | 0.000 | 0.180 | 0.000 | 0.014 | 0.002 | 0.532 | 0.000 | 0.000 | 0.299 | 0.538 | 0.000 | 0.049 | 0.001 | 0.275 | 0.692 | 0.225 | 0.809
FPA | 0.002 | 0.000 | 0.003 | 0.000 | 0.000 | n/a | 0.934 | 0.971 | 0.971 | 0.016 | 0.677 | 0.194 | 0.489 | 0.000 | 0.784 | 0.698 | 0.006 | 0.000 | 0.642 | 0.077 | 0.637 | 0.000 | 0.001 | 0.000 | 0.000
RoF | 0.002 | 0.000 | 0.002 | 0.000 | 0.000 | 0.934 | n/a | 0.962 | 0.962 | 0.013 | 0.727 | 0.168 | 0.444 | 0.000 | 0.727 | 0.752 | 0.005 | 0.000 | 0.692 | 0.064 | 0.588 | 0.000 | 0.000 | 0.000 | 0.000
RF | 0.002 | 0.000 | 0.003 | 0.000 | 0.000 | 0.971 | 0.962 | n/a | 1.000 | 0.015 | 0.698 | 0.182 | 0.471 | 0.000 | 0.760 | 0.721 | 0.006 | 0.000 | 0.663 | 0.071 | 0.617 | 0.000 | 0.000 | 0.000 | 0.000
GBM | 0.002 | 0.000 | 0.003 | 0.000 | 0.000 | 0.971 | 0.962 | 1.000 | n/a | 0.015 | 0.698 | 0.182 | 0.471 | 0.000 | 0.760 | 0.721 | 0.006 | 0.000 | 0.663 | 0.071 | 0.617 | 0.000 | 0.000 | 0.000 | 0.000
XGB | 0.594 | 0.137 | 0.637 | 0.225 | 0.180 | 0.016 | 0.013 | 0.015 | 0.015 | n/a | 0.004 | 0.336 | 0.111 | 0.039 | 0.035 | 0.005 | 0.767 | 0.040 | 0.003 | 0.588 | 0.064 | 0.009 | 0.353 | 0.006 | 0.275
CF | 0.001 | 0.000 | 0.001 | 0.000 | 0.000 | 0.677 | 0.727 | 0.698 | 0.698 | 0.004 | n/a | 0.079 | 0.270 | 0.000 | 0.512 | 0.971 | 0.001 | 0.000 | 0.952 | 0.025 | 0.383 | 0.000 | 0.000 | 0.000 | 0.000
AB | 0.123 | 0.009 | 0.140 | 0.021 | 0.014 | 0.194 | 0.168 | 0.182 | 0.182 | 0.336 | 0.079 | n/a | 0.569 | 0.001 | 0.311 | 0.085 | 0.208 | 0.002 | 0.069 | 0.669 | 0.424 | 0.000 | 0.045 | 0.000 | 0.029
DL | 0.027 | 0.001 | 0.032 | 0.003 | 0.002 | 0.489 | 0.444 | 0.471 | 0.471 | 0.111 | 0.270 | 0.569 | n/a | 0.000 | 0.655 | 0.283 | 0.055 | 0.000 | 0.243 | 0.318 | 0.809 | 0.000 | 0.007 | 0.000 | 0.004
MLP | 0.146 | 0.617 | 0.129 | 0.459 | 0.532 | 0.000 | 0.000 | 0.000 | 0.000 | 0.039 | 0.000 | 0.001 | 0.000 | n/a | 0.000 | 0.000 | 0.081 | 0.990 | 0.000 | 0.007 | 0.000 | 0.642 | 0.311 | 0.574 | 0.396
GP | 0.006 | 0.000 | 0.008 | 0.001 | 0.000 | 0.784 | 0.727 | 0.760 | 0.760 | 0.035 | 0.512 | 0.311 | 0.655 | 0.000 | n/a | 0.532 | 0.015 | 0.000 | 0.475 | 0.140 | 0.816 | 0.000 | 0.001 | 0.000 | 0.001
GLM | 0.001 | 0.000 | 0.001 | 0.000 | 0.000 | 0.698 | 0.752 | 0.721 | 0.721 | 0.005 | 0.971 | 0.085 | 0.283 | 0.000 | 0.532 | n/a | 0.002 | 0.000 | 0.925 | 0.028 | 0.402 | 0.000 | 0.000 | 0.000 | 0.000
k-NN | 0.784 | 0.239 | 0.834 | 0.357 | 0.299 | 0.006 | 0.005 | 0.006 | 0.006 | 0.767 | 0.001 | 0.208 | 0.055 | 0.081 | 0.015 | 0.002 | n/a | 0.083 | 0.001 | 0.412 | 0.029 | 0.023 | 0.517 | 0.016 | 0.418
SVM | 0.149 | 0.621 | 0.131 | 0.465 | 0.538 | 0.000 | 0.000 | 0.000 | 0.000 | 0.040 | 0.000 | 0.002 | 0.000 | 0.990 | 0.000 | 0.000 | 0.083 | n/a | 0.000 | 0.007 | 0.000 | 0.637 | 0.313 | 0.569 | 0.402
LDA | 0.000 | 0.000 | 0.001 | 0.000 | 0.000 | 0.642 | 0.692 | 0.663 | 0.663 | 0.003 | 0.952 | 0.069 | 0.243 | 0.000 | 0.475 | 0.925 | 0.001 | 0.000 | n/a | 0.021 | 0.353 | 0.000 | 0.000 | 0.000 | 0.000
MDA | 0.279 | 0.034 | 0.311 | 0.068 | 0.049 | 0.077 | 0.064 | 0.071 | 0.071 | 0.588 | 0.025 | 0.669 | 0.318 | 0.007 | 0.140 | 0.028 | 0.412 | 0.007 | 0.021 | n/a | 0.217 | 0.001 | 0.129 | 0.001 | 0.087
NB | 0.013 | 0.000 | 0.016 | 0.001 | 0.001 | 0.637 | 0.588 | 0.617 | 0.617 | 0.064 | 0.383 | 0.424 | 0.809 | 0.000 | 0.816 | 0.402 | 0.029 | 0.000 | 0.353 | 0.217 | n/a | 0.000 | 0.003 | 0.000 | 0.002
1-R | 0.047 | 0.336 | 0.040 | 0.221 | 0.275 | 0.000 | 0.000 | 0.000 | 0.000 | 0.009 | 0.000 | 0.000 | 0.000 | 0.642 | 0.000 | 0.000 | 0.023 | 0.637 | 0.000 | 0.001 | 0.000 | n/a | 0.129 | 0.907 | 0.180
PART | 0.684 | 0.610 | 0.642 | 0.775 | 0.692 | 0.001 | 0.000 | 0.000 | 0.000 | 0.353 | 0.000 | 0.045 | 0.007 | 0.311 | 0.001 | 0.000 | 0.517 | 0.313 | 0.000 | 0.129 | 0.003 | 0.129 | n/a | 0.100 | 0.862
SAEDNN | 0.035 | 0.283 | 0.029 | 0.180 | 0.225 | 0.000 | 0.000 | 0.000 | 0.000 | 0.006 | 0.000 | 0.000 | 0.000 | 0.574 | 0.000 | 0.000 | 0.016 | 0.569 | 0.000 | 0.001 | 0.000 | 0.907 | 0.100 | n/a | 0.143
RIP | 0.581 | 0.713 | 0.538 | 0.907 | 0.809 | 0.000 | 0.000 | 0.000 | 0.000 | 0.275 | 0.000 | 0.029 | 0.004 | 0.396 | 0.001 | 0.000 | 0.418 | 0.402 | 0.000 | 0.087 | 0.002 | 0.180 | 0.862 | 0.143 | n/a
Table 3. Results of the post-hoc test using the Finner correction w.r.t. 10ho (values below 0.01, shown in bold in the original, indicate significance at p < 0.01).

Classifier | C50 | XT | CDT | CART | RT | FPA | RoF | RF | GBM | XGB | CF | AB | DL | MLP | GP | GLM | k-NN | SVM | LDA | MDA | NB | 1-R | PART | SAEDNN | RIP
C50 | n/a | 0.190 | 0.628 | 0.363 | 0.240 | 0.002 | 0.006 | 0.002 | 0.009 | 0.760 | 0.001 | 0.108 | 0.002 | 0.144 | 0.002 | 0.002 | 0.972 | 0.113 | 0.002 | 0.165 | 0.018 | 0.025 | 0.955 | 0.019 | 0.312
XT | 0.190 | n/a | 0.441 | 0.715 | 0.900 | 0.000 | 0.000 | 0.000 | 0.000 | 0.101 | 0.000 | 0.002 | 0.000 | 0.892 | 0.000 | 0.000 | 0.180 | 0.798 | 0.000 | 0.004 | 0.000 | 0.435 | 0.209 | 0.383 | 0.783
CDT | 0.628 | 0.441 | n/a | 0.692 | 0.528 | 0.000 | 0.001 | 0.000 | 0.002 | 0.421 | 0.000 | 0.026 | 0.000 | 0.359 | 0.000 | 0.000 | 0.608 | 0.297 | 0.000 | 0.048 | 0.003 | 0.106 | 0.664 | 0.086 | 0.628
CART | 0.363 | 0.715 | 0.692 | n/a | 0.806 | 0.000 | 0.000 | 0.000 | 0.000 | 0.212 | 0.000 | 0.007 | 0.000 | 0.620 | 0.000 | 0.000 | 0.348 | 0.540 | 0.000 | 0.014 | 0.001 | 0.236 | 0.396 | 0.202 | 0.924
RT | 0.240 | 0.900 | 0.528 | 0.806 | n/a | 0.000 | 0.000 | 0.000 | 0.000 | 0.132 | 0.000 | 0.003 | 0.000 | 0.790 | 0.000 | 0.000 | 0.226 | 0.708 | 0.000 | 0.006 | 0.000 | 0.359 | 0.265 | 0.312 | 0.885
FPA | 0.002 | 0.000 | 0.000 | 0.000 | 0.000 | n/a | 0.776 | 0.990 | 0.692 | 0.007 | 0.885 | 0.209 | 0.932 | 0.000 | 0.955 | 0.939 | 0.002 | 0.000 | 0.963 | 0.141 | 0.551 | 0.000 | 0.002 | 0.000 | 0.000
RoF | 0.006 | 0.000 | 0.001 | 0.000 | 0.000 | 0.776 | n/a | 0.768 | 0.908 | 0.018 | 0.670 | 0.348 | 0.715 | 0.000 | 0.737 | 0.721 | 0.007 | 0.000 | 0.744 | 0.243 | 0.752 | 0.000 | 0.005 | 0.000 | 0.000
RF | 0.002 | 0.000 | 0.000 | 0.000 | 0.000 | 0.990 | 0.768 | n/a | 0.686 | 0.006 | 0.892 | 0.205 | 0.939 | 0.000 | 0.963 | 0.946 | 0.002 | 0.000 | 0.972 | 0.138 | 0.547 | 0.000 | 0.002 | 0.000 | 0.000
GBM | 0.009 | 0.000 | 0.002 | 0.000 | 0.000 | 0.692 | 0.908 | 0.686 | n/a | 0.025 | 0.592 | 0.417 | 0.633 | 0.000 | 0.657 | 0.641 | 0.010 | 0.000 | 0.664 | 0.301 | 0.842 | 0.000 | 0.007 | 0.000 | 0.000
XGB | 0.760 | 0.101 | 0.421 | 0.212 | 0.132 | 0.007 | 0.018 | 0.006 | 0.025 | n/a | 0.004 | 0.202 | 0.005 | 0.070 | 0.005 | 0.005 | 0.783 | 0.050 | 0.006 | 0.292 | 0.047 | 0.009 | 0.721 | 0.007 | 0.180
CF | 0.001 | 0.000 | 0.000 | 0.000 | 0.000 | 0.885 | 0.670 | 0.892 | 0.592 | 0.004 | n/a | 0.158 | 0.946 | 0.000 | 0.924 | 0.939 | 0.001 | 0.000 | 0.915 | 0.104 | 0.455 | 0.000 | 0.001 | 0.000 | 0.000
AB | 0.108 | 0.002 | 0.026 | 0.007 | 0.003 | 0.209 | 0.348 | 0.205 | 0.417 | 0.202 | 0.158 | n/a | 0.180 | 0.001 | 0.190 | 0.182 | 0.116 | 0.001 | 0.193 | 0.842 | 0.547 | 0.000 | 0.096 | 0.000 | 0.005
DL | 0.002 | 0.000 | 0.000 | 0.000 | 0.000 | 0.932 | 0.715 | 0.939 | 0.633 | 0.005 | 0.946 | 0.180 | n/a | 0.000 | 0.972 | 0.990 | 0.002 | 0.000 | 0.963 | 0.119 | 0.500 | 0.000 | 0.001 | 0.000 | 0.000
MLP | 0.144 | 0.892 | 0.359 | 0.620 | 0.790 | 0.000 | 0.000 | 0.000 | 0.000 | 0.070 | 0.000 | 0.001 | 0.000 | n/a | 0.000 | 0.000 | 0.135 | 0.908 | 0.000 | 0.002 | 0.000 | 0.528 | 0.162 | 0.469 | 0.686
GP | 0.002 | 0.000 | 0.000 | 0.000 | 0.000 | 0.955 | 0.737 | 0.963 | 0.657 | 0.005 | 0.924 | 0.190 | 0.972 | 0.000 | n/a | 0.980 | 0.002 | 0.000 | 0.990 | 0.127 | 0.521 | 0.000 | 0.002 | 0.000 | 0.000
GLM | 0.002 | 0.000 | 0.000 | 0.000 | 0.000 | 0.939 | 0.721 | 0.946 | 0.641 | 0.005 | 0.939 | 0.182 | 0.990 | 0.000 | 0.980 | n/a | 0.002 | 0.000 | 0.972 | 0.121 | 0.507 | 0.000 | 0.001 | 0.000 | 0.000
k-NN | 0.972 | 0.180 | 0.608 | 0.348 | 0.226 | 0.002 | 0.007 | 0.002 | 0.010 | 0.783 | 0.001 | 0.116 | 0.002 | 0.135 | 0.002 | 0.002 | n/a | 0.106 | 0.002 | 0.177 | 0.020 | 0.023 | 0.932 | 0.018 | 0.297
SVM | 0.113 | 0.798 | 0.297 | 0.540 | 0.708 | 0.000 | 0.000 | 0.000 | 0.000 | 0.050 | 0.000 | 0.001 | 0.000 | 0.908 | 0.000 | 0.000 | 0.106 | n/a | 0.000 | 0.002 | 0.000 | 0.608 | 0.127 | 0.547 | 0.608
LDA | 0.002 | 0.000 | 0.000 | 0.000 | 0.000 | 0.963 | 0.744 | 0.972 | 0.664 | 0.006 | 0.915 | 0.193 | 0.963 | 0.000 | 0.990 | 0.972 | 0.002 | 0.000 | n/a | 0.129 | 0.528 | 0.000 | 0.002 | 0.000 | 0.000
MDA | 0.165 | 0.004 | 0.048 | 0.014 | 0.006 | 0.141 | 0.243 | 0.138 | 0.301 | 0.292 | 0.104 | 0.842 | 0.119 | 0.002 | 0.127 | 0.121 | 0.177 | 0.002 | 0.129 | n/a | 0.417 | 0.000 | 0.147 | 0.000 | 0.010
NB | 0.018 | 0.000 | 0.003 | 0.001 | 0.000 | 0.551 | 0.752 | 0.547 | 0.842 | 0.047 | 0.455 | 0.547 | 0.500 | 0.000 | 0.521 | 0.507 | 0.020 | 0.000 | 0.528 | 0.417 | n/a | 0.000 | 0.015 | 0.000 | 0.000
1-R | 0.025 | 0.435 | 0.106 | 0.236 | 0.359 | 0.000 | 0.000 | 0.000 | 0.000 | 0.009 | 0.000 | 0.000 | 0.000 | 0.528 | 0.000 | 0.000 | 0.023 | 0.608 | 0.000 | 0.000 | 0.000 | n/a | 0.029 | 0.932 | 0.281
PART | 0.955 | 0.209 | 0.664 | 0.396 | 0.265 | 0.002 | 0.005 | 0.002 | 0.007 | 0.721 | 0.001 | 0.096 | 0.001 | 0.162 | 0.002 | 0.001 | 0.932 | 0.127 | 0.002 | 0.147 | 0.015 | 0.029 | n/a | 0.023 | 0.342
SAEDNN | 0.019 | 0.383 | 0.086 | 0.202 | 0.312 | 0.000 | 0.000 | 0.000 | 0.000 | 0.007 | 0.000 | 0.000 | 0.000 | 0.469 | 0.000 | 0.000 | 0.018 | 0.547 | 0.000 | 0.000 | 0.000 | 0.932 | 0.023 | n/a | 0.240
RIP | 0.312 | 0.783 | 0.628 | 0.924 | 0.885 | 0.000 | 0.000 | 0.000 | 0.000 | 0.180 | 0.000 | 0.005 | 0.000 | 0.686 | 0.000 | 0.000 | 0.297 | 0.608 | 0.000 | 0.010 | 0.000 | 0.281 | 0.342 | 0.240 | n/a
Table 4. Results of the post-hoc test using the Finner correction w.r.t. 5 × 2cv (values below 0.01, shown in bold in the original, indicate significance at p < 0.01).

Classifier | C50 | XT | CDT | CART | RT | FPA | RoF | RF | GBM | XGB | CF | AB | DL | MLP | GP | GLM | k-NN | SVM | LDA | MDA | NB | 1-R | PART | SAEDNN | RIP
C50 | n/a | 0.301 | 0.843 | 0.330 | 0.297 | 0.001 | 0.006 | 0.000 | 0.006 | 0.485 | 0.000 | 0.070 | 0.005 | 0.128 | 0.001 | 0.000 | 0.884 | 0.127 | 0.001 | 0.109 | 0.006 | 0.068 | 0.991 | 0.020 | 0.381
XT | 0.301 | n/a | 0.405 | 0.946 | 0.991 | 0.000 | 0.000 | 0.000 | 0.000 | 0.069 | 0.000 | 0.002 | 0.000 | 0.669 | 0.000 | 0.000 | 0.234 | 0.662 | 0.000 | 0.005 | 0.000 | 0.478 | 0.297 | 0.266 | 0.876
CDT | 0.843 | 0.405 | n/a | 0.437 | 0.400 | 0.000 | 0.003 | 0.000 | 0.003 | 0.381 | 0.000 | 0.042 | 0.003 | 0.197 | 0.000 | 0.000 | 0.736 | 0.193 | 0.000 | 0.069 | 0.003 | 0.106 | 0.835 | 0.037 | 0.485
CART | 0.330 | 0.946 | 0.437 | n/a | 0.937 | 0.000 | 0.000 | 0.000 | 0.000 | 0.080 | 0.000 | 0.003 | 0.000 | 0.624 | 0.000 | 0.000 | 0.266 | 0.621 | 0.000 | 0.006 | 0.000 | 0.442 | 0.328 | 0.234 | 0.929
RT | 0.297 | 0.991 | 0.400 | 0.937 | n/a | 0.000 | 0.000 | 0.000 | 0.000 | 0.068 | 0.000 | 0.002 | 0.000 | 0.675 | 0.000 | 0.000 | 0.230 | 0.669 | 0.000 | 0.005 | 0.000 | 0.485 | 0.292 | 0.269 | 0.867
FPA | 0.001 | 0.000 | 0.000 | 0.000 | 0.000 | n/a | 0.621 | 0.811 | 0.624 | 0.012 | 0.736 | 0.190 | 0.645 | 0.000 | 0.946 | 0.803 | 0.002 | 0.000 | 0.991 | 0.127 | 0.624 | 0.000 | 0.001 | 0.000 | 0.000
RoF | 0.006 | 0.000 | 0.003 | 0.000 | 0.000 | 0.621 | n/a | 0.470 | 0.991 | 0.055 | 0.415 | 0.431 | 0.963 | 0.000 | 0.578 | 0.463 | 0.009 | 0.000 | 0.614 | 0.330 | 0.991 | 0.000 | 0.006 | 0.000 | 0.000
RF | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.811 | 0.470 | n/a | 0.473 | 0.005 | 0.920 | 0.112 | 0.486 | 0.000 | 0.859 | 0.991 | 0.001 | 0.000 | 0.818 | 0.073 | 0.473 | 0.000 | 0.000 | 0.000 | 0.000
GBM | 0.006 | 0.000 | 0.003 | 0.000 | 0.000 | 0.624 | 0.991 | 0.473 | n/a | 0.054 | 0.421 | 0.426 | 0.972 | 0.000 | 0.584 | 0.470 | 0.009 | 0.000 | 0.621 | 0.328 | 1.000 | 0.000 | 0.006 | 0.000 | 0.000
XGB | 0.485 | 0.069 | 0.381 | 0.080 | 0.068 | 0.012 | 0.055 | 0.005 | 0.054 | n/a | 0.004 | 0.306 | 0.049 | 0.020 | 0.009 | 0.005 | 0.578 | 0.019 | 0.011 | 0.405 | 0.054 | 0.008 | 0.486 | 0.002 | 0.099
CF | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.736 | 0.415 | 0.920 | 0.421 | 0.004 | n/a | 0.092 | 0.437 | 0.000 | 0.785 | 0.929 | 0.000 | 0.000 | 0.743 | 0.058 | 0.421 | 0.000 | 0.000 | 0.000 | 0.000
AB | 0.070 | 0.002 | 0.042 | 0.003 | 0.002 | 0.190 | 0.431 | 0.112 | 0.426 | 0.306 | 0.092 | n/a | 0.409 | 0.000 | 0.165 | 0.111 | 0.099 | 0.000 | 0.186 | 0.852 | 0.426 | 0.000 | 0.072 | 0.000 | 0.004
DL | 0.005 | 0.000 | 0.003 | 0.000 | 0.000 | 0.645 | 0.963 | 0.486 | 0.972 | 0.049 | 0.437 | 0.409 | n/a | 0.000 | 0.607 | 0.485 | 0.008 | 0.000 | 0.638 | 0.309 | 0.972 | 0.000 | 0.005 | 0.000 | 0.000
MLP | 0.128 | 0.669 | 0.197 | 0.624 | 0.675 | 0.000 | 0.000 | 0.000 | 0.000 | 0.020 | 0.000 | 0.000 | 0.000 | n/a | 0.000 | 0.000 | 0.096 | 0.991 | 0.000 | 0.001 | 0.000 | 0.769 | 0.127 | 0.485 | 0.571
GP | 0.001 | 0.000 | 0.000 | 0.000 | 0.000 | 0.946 | 0.578 | 0.859 | 0.584 | 0.009 | 0.785 | 0.165 | 0.607 | 0.000 | n/a | 0.852 | 0.001 | 0.000 | 0.954 | 0.111 | 0.584 | 0.000 | 0.001 | 0.000 | 0.000
GLM | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.803 | 0.463 | 0.991 | 0.470 | 0.005 | 0.929 | 0.111 | 0.485 | 0.000 | 0.852 | n/a | 0.001 | 0.000 | 0.811 | 0.072 | 0.470 | 0.000 | 0.000 | 0.000 | 0.000
k-NN | 0.884 | 0.234 | 0.736 | 0.266 | 0.230 | 0.002 | 0.009 | 0.001 | 0.009 | 0.578 | 0.000 | 0.099 | 0.008 | 0.096 | 0.001 | 0.001 | n/a | 0.094 | 0.001 | 0.146 | 0.009 | 0.047 | 0.893 | 0.012 | 0.306
SVM | 0.127 | 0.662 | 0.193 | 0.621 | 0.669 | 0.000 | 0.000 | 0.000 | 0.000 | 0.019 | 0.000 | 0.000 | 0.000 | 0.991 | 0.000 | 0.000 | 0.094 | n/a | 0.000 | 0.001 | 0.000 | 0.777 | 0.125 | 0.486 | 0.564
LDA | 0.001 | 0.000 | 0.000 | 0.000 | 0.000 | 0.991 | 0.614 | 0.818 | 0.621 | 0.011 | 0.743 | 0.186 | 0.638 | 0.000 | 0.954 | 0.811 | 0.001 | 0.000 | n/a | 0.125 | 0.621 | 0.000 | 0.001 | 0.000 | 0.000
MDA | 0.109 | 0.005 | 0.069 | 0.006 | 0.005 | 0.127 | 0.330 | 0.073 | 0.328 | 0.405 | 0.058 | 0.852 | 0.309 | 0.001 | 0.111 | 0.072 | 0.146 | 0.001 | 0.125 | n/a | 0.328 | 0.000 | 0.111 | 0.000 | 0.008
NB | 0.006 | 0.000 | 0.003 | 0.000 | 0.000 | 0.624 | 0.991 | 0.473 | 1.000 | 0.054 | 0.421 | 0.426 | 0.972 | 0.000 | 0.584 | 0.470 | 0.009 | 0.000 | 0.621 | 0.328 | n/a | 0.000 | 0.006 | 0.000 | 0.000
1-R | 0.068 | 0.478 | 0.106 | 0.442 | 0.485 | 0.000 | 0.000 | 0.000 | 0.000 | 0.008 | 0.000 | 0.000 | 0.000 | 0.769 | 0.000 | 0.000 | 0.047 | 0.777 | 0.000 | 0.000 | 0.000 | n/a | 0.066 | 0.675 | 0.400
PART | 0.991 | 0.297 | 0.835 | 0.328 | 0.292 | 0.001 | 0.006 | 0.000 | 0.006 | 0.486 | 0.000 | 0.072 | 0.005 | 0.127 | 0.001 | 0.000 | 0.893 | 0.125 | 0.001 | 0.111 | 0.006 | 0.066 | n/a | 0.019 | 0.376
SAEDNN | 0.020 | 0.266 | 0.037 | 0.234 | 0.269 | 0.000 | 0.000 | 0.000 | 0.000 | 0.002 | 0.000 | 0.000 | 0.000 | 0.485 | 0.000 | 0.000 | 0.012 | 0.486 | 0.000 | 0.000 | 0.000 | 0.675 | 0.019 | n/a | 0.197
RIP | 0.381 | 0.876 | 0.485 | 0.929 | 0.867 | 0.000 | 0.000 | 0.000 | 0.000 | 0.099 | 0.000 | 0.004 | 0.000 | 0.571 | 0.000 | 0.000 | 0.306 | 0.564 | 0.000 | 0.008 | 0.000 | 0.400 | 0.376 | 0.197 | n/a
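The p-values in Tables 2–4 are adjusted with Finner's step-down procedure [71], whose adjusted values take the form p_adj(i) = min{1, max_{j <= i} [1 - (1 - p(j))^(m/j)]} over the m ordered raw p-values (cf. García et al. [68]). A minimal sketch of this adjustment, applied to a toy list of raw p-values, could look as follows; the example inputs are illustrative only.

import numpy as np

def finner_adjust(pvals):
    """Finner step-down adjustment of m pairwise p-values:
    adj_p(i) = max_{j<=i} [1 - (1 - p(j))**(m/j)], with monotonicity enforced."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                      # ascending raw p-values
    adj = 1.0 - (1.0 - p[order]) ** (m / np.arange(1, m + 1))
    adj = np.maximum.accumulate(adj)           # step-down monotonicity
    adj = np.clip(adj, 0.0, 1.0)
    out = np.empty(m)
    out[order] = adj                           # restore original ordering
    return out

# Toy usage: raw p-values from, e.g., pairwise post-hoc comparisons.
raw = [0.001, 0.008, 0.039, 0.041, 0.20]
print(finner_adjust(raw).round(3))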
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
