1 Introduction

Owing to rapid technological advances, enormous amounts of data are stored in databases, and it has become difficult to support industrial-intelligence decisions by analyzing these data directly. Data mining is the process of acquiring information and knowledge from such huge volumes of data [1]. Feature selection (FS) is an important preprocessing step in data mining and machine learning [2]. Its purpose is to eliminate redundant and irrelevant features so as to compress the original data into a low-dimensional space, reduce computational complexity, and increase classification accuracy [3,4,5]. In essence, FS selects the optimal feature subset from the original dataset; in other words, it can be regarded as a combinatorial optimization task [6].

FS methods explicitly or implicitly combine a subset search mechanism with a subset evaluation mechanism, and they can be divided into three categories: filter, wrapper, and embedded [7]. The filter method performs FS on the dataset based on correlation statistics and then trains the learning model; there is no interaction between the FS process and the training of the learning model [8]. The wrapper method evaluates the selected feature subset based on the performance of the learning model; in other words, its purpose is to select the optimal feature subset for a given learning model [9]. Therefore, the wrapper method usually achieves better results than the filter method. However, since the learning model must be trained many times during the FS process, the computational overhead of the wrapper method is usually much higher than that of the filter method [10]. The embedded method embeds the FS process into the construction of the learning model itself. Because of this conceptual complexity, such models are not easy to construct, and it is also hard to modify the learning model to obtain better results [11]. After weighing these trade-offs, wrapper-based FS is used in this paper.

In general, learning tasks are divided into two categories: unsupervised learning and supervised learning. Unsupervised learning does not know the label (i.e., the class) of each training sample in advance. In supervised learning, the training samples include both inputs and outputs (i.e., features and class labels), which leads to better results than unsupervised learning in most cases [12]. Commonly used supervised algorithms include decision tree (DT) [13], naïve Bayes (NB) [14], k-nearest neighbor (kNN) [15,16,17], neural networks (NNs) [18, 19], and support vector machine (SVM) [20,21,22]. Among them, SVM was first formally proposed by Cortes and Vapnik in 1995. Grounded in statistical learning theory, SVM designs the learning model by minimizing structural risk. Owing to its excellent learning and generalization ability, SVM has been used in various artificial-intelligence-enabled applications [23], such as face recognition [24], text classification [25], handwritten character recognition [26], and bioinformatics [27]. Although SVM has many advantages, it also has some limitations. For instance, it is sensitive to the initial values of its parameters, namely the penalty factor and the kernel parameters, whose settings affect the generalization performance of SVM. The details of the SVM classifier are given in Sect. 3. It is worth noting that the performance of SVM, like that of many other wrapper methods, also depends on the selected feature subset. A better feature subset can be obtained by an excellent search mechanism, which is crucial for improving computational efficiency and classification accuracy [28, 29].

The curse of dimensionality (CoD) is the main obstacle to big data classification [30]. If a dataset contains N features, the number of candidate feature subsets grows exponentially with N, so \(2^{N}\) solutions would have to be generated and evaluated. Such an exhaustive search is computationally prohibitive [31]. Traditional dimension reduction methods cannot solve this problem well because of hardware limitations. Based on published high-quality papers, a new trend for tackling this problem has developed: researchers introduce metaheuristic algorithms (MAs) to solve the FS problem in classification tasks. MAs do not provide an exact solution but rather an approximate one within a feasible time. According to the number of solutions maintained, MAs can be divided into single-point search and population-based methods [32]. A single-point search method describes the search trajectory of a single solution in the search space, such as Tabu search and simulated annealing [33]. A population-based method describes the evolution of a set of points in the search space, such as swarm intelligence (SI) algorithms and evolutionary algorithms (EAs) [34].

So far, many MAs have been proposed. The barnacle mating optimizer (BMO) is a recently proposed bio-inspired EA, originally designed by Sulaiman in 2020 [35]. BMO has few parameters and can search promising regions of the search space. However, in the field of machine learning, the no free lunch (NFL) theorem logically proves that no single algorithm can solve all optimization problems [36]. In other words, it is pointless to discuss which algorithm is better without specifying the problem. Motivated by this, and guided by the NFL theorem, we use Gaussian mutation, a logistic model, and refraction-learning to improve the performance of BMO for the first time. In general, an improved algorithm can help evaluate potential features from the feature pool of a given machine learning problem, improve the performance and computation speed of the given machine learning models, or resolve the parameter-tuning problem of most machine learning models. To realize a simultaneous optimization process, the proposed IBMO helps the SVM classifier find the optimal feature subset and parameters at the same time. In the experiments, a set of 23 classical benchmark functions is used to verify the impact of the control parameters and the introduced strategies. In addition, IBMO-SVM is applied to 20 real-world datasets, including 4 high-dimensional datasets, and compared with 6 other state-of-the-art methods: particle swarm optimization (PSO) [37], the grasshopper optimization algorithm (GOA) [38], the salp swarm algorithm (SSA) [39], Harris hawks optimization (HHO) [40], teaching–learning-based optimization (TLBO) [41], and the hypergraph-based genetic algorithm (HG-GA) [42]. The effectiveness and superiority of IBMO-SVM are evaluated by classification accuracy, selection size, fitness value, running time, the Wilcoxon rank-sum test, and Friedman’s test. Finally, the experimental results are made more comprehensive and convincing through comparison with 4 other classifiers: logistic regression (LR), decision tree (DT), feedforward neural network (FNN), and k-nearest neighbor (kNN).

The rest of this paper is organized as follows: Sect. 2 presents the previous related works. Section 3 introduces some preliminary knowledge, including a brief overview of BMO and SVM. Section 4 highlights the details of the proposed method. Experiments are implemented, and results are analyzed in Sect. 5. Finally, in Sect. 6, conclusions and future works are given.

2 Related works

Learning algorithms combined with machine learning techniques are widely used for classification tasks. Wan et al. proposed a novel manifold learning algorithm based on local structure, namely two-dimensional maximum embedding difference (2DMED). This method extracts the optimal projective vectors directly from 2D image matrices and successfully avoids computing inverse matrices by virtue of the difference trace. Experimental results showed that 2DMED achieved better recognition rates on face and handwritten-digit databases [43]. Fuzzy 2D discriminant locality preserving projections (F2DDLPP) is a novel combination of 2D discriminant locality preserving projections (2DDLPP) and fuzzy set theory. This method enhances the discriminant power of the mapping into a low-dimensional space. Through comparison and analysis, F2DDLPP was shown to select the most useful features for classification [44]. In 2017, the maximum margin criterion and fuzzy set theory were used to extend locally graph embedding algorithms, yielding an effective face recognition technique [45]. For other supervised learning problems, many further learning algorithms exist.

SVM has several parameters that control different aspects of its performance. Generally, there are three basic methods for tuning these parameters. Some researchers try different values through orthogonal experiments; this manual selection requires prior knowledge of how the parameters influence model capacity. When there are three or fewer parameters, another common method is grid search, which is very slow because of the large number of parameter combinations. The third method is to use MAs, in which the parameter search problem is transformed into an optimization problem: the decision variables are the parameters, and the optimization cost is the value of the fitness function. To build an efficient classification model, FS can further improve accuracy. Some distinguished lines of research therefore perform FS and simultaneously tune the parameters of SVM. Such examples are presented as follows.

In [37], Huang et al. combined discrete PSO with continuous PSO to simultaneously perform feature subset selection and SVM parameter setting. Additionally, PSO-SVM was implemented with a distributed parallel architecture to reduce the computational time. A hybrid method based on the GOA was presented by Aljarah et al. [38] to achieve the same goal in 2018. The experimental results revealed that GOA was superior to grid search, PSO, the genetic algorithm (GA), the multi-verse optimizer (MVO), the gray wolf optimizer (GWO), the firefly algorithm (FF), the bat algorithm (BA), and cuckoo search (CS) in improving SVM classifier accuracy. In 2020, Al-Zoubi et al. applied the SSA-SVM method to 3 widespread medical cases. Compared with other methods, this model performed better in accuracy, recall, and precision, and was an effective way to solve common diagnosis problems [39]. Recently, Houssein et al. hybridized HHO with SVM and kNN for chemical descriptor selection and compound activity prediction. Compared with competitor methods, HHO-SVM achieved higher performance; moreover, as the number of iterations increased, HHO-SVM obtained better results than HHO-kNN [40]. Other native MAs applied in this optimization field include GA [46], ant colony optimization (ACO) [47], teaching–learning-based optimization (TLBO) [41], and brain storm optimization (BSO) [48]. A hypergraph framework was added to GA (called HG-GA) by Gauthama Raman et al. [42]. By using the hyperclique property of hypergraphs to generate the initial population, the search for the optimal solution was accelerated and trapping in local optima was prevented. For an intrusion detection system (IDS), the HG-GA-SVM model was compared with GA-SVM, PSO-SVM, BGSA-SVM, random forest, and Bayes net. In terms of classifier accuracy (an increase of approximately 2%), detection rate, false alarm rate, and runtime, HG-GA-SVM achieved overwhelming performance. Baliarsingh et al. [49] proposed a memetic algorithm-based SVM (M-SVM), inspired by embedding the social engineering optimizer (SEO) in the emperor penguin optimizer (EPO): SEO served as a local search strategy, and EPO was used as the global optimization framework. The experiments covered both binary-class and multi-class datasets, and the statistical results showed that the proposed method outperformed other competent methods for gene selection and classification of microarray data. The literature review shows that researchers have never stopped exploring; in line with the NFL theorem, this motivated us to propose a novel method to better tackle this problem.

3 Preliminary knowledge

3.1 Barnacle mating optimizer

Barnacles are small marine organisms that attach themselves to objects in the water. Their exceptionally long penis is their distinctive feature, and their mating group includes all neighbors and competitors within reach of it. The barnacle mating optimizer is inspired by the mating process of barnacles: by simulating three processes (initialization, selection, and reproduction), practical optimization problems are solved. Details are described as follows [35]:

Firstly, the candidate solutions are assumed to be barnacles, and the population matrix can be expressed using Eq. (1). The population is evaluated and sorted so that the best solution found so far is located at the top of \(X\). Then, the parents to be mated are selected by Eqs. (2) and (3).

$$X = \left[ {\begin{array}{*{20}c} {x_{1}^{1} } & \ldots & {x_{1}^{n} } \\ \vdots & \ddots & \vdots \\ {x_{N}^{1} } & \ldots & {x_{N}^{n} } \\ \end{array} } \right]$$
(1)
$$barnacle\_d = randperm(N)$$
(2)
$$barnacle\_m = randperm(N)$$
(3)

where \(N\) is the size of the barnacle population, \(n\) is the number of control variables, and \(barnacle\_d\) and \(barnacle\_m\) represent the parents to be mated.

Since there are no specific equations to describe the reproduction process of barnacles, BMO emphasizes the genotype frequencies of the parents in producing offspring, based on the Hardy–Weinberg principle [50, 51]. It is worth highlighting that the length of their penises (\(pl\)) plays an important role in determining the exploitation and exploration processes. Assuming \(pl = 7\), it can be seen from Fig. 1 that barnacle #1 can only mate with one of barnacles #2–#7. If the barnacle to be mated is selected within the range \(pl\) of the \(Dad\) barnacle, the exploitation process occurs. Equation (4) produces the new variables of the offspring from the barnacle parents.

$$x_{i}^{N\_new} = px_{barnacle\_d}^{N} + qx_{barnacle\_m}^{N}$$
(4)

where \(p\) is a normally distributed random number in [0, 1], \(q = (1 - p)\), and \(x_{barnacle\_d}^{N}\) and \(x_{barnacle\_m}^{N}\) represent the variables of the \(Dad\) and \(Mum\) barnacles selected in Eqs. (2) and (3). \(p\) and \(q\) represent the genotype frequencies of the \(Dad\) and \(Mum\) barnacles in the new offspring.

Fig. 1
figure 1

Selection of mating process of BMO [35] (image of barnacles adopted from [52])

If barnacle #1 selects barnacle #8, the range limit is exceeded, so the normal mating process does not occur. In this case, the offspring is produced by the sperm-cast process. In BMO, sperm casting is regarded as the exploration process, which is expressed as follows.

$$x_{i}^{n\_new} = rand() \times x_{barnacle\_m}^{n}$$
(5)

where \(rand()\) is the random number between [0, 1].

It can be seen from Eq. (5) that the new offspring is produced by the \(Mum\) barnacle alone, since she receives sperm released into the water by other barnacles elsewhere. During the iterations, the position of each barnacle is updated according to Eq. (4) or Eq. (5). In this way, BMO approximates the global optimum of the optimization problem.
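
For readers who prefer code, the selection and reproduction steps can be sketched in Python as follows. This is an illustrative reading of Eqs. (2)–(5), not the authors' reference implementation: the within-reach test on the permuted parent indices, the clipped normal draw for \(p\), and the bound clipping are assumptions.

```python
import numpy as np

def bmo_generation(pop, pl, lb, ub):
    """One BMO reproduction step, sketched after Eqs. (2)-(5).
    pop   : (N, n) array of barnacle positions, sorted so pop[0] is the best.
    pl    : penis length, controls exploitation vs. exploration.
    lb, ub: lower/upper bounds of the search space.
    """
    N, n = pop.shape
    dad = np.random.permutation(N)                      # Eq. (2)
    mum = np.random.permutation(N)                      # Eq. (3)
    offspring = np.empty_like(pop)
    for i in range(N):
        if abs(dad[i] - mum[i]) <= pl:                  # parents within reach: exploitation
            # the paper describes p as a normally distributed number in [0, 1];
            # a clipped normal draw is used here as an approximation
            p = float(np.clip(np.random.normal(0.5, 0.15), 0.0, 1.0))
            offspring[i] = p * pop[dad[i]] + (1 - p) * pop[mum[i]]   # Eq. (4)
        else:                                           # out of reach: sperm cast, exploration
            offspring[i] = np.random.rand(n) * pop[mum[i]]           # Eq. (5)
    return np.clip(offspring, lb, ub)
```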

3.2 Support vector machine

For linear separable problems, the core idea of SVM is to find an optimal hyperplane that maximizes the margin between two classes. In this case, the generalization ability of the model is the strongest, and the classification result is the most robust. Some concepts in SVM are shown in Fig. 2.

Fig. 2
figure 2

Linear classification based on SVM

If the given data set is \(D = (x_{i} ,y_{i} ),i = 1,...,N,x \in R^{d} ,y \in \left\{ { \pm 1} \right\}\), the hyperplane is:

$$h(x) = \omega^{T} x + b$$
(6)

Further, maximizing the margin is equivalent to minimizing \(\parallel \omega \parallel^{2}\). The slack variable \(\xi\) is introduced; \(\xi > 0\) indicates that a small number of outliers are allowed. The penalty factor \(c\) is one of the critical parameters and represents the tolerance to outliers. The standard SVM model is as follows:

$$\left\{ \begin{gathered} \mathop {\min }\limits_{{\omega , \, \xi_{i} }} \, \frac{1}{2}\left\| \omega \right\|^{2} + c\sum\limits_{i = 1}^{N} {\xi_{i} } \hfill \\ s.t. \, y_{i} (\omega^{T} x + b) \ge 1 - \xi_{i} ,i = 1,2, \ldots ,N \hfill \\ \end{gathered} \right.$$
(7)

where \(\omega\) is the weight vector of the hyperplane, and \(b\) is the bias term.

For the nonlinear case, SVM maps the data from the input space to a high-dimensional feature space, as shown in Fig. 3. The inner product of feature vectors must be calculated in this nonlinear transformation; to avoid this obstacle, the kernel function \(k( \cdot , \cdot )\) is introduced to express the result of the inner product. The SVM model can then be transformed into the following dual problem:

$$\left\{ \begin{gathered} \mathop {\min }\limits_{\alpha } \, \frac{1}{2}\sum\limits_{i = 1}^{N} {\sum\limits_{j = 1}^{N} {\alpha_{i} } } \alpha_{j} y_{i} y_{j} k(x_{i} ,x_{j} ) - \sum\limits_{i = 1}^{N} {\alpha_{i} } \hfill \\ s.t. \, \sum\limits_{i = 1}^{N} {\alpha_{i} y_{i} = 0,\;0 \le \alpha_{i} \le c,} \, i = 1,2, \ldots ,N \hfill \\ \end{gathered} \right.$$
(8)

where \(\alpha\) represents the Lagrange multiplier.

Fig. 3
figure 3

Nonlinear classification based on SVM

In this paper, a widely applicable radial basis function (RBF) kernel is adopted, whose expression is:

$$k(x_{i} ,x_{j} ) = e^{{( - \gamma \left\| {x_{i} - x_{j} } \right\|^{2} )}}$$
(9)

where \(\gamma\) represents the width of the RBF kernel.

The penalty factor \(c\) and kernel parameter \(\gamma\) directly affect the generalization ability and complexity of SVM.
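
As a concrete illustration of how \(c\) and \(\gamma\) enter the model (not taken from the paper), an RBF-kernel SVM can be trained and scored with scikit-learn, whose SVC class wraps LIBSVM; the dataset and parameter values below are placeholders.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)           # placeholder dataset

c, gamma = 10.0, 0.5                         # candidate penalty factor and RBF width
clf = SVC(C=c, kernel='rbf', gamma=gamma)    # RBF kernel of Eq. (9)

# tenfold cross-validated accuracy for this (c, gamma) pair
acc = cross_val_score(clf, X, y, cv=10, scoring='accuracy').mean()
print(f"mean CV accuracy: {acc:.4f}")
```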

4 Application of proposed IBMO for FS and SVM optimization

In this section, the proposed model that uses IBMO for FS and SVM optimization is described in detail. Firstly, two formulation issues are addressed: the representation of the solution and the definition of the fitness function. Secondly, the improvement ideas behind IBMO are elaborated, and the pseudocode and flowchart of IBMO are presented. Finally, the flowchart of the proposed application model is given.

4.1 Two formulation issues

4.1.1 Representation of the solution

In FS tasks, the solution is represented in binary form, with each variable limited to [0, 1]. If a value falls within (0.5, 1], it is mapped to bit "1," meaning the corresponding feature is retained; if it falls within [0, 0.5], it is mapped to bit "0," meaning the corresponding feature is rejected. Figure 4 shows a sample solution with 8 variables (i.e., 8 features), in which the 1st, 5th, and 6th features are selected.

Fig. 4
figure 4

A sample solution with 8 variables

In this paper, the first two variables of the solution are defined as the penalty factor \(c\) and the kernel parameter \(\gamma\); the remaining variables correspond to the candidate features. In Eq. (1), each solution has \(n\) variables; after this redefinition, each new solution has \(n + 2\) variables, as shown in Eq. (10).

$$x_{i}^{n + 2} = [c,\;\gamma ,\;F_{1} ,\;F_{2} , \ldots ,F_{n} ]$$
(10)
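
A possible decoding of the solution vector of Eq. (10) is sketched below. The 0.5 threshold follows Sect. 4.1.1, while the linear scaling of the first two variables to parameter ranges (and the ranges themselves) are assumptions made for illustration.

```python
import numpy as np

def decode_solution(x, c_range=(0.01, 100.0), g_range=(0.001, 10.0)):
    """Split a solution vector [c, gamma, F1, ..., Fn] with entries in [0, 1]
    into SVM parameters and a binary feature mask."""
    c = c_range[0] + x[0] * (c_range[1] - c_range[0])       # penalty factor
    gamma = g_range[0] + x[1] * (g_range[1] - g_range[0])   # kernel parameter
    mask = x[2:] > 0.5                                      # bit "1" keeps a feature
    return c, gamma, mask

x = np.random.rand(2 + 8)                    # hypothetical solution with 8 features
c, gamma, mask = decode_solution(x)
```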

4.1.2 Definition of the fitness function

In this paper, a fitness function is required to evaluate each solution. FS is a multi-objective optimization problem that seeks both fewer selected features and higher classification accuracy. To balance these two objectives, the fitness function is defined in Eq. (11):

$$Fitness \, function = \min (\alpha \gamma_{R} (D) + \beta \frac{\left| R \right|}{{\left| N \right|}})$$
(11)

where \(\gamma_{R} (D)\) is the error rate of the SVM classifier, \(\left| R \right|\) is the number of selected features, \(\left| N \right|\) is the total number of original features, and \(\alpha\) and \(\beta\) are two weights reflecting the importance of classification performance and feature subset size, with \(\alpha \in [0,1]\) and \(\beta = (1 - \alpha )\).
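
A minimal sketch of Eq. (11), assuming the classifier error rate has already been obtained (e.g., from cross-validation) and using the weights \(\alpha = 0.99\) and \(\beta = 0.01\) adopted later in Sect. 5.1:

```python
def fitness(error_rate, n_selected, n_total, alpha=0.99, beta=0.01):
    """Fitness of Eq. (11): weighted error rate plus weighted feature ratio."""
    return alpha * error_rate + beta * (n_selected / n_total)

# example: 5% classification error with 12 of 34 features selected
value = fitness(error_rate=0.05, n_selected=12, n_total=34)
```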

4.2 Description of IBMO

4.2.1 Strategy 1: Gaussian mutation

A well-designed optimizer should make full use of randomized operators in the early phase. In this way, the diversity of the population is enhanced, and solutions can deeply explore each region of the feature space. At the same time, the tails of the Gaussian distribution are narrow, so the mutation has a higher probability of generating a new solution in the vicinity of the original position. Hence, the search process uses smaller steps to probe each position in the solution space. The Gaussian density function is defined as follows [53]:

$$f(x) = \frac{1}{{\sqrt {2\pi \sigma^{2} } }}e^{{ - \frac{{(x - \mu )^{2} }}{{2\sigma^{2} }}}}$$
(12)

where \(\mu\) is the expected value and \(\sigma^{2}\) is the variance. Setting \(\mu = 0,\;\sigma^{2} = 1\) reduces this equation to the standard normal distribution used to generate the random step. The mutant position of a barnacle can then be expressed by Eq. (13).

$$x_{i}^{ * } = x_{i} + G(\partial ) \cdot x_{i}$$
(13)

where \(G(\partial )\) is the Gaussian step vector created by Eq. (12), and \(\partial\) is a Gaussian random value in [0, 1].
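
The mutation of Eq. (13) can be sketched as follows; the standard normal step (\(\mu = 0\), \(\sigma^{2} = 1\)) follows the text, while drawing one step per variable and clipping to the bounds are assumptions.

```python
import numpy as np

def gaussian_mutation(pop, lb, ub):
    """Perturb each barnacle by a Gaussian step, Eq. (13): x* = x + G * x."""
    step = np.random.normal(0.0, 1.0, size=pop.shape)    # Eq. (12) with mu = 0, sigma^2 = 1
    mutated = pop + step * pop
    return np.clip(mutated, lb, ub)                      # keep solutions inside the bounds
```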

4.2.2 Strategy 2: conversion parameter based on logistic model

A well-organized optimizer should achieve a high level of exploration at the beginning of the search and more exploitation in the last phase. In BMO, the value of \(pl\) plays an important role in determining the exploitation and exploration processes. The original paper concluded through experiments that when the value of \(pl\) is small, too much exploration occurs, whereas when it is large, too much exploitation occurs. It suggested setting \(pl\) between 50% and 70% of the total population size and kept the value of \(pl\) constant.

We introduce a mathematical model to change the value of \(pl\) so that it is adjusted dynamically as the iterations proceed. The logistic model is adopted for this purpose, and its mathematical expression is [54]:

$$\left\{ \begin{gathered} \frac{dpl(t)}{{dt}} = \lambda \cdot (1 - \frac{pl(t)}{{pl_{\max } }}) \cdot pl(t) \hfill \\ pl(0) = pl_{\min } \hfill \\ \end{gathered} \right.$$
(14)

where \(pl_{\max }\) and \(pl_{\min }\) represent the maximum and minimum values of \(pl\), respectively, \(t\) represents the iteration number, and \(\lambda\) represents the initial decay rate. Solving Eq. (14) by separation of variables yields Eq. (15).

$$pl(t) = \frac{{pl_{\max } }}{{1 + (\frac{{pl_{\max } }}{{pl_{\min } }} - 1) \cdot e^{ - \lambda t} }}$$
(15)

It can be seen from Eq. (15) that the conversion parameter \(pl(t) = pl_{\min }\) when \(t = 0\), while \(pl(t) \to pl_{\max }\) as \(t \to \infty\). The influence of the conversion parameter on the optimization process is analyzed as follows. As mentioned above, a high level of exploration is required in the early phase, and a small value of \(pl\) helps the exploration process occur; therefore, \(pl(t) = pl_{\min }\) at \(t = 0\). As the search progresses, the exploitation phase follows the exploration phase: as the number of iterations increases, the value of \(pl\) increases according to Eq. (15), and a larger value of \(pl\) favors the exploitation process. Through this dynamic conversion parameter, a reasonable and fine balance between exploration and exploitation is achieved.
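
Equation (15) can be evaluated directly. The sketch below uses the settings reported later in Sect. 5.3.1 (\(pl_{\min }\) and \(pl_{\max }\) as 50% and 80% of the population size, \(\lambda = 0.05\)) as an example.

```python
import numpy as np

def pl_schedule(t, pop_size=30, lam=0.05):
    """Dynamic conversion parameter pl(t) of Eq. (15)."""
    pl_min, pl_max = 0.5 * pop_size, 0.8 * pop_size
    return pl_max / (1.0 + (pl_max / pl_min - 1.0) * np.exp(-lam * t))

# pl grows from pl_min toward pl_max as the iteration counter t increases
print([round(pl_schedule(t), 2) for t in (0, 25, 50, 100)])
```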

4.2.3 Strategy 3: refraction-learning

Figure 5 illustrates the concepts of refraction-learning [55]: \(x \in [a,b]\), and \(o\) is the center point of \([a,b]\). The refraction index \(\eta\) is calculated by Eq. (16).

$$\eta { = }\frac{{\sin \theta_{1} }}{{\sin \theta_{2} }} = \frac{((a + b)/2 - x)/h}{{(x^{^{\prime}} - (a + b)/2)/h^{^{\prime}} }}$$
(16)
Fig. 5
figure 5

Refraction-learning process in one-dimensional space [55]

Letting the rate \(k = \frac{h}{{h^{^{\prime}} }}\), Eq. (16) can be transformed into the following form:

$$x^{{^{\prime}}} = (a + b)/2 + (a + b)/(2k\eta ) - x/(k\eta )$$
(17)

where \(a\) represents the lower bound and \(b\) represents the upper bound of the search interval.

\(x^{^{\prime}}\) is called the opposite solution of \(x\) based on refraction-learning. Generally, Eq. (17) can be extended to n-dimensional space.

$$x_{j}^{{^{\prime}}} = (a_{j} + b_{j} )/2 + (a_{j} + b_{j} )/(2k\eta ) - x_{j} /(k\eta )$$
(18)

where \(a_{j}\) represents the jth dimension of the lower bound and \(b_{j}\) represents the jth dimension of the upper bound. \(x_{j}\) and \(x_{j}^{{^{\prime}}}\) are the jth dimensions of \(x\) and \(x^{{^{\prime}}}\), respectively.

More exploitation is often required in the last phase, but there is a risk of becoming trapped in a local optimum. In the last phase of BMO, the refraction-learning strategy is introduced to overcome this drawback: the opposite solution of the global optimal solution is generated by Eq. (18), and the two are then evaluated so that the better one is retained.
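
A sketch of this step, assuming \(\eta = 100\) and \(k = 1000\) as chosen in Sect. 5.3.1 and a greedy replacement for the "evaluate and update" rule (the exact acceptance rule is not spelled out in the text):

```python
import numpy as np

def refraction_opposite(x, lb, ub, eta=100.0, k=1000.0):
    """Opposite solution of Eq. (18), computed dimension-wise."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    return (lb + ub) / 2.0 + (lb + ub) / (2.0 * k * eta) - x / (k * eta)

def refine_best(best, best_fit, fitness_fn, lb, ub):
    """Keep whichever of the global best and its opposite has the lower fitness."""
    opp = np.clip(refraction_opposite(best, lb, ub), lb, ub)
    opp_fit = fitness_fn(opp)
    return (opp, opp_fit) if opp_fit < best_fit else (best, best_fit)
```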

4.2.4 Additional details on IBMO

The native BMO has some drawbacks, such as low search accuracy and a tendency to become trapped in local optima. In this paper, three strategies are introduced to improve the performance of the algorithm. Firstly, Gaussian mutation is applied to the initial barnacles to enhance the diversity of the population. Secondly, the logistic model is adopted to realize the dynamic conversion of the important parameter \(pl\), so as to achieve a fine balance between exploration and exploitation. Finally, the refraction-learning strategy is applied to the global optimal solution to generate its opposite solution; by evaluating and updating them, the algorithm has a higher probability of escaping local optima. These strategies act at different levels of the algorithm, and a more detailed analysis has been given above. The pseudocode of IBMO is described in Algorithm 1, and the detailed process of IBMO is shown in Fig. 6.

Fig. 6
figure 6

Flowchart of the IBMO algorithm

Algorithm 1 Pseudocode of the IBMO algorithm

4.2.5 Computational complexity analysis of IBMO

The computational complexity of IBMO is mainly related to the dimension (D), population size (N), maximum number of iterations (T), and cost of the fitness function (F). The analysis therefore focuses on four components: initialization, fitness evaluation, sorting, and barnacle updating. The computational complexity of initialization is \(O(N)\), fitness evaluation is \(O(T \times N \times F)\), sorting is \(O(T \times N\log N)\), and barnacle updating is \(O(T \times N \times D)\). Hence, the overall computational complexity of IBMO can be expressed as follows:

$$O\left( {IBMO} \right) = O\left( {initialization} \right) + O\left( {fitness\;evaluation} \right) + O\left( {sorting} \right) + O\left( {barnacle\;updating} \right)$$
(19)
$$O\left( {IBMO} \right) = O\left( N \right) + O\left( {T \times N \times F} \right) + O\left( {T \times N\log N} \right) + O\left( {T \times N \times D} \right) = O\left( {N \times \left( {1 + T \times \left( {F + \log N + D} \right)} \right)} \right)$$
(20)

4.3 IBMO for FS and SVM optimization

The proposed method commences by dividing the preprocessed dataset into training and testing sets. The optimal model is then obtained using tenfold cross-validation. IBMO starts by generating random solution vectors of the form in Eq. (10). SVM then begins its training process on the training set with the selected features. During this phase, inner cross-validation is carried out to produce a more robust model and avoid overfitting. IBMO receives the fitness value at the end of each training process. All the previous steps are repeated until the termination criterion (i.e., the maximum number of iterations) is met. Finally, the proposed method reports the optimal individual, and the final selected individual is applied in the testing phase. Figure 7 shows the framework of the proposed method.

Fig. 7
figure 7

Flowchart of the IBMO application model

5 Experimental design and results

5.1 Preparatory works

To validate the efficiency of the proposed method, 20 standard datasets from the UCI repository are utilized [56]. Table 1 reports the details of the selected datasets, such as the number of features, instances, and classes. As can be seen, some datasets are considered high-dimensional because they have thousands of features, which makes the task more challenging and the results more comprehensive. Before using the datasets, it is essential to preprocess them in two steps. Firstly, all features are converted into numeric form; for example, in the Hepatitis dataset, males and females are converted into 0 and 1, respectively. Then, min–max normalization is used to scale the features to [0, 1], which alleviates the effect of numeric magnitude on feature weights. The normalization is given in Eq. (21).

$$F_{norm} = \frac{{F - F_{\min } }}{{F_{\max } - F_{\min } }}$$
(21)

where \(F_{norm}\) represents the normalized feature, and \(F_{\min }\) and \(F_{\max }\) are the minimum and maximum values of the targeted feature \(F\), respectively.
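
Equation (21) is the standard min–max scaling; with scikit-learn it can be applied as sketched below on a toy feature matrix (the values are arbitrary).

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[2.0, 30.0],
              [4.0, 10.0],
              [6.0, 20.0]])                   # toy feature matrix

scaler = MinMaxScaler(feature_range=(0, 1))   # Eq. (21)
X_norm = scaler.fit_transform(X)              # each column scaled to [0, 1]
```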

Table 1 Description of datasets

LIBSVM is used for the SVM classifier [57]. Tenfold cross-validation is used to obtain unbiased classification results: each dataset is divided into ten equal parts, nine folds are used for training and the remaining fold for testing, and this process is repeated ten times so that each part serves once as the testing set. Figure 8 shows the diagram of tenfold cross-validation for a single run.
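
The tenfold procedure of Fig. 8 can be reproduced with a stratified splitter, as sketched below; this is a generic illustration rather than the paper's exact experimental script.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def tenfold_accuracy(X, y, c, gamma):
    """Average accuracy over ten stratified folds (nine for training, one for testing)."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        clf = SVC(C=c, kernel='rbf', gamma=gamma)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return float(np.mean(scores))
```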

Fig. 8
figure 8

Diagram of tenfold cross-validation

The proposed method is compared with 6 state-of-the-art methods, including PSO [37], GOA [38], SSA [39], HHO [40], TLBO [41], and HG-GA [42], based on several evaluation metrics. The maximum number of iterations for all algorithms is 100, and the population size is 30. We follow the same parameter values as in the original papers; the parameter settings of the algorithms are shown in Table 2. Moreover, the parameter \(\alpha\) in the fitness function is set to 0.99 and \(\beta\) to 0.01, according to domain-specific knowledge [58, 59]. All methods are run under the same experimental conditions to guarantee a fair comparison; Table 3 shows these details. To mitigate the effect of randomness on the test results, each experiment is run 10 times independently.

Table 2 Parameter settings of algorithms
Table 3 Details of experimental conditions

5.2 Evaluation metric

  • Classification accuracy: this metric evaluates the accuracy of the classifier in predicting the correct class using the selected feature subsets.

  • Selection size: this metric evaluates the size of the optimal feature subset obtained by the search algorithm.

  • Fitness value: this metric combines the above two factors through the fitness function of the FS optimization problem.

  • Running time: this metric reflects the execution speed of the method.

  • P-value: this metric is used to detect significant differences between two methods based on two nonparametric statistical tests (i.e., the Wilcoxon rank-sum test and Friedman’s test).

5.3 Simulation results and discussions

5.3.1 Impact of control parameters

As discussed in Sect. 4.2, the conversion parameter strategy based on the logistic model allows IBMO to transition smoothly between exploration and exploitation, and the refraction-learning strategy further enhances exploitation during the evolution. However, some control parameters are crucial to the performance of the algorithm. The purpose of this subsection is to analyze the sensitivity of these control parameters and to provide a basis for the following experiments.

In Eq. (15), the parameter \(\lambda\) controls the changing trend of the \(pl\) value. For an intuitive comparison, Fig. 9 shows the fixed \(pl\) value in BMO and the \(pl\) curves in IBMO with \(\lambda = 0.1,\;0.05,\;0.03\). In BMO, the original paper suggests setting the \(pl\) value to 70% of the population size; in IBMO, \(pl_{\min }\) and \(pl_{\max }\) are set to 50% and 80% of the population size, respectively. To investigate the sensitivity of the parameter \(\lambda\), the 23 classical benchmark functions in Tables 4, 5, and 6 are used to evaluate the performance of IBMO under different \(\lambda\). Table 7 summarizes the average fitness values of IBMO using different \(\lambda\) on the 23 functions. Table 7 shows that the average does not increase or decrease monotonically as \(\lambda\) changes. IBMO with \(\lambda = 0.05\) obtains better results on all functions except F8, because the conversion parameter strategy based on the logistic model with \(\lambda = 0.05\) makes IBMO more effective in the transition between global and local search.

Fig. 9
figure 9

Comparison of the control parameter \(\lambda\)

Table 4 Unimodal benchmark functions
Table 5 Multimodal benchmark functions
Table 6 Fixed-dimension multimodal benchmark functions
Table 7 Average fitness values of IBMO using different λ

In Eq. (18), the refraction index \(\eta\) and the rate \(k\) affect the position of the opposite solution in the search space. The refraction index \(\eta\) is studied using 4 different values (\(\eta = 1,\;10,\;100,\;1000\)), and the rate \(k\) is set to the same values (\(k = 1,\;10,\;100,\;1000\)). Different types of functions are tested to find the optimal combination of \(\eta\) and \(k\); Table 8 gives the resulting average fitness values. As can be inferred from Table 8, IBMO with \(\eta = 1\) and \(k = 1\) obtains relatively weak results, while several other combinations obtain similar results. Figure 10 explains the impact of the parameter combination on the refraction-learning strategy by showing the current solution, the opposite solution, and the optimal solution. When \(\eta = 1\) and \(k = 1\), Eq. (18) simplifies to \(x_{j}^{{^{\prime}}} = a_{j} + b_{j} - x_{j}\), and the opposite solution corresponding to the current solution \(x\) is \(x_{1}^{{^{\prime}}}\). By tuning \(\eta\) and \(k\), the opposite solution \(x_{2}^{{^{\prime}}}\) can be brought closer to the optimal solution, and a proper combination of parameters increases the probability of escaping the local optimum. In addition, larger \(\eta\) and \(k\) values leave the performance of the algorithm essentially unchanged. We finally use the values of 100 and 1000 for \(\eta\) and \(k\), respectively.

Table 8 Average fitness values of IBMO using different combinations of η and k
Fig. 10
figure 10

Comparison of the control parameters \(\eta\) and \(k\)

5.3.2 Impact of three strategies

The purpose of this subsection is to study the impact of each improvement strategy. Five variants are shown in Table 9: if a strategy is used in BMO, it is marked "1"; otherwise, it is marked "0." The 23 classical benchmark functions are again used to evaluate performance, and the average (avg) and standard deviation (std) of fitness values are reported in Table 10, with the best results displayed in bold. By referring to Table 10, it can be found that IBMO’s avg and std are the smallest in most cases, and BMO-1, BMO-2, and BMO-3 also obtain smaller values than the native BMO. These promising results show that each strategy improves the performance of the native algorithm, and the combination works even better; convergence accuracy and stability are the main gains. To visualize the data, Fig. 11 shows the fitness curves for F1, F10, and F14. The gain of each strategy, analyzed in Sect. 4.2, is further confirmed by the convergence curves. To sum up, IBMO achieves excellent performance on almost all benchmark functions, so it can be concluded that the results are not accidental and the improvement is significant.

Table 9 Various BMOs with three strategies
Table 10 Results of fitness values of various BMOs
Fig. 11
figure 11

Convergence curves of various BMOs on F1, F10, and F14

5.3.3 Results on low-dimensional datasets

Sixteen low-dimensional datasets are used in this subsection to compare the performance of the proposed IBMO-SVM with the compared state-of-the-art algorithms; the quantitative and qualitative analyses are as follows. Table 11 shows the average and standard deviation of classification accuracy. Inspecting the results in this table, it can be observed that IBMO-SVM performs better than the others. In terms of the average, IBMO obtains the highest results on 68.75% of the datasets, while SSA, HHO, and HG-GA outperform IBMO on 12.5%, 12.5%, and 6.25% of the datasets, respectively. In terms of the standard deviation, IBMO-SVM obtains the smallest results on 62.5% of the datasets, and two optimizers tie on one dataset (ILPD). Figure 12 shows box charts of the eight algorithms on Iris, Wine, Parkinsons, and Sonar; IBMO achieves higher and more concentrated values with few outliers. The classification accuracy metric demonstrates the stability of IBMO and its capability to search the promising regions of the search space.

Table 11 Comparison of each algorithm based on classification accuracy
Fig. 12
figure 12

Box charts of each algorithm on Iris, Wine, Parkinsons, and Sonar

The number of selected features is another important metric for wrapper FS methods. Table 12 compares the average number of selected features on all datasets. Further analyzing the reported results, IBMO selects the smallest feature subsets on 11 out of 16 datasets, and on the Breast Cancer dataset our method ranks second. Based on the results obtained, IBMO significantly outperforms the others in minimizing the number of selected features.

Table 12 Comparison of each algorithm based on the average number of selected features

The fitness function involves two metrics: classification accuracy and feature selection ratio. Table 13 presents the best, worst, avg, and std of the fitness values of the eight algorithms. IBMO achieves the best fitness values on 56.25% of the datasets, the lowest avg values on 68.75% of the datasets, and the lowest std values on 75% of the datasets; thus, IBMO delivers the most consistent results. Figure 13 compares the convergence behavior of the different algorithms. As can be seen from Fig. 13, IBMO provides the lowest curves compared with the other state-of-the-art algorithms and occasionally escapes from local optima to continue searching effective regions. Overall, IBMO-SVM shows the best convergence behavior on the real-world datasets, which also indicates the substantial impact of the proposed improvements on the native BMO.

Table 13 Comparison of each algorithm based on fitness values
Fig. 13
figure 13

Convergence curves of each algorithm on ILPD, Zoo, Lymphography, Flags, Ionosphere, and Lung cancer

The running time metric indicates the execution speed of an algorithm; the average running time (in seconds) is given in Table 14. Taking the Zoo dataset as an example, the running time is ordered as follows: TLBO > SSA > GOA > BMO > IBMO > PSO > HG-GA > HHO. Table 14 shows that, for almost all datasets, the running time of the proposed method ranks in the middle of the eight algorithms, and the running time of IBMO is slightly higher than that of BMO. As analyzed in Sect. 4.2.5, the combination of the three strategies causes this slight increase; improving the overall performance of BMO inevitably incurs some extra cost in some cases, so the running time of IBMO is acceptable.

Table 14 Comparison of each algorithm based on average running time (in seconds)

To detect significant differences between the proposed IBMO-SVM and the compared algorithms, we apply the Wilcoxon rank-sum test. The null hypothesis \(H_{0}\) states that there is no difference, whereas the alternative hypothesis \(H_{1}\) states that significant differences exist. The p-value represents the probability of observing the given results under \(H_{0}\); at the 0.05 significance level, a p-value less than 0.05 provides strong evidence against \(H_{0}\) [60, 61]. Table 15 exhibits the results, where p-values greater than 0.05 are shown in bold. According to this table, the superiority of IBMO-SVM is statistically significant on most of the datasets because most of the p-values are less than 0.05. On the whole, the above study shows that the overall performance of IBMO-SVM is better than that of the other compared algorithms for all evaluation metrics on the low-dimensional datasets.
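
For reference, the rank-sum test on two sets of per-run results can be carried out with SciPy; the accuracy arrays below are placeholders rather than values from Table 15.

```python
from scipy.stats import ranksums

# hypothetical accuracies from 10 independent runs of two methods
ibmo_acc = [0.96, 0.95, 0.97, 0.96, 0.95, 0.96, 0.97, 0.96, 0.95, 0.96]
pso_acc  = [0.93, 0.92, 0.94, 0.93, 0.92, 0.93, 0.94, 0.92, 0.93, 0.93]

stat, p_value = ranksums(ibmo_acc, pso_acc)
significant = p_value < 0.05        # reject H0 at the 0.05 level if True
```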

Table 15 P-values of the IBMO with compared algorithms

5.3.4 Results on high-dimensional datasets

After analyzing the above results, four high-dimensional datasets are used to further evaluate the overall performance of the proposed algorithm. This is a challenging task that makes the experiments more comprehensive and the results more convincing.

For high-dimensional datasets, the dimension of the feature vectors is often larger than the number of available training samples. In classification tasks, this often leads to the curse of dimensionality or the empty-space phenomenon [30]. Only a few of the thousands of features are important, and many classification methods that perform well elsewhere become poor or even fail on high-dimensional datasets. This is the motivation and design purpose of this subsection. A brief description of the four high-dimensional datasets is given in Table 16.

Table 16 Description of four high-dimensional datasets [56]

Table 17 compares the average and standard deviation of classification accuracy on the four high-dimensional datasets, and Fig. 14 shows the feature selection ratio. Observing the results in Table 17 and Fig. 14, IBMO is far superior to the other competitors in dealing with high-dimensional datasets. Taking the Gastrointestinal lesions dataset as an example, the accuracy of IBMO is improved by 3.59% over the native algorithm, and compared with PSO, IBMO is at least 10% higher. Regarding the number of features, on the Arcene dataset the feature selection ratio of IBMO is 0.51, ranking first. HHO is also a strongly competitive FS method. The fitness function is a comprehensive measure of the above two metrics, and these results are shown in Table 18. The results are consistent and significant, and IBMO is still the champion algorithm.

Table 17 Comparison of each algorithm based on classification accuracy on high-dimensional datasets
Fig. 14
figure 14

Comparison of each algorithm based on feature selection ratio on high-dimensional datasets

Table 18 Comparison of each algorithm based on fitness values on high-dimensional datasets

Friedman’s test is a nonparametric statistical inference technique. It first ranks the data and then tests whether \(k\) (\(k \ge 3\)) samples are significantly different. Equation (22) computes the Friedman statistic \(S\) for \(k\) samples with sample size \(m\), where \(R_{j}\) is the rank sum obtained by the jth sample. \(S\) follows a \(\chi^{2}\) distribution with \(k - 1\) degrees of freedom. When \(S \ge \chi_{(k - 1)}^{2}\), the null hypothesis \(H_{0}\) is rejected at the 0.05 significance level [61].

$$S = \frac{12}{{mk(k + 1)}}\left[ {\sum\limits_{j = 1}^{k} {R_{j}^{2} } } \right] - 3m(k + 1)$$
(22)
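
The statistic of Eq. (22) can be computed directly from the rank sums and cross-checked with SciPy's implementation; the accuracy matrix below is a placeholder, not data from Tables 19 and 20.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# hypothetical accuracies of k = 3 methods (columns) on m = 4 datasets (rows)
results = np.array([[0.96, 0.93, 0.94],
                    [0.91, 0.90, 0.89],
                    [0.88, 0.85, 0.86],
                    [0.97, 0.95, 0.94]])
m, k = results.shape

# Friedman statistic S of Eq. (22): rank the methods within each dataset
ranks = np.apply_along_axis(rankdata, 1, -results)   # rank 1 = highest accuracy
R = ranks.sum(axis=0)                                # column rank sums R_j
S = 12.0 / (m * k * (k + 1)) * np.sum(R ** 2) - 3 * m * (k + 1)

# SciPy computes the same statistic (with a tie correction) and a p-value
stat, p_value = friedmanchisquare(*results.T)
```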

Using the data obtained above as input, Table 19 provides additional statistics, and Table 20 shows the ranking obtained by Friedman’s test. With 7 degrees of freedom and a significance level of 0.05, the critical value of the test statistic is 14.067. The calculated Chi-square statistic is greater than 14.067, so the null hypothesis \(H_{0}\) is rejected; moreover, the small p-values cast further doubt on the validity of \(H_{0}\). In terms of the obtained ranking, IBMO achieves the highest rank and consistently shows excellent performance.

Table 19 Results of additional statistics structure
Table 20 Results of ranking values based on Friedman’s test

5.3.5 Comparison with other classifiers

To comprehensively verify the effectiveness of the proposed model, it is further compared with 4 other classifiers: logistic regression (LR) [62], decision tree (DT) [13], feedforward neural network (FNN) [18], and k-nearest neighbor (kNN) [16]. To achieve a fair comparison, IBMO is also used with the other classifiers (with default parameter values) to find feature subsets; k = 5 is used for kNN in this work. For each method, accuracy, sensitivity, and specificity are used to evaluate the performance. The sensitivity describes the proportion of positive samples that are correctly identified, so it is also called the true positive rate; the specificity describes the proportion of negative samples that are correctly identified, so it is also called the true negative rate. They are defined in Eqs. (23) and (24), respectively.

$$Sensitivity{ = }\frac{TP}{{TP + FN}}$$
(23)
$$Specificity = \frac{TN}{{TN + FP}}$$
(24)

where \(TP\) represents the true positive, \(FN\) represents the false negative, \(TN\) represents the true negative, and \(FP\) represents the false positive.
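
Given a confusion matrix, Eqs. (23) and (24) reduce to simple ratios; the labels below are illustrative only.

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1, 0]           # hypothetical labels (1 = positive class)
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                # Eq. (23), true positive rate
specificity = tn / (tn + fp)                # Eq. (24), true negative rate
```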

Tables 21, 22, and 23 report the experimental results on 10 binary-class datasets. Regarding accuracy, our proposed method achieves the highest results on all datasets in comparison with the 4 other classifiers. In terms of sensitivity, our proposed method achieves the highest results on 70% of the datasets; on the Ionosphere dataset it does not beat kNN but ranks second. Looking at specificity, our proposed method outperforms the others on 90% of the datasets and achieves the best result, a specificity of 1.000, on the DBWorld e-mails dataset. To sum up, our proposed method delivers highly competitive results and can identify positives and negatives more accurately.

Table 21 Comparison of each classifier based on average accuracy on bi-class datasets
Table 22 Comparison of each classifier based on average sensitivity on bi-class datasets
Table 23 Comparison of each classifier based on average specificity on bi-class datasets

6 Conclusions and future works

This paper proposes a novel classification model using IBMO for FS and parameter setting in SVM. The Gaussian mutation strategy is used to enhance population diversity, the conversion parameter strategy based on the logistic model achieves a fine balance between exploration and exploitation, and the refraction-learning strategy helps the algorithm escape local optima; thus, different strategies are designed for different evolution phases. To verify the impact of the control parameters and introduced strategies, experiments are conducted on 23 classical benchmark functions. In addition, the proposed method is compared with 6 state-of-the-art methods (PSO, GOA, SSA, HHO, TLBO, and HG-GA) on 20 datasets, 4 of which are high-dimensional. The comparisons and extensive results reveal that IBMO-SVM outperforms the other wrapper methods under different evaluation metrics. According to accuracy, sensitivity, and specificity, the proposed IBMO-SVM also achieves superiority over the competitor classifiers.

Several directions for future work are suggested. Other real-world datasets can be tested, such as coronavirus disease (COVID-19) datasets, and IBMO can be explored in other optimization domains, including the Internet of Things, computer vision, and cloud computing.