Introduction

Recently, fog computing has been combined with cloud computing to compensate for deficits of the cloud, such as its intrinsic latency, and to serve different industries. Since a fog server can process data gathered by IoT devices independently of cloud computing, it can save network communication bandwidth and cloud storage space, and reserve resources for mission-critical applications [1]. Fog also supports unifying edge and cloud resources for customers. Fog computing facilitates the deployment of IoT applications in the vicinity of the data source; therefore, it reduces network load and guarantees on-time service delivery. However, the deployment, management, and updating of IoT applications raise new challenges in such a layered environment. At larger scales, fog computing comprises numerous heterogeneous computing nodes with separate processing, memory, and storage capacities. In addition, the workload on each node is highly dynamic, and each IoT application has its own requirements in terms of latency sensitivity, computing demand, and privacy constraints. Therefore, application components must be properly placed on fog nodes while taking into account the application requirements, software and hardware features, and the bandwidth and tolerable latency between components on the fog infrastructure [2]. Deploying all of an application's components on a single node maximizes resource utilization, decreases power consumption, and optimizes network bandwidth. Nevertheless, when a fog node that hosts all of an application's components crashes, the application stops working properly, which harms the reliability of customer applications. For this reason, an efficient policy is needed for a suitable and reliable component deployment scheme.

There are numerous possible mappings of application components onto fog nodes, and one of the most appropriate and optimal among them should be selected. For a small application with few components, there are several feasible ways to deploy the components on different fog nodes. As the number of application components and the number of (heterogeneous) fog nodes grow, however, finding the optimal deployment scheme becomes computationally complex and no exact solution scales to it; the problem belongs to the NP-hard class [3]. Research has recently been carried out on component distribution over fog and cloud computing nodes. A unified fog computing platform was proposed by Hong et al. [4] in 2018 for dynamic component deployment on fog devices; their approach distributes components over more than one fog node to avoid a single point of failure. Taneja et al. [5] proposed an algorithm in 2017 for distributing IoT application components with regard to application latency sensitivity and efficient network resource usage. Brogi et al. [6] proposed a general and extensible description model in 2017 to specify QoS-aware IoT application deployment on fog infrastructure. A review of the literature reveals a clear lack of work on component placement of IoT applications that considers two different viewpoints at the same time. Accordingly, this paper presents a power-aware and latency-aware algorithm for reliable component deployment on fog infrastructure: power awareness serves the provider as one prominent stakeholder, and latency awareness serves the service customer as the other. To this end, this paper presents two new models for module deployment in the IoT-fog environment.
Accurate models indicate whether the proposed algorithm is effective. After presenting the two new models, namely power and reliability models for deploying IoT components on fog platforms, a multi-objective cuckoo search algorithm is extended that exploits the Pareto dominance and crowding distance concepts to obtain a set of non-dominated solutions while preserving diversity in the search space. Since the stated problem is a discrete optimization problem by nature, the CSA, which permutes the search space efficiently, has been selected. Its operators are designed to achieve a good balance between exploration and exploitation, which the final simulation results confirm, although stochastic approaches offer no guarantee of reaching the optimum.

Therefore, the main contributions of the current paper are as follows:

  1. To reach optimal power consumption, Fullmesh sub-networks are extracted from the whole fog network by a proposed heuristic algorithm; among these sub-networks, the most appropriate one is selected for distribution of the application components.

  2. To mitigate the effect of a single point of failure in application component deployment, a fault-tolerance policy is provided for each application to improve reliability; to this end, the number of fog nodes used for component deployment can range up to the maximum number of nodes existing in the Fullmesh sub-network.

  3. The overall latency concept is modeled. During application component deployment, efficient utilization of fog bandwidth resources is increased by minimizing overall latency, which can also decrease resource wastage and power consumption.

  4. The deployment of application components over fog nodes is formulated as a multi-objective optimization problem minimizing both power consumption and overall latency. To solve this combinatorial problem, a multi-objective cuckoo search optimization algorithm (MOCSA) is presented that compromises between the objectives and considers reliability in its constraints.

The rest of the paper is structured as follows. Related works are reviewed in Sect. “Related works”. The models associated with the problem are presented in Sect. “Proposed framework and models”. Section “Problem statement” states the problem under study. The proposed MOCSA is presented in Sect. “Proposed MOCSA algorithm for component deployment problem” and validated in Sect. “Simulation and evaluation”. Section “Conclusion and future direction” concludes the paper and outlines future directions.

Related works

This section surveys related works to identify the research gap in the component deployment problem. A cloud service management standard named TOSCA was proposed for IoT component placement [7]. Its main objective was to deploy components automatically by matching application component descriptions to fog nodes. The standard improves the portability of applications in heterogeneous environments such as combined cloud and fog settings. It presents a model for describing service structure and service process management, in which the placement of application components is done automatically by applying a conceptual description of the component topology and the related application deployment.

An approach has been proposed in the literature for latency-aware application component management in fog environments [8]. In this work, service access latency, service delivery time, and internal communication latency are considered. The objective is to guarantee service delivery deadlines and efficient resource utilization in the fog environment. To optimize the number of fog nodes hosting application components, the approach exploits a forwarding and reallocation strategy for the components. In addition, to cope with limitations of the fog environment such as management overhead, single points of failure, redundant communication, and decision latency, decentralized organization is proposed for substituting and forwarding the components.

A platform was proposed for dynamic distribution of application components over fog sub-networks [4]. In this approach, all requests are submitted to a server and registered in a database. Each request is split into multiple components, each encapsulated in a Docker container. A heuristic algorithm then determines the component placement plan, which is sent to the fog platform for component distribution. The main goal is to maximize the number of successful placement plans generated for user applications.

The DIANE framework was presented by Vogler et al. [9] in 2015 for producing optimal deployment topologies of cloud-based IoT applications commensurate with existing infrastructures. To keep applications flexible when their deployment topologies evolve over time, separation of the executing components is necessary. Deployment topology changes may stem from the deployment requirements of new applications, changes in the edge network's physical infrastructure such as adding or removing sensors and gateways, environmental changes such as customer request patterns, and evolutionary changes in the business logic over its life cycle. When producing a deployment topology, parameters such as the time needed for deployment, the time and bandwidth required to run the application, and the utilization of edge devices are evaluated.

A distributed programming interface for colonies of fog computing nodes, called Foglets, was presented by Saurez et al. [10] in 2016. Foglets automatically discovers fog computing resources in the network hierarchy and deploys application components on fog nodes that satisfy each component's tolerable latency requirement.

An approach was devised for deploying IoT service components on M2M platforms to reduce traffic from the network to the cloud datacenter, since IoT applications are built on M2M platforms [11].

A network-aware algorithm for optimal resource utilization was presented by Taneja et al. [5] in 2017. It screens fog nodes based on their capacity and the application components' requirements; if the requirements are met, the components are mapped onto the fog nodes.

To facilitate the deployment of applications in combined cloud-fog environments, a platform-as-a-service (PaaS) architecture was proposed by Yangui et al. [12] in 2016. The architecture covers the provisioning and execution of application components, SLA compliance evaluation, and component migration via a management interface. Accordingly, the deployment and execution of application components are detected, configured, and initiated according to the objectives.

Table 1 summarizes and compares related works on IoT application component deployment over fog and cloud infrastructures.

Table 1 Summary of the literature study

This review of the literature shows that published works formulate the problem as optimization problems from different viewpoints. Generally, optimization problems fall into two classes: single-objective and multi-objective. Since the majority of optimization problems are NP-hard, heuristic (or exact) algorithms and meta-heuristic algorithms are used to solve them. In single-objective problems, only one objective function is optimized; for instance, Refs. [13,14,15,16,17] solve single-objective engineering problems with heuristic and exact approaches. Meta-heuristics based on GA [18,19,20,21,22,23], PSO [3, 24, 25], and SA [26,27,28] have been developed for optimization problems in the engineering domain. In addition, multi-objective optimization algorithms such as NSGA-II [29], MOPSO [30], MOGA [18, 31], MOBA [32], and MOGWO [33], among others, have been developed for problems that require a trade-off between conflicting objectives. In this line, several techniques have been presented to improve the quality of multi-objective optimization [34,35,36,37,38], and these methods have been tested on well-known, practical engineering benchmarks [34,35,36,37,38]. Since the placement of IoT application modules in the fog environment is a discrete optimization problem, an efficient discrete optimization algorithm is required; this is the reason for selecting the CSA, which permutes the search space efficiently.

An overall investigation of the reviewed literature also reveals that the majority of published works scarcely address avoiding single points of failure, its effect on how application components are distributed over fog nodes, and, at the same time, optimizing bandwidth utilization. The distinction of the current paper is that it strives to enhance the reliability of user applications with regard to fault tolerance and presents a traffic-aware deployment that optimizes network bandwidth utilization during component distribution.

It is worth noting that accurate models indicate whether a proposed algorithm is effective. Therefore, this paper presents two new models, namely power and reliability models for deploying IoT components on fog platforms, to cover the shortcomings in the literature; the problem is then formulated as a multi-objective optimization problem.

Proposed framework and models

This section presents the system framework and its associated models, which are then used in the problem statement. For readability, Table 2 lists the nomenclature used in the presented models.

Table 2 Nomenclature utilized in proposed models

System framework

The proposed target system framework is depicted in Fig. 1. As the figure shows, an organizer is placed at the top level of the fog layer. One of its main missions is to extract Fullmesh sub-networks of fog nodes, each known as a Mega Node. The Mega Node architecture is similar to the wireless mesh network (WMN) presented by Akyildiz et al. [39] in 2005, but its computing pattern differs from traditional mesh networks in that it uses the network of fog nodes, such as switches and routers, for distribution inside the network. After the Mega Nodes are extracted, a suitable Mega Node is selected, and the organizer decides on component deployment within it according to the features and requirements of the application components. Conceptually, the organizer is centralized, but it can be implemented in a distributed fashion to avoid becoming a single point of failure itself.
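The Mega Node extraction step can be illustrated with a small sketch. The function below is an illustrative brute-force enumeration of fully meshed subsets (cliques) of a fog-node adjacency matrix; the function name and encoding are assumptions, and the paper's own heuristic is not this exhaustive search.

```python
from itertools import combinations

def fullmesh_subnetworks(adj, min_size=2):
    """Enumerate maximal fully meshed (clique) subsets of fog nodes.

    adj: symmetric 0/1 adjacency matrix (list of lists).
    Returns a list of node-index tuples, largest cliques first.
    Brute force -- only meant to illustrate the concept on tiny networks.
    """
    n = len(adj)
    cliques = []
    for size in range(n, min_size - 1, -1):
        for nodes in combinations(range(n), size):
            if all(adj[i][j] for i, j in combinations(nodes, 2)):
                # keep only cliques not contained in an already-found one
                if not any(set(nodes) <= set(c) for c in cliques):
                    cliques.append(nodes)
    return cliques

# four fog nodes: 0-1-2 fully meshed, node 3 linked only to node 2
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
print(fullmesh_subnetworks(adj))  # [(0, 1, 2), (2, 3)]
```

The organizer would then pick one of these candidate Mega Nodes according to the application's requirements.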

Fig. 1 Proposed system framework and associated Mega Nodes

In the proposed framework, the first priority is to derive a deployment plan based on the selected Mega Node; the components are then distributed according to that plan. Only components that are not time-sensitive, or that run periodically for information processing, are deployed on the cloud infrastructure. In this regard, a deployment planner framework is used to manage and execute suitable application component deployments with regard to system performance.

As Fig. 2 shows, the planner module contains the application component manager and its collaborating components. Besides the deployment planner, some modules store and retrieve information about the network and other Mega Node resources.

Fig. 2 Management framework for application components

The integrated information is used for managing application components and producing the preferred deployment plan via the deployment planner. The proposed framework's modules are clarified below.


Application component manager This is the main module; it decides how to deploy application components on fog or cloud nodes. In a multi-component application, because of the dependencies between components, the deployment decision strongly depends on several issues such as resource availability, network structure, the QoS requirements of applications, and load sharing. Components can be deployed according to objectives such as reducing power consumption, minimizing communication, and reducing the overall traffic caused by running applications.


Component resource information This module extracts the processing and memory requirements of application components from the user-submitted request and delivers this information to the application component manager for deciding on the deployment plan.


Components communication information Since communication plays a major role in the resource consumption of fog nodes running IoT applications, managing application components on fog nodes entails optimizing the usage of computing resources, memory, and communication at the same time. To this end, this module extracts the communication information of application components from user requests and delivers it to the application component manager.


Mega node resource discovery This module queries the Mega Node information repository on behalf of the application component manager and returns the information of the preferred Mega Node for application component deployment.


Mega node manager Based on information received from fog nodes, this module extracts the Fullmesh sub-networks of fog nodes and saves their information, known as Mega Nodes, in a repository. In addition, it validates the status of existing Mega Nodes by periodically monitoring the fog infrastructure.

Fog model

This article assumes a network of N fog nodes that are heterogeneous in terms of processing capacity and power consumption; all of them are able to store and execute application components. These fog nodes belong to one or more Mega Node sets. Each node in a Mega Node can directly or indirectly access different kinds of sensors via wired or wireless connections. A fog node \(fn\in F\) is described by a vector (\(\mathrm{id},\mathrm{mid},H,S,sensorlist\)), where \(\mathrm{id},\mathrm{mid},H,S,\) and \(sensorlist\) are the fog node identifier, Mega Node id, hardware, software, and available sensors respectively. Components distributed among a Mega Node's processors can access the software and sensors of that same Mega Node. In this regard, a communication link is modeled by a vector (L,B), where L and B are latency and bandwidth respectively. The details of a Mega Node are elaborated in Fig. 3.
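The fog-node vector (id, mid, H, S, sensorlist) and the link vector (L, B) translate directly into data structures; the field names and example values below are assumptions chosen for readability, not part of the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class FogNode:
    """Fog node vector (id, mid, H, S, sensorlist) from the fog model."""
    id: int                                       # fog node identifier
    mid: int                                      # id of the owning Mega Node
    hardware: int                                 # hardware capacity H (abstract units)
    software: set = field(default_factory=set)    # available software S
    sensors: set = field(default_factory=set)     # sensors reachable from this node

@dataclass
class Link:
    """Communication link vector (L, B)."""
    latency: float      # L, e.g. in milliseconds
    bandwidth: float    # B, e.g. in Mbit/s

# hypothetical node and link, only to show the layout
fn1 = FogNode(id=1, mid=0, hardware=8, software={"python"}, sensors={"temp"})
link = Link(latency=2.0, bandwidth=100.0)
```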

Fig. 3 Mega Node specification and its constituent fog nodes

In this line, the communication network is modeled by a graph G = <FN, D>, where FN = {\({fn}_{1},{fn}_{2},\dots ,{fn}_{N}\)} is the set of fog nodes and edge \({d}_{\mathrm{ij}}\in D\) denotes the distance between nodes \({fn}_{\mathrm{i}}\) and \({fn}_{\mathrm{j}}\). Matrix D in Eq. (1) holds the distance between each pair of fog nodes. Within a Mega Node, if all components are placed on a single node then \({d}_{ij}=0\); otherwise \({d}_{ij}=1\). Figure 4 illustrates a communication network in a Mega Node with three different fog nodes.

$$ D = \left[ {\begin{array}{*{20}c} {d_{11} } & {d_{12} } & \cdots & {d_{1N} } \\ {d_{21} } & {d_{22} } & \cdots & {d_{2N} } \\ \vdots & \vdots & \ddots & \vdots \\ {d_{N1} } & {d_{N2} } & \cdots & {d_{NN} } \\ \end{array} } \right],\quad d_{ij} \in \left\{ {0,1} \right\} $$
(1)
Fig. 4 Communication network in a Mega Node
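Under the binary distance convention just stated (0 on the diagonal, 1 otherwise), the matrix D for a Mega Node can be generated mechanically; this small sketch only restates that rule in code.

```python
def distance_matrix(n):
    """Binary distance matrix D for a Mega Node with n fog nodes:
    d_ij = 0 when i == j (same node), 1 otherwise (Eq. (1) convention)."""
    return [[0 if i == j else 1 for j in range(n)] for i in range(n)]

# the three-node Mega Node of Fig. 4
D = distance_matrix(3)
print(D)  # [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```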

Application model

In recent years, owing to the nature of user requests and new expectations of internet-based services, the design of applications that handle user data changes constantly with the changing requests; hence, to meet user requirements, a multi-component structure is used [40]. Application components are therefore interdependent and cooperate with each other to meet user requirements. For instance, consider a company that offers a smart healthcare service as a small IoT application for the surveillance of aged people. This application includes the three components that Fig. 5 illustrates.

Fig. 5 Specification of application components


Status manager (cmp1) This component monitors aged and disabled people; it alerts the nearest medical and healthcare center once it detects a disorder in physical or mental behavior.


Control center (cmp2) This component is used for interpreting integrated data and for manual control of the system.


Machine learning (cmp3) This component saves individuals' data history and estimates their future wellbeing and health; since it is not latency-sensitive, it can be deployed on a cloud datacenter or on the fog infrastructure.

Figure 5 also depicts the hardware resources and software capabilities required for each component. Communication between components is drawn with dedicated links. To manage the real-time status of aged people, component cmp1 must have access to the needed sensors (physical state controller sensors) and to an actuator that activates the initial response mechanism and notifies medical centers; this must be done within 10 ms between the deployed component and the place where the sensors and actuators are installed. Furthermore, it is expected that fog or cloud nodes can remotely access neighboring things via APIs provided by the fog middleware [41]. The problem to be solved for application component deployment is how to place components so that the requested resources are met. Even for this simple example, different deployment plans must be evaluated to find an optimal component mapping, because more than one component can be deployed on a fog node depending on its available resources. Finding the preferred, optimal deployment becomes impractical when the number of components and fog nodes grows significantly; this combinatorial problem must therefore be solved by sophisticated meta-heuristic algorithms.

This paper assumes there are R IoT applications, each \(\mathrm{r}\in R\) described by a vector (M, cmplist). Each application has M components listed in cmplist, and each component is described by a vector (\(k,\mathrm{h},s,sensorlist\)) (see Fig. 5).

User applications are modeled by a graph \(\mathrm{G}=\left(cmplist,T\right)\), where \(\mathrm{cmplist}=\left\{{cmp}_{1},{cmp}_{2},\dots ,{cmp}_{m}\right\}\) and \(T=[{t}_{ij}]\) is the traffic matrix (TM) whose entry \({t}_{ij}\) gives the traffic between components \({cmp}_{i}\) and \({cmp}_{j}\). Equation (2) shows the traffic matrix, and Fig. 6 illustrates the components communication graph.

$$ T = \left[ {\begin{array}{*{20}c} {t_{11} } & {t_{12} } & \cdots & {t_{1m} } \\ {t_{21} } & {t_{22} } & \cdots & {t_{2m} } \\ \vdots & \vdots & \ddots & \vdots \\ {t_{m1} } & {t_{m2} } & \cdots & {t_{mm} } \\ \end{array} } \right] $$
(2)
Fig. 6 Components communication graph
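A tiny illustration of the application graph G = (cmplist, T): the traffic values below are invented for the three-component healthcare example, purely to show the data layout.

```python
# Components of the example application (Fig. 5): cmp1..cmp3
cmplist = ["cmp1", "cmp2", "cmp3"]

# Hypothetical symmetric traffic matrix T = [t_ij] in, say, kbit/s;
# t_ij = 0 means components i and j do not communicate directly.
T = [
    [0, 50, 20],   # cmp1 <-> cmp2, cmp1 <-> cmp3
    [50, 0, 0],
    [20, 0, 0],
]

def total_traffic(T):
    """Sum of pairwise traffic, counting each undirected pair once."""
    m = len(T)
    return sum(T[i][j] for i in range(m) for j in range(i + 1, m))

print(total_traffic(T))  # 70
```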

Reliability model

Deploying an application's components on the minimum number of fog nodes achieves goals such as reduced power consumption and efficient utilization of computing resources, but one of the resulting challenges is that it aggravates the single-point-of-failure phenomenon for user applications. Therefore, to satisfy the provider's optimization objectives while decreasing the vulnerability of applications to centralized placement on the fog infrastructure, a threshold parameter is considered for the number of fog nodes used in distributing an application's components. In the worst case, the number of nodes needed for component distribution is bounded by the number of available nodes in the selected Mega Node; in other words, the best effort is bounded by the Mega Node's capacity.

Deployment model

To deploy components, one Mega Node is selected from the list of extracted Mega Nodes according to the claimed requirements. Within a Mega Node, if all components are placed on a single node then \({d}_{ij}=0\); otherwise \({d}_{ij}=1\). The fog nodes in a Mega Node meet all component resource requirements in terms of latency, bandwidth, and sensors. In this paper, we assume that all sensors or software requested by application components can be shared by the fog nodes of the Mega Node. When distributing application components over fog nodes, the computing resources, fog node distances, and the QoS parameters requested by the components must be taken into consideration. To reduce traffic load, the distance matrix over each pair of fog nodes in the network graph and the traffic pattern matrix between each pair of components must be calculated. Note that the communication link between fog nodes \({fn}_{m}\) and \({fn}_{n}\) has constant capacity in terms of latency and bandwidth; hence, the traffic rate between application components is bounded by the fog nodes' capacity. This limitation is shown in Eq. (3).

$$ \mathop \sum \limits_{{cmp_{i} \in { }fn_{m} }} \mathop \sum \limits_{{cmp_{j} \in { }fn_{n} }} b_{ij} \times l_{ij} < { }B_{mn} \times { }L_{mn} $$
(3)

where \({b}_{ij}\) and \({l}_{ij}\) are the required bandwidth and latency between components \({cmp}_{i}\) and \({cmp}_{j}\), and \({B}_{mn}\) and \({L}_{mn}\) are the bandwidth and latency between fog nodes \({fn}_{m}\) and \({fn}_{n}\) respectively. Note that a component can be deployed on a fog node only if the node is active. For this reason, the decision variable \({y}_{fn}\) is set to one when fog node \(fn\) is active and can accept a component. Equation (4) shows this constraint.

$$ x_{cmp,fn} \le { }y_{fn} ,{ }\forall cmp \in UApp,\,\,{ }fn \in F $$
(4)

Furthermore, the hardware requested by components cannot exceed the capacity of the underlying fog nodes. Equation (5) expresses this constraint.

$$ \mathop \sum \limits_{cmp \in UApp} x_{cmp,fn} \cdot {\text{hw}}_{cmp} \le {\text{HW}}_{fn} ,\,\,\forall fn \in F $$
(5)

In Eq. (5), \({\text{HW}}_{fn}\) is the hardware capacity of the fog node and \({\text{hw}}_{cmp}\) is the hardware requested by a component.

Since all software resources are assumed to be available to every node in a Mega Node, the software limitation is expressed in Eq. (6).

$$ \mathop \sum \limits_{cmp \in UApp} x_{cmp,fn} \cdot {\text{sw}}_{cmp} \le {\text{SW}}_{{\text{Mega Node}}} ,\forall {\text{Mega Node}} \in F $$
(6)

where \({\text{SW}}_{{\text{Mega Node}}}\) is the software capacity of the Mega Node and \({\text{sw}}_{cmp}\) is the software requested by application components. Similarly, the sensors requested by application components cannot exceed the Mega Node's capacity in terms of the number of available sensors, as elaborated in Eq. (7).

$$ \mathop \sum \limits_{cmp \in UApp} x_{cmp,fn} \cdot {\text{s}}_{cmp} \le S_{{\text{Mega Node}}} ,\forall {\text{Mega Node}} \in F $$
(7)

A decision variable \(x_{cmp,fn}\) determines whether component cmp is placed on fog node fn or not, as Eq. (8) defines.

$$ x_{{{\text{cmp}},fn}} = \left\{ {\begin{array}{*{20}l} 1 \hfill & {{\text{application's~}}cmp{\text{~~is~placed~on~fog~node~}}fn} \hfill \\ 0 \hfill & {{\text{otherwise}}} \hfill \\ \end{array} } \right. $$
(8)

Furthermore, each component is placed on exactly one fog node, as Eq. (9) states.

$$ \mathop \sum \limits_{fn \in F} x_{cmp,fn} = 1{ },\,\,\forall cmp \in UApp $$
(9)
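Constraints (4)–(9) together define when a placement is feasible. The checker below restates them for a simplified setting (scalar hardware demand per component, shared per-Mega-Node software and sensor pools); all names and the concrete numbers are illustrative assumptions, not the paper's notation.

```python
def is_feasible(placement, hw_demand, hw_cap, sw_demand, sw_pool,
                sensor_demand, sensor_pool):
    """placement: dict component -> fog node (each component on exactly
    one node, enforcing Eq. (9)).
    hw_demand / hw_cap: per-component demand and per-node capacity (Eq. (5)).
    sw_demand / sensor_demand: sets required per component; sw_pool /
    sensor_pool: sets offered by the Mega Node (Eqs. (6)-(7), shared by
    all of its nodes)."""
    # Eq. (5): aggregate hardware demand on each node must fit its capacity
    used = {}
    for cmp, fn in placement.items():
        used[fn] = used.get(fn, 0) + hw_demand[cmp]
    if any(used[fn] > hw_cap[fn] for fn in used):
        return False
    # Eqs. (6)-(7): requested software and sensors must exist in the pools
    for cmp in placement:
        if not sw_demand[cmp] <= sw_pool or not sensor_demand[cmp] <= sensor_pool:
            return False
    return True

placement = {"cmp1": "fn1", "cmp2": "fn1", "cmp3": "fn2"}
ok = is_feasible(
    placement,
    hw_demand={"cmp1": 2, "cmp2": 3, "cmp3": 4},
    hw_cap={"fn1": 6, "fn2": 4},
    sw_demand={"cmp1": set(), "cmp2": {"db"}, "cmp3": {"ml"}},
    sw_pool={"db", "ml"},
    sensor_demand={"cmp1": {"pulse"}, "cmp2": set(), "cmp3": set()},
    sensor_pool={"pulse", "temp"},
)
print(ok)  # True
```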

Problem statement

In this paper, the deployment of IoT application components is formulated as a multi-objective optimization problem. To this end, two objective functions and the problem formulation are presented.

Overall latency

One of the most prominent objectives of the deployment problem is to minimize the system's overall latency, which has a drastic impact on average QoS degradation. The latency caused by dependent components of an application that are placed on two different fog nodes within a Mega Node is calculated via Eq. (10).

$$ {\text{Latency}}_{mn} = { }\mathop \sum \limits_{{cmp_{i} \in { }fn_{m} }} \mathop \sum \limits_{{cmp_{j} \in { }fn_{n} }} { }L_{mn} $$
(10)

The latency between each pair of dependent components depends on the latency between the fog nodes hosting them. Note that the latency is ignored when two dependent components are placed on the same node. The overall latency of the system, caused by the deployment of all applications and their components, is measured via Eq. (11).

$$ UApp_{{{\text{latency}}}} = { }\mathop \sum \limits_{{fn_{m,n} \in {\text{Mega Node}}}} {\text{Latency}}_{mn} $$
(11)
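Equations (10)–(11) reduce to a simple rule: every pair of dependent components hosted on different nodes contributes the inter-node latency L_mn, while co-located pairs contribute nothing. A direct restatement of that rule (argument names and values are illustrative):

```python
def overall_latency(placement, dependencies, node_latency):
    """placement: component -> fog node; dependencies: list of (cmp_i, cmp_j)
    pairs that communicate; node_latency: dict mapping a sorted (fn_m, fn_n)
    pair to L_mn. Co-located pairs add zero latency (Eqs. (10)-(11))."""
    total = 0.0
    for ci, cj in dependencies:
        m, n = placement[ci], placement[cj]
        if m != n:
            total += node_latency[tuple(sorted((m, n)))]
    return total

placement = {"cmp1": "fn1", "cmp2": "fn1", "cmp3": "fn2"}
dependencies = [("cmp1", "cmp2"), ("cmp1", "cmp3")]
node_latency = {("fn1", "fn2"): 4.0}   # L_mn in ms, invented for the example
print(overall_latency(placement, dependencies, node_latency))  # 4.0
```

Only the cmp1-cmp3 pair crosses nodes here, so only its inter-node latency is counted.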

Power consumption

The factors affecting a fog node's power consumption include the computational load, communication technology, transferred data traffic volume, and the distance between nodes. To calculate the power consumption of a fog node, the power due to processing the application's components and the power due to data transfer between nodes should both be taken into account. The literature shows that the power consumption of a processing node is linearly related to its resource utilization [42]. The average normalized resource utilization of each fog node is measured via Eq. (12).

$$ U_{{fn_{i} }}^{{{\text{Res}}}} = \frac{{W_{1} \cdot \mathop \sum \nolimits_{j}^{{fn_{i} }} \frac{{R_{{{\text{Com}}_{j} }}^{{{\text{CPU}}}} }}{{R_{{fn_{i} }}^{{{\text{CPU}}}} }} + W_{2} \cdot \mathop \sum \nolimits_{j}^{{fn_{i} }} \frac{{R_{{{\text{Com}}_{j} }}^{{{\text{RAM}}}} }}{{R_{{fn_{i} }}^{{{\text{RAM}}}} }}}}{2} $$
(12)

where \(W_{1}\) and \(W_{2}\) are coefficients expressing the importance of CPU and RAM utilization in the fog node's power consumption, with 0 ≤ \(W_{1}\) ≤ 1, 0 ≤ \(W_{2}\) ≤ 1, and \(W_{1}\) + \(W_{2}\) = 1. Since the power consumption of the processing unit outweighs that of the main memory, processor utilization dominates; consequently, the parameters are set to \(W_{1}\) = 0.9 and \(W_{2}\) = 0.1 [42]. Equation (13) measures the power consumption due to the resources used on each node hosting components.

$$ P_{fn}^{{{\text{Res}}}} = y_{fn} \times \left( {P_{{{\text{max}}}} - P_{{{\text{min}}}} } \right) \times U_{fn}^{{{\text{Res}}}} + P_{{{\text{min}}}} $$
(13)

where \(P_{{{\text{min}}}}\) and \(P_{{{\text{max}}}}\) indicate the minimum and maximum power consumption of each processing node under minimum and maximum utilization respectively, and the binary decision variable \(y_{fn}\) shows whether the processing node is active. Moreover, the power consumed by data transfer over communication links is obtained by Eq. (14).

$$ P_{{fn}}^{{{\text{Tr}}}} = {\text{~}}\mathop \sum \limits_{{fn_{i} \ne fn_{j} }} t_{{{\text{Com}}_{i} ,{\text{Com}}_{{j}} }} \times P_{{{\text{Tr}}}} $$
(14)

The parameter \(P_{Tr}\) is the power consumption per unit of transferred traffic. Note that this power is incurred only when the components are placed on different computing nodes. Consequently, the total power consumption is obtained via Eq. (15): the first term accounts for resource utilization and the second term for traffic transfer.

$$ P_{fn} = P_{fn}^{{{\text{Res}}}} + { }P_{fn}^{{{\text{Tr}}}} $$
(15)
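Equations (12)–(15) can be combined into one small function. The weights W1 = 0.9 and W2 = 0.1 follow the paper; the concrete capacities, demands, and power figures below are invented for illustration. Equation (13) is implemented literally as printed, so an inactive node (y = 0) still retains the idle term P_min.

```python
def node_power(cpu_demands, ram_demands, cpu_cap, ram_cap,
               p_min, p_max, traffic_out=0.0, p_tr=0.0,
               w1=0.9, w2=0.1, active=True):
    """Power of one fog node per Eqs. (12)-(15):
    Eq. (12): U = (w1 * sum(cpu_j/CPU) + w2 * sum(ram_j/RAM)) / 2
    Eq. (13): P_res = y * (P_max - P_min) * U + P_min
    Eq. (14): P_tr  = traffic sent to other nodes * per-unit transfer power
    Eq. (15): P = P_res + P_tr
    """
    y = 1 if active else 0
    u = (w1 * sum(c / cpu_cap for c in cpu_demands)
         + w2 * sum(r / ram_cap for r in ram_demands)) / 2
    p_res = y * (p_max - p_min) * u + p_min
    return p_res + traffic_out * p_tr

# two hosted components, each using 2/8 CPU and 1/4 RAM (invented values)
p = node_power(cpu_demands=[2, 2], ram_demands=[1, 1],
               cpu_cap=8, ram_cap=4,
               p_min=100.0, p_max=200.0,
               traffic_out=10.0, p_tr=0.5)
print(p)  # 130.0  (U = 0.25, so P_res = 125 W, plus 5 W for transfer)
```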

Problem formulation

The deployment of IoT application components over fog nodes is formulated as a multi-objective optimization problem. Based on the objective functions defined above, the formulation is given in Eqs. (16)–(24).

$$ {\text{min TPC }} = {\text{Min}}\mathop \sum \limits_{fn \in F} P_{fn} $$
(16)
$$ {\text{min }}UApp_{{{\text{latency}}}} = {\text{Min }}\mathop \sum \limits_{{fn_{m,n} \in {\text{Mega Node}}}} {\text{Latency}}_{mn} $$
(17)

Subject to:

$$ \mathop \sum \limits_{{cmp_{i} \in { }fn_{m} }} \mathop \sum \limits_{{cmp_{j} \in { }fn_{n} }} b_{ij} \times l_{ij} < { }B_{mn} \times { }L_{mn} $$
(18)
$$ \mathop \sum \limits_{{{\text{cmp}} \in UApp}} x_{cmp,fn} \cdot {\text{hw}}_{cmp} \le {\text{HW}}_{fn} ,\quad \forall fn \in F $$
(19)
$$ \mathop \sum \limits_{cmp \in UApp} x_{cmp,fn} \cdot {\text{sw}}_{cmp} \le {\text{SW}}_{{\text{Mega Node}}} ,\quad \forall {\text{Mega Node}} \in F $$
(20)
$$ \mathop \sum \limits_{cmp \in UApp} x_{cmp,fn} \cdot s_{cmp} \le S_{{\text{Mega Node}}} ,\quad \forall {\text{Mega Node}} \in F $$
(21)
$$ x_{cmp,fn} \le { }y_{fn} ,{ }\forall cmp \in UApp,{ }\,\,fn \in F $$
(22)
$$ \mathop \sum \limits_{fn \in F} x_{cmp,fn} = 1{ },\quad \forall cmp \in UApp $$
(23)
$$ x_{cmp,fn} \in \left\{ {0,1} \right\},{ }\quad y_{fn} \in \left\{ {0,1} \right\} $$
(24)

In the aforementioned problem formulation, Eqs. (16) and (17) are the objective functions to be minimized simultaneously while the constraints in Eqs. (18)–(24) are met. To solve this combinatorial optimization problem, a multi-objective optimization algorithm is presented.

Proposed MOCSA algorithm for component deployment problem

As the stated problem is a multi-objective optimization problem, we extend a multi-objective optimization algorithm with respect to two equally important objectives. A multi-objective optimization algorithm differs from a single-objective one in that a trade-off between objectives must be made. To this end, the dominance concept is utilized [24, 31, 42]. The multi-objective optimization algorithm must search the solution space to find the non-dominated solutions known as the Pareto front [31]. Given the discrete nature of the search space of the stated problem, the cuckoo search algorithm (CSA) is adopted for its performance and its adaptability to discrete search spaces. The CSA was first introduced by Yang and Deb [43] in 2009 and has had successful outcomes in different optimization domains such as [44,45,46]. To solve the deployment problem, a multi-objective version of CSA, known as MOCSA, is developed which inherits the strengths of both the CSA and NSGA-II algorithms [29].

The CSA mimics the behavior of cuckoo birds. This bird behaves aggressively: it lays its eggs in other birds' nests and may even throw away the hosts' eggs. In CSA, every egg in a nest is a candidate solution; when a cuckoo lays an egg in a nest, it in fact produces a new solution. A single-objective CSA uses three rules:

At first, each cuckoo lays one egg in a randomly selected nest.

Secondly, better nests holding eggs (solutions) with better quality remain for next generation.

Thirdly, the number of nests is fixed; in a host nest, the host bird can detect a strange egg with probability \({p}_{a}\in \left[\mathrm{0,1}\right]\); in this case, it either smashes the egg or abandons the nest and builds a completely new one in a new place.

To construct MOCSA with k objective functions, the three rules of the canonical CSA need to be customized with respect to the objective functions. The new rules are:

In each iteration, each cuckoo lays k eggs in a randomly selected nest, in which the i-th egg represents the i-th objective function. Depending on the similarity and discrepancy between eggs, each nest is abandoned with probability \({p}_{a}\) and a new nest is constructed with k new eggs. In addition, operations can be defined to traverse the search space efficiently. Mathematically, the first rule can use random walks or Levy flights (cf. Eqs. (25), (26)) to traverse the search space uniformly when generating new solutions. The second rule is an elitism-based approach so that better solutions survive to the next generation; selecting better solutions yields suitable convergence of the algorithm. The third rule can be viewed as a mutation approach: worse solutions are probabilistically discarded and new solutions are generated according to their similarity to other solutions. This mutation is performed by a vector operator that combines Levy flight with the quality differential of solutions. Figure 7 shows the block diagram of the proposed algorithm.

Fig. 7
figure 7

Block diagram of proposed algorithm

This algorithm receives the problem specification and execution settings as input, such as information about the resources requested by applications, the number of components and their communication details, the number of fog nodes and the associated network information, the number of initial solutions, and the maximum number of iterations. It returns a set of non-dominated solutions as deployment plans.

Problem encoding

One of the most important issues in the CSA is the concept of a nest, which is a candidate solution. The nest encoding has a strong impact on algorithm performance; there are various encoding schemes for different problems, and the art is to find the most appropriate one. Each nest is a possible solution for deploying IoT application components on fog nodes. A nest contains |M| eggs, each of which represents a component. The number assigned to each egg is drawn from the interval [1...|N|] and indicates the fog node hosting that component. Figure 8 depicts an example encoding for the deployment of 10 components on 3 fog nodes.

Fig. 8
figure 8

An example for deployment encoding and associated Nest
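The encoding above translates directly into code: a nest is a list of |M| eggs whose values are fog-node numbers in 1..|N|. The helper below is an illustrative sketch that draws each egg uniformly at random.

```python
import random

def random_nest(num_components, num_fog_nodes, rng=random):
    """One candidate deployment: component i is hosted on fog node nest[i]."""
    return [rng.randint(1, num_fog_nodes) for _ in range(num_components)]

# The example from the text: 10 components deployed on 3 fog nodes.
nest = random_nest(10, 3)
assert len(nest) == 10 and all(1 <= fn <= 3 for fn in nest)
```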

Proposed MOCSA

In the single-objective cuckoo search algorithm, the population is partitioned into superior and inferior nests with a predetermined probability based on fitness values. In other words, the parameter Pa is the fraction of the population placed in the inferior nests, whereas the rest are placed in the superior nests after the population is sorted by fitness. In each generation (iteration), the algorithm works in two stages. In the first stage, for each individual in the inferior nests, a new position is generated by the Levy flight distribution and the old individual is directly replaced by the newly generated one. In the second stage, for each individual in the superior nests, a new position is generated by the Levy flight distribution; if the newly generated individual is better in terms of fitness, it replaces the old one. Since multi-objective optimization differs from the single-objective case, we have customized CSA into the MOCSA algorithm to obtain non-dominated solutions. The general behavior is the same, but the ranking and partitioning processes differ. For ranking, we use the non-dominated sorting and crowding-distance concepts. When the population must be partitioned into two parts, we apply the non-dominated sorting strategy of Algorithm 6; then, from the worst rank to the best, solutions are copied directly to the inferior nests. If, given the probability Pa, the solutions of the k-th rank overflow the inferior nests, the crowding-distance values are considered: the remaining individuals with the worst crowding-distance values are selected to fill the rest of the inferior nests. Afterwards, the remaining population is copied to the superior nests.
It is worth mentioning that, in the second stage, when a new individual is generated for each individual in the superior nests, the old individual is replaced only if the new one dominates it with respect to the two objective functions.
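A minimal sketch of this rank-then-crowding partitioning, assuming `ranks` and `crowd` arrays are produced by the non-dominated sorting and crowding-distance procedures (Algorithms 6 and 7); the function name and tie-breaking details are illustrative.

```python
def partition(population, ranks, crowd, pa):
    """Split the population: the worst Pa fraction goes to the inferior nests."""
    n_inferior = int(pa * len(population))
    # Worst first: a higher rank is worse; within a rank, a denser solution
    # (smaller crowding distance) is considered worse.
    order = sorted(range(len(population)),
                   key=lambda i: (-ranks[i], crowd[i]))
    inferior = [population[i] for i in order[:n_inferior]]
    superior = [population[i] for i in order[n_inferior:]]
    return inferior, superior

pop = ["a", "b", "c", "d"]
inf_, sup_ = partition(pop, ranks=[0, 1, 1, 0], crowd=[2.0, 1.0, 3.0, 5.0], pa=0.5)
assert set(inf_) == {"b", "c"} and set(sup_) == {"a", "d"}
```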

The proposed MOCSA is elaborated in Algorithm 1, which deploys IoT application components on fog nodes efficiently with respect to the objective functions. As mentioned earlier, Algorithm 1 receives the problem specification as input and returns non-dominated solutions with respect to the two objective functions. It iterates until the termination criterion is met; here, the termination condition is executing MaxIteration times. Before Algorithm 1 enters its main loop, which spans lines 14 through 27, it performs preprocessing stages. Algorithms 2 and 3 are dedicated to extracting Mega Nodes and desired Mega Nodes, as explained in the preprocessing stages. New solutions are generated in line 5 from the extracted desired Mega Nodes. In line 7, Algorithm 4 is called to check and correct infeasible solutions; then, the associated data structure is updated in line 8. Algorithm 5 is called to assign two fitness values to each individual based on Eqs. (16) and (17), since this is a multi-objective problem. The main loop of the proposed MOCSA starts in line 14 and ends in line 27. In each generation, the population is partitioned into inferior and superior nests. As explained earlier, the main loop runs two stages: first, the worst solutions in the inferior nests are updated; second, the solutions in the superior nests are updated only if the newly generated solutions dominate the old versions. In line 9, the fitness values of all solutions are assigned by calling Algorithm 5. In lines 10–11, Algorithms 6 and 7 are called to build the Pareto fronts and crowding distances of the current solutions. In the main loop, the Pa fraction of solutions with the worst ranking is copied to the inferior nests using the Pareto-front and crowding-distance values, and the rest are copied to the superior nests.
Before the algorithm enters the main loop, the current solutions are sorted by rank in line 12; the first-rank solutions are then kept in the Pareto-Set repository in line 13. As mentioned earlier, in line 15 Algorithm 8 is called to update the solutions in the inferior nests; afterwards, the second stage starts, where the solutions in the superior nests are updated. If a new solution dominates the old version, the old version is replaced in the superior nests; this is done by calling Algorithm 9 in line 16. In line 17, Algorithm 4 is called to check and correct infeasible solutions, and the associated data structure is updated in line 18. In line 19, the fitness values of all updated solutions are calculated by calling Algorithm 5; then, the non-dominated fronts and crowding distances are calculated by calling Algorithms 6 and 7, respectively. The current solutions are then sorted by rank. A temporary solution set is built by merging the current solutions with the last Pareto-Set values and sorting it by rank; solutions are copied back into the current-solutions variable from the first rank onward, using crowding-distance values where needed. In addition, the first rank is copied directly into the Pareto-Set variable. After the last iteration, the final Pareto-Set, containing the first-rank solutions of the last iteration, is returned as the final set of non-dominated solutions.

figure a

Preprocessing

In this stage, preprocessing is performed to extract the desired Mega Nodes. Algorithm 2 selects the different Mega Nodes from the input fog network. The Mega Node concept was clarified earlier; it corresponds to a clique in graph theory, and the algorithm returns all cliques with K nodes. Mega Node extraction brings two merits: first, it reduces the search space for finding the optimal deployment plan; second, it provides the common sensors and software of the Mega Node to the requested components. In Algorithm 2, in the while-loop between lines 3 and 11, all connected node pairs are first extracted, and each pair is placed in a row of the Mega_Nodes array. In the for-loop between lines 13 and 20, each fog node i is compared with each row of the Mega_Nodes array that does not contain node i; if node i is connected to all nodes in that row, it is added to the row. In each iteration, repeated rows are omitted. The main loop iterates until the final Mega_Nodes array, which contains the set of Mega Nodes, is delivered.

figure b
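Since a Mega Node is a K-clique of the fog graph, the extraction can be sketched by brute-force enumeration of K-subsets. This is an illustrative re-statement of what Algorithm 2 computes, not the incremental row-growing procedure itself, which achieves the same result more economically.

```python
from itertools import combinations

def mega_nodes(adjacency, k):
    """Return all K-cliques (Mega Nodes); adjacency maps node -> set of neighbors."""
    nodes = sorted(adjacency)
    return [set(c) for c in combinations(nodes, k)
            if all(v in adjacency[u] for u, v in combinations(c, 2))]

# Toy fog network: nodes 1, 2, 3 are fully connected; node 4 links only to 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
assert mega_nodes(adj, 3) == [{1, 2, 3}]
```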

After Mega Node extraction, some Mega Nodes are selected by Algorithm 3 according to the constraints in Eqs. (18)–(21) of the stated problem. In this algorithm, if the latency and bandwidth requirements are provisioned by the Mega Node in the current row, the Latency_BW_status variable is set to true. In addition, if the hardware, software, and sensors can be provided by the current Mega Node, HW_Status, SW_Status, and S_Status are set to true. If the current Mega Node can fulfill all required resources, it is added to the selected Mega Node list.

The termination criterion of Algorithm 2 is the desired clique size K; in other words, the main loop iterates K times. Since the effective statements of Algorithm 2 are in the while-loop, its time complexity is O(K∙\({N}^{2}\)) where K < N. Also, Algorithm 3's time complexity is O(N + M), because the main work is done in the for-loop between lines 1 and 9.

figure c

Initialization step

Similar to other meta-heuristic algorithms, the CSA starts with an initialization phase, which line 5 of Algorithm 1 performs by randomly generating individuals from the search space. To reduce MOCSA's time complexity, the value domain of the eggs is confined according to the proposed encoding approach. Since some solutions may violate the problem constraints during individual generation, the Check&Correct procedure shown in Algorithm 4 is designed. Indeed, Algorithm 4 extracts the maximum benefit from the produced population for use in finding optimal solutions.

figure d

The time complexity of Algorithm 4 is O(N∙PopSize), because its two nested for-loops are the most effective statements.
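A hedged sketch of the Check&Correct idea: any egg pointing at a node that cannot host its component is reassigned to a feasible node. The `can_host` predicate is an assumed stand-in for the resource checks of Eqs. (18)–(21), and the repair strategy (random feasible reassignment) is illustrative rather than the exact rule of Algorithm 4.

```python
import random

def check_and_correct(nest, fog_nodes, can_host, rng=random):
    """Repair an infeasible deployment in place and return it."""
    for i, fn in enumerate(nest):
        if not can_host(i, fn):
            feasible = [n for n in fog_nodes if can_host(i, n)]
            if feasible:               # leave the egg unchanged if nothing fits
                nest[i] = rng.choice(feasible)
    return nest

# Toy check: component 0 may only run on node 2; component 1 runs anywhere.
nest = check_and_correct([1, 2], [1, 2], lambda comp, node: node == 2 or comp == 1)
assert nest[0] == 2
```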

Fitness function

Generally, one of the most important tasks in evolutionary computation is evaluating solutions. This is done by fitness functions derived from the problem's objective functions. In this paper, the fitness function is based on the total power consumption and overall latency given in Eqs. (16) and (17). The proposed fitness function is depicted in Algorithm 5.

figure e

The time complexity of Algorithm 5 is clearly O(PopSize).

Non-dominated sorting

In multi-objective optimization algorithms, the goal is to discard unfavorable solutions and retain the best ones using a strategy whereby solutions at lower levels are eliminated while better solutions survive, until the final solution set is obtained step by step. In the proposed MOCSA, we apply a non-dominated sorting algorithm to find the Pareto front. This algorithm examines the current solutions in terms of the dominance concept with respect to the objective functions. In fact, it classifies solutions into different Pareto levels so that solutions in the same rank cannot dominate each other, whereas solutions in higher ranks dominate those in lower ranks. The favorable non-dominated solutions belong to the first rank. Algorithm 6 finds the non-dominated solutions.

figure f

Since the effective statements of Algorithm 6 are in nested for-loops, its time complexity is O(\({\mathrm{PopSize}}^{2}\)).
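The dominance test and ranking described here can be sketched for two minimization objectives (power, latency) as follows; this is a simple repeated-scan version rather than the book-keeping variant of fast non-dominated sorting, but it produces the same ranks.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Partition objective vectors into Pareto ranks (rank 0 is the front)."""
    remaining, fronts = list(range(len(objs))), []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

objs = [(1, 5), (2, 2), (3, 3)]            # (power, latency) pairs
assert non_dominated_sort(objs) == [[0, 1], [2]]
```

Solutions 0 and 1 trade power against latency, so neither dominates the other and both sit in the first rank, while solution 2 is dominated by solution 1.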

Crowding distance

Finding efficient solutions strongly depends on the strategy the algorithm takes; the best strategy explores the search space efficiently. The more widely solutions are distributed across the search space, the better the chance of obtaining high-quality solutions. Diverse solutions spread over a larger region are preferable to denser solutions in a smaller region, which is why we apply the crowding-distance algorithm to measure solution density within a search region. This prevents solutions from clustering locally. Algorithm 7 elaborates the crowding-distance procedure.

figure g

It is clear that the time complexity of Algorithm 7 is O (PopSize).
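A minimal sketch of the standard crowding-distance computation (as in NSGA-II) over one rank of objective vectors: boundary solutions receive infinite distance, and interior ones accumulate the normalized span of their neighbors in each objective. The exact form used in Algorithm 7 is assumed to follow this convention.

```python
def crowding_distance(objs):
    """Per-solution crowding distance for one Pareto rank of objective vectors."""
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary solutions
        if hi > lo:
            for pos in range(1, n - 1):
                i = order[pos]
                dist[i] += (objs[order[pos + 1]][k]
                            - objs[order[pos - 1]][k]) / (hi - lo)
    return dist

d = crowding_distance([(1, 5), (2, 3), (4, 1)])
assert d[0] == d[2] == float("inf") and 0 < d[1] < float("inf")
```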

Inferior nests update

In this process, a fraction Pa of the worse solutions is detected and amended. This operation is similar to mutation in GA [43,44,45,46]. Since our algorithm works in the multi-objective domain, the worst solutions are selected from the worst-ranking frontier, and the crowding distance is used where needed. The modification of worse solutions is done by a random-walk approach. Algorithm 8 is dedicated to this task: in line 4, invalid solutions are amended, and the updated solutions are then returned as new solutions.

figure h

The time complexity of Algorithm 8 is Θ(Pa∙PopSize), and therefore O(PopSize), because it has only one for-loop and Pa < 1.

Conditional update of superior nests

To produce the next generation of solutions, an elitism mechanism is applied so that the better solutions are transferred to the next generation. A desirable trait of any meta-heuristic algorithm is balancing exploration and exploitation in the search space, but some fail to do so; for instance, PSO suffers from premature convergence [24, 25], and simulated annealing (SA) is weak in the exploration phase [26,27,28]. Fortunately, the proposed MOCSA strikes a good balance between exploitation and exploration. For exploitation, it replaces a random solution with a better newly generated one, as in Algorithm 9; for exploration, it draws from a uniform distribution to search the space globally, as in Algorithm 8. A prominent feature of CSA is its use of Levy flight for both local and global search: a random walk characterized by probabilistic, instantaneous jumps in the search space [47]. Accordingly, using the Levy flight approach [44], the new-generation individuals are produced in line 2 of Algorithm 9; if a newly generated individual dominates its previous-generation counterpart, the old one is replaced, as depicted in lines 4–6. As the values obtained for new solutions are continuous, they are amended to match the problem conditions in line 3 of Algorithm 9. The new solution is then added to the list of next-generation solutions.

figure i

In line 2, Algorithm 9 produces a number y as a random nest number from Levy distribution based on Eq. (25).

$$ y = \left( {1 - u} \right)^{{ - \frac{1}{\alpha }}} $$
(25)

where the variable u is a uniform variable in [0...1] interval and the parameter \(\alpha\) is obtained by Eq. (26).

$$ \alpha = G^{1/6} $$
(26)

where the parameter G is the generation number [44]. After that, line 3 updates the obtained solutions according to the boundaries of the problem domain. The time complexity of Algorithm 9 is O(PopSize), because it contains only one for-loop.
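Eqs. (25) and (26) transcribe directly into code; the function name is illustrative. The step length is heavy-tailed, and since \(u\in[0,1)\) the value \((1-u)^{-1/\alpha}\) is always at least 1, with \(\alpha=G^{1/6}\) thinning the tail as the generation number grows.

```python
import random

def levy_step(generation, rng=random):
    """Heavy-tailed Levy-flight step length per Eqs. (25)-(26)."""
    alpha = generation ** (1.0 / 6.0)     # Eq. (26): alpha = G^(1/6)
    u = rng.random()                      # uniform variable in [0, 1)
    return (1.0 - u) ** (-1.0 / alpha)    # Eq. (25): y = (1 - u)^(-1/alpha)

steps = [levy_step(g) for g in (1, 10, 100)]
assert all(s >= 1.0 for s in steps)
```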

Simulation and evaluation

To assess the effectiveness of the proposed MOCSA in solving the multi-objective optimization problem of component deployment on fog nodes, experiments are defined, executed, and evaluated. To reach concrete results, different scenarios are conducted, and the performance of the proposed MOCSA is compared with four prominent and successful multi-objective optimization algorithms, namely MOGWO [33], MOPSO [30], MOBA [32], and NSGA-II [29]. The evaluation metrics are total power consumption and overall latency, which correspond to the stated problem's objective functions. As mentioned earlier, the total power consumption is the sum of the processing power consumption due to resource utilization and the power consumption due to data transfer between fog nodes over communication links. In addition, the overall latency is the sum of the latencies of communicating components placed on different fog nodes. Furthermore, the Pareto fronts of the algorithms are compared, and the final deployment produced by MOCSA is drawn.

Note that Mirjalili et al. [33] in 2016 added two new modules to the canonical GWO algorithm to create its multi-objective version. The first is an archive module used to save the non-dominated solutions found so far; the second is a leader-selection module that selects the alpha, beta, and delta wolves, used to update the positions of the omega wolves during optimization. These features keep the current solutions and gradually move them toward the final Pareto front. Similarly, Coello et al. [30] proposed MOPSO, which keeps a history record of the best solution experienced by each particle and saves the non-dominated solutions of previous rounds; this mechanism works like elitism in evolutionary computation. It also uses a global repository in which each particle keeps its flight experience; this repository is used for leader selection to guide the other particles through the search space, so each particle may follow a different leader. MOPSO works by generating hypercubes that divide the search space into several sections [30]. Another successful meta-heuristic is the bat optimization algorithm (BOA), first introduced by Yang [48] in 2010; in 2011, he proposed a multi-objective bat optimization algorithm by incorporating dominance concepts [32]. One of the most famous and widely applied multi-objective optimizers based on the genetic algorithm is NSGA-II, first introduced by Deb et al. [29] in 2002. NSGA-II generates a population and then calls a fast non-dominated sorting algorithm to place solutions in different ranks. Solutions in the same rank cannot dominate each other, but they dominate solutions in lower ranks. Using canonical crossover and mutation, newly generated solutions may dominate solutions from previous generations, in which case the dominated solutions are discarded. This procedure is repeated until the termination criterion is met; finally, the non-dominated solutions of the first rank are returned.

Experimental settings

To evaluate the proposed approach, different scenarios are conducted in which the numbers of requested components and fog nodes increase gradually; Table 3 elaborates the scenarios in detail. Note that scenarios 5–8 are defined for scalability testing of the compared algorithms, where the input sizes are significantly increased. All experiments are executed on a dual-core Intel Core i3-380M platform with a 2.53 GHz clock rate, four logical processors, and 8 GB of main memory.

Table 3 Different scenarios of simulation

Since fog computing is ad hoc and abundant datasets are not available in the literature, we produce datasets in a uniform-distribution fashion, as in Tables 4, 5, 6, 7, 8 and 9. In addition, because fog is completely heterogeneous in terms of resources and their speeds, we introduce fluctuations into the produced datasets. Tables 4, 5 and 6 give the underlying fog computing specifications for an example with 5 fog nodes. Table 4 shows the fog node specifications in terms of CPU clock rate, main memory and their thresholds, minimum and maximum power consumption (idle vs. fully loaded), supported sensor and software types, and the power consumption of data transfer; in this table, a zero value indicates lack of support, while the values 1 and 2 indicate sensor types. Tables 5 and 6 show the bandwidth and latency of direct communications between fog nodes. The values were normalized to the interval [0...1]. In Table 5, a value of zero means there is no connection between nodes, whereas a value of one indicates the nodes are the same; this interpretation is reversed in Table 6.

Table 4 Fog nodes resources
Table 5 Bandwidth between fog nodes
Table 6 Latency between fog nodes
Table 7 Resource requested for application components
Table 8 Bandwidth requested for application components
Table 9 Latency requested for application components

Tables 7, 8, and 9 give an example of the resources requested by an application containing 5 components. Table 7 lists the CPU, RAM, sensor type, and software requested by each component; Table 8 gives the bandwidth requested between each pair of components; and Table 9 gives the tolerable latency between each pair of components.

For the simulations and comparisons, the parameter settings of MOCSA, MOGWO, MOPSO, MOBA, and NSGA-II are listed in Table 10.

Table 10 Setting parameters of different algorithms in simulation

Experimental results

In this section, the proposed MOCSA is compared with the other algorithms based on the Pareto front, the values of the two objective functions, and elapsed time. We also evaluate another version of MOGWO, known as MOGWO-I, in which the crossover and mutation operators of the genetic algorithm are applied to explore the search space. In addition, the optimal deployment plan is drawn, with the nodes hosting application components shown in red.

First scenario: 10 fog nodes and 20 application components

Figure 9 demonstrates the performance of the different algorithms in a scenario with 20 requested components placed on 10 underlying fog nodes. Figure 9a draws the Pareto frontiers derived from the different algorithms; as the figure shows, MOCSA outperforms the others. The Mega Node that MOCSA extracts is depicted in Fig. 9b, which shows the optimal deployment plan; the selected fog nodes are 1, 4, 5, 9, and 10. In addition, Fig. 9c and d compare the algorithms' performance in terms of the first objective (total power consumption, Eq. (16)) and the second objective (overall latency, Eq. (17)).

Fig. 9
figure 9

Performance comparison of different algorithms in scenario with 20 components on 10 fog nodes

Table 11 compares the algorithms' performance in terms of elapsed time. It shows that MOCSA is in second place after MOPSO, the fastest of all, but the quality of MOCSA's non-dominated solutions is better than the others'.

Table 11 Performance comparison of algorithms in term of elapsed time

Second scenario: 15 fog nodes and 25 application components

Figure 10 demonstrates the performance of the different algorithms in a scenario with 25 requested components placed on 15 underlying fog nodes. Figure 10a draws the Pareto frontiers derived from the different algorithms; as the figure shows, MOCSA outperforms the others. The Mega Node that MOCSA extracts is depicted in Fig. 10b, which shows the optimal deployment plan; the selected fog nodes are 3, 5, 9, 10, and 13. In addition, Fig. 10c and d compare the algorithms' performance in terms of the first objective (total power consumption, Eq. (16)) and the second objective (overall latency, Eq. (17)).

Fig. 10
figure 10

Performance comparison of different algorithms in scenario with 25 components on 15 fog nodes

Table 12 compares the algorithms' performance in terms of elapsed time. It shows that MOCSA is in second place after MOPSO, the fastest of all, but the quality of MOCSA's non-dominated solutions is better than the others'. In terms of execution time, the proposed MOCSA competes closely with NSGA-II, which is in third place.

Table 12 Performance comparison of algorithms in term of elapsed time

Third scenario: 20 fog nodes and 30 application components

Figure 11 demonstrates the performance of the different algorithms in a scenario with 30 requested components placed on 20 underlying fog nodes. Figure 11a draws the Pareto frontiers derived from the different algorithms; as the figure shows, MOCSA outperforms the others. The Mega Node that MOCSA extracts is depicted in Fig. 11b, which shows the optimal deployment plan; the selected fog nodes are 6, 7, 11, 16, and 17. In addition, Fig. 11c and d compare the algorithms' performance in terms of the first objective (total power consumption, Eq. (16)) and the second objective (overall latency, Eq. (17)).

Fig. 11
figure 11

Performance comparison of different algorithms in scenario with 30 components on 20 fog nodes

Table 13 compares the algorithms' performance in terms of elapsed time. It shows that MOCSA is in third place after MOPSO and NSGA-II, the fastest and second fastest of all, but the quality of MOCSA's non-dominated solutions is better than the others'.

Table 13 Performance comparison of algorithms in term of elapsed time

Fourth scenario: 25 fog nodes and 40 application components

Figure 12 demonstrates the performance of the different algorithms in a scenario with 40 requested components placed on 25 underlying fog nodes. Figure 12a draws the Pareto frontiers derived from the different algorithms; as the figure shows, MOCSA outperforms the others. The Mega Node that MOCSA extracts is depicted in Fig. 12b, which shows the optimal deployment plan; the selected fog nodes are 1, 4, 7, 9, 11, 16 and 21. In addition, Fig. 12c and d compare the algorithms' performance in terms of the first objective (total power consumption, Eq. (16)) and the second objective (overall latency, Eq. (17)).

Fig. 12
figure 12

Performance comparison of different algorithms in scenario with 40 components on 25 fog nodes

Table 14 compares the algorithms' performance in terms of elapsed time. It shows that MOCSA is in third place after NSGA-II and MOPSO, the fastest and second fastest of all, but the quality of MOCSA's non-dominated solutions is better than the others'.

Table 14 Performance comparison of algorithms in term of elapsed time

Fifth scenario: 40 fog nodes and 60 application components

Figure 13 demonstrates the performance of the different algorithms in a scenario with 60 requested components placed on 40 underlying fog nodes. Figure 13a draws the Pareto frontiers derived from the different algorithms; as the figure shows, MOCSA outperforms the others. The Mega Node that MOCSA extracts is depicted in Fig. 13b, which shows the optimal deployment plan; the selected fog nodes are 1, 2, 3, 7, 9, 12, 14, 15, 16, 19, 24, 28, 31, 33 and 37. In addition, Fig. 13c and d compare the algorithms' performance in terms of the first objective (total power consumption, Eq. (16)) and the second objective (overall latency, Eq. (17)).

Fig. 13
figure 13

Performance comparison of different algorithms in scenario with 60 components on 40 fog nodes

Table 15 compares the algorithms' performance in terms of elapsed time. It shows that MOCSA is in second place after MOPSO, the fastest of all, but the quality of MOCSA's non-dominated solutions is better than the others'. In terms of execution time, the proposed MOCSA competes closely with NSGA-II, which is in third place.

Table 15 Performance comparison of algorithms in term of elapsed time

Sixth scenario: 55 fog nodes and 75 application components

Figure 14 demonstrates the performance of the different algorithms in a scenario with 75 requested components placed on 55 underlying fog nodes. Figure 14a draws the Pareto frontiers derived from the different algorithms; as the figure shows, MOCSA outperforms the others. The Mega Node that MOCSA extracts is depicted in Fig. 14b, which shows the optimal deployment plan; the selected fog nodes are 3, 6, 8, 9, 22, 24, 28, 29, 31, 32, 39, 43, 45 and 55. In addition, Fig. 14c and d compare the algorithms' performance in terms of the first objective (total power consumption, Eq. (16)) and the second objective (overall latency, Eq. (17)).

Fig. 14
figure 14

Performance comparison of different algorithms in scenario with 75 components on 55 fog nodes

Table 16 compares the algorithms' performance in terms of elapsed time. It shows that MOCSA is in second place after NSGA-II, the fastest of all, but the quality of MOCSA's non-dominated solutions is better than the others'. In terms of execution time, the proposed MOCSA competes closely with MOPSO, which is in third place.

Table 16 Performance comparison of algorithms in term of elapsed time

Seventh scenario: 70 fog nodes and 100 application components

Figure 15 demonstrates the performance of the different algorithms in a scenario with 100 requested components placed on 70 underlying fog nodes. Figure 15a draws the Pareto frontiers derived from the different algorithms; as the figure shows, MOCSA outperforms the others. The Mega Node that MOCSA extracts is depicted in Fig. 15b, which shows the optimal deployment plan; the selected fog nodes are 8, 10, 13, 17, 18, 19, 20, 24, 30, 35, 38, 39, 40, 42, 45, 49, 51, 52, 53, 58, 64 and 66. In addition, Fig. 15c and d compare the algorithms' performance in terms of the first objective (total power consumption, Eq. (16)) and the second objective (overall latency, Eq. (17)).

Fig. 15

Performance comparison of different algorithms in a scenario with 100 components on 70 fog nodes

Table 17 compares the algorithms' performance in terms of elapsed time. The table shows that MOCSA ranks second after NSGA-II, which is the fastest of all, but the quality of MOCSA's non-dominated solutions is better than that of the others. In terms of execution time, the proposed MOCSA is only marginally ahead of MOPSO, which ranks third.

Table 17 Performance comparison of algorithms in terms of elapsed time

Eighth scenario: 100 fog nodes and 150 application components

Figure 16 demonstrates the performance comparison of the different algorithms in a scenario with 150 requested components to be placed on 100 underlying fog nodes. Figure 16a draws the Pareto frontiers derived from the different algorithms; as the figure shows, MOCSA outperforms the others. The Mega Node extracted by MOCSA is depicted in Fig. 16b; it shows the optimal deployment plan, in which the selected fog nodes are 1, 9, 16, 17, 19, 21, 24, 28, 39, 40, 51, 53, 56, 57, 59, 60, 62, 63, 65, 72, 73, 78, 84, 86, 88, 89, and 93. In addition, Fig. 16c and d compare the algorithms' performance in terms of the first objective (total power consumption, Eq. (16)) and the second objective (overall latency, Eq. (17)).

Fig. 16

Performance comparison of different algorithms in a scenario with 150 components on 100 fog nodes

Table 18 compares the algorithms' performance in terms of elapsed time. The table shows that MOCSA ranks second after NSGA-II, which is the fastest of all, but the quality of MOCSA's non-dominated solutions is better than that of the others. In terms of execution time, the proposed MOCSA is only marginally ahead of MOBA, which ranks third.

Table 18 Performance comparison of algorithms in terms of elapsed time

From a statistical standpoint, the proposed MOCSA achieves 43%, 28%, 41%, 30%, and 32% average reductions in power consumption against MOGWO, MOGWO-I, MOPSO, MOBA, and NSGA-II, respectively, and 26%, 36%, 23%, 39%, and 43% improvements over the same algorithms in the minimum power consumption attained by the solutions. In addition, the proposed MOCSA achieves 42%, 29%, 46%, 13%, and 5% average reductions in overall latency against MOGWO, MOGWO-I, MOPSO, MOBA, and NSGA-II, respectively, and 40%, 33%, 37%, 17%, and 6% improvements in the minimum overall latency attained.
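The improvement figures above are relative reductions of one algorithm's objective value against another's. As a small illustrative helper (with made-up numbers, not the paper's measured data), such a percentage is computed as:

```python
def pct_improvement(baseline, proposed):
    """Relative reduction of the proposed value against a baseline, in percent."""
    return 100.0 * (baseline - proposed) / baseline

# Illustrative averages only: a baseline algorithm at 100.0 W vs. MOCSA at 57.0 W
print(round(pct_improvement(100.0, 57.0)))  # a 43% reduction
```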

Time complexity

Now that the time complexity of all sub-algorithms has been determined, the time complexity of Algorithm 1 can be calculated. The preprocessing takes K\({N}^{2}\) + M + N steps, which belongs to O(M + K\({N}^{2}\)). The main loop iterates MaxIteration times, and each iteration costs N∙PopSize + \({\mathrm{PopSize}}^{2}\), for a total of MaxIteration \(\times \) (N∙PopSize + \({\mathrm{PopSize}}^{2}\)). Assuming N < PopSize, the time complexity of Algorithm 1 is O(M + K∙\({N}^{2}\) + MaxIteration∙\({\mathrm{PopSize}}^{2}\)), which is an acceptable time complexity.
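The complexity terms above follow directly from the loop structure of a population-based search. The following schematic operation counter (hypothetical function name; a sketch of the analysis, not the paper's Algorithm 1) mirrors that structure:

```python
def mocsa_schematic(N, M, K, pop_size, max_iteration):
    """Count operations following the stated complexity analysis:
    preprocessing of K*N^2 + M + N, then MaxIteration iterations,
    each costing N*PopSize (solution updates over the fog nodes)
    plus PopSize^2 (pairwise dominance comparisons)."""
    ops = K * N**2 + M + N              # preprocessing: O(M + K*N^2)
    for _ in range(max_iteration):      # main loop: MaxIteration times
        ops += N * pop_size             # update each candidate over N nodes
        ops += pop_size**2              # non-dominated sorting comparisons
    return ops

# e.g. the sixth scenario's sizes: N=55 nodes, M=75 components
print(mocsa_schematic(N=55, M=75, K=3, pop_size=100, max_iteration=200))
```

With N < PopSize, the PopSize² term dominates each iteration, which is why the loop contributes O(MaxIteration∙PopSize²) overall.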

Conclusion and future direction

In this paper, an algorithm for the deployment of IoT application components on fog nodes has been presented to meet users' requests for reliable deployment. To address this issue, the deployment problem was modeled as a multi-objective optimization problem over total power consumption and overall latency. To solve this combinatorial optimization problem, a multi-objective optimization algorithm based on the cuckoo search meta-heuristic, known as MOCSA, was developed. To reach concrete results, different scenarios were conducted, and the effectiveness of the proposed MOCSA was compared with the well-reputed meta-heuristic algorithms MOGWO, MOPSO, MOBA, and NSGA-II under fair experimental conditions. The simulation results demonstrate the significant superiority of the proposed algorithm over the other state-of-the-art algorithms in terms of average overall latency and average total power consumption. The merit of the current paper is delivering reliable services to users while meeting the objective functions; the simulations also showed that the proposed MOCSA is potentially scalable. The limitation of the current work is that resource requests must be known in advance. For future work, we intend to present a dynamic model for mobile IoT applications in chains of fog computing nodes with QoS and economic perspectives, to reach an equilibrium among the desired objectives.