Introduction

The Future Internet of Things (FIoT) paradigm endorses sophisticated communication architectures and heterogeneous technologies to satisfy the service requirements of varying users [1]. Conventional IoT architectures and infrastructures, along with information and communication technologies (ICTs), are preserved in this platform with a focus on service reliability and swiftness of access. The need for ubiquitous services and resource access has been increasing in recent times owing to the portability of information and user-centric application demands [2, 5]. To meet these service requirements in a scalable and flexible manner, ICTs are extended to further levels of communication along with data sharing and responsive integration. Distributed information sharing now extends to online storage, computation, and access thanks to portable and lightweight technologies [3]. The aim of these service platforms is to enhance the quality of customer experience irrespective of the category of service or application in use. Precisely, the swift operation and on-demand computation and storage requirements of the users are to be satisfied by exploiting multiple levels of communication in an IoT platform [4].

The multi-level platform incorporates human–machine, device-to-device, human–robot, machine-to-machine, and human–digital-thing interactions, among others, to ensure seamless and less complex information sharing [5]. With emerging smart city applications and real-world service requirements, the usage of data and exploitation of resources are massive. Handling large data resources and service features in IoT is facilitated by incorporating cloud-like infrastructures [6]. The distributed storage and sharing features of the cloud are directly inherited through ICT in the smart city environment. As the IoT-cloud scales the communication architecture of the smart city across applications such as transportation, traffic management, road safety, healthcare, data sciences, and smart home management, appropriate service architectures become obligatory [7, 8]. Conventional short-range communication architectures are less feasible for supporting scalable and flexible data sharing and service provisioning [9].

Service-centric architecture design and implementation focus on the assimilation of multiple platforms such as cloud, fog, and mobile-edge computing. The platform, application, and infrastructure-as-a-service offerings of the cloud paradigm are extended to user devices through intermediate architectures such as edge, fog, and IoT [2, 10]. These architectures ensure reliable and easy access in the distributed environment, improving service delivery for end-users. The design of such an integrated architecture requires flexible and extensible ICT to support a wide range of applications [5, 7, 11]. User applications range from tiny sensors to human-related multimedia services, for which timely access and concurrency are required. Cloud service providers grant concurrency and access control methods for diverse environments along with knowledge of resource sharing and service provisioning [10, 12].

Related works

Ren et al. [13] proposed IoT Service Function Chain (IoTSFC) orchestration in SDN-IoT network systems. The work performs two-step processing: the first step is composition, in which a multi-criteria ordering is modeled to resolve the order of the involved IoT network functions; the second is the establishment of a service function chain to realize the composed IoTSFC.

Li et al. [14] introduced a resource service chain (RSC) to detect anomalous business behavior by means of business mining. The RSC is first measured to evaluate the object; next, QoS is used as the standard for labeling an RSC as anomalous; finally, the anomalous business is detected for collaborative tasks in IoT.

A cloud computing approach for context-aware IoT was developed by Lee et al. [15]. It consists of two layers: a cloud control layer (CCL) and a user control layer (UCL). The CCL manages resource allocation in the cloud, scheduling, the service profile, and the adaptation policy, while the UCL controls the end-to-end service connection and context from the user's side.

Chen et al. [16] proposed a new heuristic method for IoT data-intensive service component deployment (iDiSC) in an edge-cloud hybrid system. It is used to derive an optimal deployment with minimum guaranteed latency.

A Markov approximation method for optimal service orchestration was introduced by He et al. [17]. The authors developed Service Chain (SC) orchestration and learned Virtual Network Function (VNF) placements. It scales multiple instances to reduce cost and delay and is closely associated with network guarantees and load balancing.

A performance- and resource-aware orchestration system was presented by Wang et al. [18]. A linear programming model provides an effective approximation for Service Function Chains (SFCs) to avoid resource idleness. A prototype system for IoT (PRSFC-IoT) is implemented on OpenStack for online SFC orchestration.

Khansari and Sharifian [19] designed an evolutionary game theory approach for the fog-computing environment. A cloud-based framework for the collection and composition of IoT resources is used to boost QoS. The multi-objective evolutionary game theory improves an evaporation-based water cycle algorithm (EG-ERWCA) with cloud assistance to optimize CPU usage and power consumption.

Nasiri et al. [20] studied distributed stream processing in the data processing layer of smart cities. Real-time data processing is handled by distributed stream processing frameworks (DSPFs). In a smart city, a variety of IoT devices continuously produce data that must be processed and analyzed within a short period of time.

Kim et al. [21] proposed an intelligent IoT common service platform architecture and implemented its fundamental procedures. An IoT broker is used for connectivity and to verify the significance of the modeled work in the service-oriented platform. IoT holds huge potential for a wider variety of services.

A microservice approach connecting IoT management platforms with AI services for chain management was modeled by Kousiouris et al. [22]. The authors integrated the microservice system and implemented post-processing tasks to provide chain monitoring from online systems.

Model-driven development of service-oriented IoT applications was presented by Sosa-Reyna et al. [23]. The methodology consists of four stages: abstraction, viewpoints, granularity, and service orientation. Using this methodology, a smart vehicle scenario is addressed across heterogeneous devices in multiple ways.

QoS-aware service recommendation based on a relational topic model and factorization machines for IoT mashup applications was studied by Cao et al. [24]. The first step characterizes the relationships among mashups, services, and links; the second exploits factorization machines to train the latent topics and predict the link relationships.

Concurrent service access and management framework (CSAMF)

This framework is designed to improve concurrency in service access under varying user density in an IoT-based smart city. Its purpose is to raise the rate of resource accessibility and to reduce failures in service sharing. In this framework, IoT, cloud, and user requirements are consolidated to form a reliable solution that provides a cost-effective and responsive service platform for smart city users. In a smart city environment, service requests and access for different applications are processed as per the user requirements. The density of service requests and responses relies on the accessibility and availability of the IoT-cloud infrastructure. Figure 1 illustrates the IoT-cloud smart city architecture in which CSAMF is deployed.

Fig. 1 IoT-cloud smart city architecture

The proposed framework consists of two different processes operating in parallel, namely access management and service availability. These processes are independent but are jointly used for validating the performance and improving the response reliability for users. Depending on the requested service or application, service availability varies so as to retain the quality of reliability. The design of this framework keeps track of cost and delay outcomes and assesses them using a fitness-based learning method.

Access management

The purpose of this process is to balance user requests and responses in an optimal manner. With the available infrastructure units and resources, it ensures that no additional cost for resource access is incurred in the IoT architecture. Here, cost refers to connection establishment with respect to the access time \( \left( t_{\text{a}} \right) \) and the query time \( \left( t_{\text{q}} \right) \). Therefore, the objective of access management is given as

$$ \left. \begin{array}{c} {\text{maximize}}\;\; r_{\text{r}} \;\; \forall \; r_{\text{q}} \ {\text{in}}\ \left( t_{\text{a}} - t_{\text{q}} \right) \\ {\text{minimize}}\ \left( \dfrac{t_{\text{a}} - t_{\text{q}}}{t_{\text{a}}} \right) \\ {\text{such that}} \\ t_{\text{q}} \le t_{\text{a}} \ {\text{and}} \\ t_{\text{a}} \ngtr t_{\max} \end{array} \right\}. $$
(1)

In Eq. (1), the variables \( r_{\text{r}} \) and \( r_{\text{q}} \) denote the request responses and queries, respectively. The factor \( \left( \frac{t_{\text{a}} - t_{\text{q}}}{t_{\text{a}}} \right) \) indicates the access-based cost, and \( t_{\max} \) is the maximum valid time of a query request. Therefore, the objective is to maximize request responses while providing less access time. In a smart city, the density of users \( \left( \rho \right) \) varies over different instances, and the request queries for applications/services vary accordingly. The cloud and IoT are jointly responsible for handling \( r_{\text{q}} \) and satisfying Eq. (1). Let \( \left\{ 1,2, \ldots, S \right\} \) represent the \( t_{\text{q}} \) sessions in an IoT environment. A session refers to the accumulation of \( r_{\text{r}} \) from different devices in a non-replicated manner. If replication of \( r_{\text{q}} \) occurs across two or more \( S \), it indicates a failure in response/access. The concurrent nature of \( r_{\text{q}} \) requires non-overlapping \( t_{\text{a}} \) irrespective of \( t_{\text{q}} \); that is, arriving request queries must not overlap with other \( t_{\text{a}} \) responses when multiple concurrent resource accesses occur. Now, the objective in Eq. (1) over the \( S \) sessions is defined as in Eq. (2)

$$ \left. {\begin{array}{*{20}c} {{\text{maximize}} \,\,\mathop \sum \limits_{i = 1}^{S} \left( {\frac{{r_{{\text{r}}} }}{{r_{{\text{q}}} }}} \right)_{i} \forall \,\,\,\,S \in \left[ {t_{{\text{q}}} ,t_{{\text{a}}} } \right]} \\ {{\text{such}}\,{\text{that}}\,\, t_{q} \le S \le t_{{\text{a}}} } \\ \end{array} } \right\}. $$
(2)
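
To make the per-session objective concrete, a minimal, hedged Python sketch of Eqs. (1) and (2) is given below; the dictionary-based session records and the helper names are assumptions used only for illustration, not part of the framework's implementation.

```python
# Illustrative sketch of the access-management objective in Eqs. (1)-(2).
# Session records and helper names are assumed for this example only.

def access_cost(t_a: float, t_q: float) -> float:
    """Access-based cost (t_a - t_q) / t_a minimized in Eq. (1)."""
    return (t_a - t_q) / t_a

def objective_over_sessions(sessions, t_max: float) -> float:
    """Sum of r_r / r_q over sessions satisfying t_q <= t_a and t_a within t_max,
    as maximized in Eq. (2)."""
    total = 0.0
    for s in sessions:  # each session: {"t_q": ..., "t_a": ..., "r_r": ..., "r_q": ...}
        if s["t_q"] <= s["t_a"] <= t_max and s["r_q"] > 0:
            total += s["r_r"] / s["r_q"]
    return total

# Example: two sessions with a maximum valid query time of 10 time units.
sessions = [{"t_q": 1.0, "t_a": 4.0, "r_r": 18, "r_q": 20},
            {"t_q": 2.0, "t_a": 6.0, "r_r": 25, "r_q": 25}]
print(objective_over_sessions(sessions, t_max=10.0), access_cost(4.0, 1.0))
```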

Based on the satisfactory condition that prevails in cloud resource access, the fitness is estimated. These fitness conditions are sustained for every \( \left( t_{\text{q}} - t_{\text{a}} \right) \) session, ensuring that maximum access is provided for the \( r_{\text{q}} \). The fitness \( \left( F \right) \) is estimated for two concurrently overlapping \( S \); i.e., the continuous \( S \) that accepts requests and generates reliable \( r_{\text{r}} \) is accounted for in estimating the fitness. This fitness is estimated using Eq. (3)

$$ \left. \begin{array}{c} F_{\tau} = \dfrac{\tau - \tau_{\min}}{1 - \tau_{\min}}, \\ {\text{where}}\ \tau = \dfrac{r_{\text{r}}}{r_{\text{q}}}\ {\text{and}}\ \tau_{\min} = \left[ 0,\, 1 - \dfrac{r_{\text{r}}}{r_{\text{q}}} \right],\ {\text{if}}\ r_{\text{r}} < r_{\text{q}}, \\ {\text{and}} \\ F_{t} = \dfrac{t_{\text{a}} - t_{\text{q}}}{\left( t_{\text{a}} - t_{\text{q}} \right) + t_{\text{l}}}, \\ {\text{and}} \\ F = \dfrac{F_{\tau} + F_{t}}{S}. \end{array} \right\} $$
(3)

In Eq. (3), the fitness is estimated with respect to time and requests, where \( t_{\text{l}} \) is the time lag between two successive \( S \). The fitness information is validated using a linear convolution learning method to verify whether \( F_{\tau} \) and \( F_{t} \) are mapped to each other. If the fitness is mapped precisely such that \( F = 1 \) or near \( 1 \), then the access is high, increasing the reliability of the session. In this learning process, the above fitness is estimated only if \( t_{\text{a}} \) is allocated for an \( r_{\text{q}} \). In that case, \( F_{t} \) is the learning instance, and the conditions in Eqs. (1) and (2) are the validations that ensure \( F_{\tau} \) is achieved. Let \( \emptyset \) denote the solution space of the learning process such that the expected output is

$$ \left. \begin{array}{c} \emptyset = \arg \min \dfrac{r_{\text{r}}}{r_{\text{q}}} + \rho \left( 1 - \dfrac{r_{\text{r}}}{r_{\text{q}}} \right) \pm \gamma, \\ {\text{where}} \\ \gamma = \dfrac{\rho \left( \dfrac{{\text{d}}r_{\text{q}}}{{\text{d}}t_{\text{a}}} - \dfrac{r_{\text{r}}}{r_{\text{q}}} \right)}{F_{t}} \beta, \end{array} \right\} $$
(4)

where \( \gamma \) is the deviation between the number of requests in the previous and current \( S \), and \( \beta \) is the learning rate. The mapping of \( F_{\tau} \) and \( F_{t} \) at a time \( t_{\text{a}} \) occurs in the convolution layer, wherein the probability of mapping, i.e., \( p(F_{\tau} || F_{t}) \), is computed as

$$ p(F_{\tau} || F_{t}) = \left\{ \begin{array}{ll} \displaystyle\sum_{i = 1}^{S} \emptyset_{i} \left( \dfrac{F_{\tau}}{F_{t}} \right)_{i} + \gamma, & {\text{if}}\ F_{\tau} = F_{t} \\ \displaystyle\sum_{i = 1}^{S} \emptyset_{i} \left( \dfrac{F_{\tau}}{F_{t}} \right)_{i} - \left( 1 - \dfrac{r_{\text{r}}}{r_{\text{q}}} \right) \gamma, & {\text{if}}\ F_{\tau} \ne F_{t}. \end{array} \right. $$
(5)

This probability of mapping, as in the above equation, is computed for \( F_{\tau} \) in \( \emptyset \). Based on the analysis and validation, the number of instances in which mapping occurs is identified. A change in \( p(F_{\tau} || F_{t}) \) is handled by concentrating on service availability. The learning process and the communication for \( F_{\tau} \) are presented in Fig. 2a, b.
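
As a hedged illustration of how the fitness terms in Eq. (3) and the mapping probability in Eq. (5) could be evaluated, a short Python sketch follows; the per-session records, the `phi` weights, and the function names are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of the fitness terms of Eq. (3) and the mapping probability of Eq. (5).
# Field names (phi, F_tau, F_t) and the example values are assumptions.

def fitness_tau(r_r: int, r_q: int) -> float:
    """F_tau = (tau - tau_min) / (1 - tau_min) with tau = r_r / r_q (Eq. 3)."""
    tau = r_r / r_q
    tau_min = max(0.0, 1.0 - tau) if r_r < r_q else 0.0
    if tau_min >= 1.0:            # degenerate case: no responses at all
        return 0.0
    return (tau - tau_min) / (1.0 - tau_min)

def fitness_time(t_a: float, t_q: float, t_l: float) -> float:
    """F_t = (t_a - t_q) / ((t_a - t_q) + t_l), with t_l the lag between sessions."""
    return (t_a - t_q) / ((t_a - t_q) + t_l)

def mapping_probability(sessions, gamma: float, r_r: int, r_q: int) -> float:
    """Eq. (5): sum of phi_i * (F_tau / F_t)_i over the S sessions, corrected by
    +gamma when F_tau and F_t match, otherwise by -(1 - r_r / r_q) * gamma."""
    total = sum(s["phi"] * s["F_tau"] / s["F_t"] for s in sessions)
    matched = all(abs(s["F_tau"] - s["F_t"]) < 1e-9 for s in sessions)
    if matched:
        return total + gamma
    return total - (1.0 - r_r / r_q) * gamma

# Example values for a single session.
f_tau, f_t = fitness_tau(18, 20), fitness_time(t_a=4.0, t_q=1.0, t_l=0.5)
print(f_tau, f_t, mapping_probability(
    [{"phi": 0.5, "F_tau": f_tau, "F_t": f_t}], gamma=0.05, r_r=18, r_q=20))
```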

Fig. 2 a Learning process for \( \emptyset \). b Communication between IoT device and IoT gateway

The process of access management is performed between the IoT device and the IoT infrastructure. The allocation of \( t_{\text{a}} \), \( r_{\text{r}} \), and the mapping relies on resource availability in the cloud. Based on the \( \left[ \beta, \gamma \right] \) update pairs, the service chain is allocated. Therefore, for a session \( S \) with \( t_{\text{l}} = 0 \), the access is estimated as \( \left( \frac{r_{\text{r}} \times \rho}{r_{\text{q}}} \right) \), wherein the response to the available service is given within \( t_{\text{a}} \) of a session \( S \). The conditions in Eqs. (2) and (3) are verified as follows:

If \( t_{\text{l}} = 0 \), then \( \left( r_{\text{r}} - r_{\text{q}} \right) \) is the set of remaining requests to be allocated with \( t_{\text{a}} = t_{\text{a}} + 1 \) in the next \( S \). Therefore, \( r_{\text{r}} \in \emptyset \) spans the maximum range between \( \tau_{\min} \ne 0 \) and \( \left[ \left( r_{\text{r}} - r_{\text{q}} \right), r_{\text{r}} \right] \). Similarly, for \( \left( r_{\text{r}} - r_{\text{q}} \right) \) with \( \left( S + 1 \right) \le t_{\text{a}} \), the condition \( t_{\text{a}} = t_{\text{a}} + 1 \) falls in the consecutive \( S \). In this case, \( t_{\text{q}} \) of \( \left( r_{\text{q}} \right) > \left( r_{\text{r}} - r_{\text{q}} \right) \in S \), and session \( \left( S + 1 \right) \) handles the remaining requests. Here, the impact of \( \beta \) and \( \gamma \) is random, and the probability of mapping as in Eq. (5) ensures the satisfaction of Eq. (2). As per the objective and the mapping performed in Eq. (5), the maximum access provided is \( \left( \frac{r_{\text{r}} \times \rho}{r_{\text{q}}} \right) \) in \( S \). This is maximized to \( \rho \) if \( S \le t_{\text{a}} \) and no additional \( \left( S + 1 \right) \) is required. Besides, the learning conditions \( \left( F_{\tau} = F_{t} \right) \) and \( p(F_{\tau} || F_{t}) = 1 \) determine the rate of \( \beta \). Here, \( \beta \) is identified for every \( r_{\text{q}} \) process, as in Fig. 2b, to verify whether the access is non-replicated and \( F \) is attained in \( \emptyset \). The \( \beta \) and \( \gamma \) factors are considered if \( S = S + 1 \) or \( t_{\text{a}} = t_{\text{a}} + 1 \), where concurrent access to the resources is augmented. The concurrency in access management is subject to the availability of resources in the IoT-cloud platform. The rate of learning is high if \( r_{\text{r}} = 0 \) for all \( r_{\text{q}} \) allocated with \( t_{\text{a}} \). On the other hand, the initial learning focuses on mapping \( F_{\tau} \) with \( F_{t_{\text{a}}} \) such that all \( r_{\text{q}} \) (the maximum \( r_{\text{q}} \)) are served. If the changes in \( \gamma \) exceed the solution space \( \emptyset \), then service availability has to be considered; a simplified sketch of this session-by-session carry-over is given below. The service availability and its management in the IoT-cloud are discussed in the next section.
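
The carry-over of unserved requests between consecutive sessions referred to above can be sketched as follows; the fixed per-session capacity and the counter names are simplifying assumptions used only to show how left-over requests move to session S + 1 with an extended access time.

```python
# Hedged sketch of carrying unserved requests from session S to session S + 1.
# The per-session capacity and record layout are illustrative assumptions.

def run_sessions(per_session_queries, capacity_per_session: int):
    """Serve as many queries as the capacity allows per session and carry
    the remainder into the next session, which receives t_a + 1."""
    backlog, history = 0, []
    for r_q in per_session_queries:
        pending = r_q + backlog                    # new queries plus carried-over requests
        r_r = min(pending, capacity_per_session)   # responses limited by available access slots
        backlog = pending - r_r                    # remainder handled in session S + 1
        history.append({"r_q": r_q, "r_r": r_r, "carried_to_next_S": backlog})
    return history

# Example: three sessions, each able to respond to at most 10 requests.
for record in run_sessions([8, 14, 9], capacity_per_session=10):
    print(record)
```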

Service availability

In the IoT-cloud context, the rate of \( r_{\text{r}} \) and the retention of \( F_{\tau} \) rely on the available resources. The rates of \( \beta \) and \( \gamma \) determine the availability and the need to allocate new resources for the remaining \( r_{\text{r}} \). The conditions that require more service availability are therefore presented in Table 1 along with their explanation.

Table 1 Condition and analysis of the service availability

The conditions and analysis presented in Table 1 discuss the possible assessments for improving \( r_{\text{r}} \) within the given \( t_{\text{a}} \). The optimal solutions in \( \emptyset \) rely on two factors, i.e., confining the conditions \( \tau_{\min} \nless \left( \frac{r_{\text{r}} - r_{\text{q}}}{r_{\text{q}}} \right) \) and \( \gamma < 1 - \beta \) at the time of resource management. A combination of service providers and virtual machines is jointly used for handling the query requests of end-users. The available service providers/virtual machines are given by Eq. (6) as

$$ A = \frac{{r_{{\text{q}}} }}{M} + \frac{{r_{{\text{q}}} }}{{M^{2} }} + 1,\quad \forall \,\, t_{{\text{q}}} < t_{{\text{a}}} , $$
(6)

where \( A \) and \( M \) denote the availability factor and the service provider count, respectively. For the available \( M \), the \( r_{\text{q}} \) are satisfied at time \( t_{\text{a}} \). The remaining requests are to be served as per the conditions in Table 1 to retain the swiftness and cost efficiency of the framework. The conditions in Table 1 define inappropriate \( F_{\tau} \) and \( F_{t} \) at different time intervals. The changes in \( S \) and the requests offloaded to the consecutive sessions are assessed to check whether the access \( r_{\text{q}} \) is mapped to the available resources/services in the next session. The ratio of availability of \( M \) within its time period is given as \( \frac{t_{\text{u}}}{r_{\text{q}}} \); therefore, Eq. (6) becomes

$$ r_{{\text{q}}} = M\sqrt {\frac{{t_{{\text{u}}} - 1}}{M + 1}} , \quad \forall \,\,t_{{\text{q}}} < t_{{\text{a}}} , $$
(7)

which are the acceptable request queries from the previous \( S \). Therefore, \( \left( r_{\text{r}} - r_{\text{q}} \right) < r_{\text{q}} \), as in Eq. (7), defines the request queries to be processed in \( S + 1 \). The condition in Table 1 is validated for these \( r_{\text{q}} \) in addition to the newly generated \( r_{\text{q}} \). Hence, \( \frac{r_{\text{r}}}{r_{\text{q}} \in S + r_{\text{q}} \in S + 1} \) is the maximum achievable service response in the proposed framework. In this service availability process, the learning instances are modified based on \( r_{\text{q}} \) and \( A \) [as in Eq. (6)]. The learning instances differ but cannot be augmented separately for each instance, as this increases the processing complexity and computation cost. Therefore, the linear learning method takes \( \left( r_{\text{r}} - r_{\text{q}} \right) \) as input, and the convolution layer filters the above conditions step by step for mapping \( F_{\tau} \) and \( F_{t} \). This mapping is sequential, such that the availability of \( M \) is the best-fit criterion for accepting \( \left( r_{\text{r}} - r_{\text{q}} \right) \). The variations in \( \frac{r_{\text{r}}}{r_{\text{q}}} \) are then computed in \( S + 1 \) for the available \( M \) and \( r_{\text{q}} \) in the consecutive service mapping process. The overflowing \( r_{\text{q}} \) of the previous session are offloaded to the available \( M \), wherein the errors in the \( F_{\tau} \) and \( F_{t} \) mapping are suppressed. The errors are estimated as the deviation/change in \( r_{\text{r}} / r_{\text{q}} \), where \( t_{\text{l}} \) is not greater than \( t_{\text{a}} \). The cases \( S < t_{\text{a}} \) and \( t_{\text{a}} \le S + 1 \) are handled by identifying the \( r_{\text{q}} \) that require allocation time. The learning process is tuned as represented in Fig. 3a.
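
A small, hedged Python sketch of the availability relations in Eqs. (6) and (7) is given below; the argument names (including t_u for the provider's availability period) follow the notation above, while the example values are assumptions chosen only for illustration.

```python
# Sketch of the availability factor of Eq. (6) and the acceptable carry-over
# queries of Eq. (7). Example values are illustrative assumptions.

import math

def availability_factor(r_q: int, M: int) -> float:
    """Eq. (6): A = r_q / M + r_q / M^2 + 1, valid while t_q < t_a."""
    return r_q / M + r_q / (M ** 2) + 1.0

def acceptable_queries(t_u: float, M: int) -> float:
    """Eq. (7): r_q = M * sqrt((t_u - 1) / (M + 1)), the request queries
    acceptable from the previous session."""
    return M * math.sqrt((t_u - 1.0) / (M + 1.0))

# Example: M = 4 service providers, 24 queued requests, availability period t_u = 10.
print(availability_factor(r_q=24, M=4))    # availability factor A
print(acceptable_queries(t_u=10.0, M=4))   # carry-over queries admissible in S + 1
```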

Fig. 3 a Learning process using \( \left( r_{\text{r}} - r_{\text{q}} \right) \). b IoT gateway and cloud process

As illustrated in Fig. 3a, the learning process focuses on \( A \) and the rate of \( r_{\text{q}} \). The left-out \( r_{\text{q}} \) from the above process require two sequential validations, i.e., \( \left( r_{\text{r}} - r_{\text{q}} \right) < r_{\text{q}} \) and \( \gamma < \left( 1 - \beta \right) \). If these two conditions are not satisfied, an error occurs in the learning process. This error is restricted to the available resources, as they may not accept the \( \left( r_{\text{r}} - r_{\text{q}} \right) \); therefore, the request is offloaded to the \( M \) that is available with less \( t_{\text{l}} \). Consequently, an intermediate request \( \in \left( r_{\text{r}} - r_{\text{q}} \right) \) experiences a \( t_{\text{l}} \) such that it is retained in the same state. The learning rate must be smaller than \( \gamma \), as the error would otherwise be high in assessing \( M \) with \( A \) in time \( t_{\text{r}} \). Under this condition, the requests that are not served across the different instances \( \left( S + 1 \right) \) are considered as errors and form the training set of the learning process. The process between the IoT gateway and the cloud is represented in Fig. 3b.
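
The two sequential validations and the least-lag offloading rule described above are illustrated by the hedged sketch below; the provider records, thresholds, and example values are assumptions and do not reflect a specific implementation.

```python
# Sketch of the validations (r_r - r_q) < r_q and gamma < (1 - beta), followed by
# offloading to the available provider M with the least time lag t_l.
# Provider records and example values are illustrative assumptions.

def validate_and_offload(r_r: int, r_q: int, gamma: float, beta: float, providers):
    """Return the chosen provider when both validations hold; otherwise return None,
    in which case the unserved requests join the error/training set of the learner."""
    if (r_r - r_q) < r_q and gamma < (1.0 - beta):
        available = [m for m in providers if m["available"]]
        if available:
            return min(available, key=lambda m: m["t_l"])   # least t_l wins
    return None

providers = [{"id": "M1", "available": True, "t_l": 0.4},
             {"id": "M2", "available": True, "t_l": 0.1},
             {"id": "M3", "available": False, "t_l": 0.0}]
print(validate_and_offload(r_r=12, r_q=9, gamma=0.2, beta=0.5, providers=providers))
```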

Service from the cloud experiences additional time as \( t_{\text{l}} \ne 0 \); therefore, the condition \( \left( t_{\text{a}} + 1 \right) < S \) is verified by the IoT gateway to ensure that the maximum \( r_{\text{r}} \) is generated. However, a lag in consecutive request processing is observed, wherein \( r_{\text{q}} > \left( r_{\text{r}} - r_{\text{q}} \right) \) is retained such that the access rate is improved. Access management and availability management are handled concurrently in the proposed method. As the requests are arranged sequentially using the different conditions of the learning process across \( t_{\text{l}} \) and error shifts, the resource usage and the acceptance rate of requests are increased.

Performance analysis

CSAMF performance is verified using simulations carried out in the Contiki Cooja simulator. The simulation environment is constructed with reference to the architecture presented in Fig. 1. For a smart city environment, a subnet consisting of multiple services such as traffic signals, a transportation system, smart homes, and industry is added to the scenario. In this scenario, 160 IoT devices are placed across the aforementioned applications. The detailed configuration of the scenario components is presented in Table 2.

Table 2 Configuration and values

The above configuration is used to verify the performance of the proposed framework using the metrics access ratio, failure rate, access time, time lag, and service usage rate. The performance is verified through a comparative analysis of these metrics against the existing methods in [16], [17], and [19], referred to as iDiSC, VPMIA, and EG-ERWCA, respectively.

Access ratio

The proposed framework achieves a high service access ratio by retaining \( \frac{r_{\text{r}}}{r_{\text{q}}} \) and \( \frac{r_{\text{r}}}{r_{\text{q}} \in S + r_{\text{q}} \in S + 1} \) at the two time instances \( t_{\text{a}} \) and \( t_{\text{a}} + 1 \). The change in request concentration is offloaded to the available \( M \) on the basis of \( t_{\text{l}} \), so that the maximum number of requests is provided with service access. The classification of the fitness in the convolution layer increases the differentiation between the requests assigned to the service \( M \). Therefore, \( \left( \frac{r_{\text{r}} \times \rho}{r_{\text{q}}} \right) \) in \( S \) is the maximum achievable rate of access, whereas the remaining \( \left( r_{\text{r}} - r_{\text{q}} \right) \) in \( \left( S + 1 \right) \) satisfying the conditions in Eqs. (2) and (3) helps to maximize the ratio. Therefore, in both the \( S \) and \( S + 1 \) sessions, the rate of \( r_{\text{r}} \) is high (without replication), which means that access to \( M \) is retained at a consistently high level (Fig. 4).

Fig. 4 Access ratio comparisons

Failure rate

The failure rate per session in the proposed framework is confined by verifying the acceptable limits of \( r_{\text{r}} \) and of the \( A \) of \( M \) in \( S + 1 \), respectively [25]. For any two consecutive execution sessions, \( t_{\text{a}} \) or \( t_{\text{a}} + 1 \) is retained by filtering \( r_{\text{q}} \in S \) and \( \left( r_{\text{r}} - r_{\text{q}} \right) \in S + 1 \) from \( \emptyset \). Therefore, if \( S, S + 1, \ldots \in \emptyset \), then the quality of the generated solution is high, reducing the failure rate. The number of denied requests is reduced in this case, as the condition \( p(F_{\tau} || F_{t}) = 1 \) is verified in the linear learning process and \( \gamma < \left( 1 - \beta \right) \) is the verification condition in the service management learning. Therefore, the rate of \( r_{\text{r}} \) in \( S \) and \( \left( S + 1 \right) \) is retained at a high level. An error occurs in the proposed framework only if \( t_{\text{l}} \) is prolonged for an \( r_{\text{r}} \) such that \( t_{\text{a}} \) or \( t_{\text{a}} + 1 \) exceeds \( S + 1 \).

This error increases the failure rate, whereas the \( r_{\text{r}} \) that are not served in \( S \) are offloaded to the \( S + 1 \) session (Fig. 5a, b).

Fig. 5 a Service providers versus failure rate/S. b Failure % versus IoT devices

Access time

Concurrent request processing and access in the proposed framework are augmented together by allocating \( M \) at precise times [26]. The swiftness of request query processing and \( M \) assignment is verified by differentiating the \( S \) that satisfy the condition \( t_{\text{q}} \le S \le t_{\text{a}} \). To preserve this condition, \( \emptyset \) is estimated on the basis of fitness and the maximum probability of mapping \( r_{\text{q}} \). In contrast, the \( \emptyset \) for \( \left( r_{\text{r}} - r_{\text{q}} \right) \) in \( S + 1 \) is retained by verifying the learning rate and the difference in response. The \( r_{\text{q}} \) for the \( M \) assigned in \( S \) satisfy the conditions of Eqs. (2) and (3), respectively, whereas the \( \left( r_{\text{r}} - r_{\text{q}} \right) \) in \( \left( S + 1 \right) \) are concurrently assigned to \( M \) through \( \left( S + 1 \right) \)-based mapping. As long as the learning rate satisfies the \( \gamma < \left( 1 - \beta \right) \) condition, \( t_{\text{a}} \le S + 1 \) and therefore the access time is low. However, a time lapse is observed in the \( S + 1 \) session, where \( t_{\text{a}} + 1 \le S \) achieves less access time (Fig. 6).

Fig. 6 Average access time comparisons

Average time lag

For varying request density, the time lag varies depending on the availability of \( M \) and \( S \). Concurrent user access increases the request rate, varying \( S \) and \( t_{\text{a}} \). Therefore, the instantaneous availability of \( M \) serves both \( r_{\text{q}} \in S \) and \( r_{\text{q}} \in \left( S + 1 \right) \) within the allocated \( t_{\text{a}} \). No time lag is observed for \( r_{\text{q}} \in S \), as these requests are served in a first-come, first-served manner. The learning rate and difference are retained based on the response condition \( \left( r_{\text{r}} - r_{\text{q}} \right) \). The average time lag in the proposed framework is confined by the level of \( A \), and \( r_{\text{q}} \) [as computed using Eq. (7)] is the factor required for improving the swiftness of access. On the other hand, if swiftness in processing is achieved, the offloaded \( r_{\text{q}} \) are processed within time \( t_{\text{a}} + 1 \), i.e., \( t_{\text{a}} + 1 < S + 1 \), and hence a smaller \( t_{\text{l}} \) is achieved in the proposed framework (Fig. 7).

Fig. 7 Average time lag comparisons

Service usage rate

The high service utilization rate of the proposed framework is achieved by suppressing the access time constraints and the \( A \) of the \( M \). In both \( S \) and \( S + 1 \), the classification and learning constraints are ignored. The condition verification based on \( F_{\tau} \) and \( \gamma < \left( 1 - \beta \right) \) helps to retain the number of requests that are currently active on the \( M \). The chances of low \( r_{\text{r}} \) and unconditional offloading of \( r_{\text{q}} \) are prevented in this framework. The difference between the offloaded requests and \( \beta \) triggers a re-allocation of \( S \), and if \( t_{\text{a}} + 1 > S \), then concurrency is retained by assigning \( \left( r_{\text{r}} - r_{\text{q}} \right) \) to the available \( M \) based on minimum \( t_{\text{l}} \). Preference is given to an \( M \) with \( t_{\text{l}} = 0 \) such that changes in request handling are suppressed in a reliable manner. Therefore, the \( M \) is engaged for its maximum time by handling \( r_{\text{q}} \in S \) or \( r_{\text{q}} \in \left( S + 1 \right) \), and hence the service utilization is high (Fig. 8). In Table 3, the comparison of access %, failure rate/S, and M usage rate with respect to varying numbers of service providers is presented.

Fig. 8 Service usage rate comparisons

Table 3 Comparative analysis with respect to the varying service providers

In Table 4, the comparative analysis of access time, time lag, and failure % is presented.

Table 4 Comparative analysis of access time, time lag, and failure %

From the comparative analysis, it is seen that, with respect to varying service providers and request concentrations, the proposed framework achieves better access and service utilization rates while reducing access time, failure rate, and time lag.

Conclusion

This manuscript discusses a concurrent service access and management framework for the future IoT in a smart city environment. The framework handles access management and service availability as independent processes to increase the speed at which resources are used and to improve the quality of services. Adverse conditions in request processing and service access are handled through a convolutional learning process. The learning process operates in a linear and differential manner to classify the requests and offload them, improving the concurrency of service access. The classified requests are allocated based on the availability of the service providers in different sessions, confining the access time. The experimental results show that the proposed framework achieves lower access time and failure rate, improving service utilization and access rates. In future work, optimized learning frameworks are planned for concurrent service access and management.