Introduction

OpenStack, an open-source Infrastructure-as-a-Service (IaaS) platform project, is widely used in various fields. The project is developing at a rapid pace owing to the participation and support of a number of major companies and a community of active developers.

OpenStack comprises nodes such as the controller and compute nodes. Based on its role [1], each node runs components of the OpenStack services, such as Nova, Cinder, and Glance, which together form the IaaS platform [2].

Each component of OpenStack uses the message queue service to check, exchange, and coordinate information related to the operation and status of the components [3]. The message queue service of OpenStack is a centralized service that runs on the controller node [4].

OpenStack supports tools and libraries such as RabbitMQ [5], Apache Qpid [6], and ZeroMQ [7] as its message queue service, and OpenStack distributions use RabbitMQ by default [8].

Because the RabbitMQ-based OpenStack message queue service is centralized, it is prone to performance degradation: all information is stored in the message queue of the controller node, and all requests for checking, exchanging, and coordinating information related to the operation and status of the components converge on the message queue server of the controller node.

Blockchain [9], a decentralized approach, can improve processing speed by distributing the requests for checking, exchanging, and coordinating information that make up the OpenStack message queue service, while reducing the security risk through its data duplication approach [10]. However, blockchain has difficulty processing information that is updated in real time because of the transaction-throughput problem [11] caused by the overhead of its consensus algorithm.

To improve the processing speed of the OpenStack message queue service, an OpenStack message queue service based on a Hybrid decentralized Practical byzantine fault tolerance Blockchain Framework (HdPBF) with a two-step verification approach is proposed, which distributes requests using blockchain and reduces security risks through the data duplication approach. The practical byzantine fault tolerance (PBFT) Blockchain Framework is based on the PBFT algorithm [12], an asynchronous [13] consensus algorithm of the blockchain approach, to process transactions efficiently and to ensure reliability by double-checking information against the message queue server. Despite the fast processing speed of the PBFT algorithm, results may be delayed under large volumes of requests or when real-time synchronization between blockchain peers lags.

The proposed scheme ensures the reliability of the message queue service, a basic component of OpenStack, through a two-step verification approach that combines the centralized and blockchain-based decentralized approaches. In the first step, a decentralized query checks the information at the HdPBF peer; if no information is found, a centralized query checks the information at the message queue server in the second step. This method improves performance by distributing requests for message queue information to each node through the blockchain-based decentralized approach, and reduces the security risk associated with the message queue information by saving that information on each node.

OpenStack message queue service

The concept of the message queue operation, which is a centralized service, is shown in Fig. 1 [14].

Fig. 1 Concept of message queue

The publisher creates exchanges of the Direct, Topic, or Fanout type, and the consumer creates individual queues for its use, which are bound to the exchange.
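As an illustration of this publish/bind pattern, a minimal sketch using the pika client for RabbitMQ might look as follows; the exchange, queue, and routing-key names are illustrative placeholders, not the ones OpenStack actually uses.

```python
# A publisher declares an exchange and a consumer binds its own queue to it;
# the names below (nova_demo, compute.host1) are illustrative placeholders.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Publisher side: declare an exchange (type may be 'direct', 'topic', or 'fanout').
channel.exchange_declare(exchange='nova_demo', exchange_type='topic')

# Consumer side: declare an individual queue and bind it to the exchange.
channel.queue_declare(queue='compute.host1')
channel.queue_bind(exchange='nova_demo', queue='compute.host1',
                   routing_key='compute.host1')

# Publishing a message routes it to every queue whose binding key matches.
channel.basic_publish(exchange='nova_demo', routing_key='compute.host1',
                      body=b'get_vnc_console request')
connection.close()
```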

The structure of the OpenStack message queue service and of the OpenStack services based on RabbitMQ, the default message queue service of OpenStack distributions, is shown in Fig. 2 [15].

Fig. 2 OpenStack service with message queue

OpenStack services such as Nova, Cinder, and Glance check, exchange, and coordinate information related to the operation and status of services through the message queue associated with each service.

The overall composition and flow of the OpenStack services and the RabbitMQ-based message queue service are shown in Fig. 3 [16].

Fig. 3 OpenStack architecture with RabbitMQ based message queue

When a client using OpenStack issues a request, the OpenStack authorization process is completed first; the resources of the nodes are then used through the API of each service, and each service and node checks, exchanges, and coordinates information related to operation and status through the message queue [17].

One of the primary operating procedures of Nova, a core component of the OpenStack service, over the RabbitMQ-based OpenStack message queue service is shown in Fig. 4, using the example of an OpenStack user requesting the vnc-console of an already created instance.

Fig. 4 Operating procedure of RabbitMQ based message queue service

The operating procedure of the RabbitMQ-based OpenStack message queue service in Fig. 4 is described below in steps 1 to 7.

1. A client requests the vnc-console of the instance.

2. The nova-api makes the RPC call through the nova-rpcapi.

3. The nova-rpcapi sends the RPC call to execute get-vnc-console to the nova-compute that hosts the requested instance and blocks until the return value, connect_info, is received.

4. The RPC call message of the nova-rpcapi is delivered to the nova-compute through RabbitMQ.

5. The nova-compute returns the result of execution through RabbitMQ.

6. The nova-rpcapi returns the connect_info to the nova-api.

7. The nova-api returns the connect_info to the client.

When a user requests the vnc-console of an instance, the nova-api asks the nova-rpcapi to make the RPC call; the nova-rpcapi places the get-vnc-console call message into the message queue and waits for the result. The nova-compute returns the connect_info, the result of executing get-vnc-console, through the message queue to the nova-rpcapi, and the nova-api returns the connect_info to the client.
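As a rough sketch of this blocking RPC pattern, the snippet below uses oslo.messaging, the library OpenStack services build on top of RabbitMQ; the topic, server, and argument names are illustrative rather than Nova's exact internals.

```python
# Sketch of the blocking RPC pattern: client.call() publishes the request to
# the message queue and blocks until the remote service returns a result.
from oslo_config import cfg
import oslo_messaging as messaging

# The transport is built from the configured transport_url (e.g. rabbit://...).
transport = messaging.get_rpc_transport(cfg.CONF)

# Address the compute service on the host that owns the requested instance
# (the topic/server values here are illustrative).
target = messaging.Target(topic='compute', server='compute-host-1')
client = messaging.RPCClient(transport, target)

ctxt = {}  # request context; Nova passes a RequestContext object here
connect_info = client.call(ctxt, 'get_vnc_console',
                           console_type='novnc',
                           instance_uuid='instance-uuid-placeholder')
print(connect_info)
```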

In the operating procedure shown in Fig. 4, all requests for checking, exchanging, and coordinating the operation and status information of services through the message queue converge on the message queue server, a centralized service. A performance degradation problem therefore occurs when a large volume of requests is generated.

HdPBF for OpenStack message queue

PBFT Blockchain Framework

The Blockchain Framework in this paper is built on version 0.6 of Hyperledger Fabric, which is based on PBFT; the PBFT Blockchain Framework was realized by upgrading RocksDB [18], the storage backend, to version 5.17.2 and Go [19], the chaincode programming language, to version 1.11.1. The block-create procedure based on the realized PBFT algorithm is shown in Fig. 5.

Fig. 5 PBFT Algorithm based Block Create Procedure

The peers of the PBFT Blockchain Framework are divided into validating peers, from which one leader is elected, and non-validating peers. The validating network, comprising the leader and the validating peers that participate in the agreement process of Fig. 5, follows a block-creation procedure similar to the existing PBFT algorithm, while the non-validating peers are excluded from the agreement process [20].
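The agreement rule underlying the procedure of Fig. 5 can be summarized by the standard PBFT quorum arithmetic; the sketch below is illustrative and is not code from the framework itself.

```python
# Standard PBFT fault/quorum arithmetic: n validating peers tolerate
# f = floor((n - 1) / 3) Byzantine peers, and a block is committed once
# 2f + 1 matching commit votes are collected. Non-validating peers do not
# vote; they only receive the committed block.
def max_faulty(n_validating: int) -> int:
    return (n_validating - 1) // 3

def commit_quorum(n_validating: int) -> int:
    return 2 * max_faulty(n_validating) + 1

def block_committed(commit_votes: int, n_validating: int) -> bool:
    return commit_votes >= commit_quorum(n_validating)

# Example: 4 validating peers tolerate 1 faulty peer and need 3 commit votes.
assert max_faulty(4) == 1 and commit_quorum(4) == 3
assert block_committed(3, 4)
```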

HdPBF for OpenStack message queue

The overall structure and flow of the OpenStack services and the message queue service incorporating the HdPBF, realized on the basis of the PBFT Blockchain Framework, are shown in Fig. 6.

Fig. 6 OpenStack architecture with HdPBF based message queue

Information crucial for operation within the message queue is saved to the HdPBF peer, and each node, acting as an individual HdPBF peer, synchronizes the saved information with the other HdPBF peers. The detailed structure and flow are shown in Fig. 7.

Fig. 7 OpenStack architecture with HdPBF based message queue details

When a large volume of requests for information arrives, synchronization between HdPBF peers may be delayed by the overhead of the consensus algorithm of the blockchain approach. In such cases, the information is first checked by a local query to the HdPBF peer; if there is no result, it is then checked by a query to the centralized message queue server as the second step. This two-step verification approach ensures the same reliability as the existing message queue service.
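A minimal sketch of this control flow is shown below; query_hdpbf_peer and query_central_mq are hypothetical stand-ins for the local HdPBF ledger query and the lookup against the centralized RabbitMQ server.

```python
# A minimal sketch of the two-step verification. query_hdpbf_peer() and
# query_central_mq() are hypothetical stand-ins, not the framework's API.
def query_hdpbf_peer(instance_id: str):
    """Step 1 (hypothetical): local query against this node's HdPBF peer."""
    return None  # placeholder: the entry has not been synchronized yet

def query_central_mq(instance_id: str):
    """Step 2 (hypothetical): query the centralized message queue server."""
    return {'connect_info': 'vnc://controller:6080/?token=placeholder'}

def get_connect_info(instance_id: str):
    # Step 1: check the locally replicated blockchain state first,
    # avoiding a network round trip to the controller node.
    info = query_hdpbf_peer(instance_id)
    if info is not None:
        return info
    # Step 2: on a miss (e.g., consensus still in flight), fall back to the
    # centralized server, preserving the reliability of the original service.
    return query_central_mq(instance_id)
```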

When an OpenStack client sends a request for the first time, the information is created in the OpenStack message queue before it has been synchronized to the HdPBF peers that make up the HdPBF. For information that exists only in the OpenStack message queue and is not yet synchronized, the HdPBF ensures reliability by performing the two-step verification, after which the HdPBF peers are synchronized with the corresponding information. The operating procedure of the OpenStack message queue service incorporating HdPBF is shown in Fig. 8; as one of the principal operating procedures of Nova, the two-step verification is executed, using the example in which an OpenStack user makes the initial request for the vnc-console of an already created instance and synchronization between HdPBF peers is then completed.

Fig. 8 Operating procedure of the two-step verification of HdPBF based message queue service for synchronizing each HdPBF peer

The operating procedure of the OpenStack message queue service incorporating HdPBF that executes the two-step verification, shown in Fig. 8, is described below in steps 1 to 7.

1. A client requests the vnc-console of the instance.

2a. The nova-api checks whether the connect_info of the instance exists at HdPBF.

2b. If no information is returned, step 2c is executed.

2c. The connect_info is requested from the nova-compute through the nova-rpcapi, which is the RPC API.

3. The nova-rpcapi sends the RPC call to execute get-vnc-console to the nova-compute that hosts the requested instance and blocks until the connect_info is returned.

4. The RPC call message of the nova-rpcapi is delivered to the nova-compute through RabbitMQ.

5a. The nova-compute synchronizes the result of execution through HdPBF.

5b. The nova-compute returns the result of execution through RabbitMQ.

6. The nova-rpcapi returns the connect_info to the nova-api.

7. The nova-api returns the connect_info to the client.

The operating procedure of the OpenStack message queue service incorporating HdPBF after a first-time request has already been handled is shown in Fig. 9, using the example in which the connect_info was created by a user's initial call, the information between HdPBF peers was synchronized by completing the procedure of Fig. 8, and the vnc-console of the already created instance is requested again.

Fig. 9 Operating procedure of HdPBF for synchronized information

The operating procedure of the OpenStack message queue service incorporating HdPBF shown in Fig. 9, in which the nova-api checks the information at its own HdPBF peer upon a client's request, is described below in steps 1 to 4.

1. A client requests the vnc-console of the instance.

2. The nova-api checks whether the connect_info of the instance exists at HdPBF.

3. If the connect_info exists, the nova-api reads it from HdPBF.

4. The nova-api returns the connect_info to the client.

For connect_info, the core information for the operation of Nova, the two-step verification of Fig. 8 is performed when the information is first created, after which synchronization between HdPBF peers takes place; once synchronized, only the first-step query of Fig. 9 is executed.
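Continuing the hypothetical sketch from the two-step verification above, the synchronized state of Fig. 9 can be illustrated by letting the local query succeed, so that the centralized second step is never reached:

```python
# After synchronization (the state of Fig. 9), the hypothetical local query
# now finds the replicated entry, so get_connect_info() returns at step 1 and
# the centralized message queue server is never contacted.
def query_hdpbf_peer(instance_id: str):
    return {'connect_info': 'vnc://controller:6080/?token=placeholder'}

info = get_connect_info('instance-1')  # resolved by the first-step query alone
```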

The two-step verification is executed only for the initial request, when the information is first created in the message queue of the OpenStack service. Thereafter only the first-step query is executed, so as repeated requests for the same instance accumulate, the two-step verification is carried out in progressively fewer cases.

Performance is maximized by replacing queries to the centralized message queue server over the network with local queries to the HdPBF peer wherever possible. To this end, the core information of the OpenStack message queue is synchronized and checked locally as the first step. To ensure the same authenticity of information as the message queue service of the basic OpenStack configuration, a query to the centralized message queue server is performed as the second step only for information that is updated in real time but not yet synchronized to the HdPBF peers, a situation that can arise from the overhead of the consensus algorithm. The security risk associated with the message queue information is reduced because the core information for the operation of Nova in the message queue is synchronized to each node acting as an HdPBF peer and saved identically on each node through the data duplication approach.

Approach and performance evaluation

Environment of experiment

As shown in Fig. 10, a setup comprising a controller node and a compute node, the basic components of OpenStack, was configured similarly to an actual environment, together with a storage node used to create the instances for the experiment.

Fig. 10 Diagram of the experimental setup

The hardware and software configuration of the experiment in Fig. 10 is shown in Table 1.

Table 1 Hardware and software characteristics

When a client requests the vnc-console of an instance, the controller node requests get-vnc-console from the corresponding compute node; if get-vnc-console is called for the first time on an already created instance, the connect_info is created in the message queue. On every subsequent call of get-vnc-console, the connect_info is fetched from the message queue and reused.

So that synchronization between HdPBF peers triggered by the creation of connect_info would occur progressively during the experiment, 6 instances were created in advance, in a manner similar to an actual environment, and the initial vnc-console request was made for each instance in sequence. get-vnc-console was called 6 times on one instance; then 5 times each on the first instance and a new instance; then 5 times each on the existing 2 instances and a new instance; then 5 times each on the existing 3 instances and a new instance; then 5 times each on the existing 4 instances and a new instance; and finally 4 times each on all 6 instances, for a total of 100 call requests (6 + 10 + 15 + 20 + 25 + 24 = 100). The shell script created for the experiment is shown in Fig. 11.
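As a rough, hypothetical reconstruction of that call pattern (the actual shell script is the one in Fig. 11), the following Python driver issues the same 100 requests through the openstack CLI and times each call; the instance names are placeholders.

```python
# Hypothetical reconstruction of the driver in Fig. 11, written in Python
# rather than shell. It issues the 100 get-vnc-console requests in the round
# pattern described above (6x1 + 5x2 + 5x3 + 5x4 + 5x5 + 4x6 = 100) via the
# `openstack console url show` CLI command and prints per-request timings.
import subprocess
import time

instances = [f'instance-{i}' for i in range(1, 7)]  # 6 pre-created instances
rounds = [(6, 1), (5, 2), (5, 3), (5, 4), (5, 5), (4, 6)]  # (calls, instances used)

total = 0
for calls, n in rounds:
    for _ in range(calls):
        for name in instances[:n]:
            start = time.monotonic()
            subprocess.run(['openstack', 'console', 'url', 'show', name],
                           capture_output=True, check=True)
            elapsed_ms = (time.monotonic() - start) * 1000
            print(f'{name}: {elapsed_ms:.4f} ms')
            total += 1
assert total == 100  # matches the 100 call requests of the experiment
```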

Fig. 11 Shell script for the experiment

For the existing RabbitMQ-based OpenStack message queue configuration, the response times measured by executing the script of Fig. 11 in the experimental environment defined above are shown in Fig. 12.

Fig. 12 Response time of RabbitMQ based OpenStack message queue service

The average response time over the experiment's 100 requests is 11.0026 ms.

For the HdPBF-integrated OpenStack message queue configuration based on the two-step verification approach, the response times measured by executing the script of Fig. 11 in the same experimental environment are shown in Fig. 13.

Fig. 13 Response time of HdPBF based OpenStack message queue service

The average response time over the experiment's 100 requests is 5.8539 ms.

A comparison of the response times of the existing RabbitMQ-based OpenStack message queue configuration and of the HdPBF-integrated configuration based on the two-step verification approach is shown in Fig. 14.

Fig. 14 Comparison of response times between RabbitMQ and HdPBF

The results of the comparison in Fig. 14 are outlined in Table 2.

Table 2 Comparison of response times between RabbitMQ and HdPBF

Based on the results of the experiment, it was confirmed that the average response time of the HdPBF-integrated OpenStack message queue service based on the two-step verification approach is approximately 46.75% lower than that of the existing RabbitMQ-based OpenStack message queue service.

Conclusions

The RabbitMQ-based OpenStack message queue service, a basic component of OpenStack, follows a centralized approach in which all requests are concentrated on the centralized message queue server.

The experiment validated that the OpenStack message queue service integrated with the proposed HdPBF ensures the same reliability as the message queue service of the basic OpenStack configuration through the two-step verification approach: a decentralized query that checks the information at the HdPBF peer is performed as the first step, and a centralized query that checks the information at the message queue server is performed as the second step. Distributing the requests through the blockchain-based decentralized approach yields a performance improvement of approximately 46.75%. The approach also reduces the security risk for the message queue information, since the information is synchronized to all nodes acting as HdPBF peers and saved identically on every node through the data duplication approach.