Abstract

In recent times, security in cloud computing has become a significant concern in healthcare services, specifically in medical data storage and disease prediction. A large volume of data is produced in the healthcare environment every day due to developments in medical devices. Thus, cloud computing technology is utilised for storing, processing, and handling these large volumes of data in a highly secure manner, protected against various attacks. This paper focuses on disease classification by utilising image processing within a secured cloud computing environment, using an extended zigzag image encryption scheme possessing a greater tolerance to different data attacks. Secondly, a fuzzy convolutional neural network (FCNN) algorithm is proposed for effective classification of images. The decrypted images are used for classification of cancer levels with different layers of training. After classification, the results are transferred to the concerned doctors and patients for the further treatment process. Here, the experimental process is carried out by utilising standard datasets. The experimental results conclude that the proposed algorithm shows better performance than other existing algorithms and can be effectively utilised for medical image diagnosis.

1. Introduction

The development of the medical field and the advancement of medical devices such as magnetic resonance imaging (MRI) and computed tomography (CT) produce a huge amount of data on a daily basis, and the data collected from these devices are high dimensional and rich in variables [1]. Thus, medical image databases and their dimensionality are increasing tremendously. Due to this increase in medical databases, it is difficult to handle a file system with ever-increasing data volume, and the handling of medical data becomes a major concern for healthcare service providers [2]. Hence, cloud computing is utilised in the medical field for storing and computing medical data, since a medical image cloud is easy to manage. Cloud computing offers flexible and scalable computing resources from distant locations, and they are accessible depending on the necessity of the user [3]. It is also efficient for delivering computing resources in a high-end computing environment [4]. With cloud computing, users do not need their own storage and servers on their computers because the cloud platform is utilised for data storage; hence, users are freed from maintaining large storage and a huge number of servers on their computers [5]. Therefore, computing and storage of long-term medical image records in the cloud are efficient for solving many problems in the medical field [6]. Thus, cloud computing is widely used in the medical field for storing, computing, and sharing patients’ medical records. With cloud computing, the hospital only needs to collect the information about the patients from the files and transfer the data to the cloud for storage.

Moreover, the Internet of Things (IoT) is also integrated with cloud computing for developing new facilities and applications in the healthcare process [7]. IoT is defined as the network of things or objects such as sensors, software, and electronic devices interconnected with each other for exchanging data with the operator, manufacturer, or other connected devices to attain greater value and services. IoT also offers advanced connectivity among systems, services, and devices spanning various domains, protocols, and applications. IoT and cloud computing benefit equally when the two technologies are combined. IoT supports the cloud in enhancing performance in terms of computational ability, energy, storage, and resource utilization. It also enables the cloud to provide many new services through a distributed and dynamic approach.

While storing medical data on a cloud platform, it is necessary to safeguard the information so that the cloud cannot learn anything about the data. Thus, securing medical images in the cloud platform is necessary. Generally, medical images are highly sensitive to modifications, and any alterations in their contents can cause errors in medical diagnosis [8]. Hence, it is also important to preserve the sensitive contents of medical images during the reconstruction phase. Thus, an encryption algorithm is required for increasing the security and privacy of the data, securing the medical data without leaking any sensitive information. Many encryption algorithms have been proposed based on chaotic systems such as 1D and 2D chaotic systems, but these low-dimensional chaotic systems contain only simple orbits and few parameters; thus, their initial values and parameters can be easily estimated from the image [9]. To overcome these limitations, a chaotic system based on a backpropagation (BP) neural network is utilised for the image encryption process, which effectively secures the medical data in the cloud environment.

For medical image analysis and disease diagnosis, machine learning techniques are extensively used in the medical field [10]. They have many functions such as pattern recognition, disease prediction, fraud detection, and image segmentation [11]. However, conventional machine learning techniques are not adequate for large medical databases. Thus, high-performance computing was employed in machine learning for accurate and efficient diagnosis of big medical data. Deep learning is a branch of machine learning which depends on learning data representations for feature classification, and it utilises supervised and unsupervised machine learning methods. It is an advanced technique which has the ability to discriminate features of data without human intervention and can automatically produce robust and representative features layer by layer in neural networks [12]. Data used for deep learning are obtained from different sources with specific data types, and it is important to develop suitable models for handling data analysis. Typical models for data analysis include the clustering model, classification model, neural network model, and other efficient models. The neural network is considered a vital element in deep learning, which replicates biological systems for communication node distribution and information processing. Deep learning comprises various types of networks such as the recurrent neural network (RNN), restricted Boltzmann machine (RBM), autoencoder (AE), multilayer perceptron (MLP), and convolutional neural network (CNN) [13–16].

The storage, handling, and processing of large medical images are challenging, and thus, high-performance processors are required for medical image processing [17]. Hence, the CNN is utilised in the medical field for processing large volumes of medical data. The CNN is a feed-forward neural network which helps in the modelling of sequential data. It also helps to easily predict and classify diseases and aids in decision-making during disease diagnosis by utilising various approaches. Thus, it assists clinicians to detect and characterize important features in large image series. Convolutional neural networks are utilised for different computer vision applications such as superresolution, medical image classification, and semantic segmentation [18]. They are also used for detecting objects from satellite images and for implementing many real-time applications [19]. The convolutional neural network effectively learns the local and global features from the image dataset, so it is widely employed in the image classification process. The CNN utilises supervised learning techniques, and thus, it gives better classification results than unsupervised techniques like RBM neural networks [20].

Convolutional neural networks enable accurate and robust big data feature learning. They utilise many samples for extracting meaningful information from data. Yet, the data may be ambiguous because of inadequate information or the complexity of the data source, which is known as data ambiguity. In contrast, fuzzy logic is an effective tool for modelling human thinking and perception. It offers a mathematical model for processing ambiguous data by executing numerical computations using linguistic labels and fuzzy set degrees of membership. However, fuzzy systems lack learning ability, and the fuzzy rules are determined by human experts. Thus, the fuzzy system is combined with neural networks so that the fuzzy rules can be derived from a large source of training data by automatically learning the membership functions. Hence, a fuzzy convolutional neural network is proposed by integrating fuzzy logic and convolutional neural networks. It utilises the advantages of both fuzzy logic and the CNN for accurate and robust classification of data.

This paper is organised as follows: the literature survey related to the article is presented in Section 2. The architecture of the proposed system is described in Section 3. In Section 4, the proposed methodology regarding secured cloud storage and medical image processing is explained. Section 5 gives the experimental results for the proposed system, Section 6 discusses the findings, Section 7 outlines the limitations and future prospects, and Section 8 concludes the paper.

2. Literature Survey

Due to the tremendous advancement in healthcare services, there is a rapid growth in medical data. Healthcare services in the medical field involve diagnosing diseases, providing treatment, preventing disease, and treating mental or physical injuries in patients. The quality of these healthcare sectors lies in the efficiency of problem detection and the allocation of medical resources, which require the collection and management of medical data [21]. Previously, many organisations used manual records about the patients organised in the form of reports [22], but for large numbers of datasets, modern technology is required for data analysis and management. Hence, cloud computing technology and IoT have been introduced in the medical field for effectively handling, storing, processing, and transferring the medical information of patients. Cloud computing is a smooth and dynamic process which offers a reasonable strategy for handling medical data [23], whereas the Internet of Things provides interconnectivity among devices, applications, data, objects, and sensors, which highly influences the data transmission process in the medical field. Thus, IoT provides continuous communication and interaction between the sensors and devices in the cloud computing environment. Now, many organisations have started to utilise IoT and cloud computing technology due to their convenience for their operations [24]. Also, cloud computing and IoT are utilised by pharmaceutical and medical research organisations, whose applications can have a great impact on the welfare of society.

Developments in medical technology have made the storage of medical data in the cloud more convenient. Medical images contain much significant information about patients, and thus, secure transmission and secure storage of the data are important. Hence, for the security of the data, encryption and decryption algorithms have been utilised. Chaotic systems are extensively applied in the encryption process as they provide good randomness and sensitivity to parameters and initial values [25]. Many investigations have been conducted utilising chaotic systems in encryption algorithms. Zhou et al. [26] suggested a 1D chaotic system for the encryption process, and Cao et al. [27] offered a 2D chaotic system for the encryption process. These are low-dimensional chaotic systems with simple orbits and few parameters, and thus, their initial values and parameters can be estimated easily. Gupta and Silakari [28] suggested an image encryption algorithm utilising a 3D cat map. Then, Huang and Nien [29] analysed the colour image encryption process utilising a 3D chaotic system. Also, Wu et al. [30] proposed a 6D hyperchaotic algorithm for colour image encryption. However, these algorithms have weaker security properties, including low resistance to differential attacks, larger correlation coefficients, and smaller key spaces. Thus, for improving the safety features of the image encryption algorithm, a backpropagation (BP) neural network- and chaotic system-based encryption algorithm is proposed for the image encryption process.

The progress in data storage capabilities, computational resources, hardware design, and safety procedures has increased the demand for medical data. Medical data undergo processing for effective analysis of diseases using data processing techniques. Medical data processing includes classification, segmentation, and abnormality prediction utilising the images produced by medical devices. For large volumes of medical data, an efficient data processing technique is required for collecting valuable insights from the datasets. Deep learning [31, 32] is a part of machine learning which has the ability to classify data based on feature learning without manual intervention. It is a widely applied machine learning technique [33] which can produce robust and representative features automatically [34, 35]. It is especially suitable for analysing large datasets and is also utilised in speech analysis, computer vision, natural language processing (NLP), scene classification [36], and face recognition [37]. Many investigations have been conducted on image classification utilising deep learning algorithms like convolutional neural networks and stacked autoencoders (SAEs) [30].

Among deep learning methods, the convolutional neural network provides better classification accuracy with a powerful architecture [38]. The CNN is a deep learning method used for solving complex classification problems [12]. It improves the computational performance and precision rate for large datasets. Also, it automatically extracts feature maps in association with the training data. During the classification process, the model is pretrained with related and sufficient data, through which the feature description of the model is assigned. Thus, the model begins with patterns related to the task rather than starting from random values. Still, the learning process needs a training dataset related to the testing dataset because the CNN cannot perform efficiently without relevant data [39]. Thus, the convolutional neural network is combined with a fuzzy logic system to form a fuzzy convolutional neural network for modelling nonlinear functions. This fuzzy convolutional neural network improves the quality of function approximation. Generally, convolutional neural networks utilise fully connected neural networks for analysing the information from feature maps. Here, the convolutional neural network is integrated with the fuzzy logic system instead of a fully connected neural network to increase the ability to estimate the ideal hypothesis.

3. System Architecture

The conceptual structure of the proposed system comprises the following phases. In the initial phase, the necessary datasets such as query images and medical images are collected for image processing. The collected medical image datasets are then transferred to the cloud subsystem using IoT and are stored in the cloud database. Before transmission, these medical images undergo the encryption process to eliminate security threats. This encryption process is carried out using the BP neural network and a chaotic system. After encryption, the medical images are transferred to the cloud platform where disease diagnosis takes place. The diagnosis of the medical images is carried out using the proposed fuzzy convolutional neural network algorithm, which classifies the processed images into normal and disease affected. The classification results from the cloud are transferred via IoT to the doctors or experts concerned with the person’s health. The architecture of the proposed system is displayed in Figure 1.

4. Proposed Methodology

4.1. Secured Cloud Data Storage and Transmission Using IoT

Medical images are generally distributed to various organisations for accurate analysis of diseases. Thus, medical data sharing is necessary for improving the quality of healthcare services. The Internet connectivity provided by IoT devices and the developments in cloud computing technologies enable users to utilise accessible and distributed cloud computing platforms. Here, the medical images are stored in the cloud, where the image classification for disease diagnosis takes place. It classifies the images into normal and abnormal images, and the results are then transferred to the doctors and patients using IoT devices. IoT-based medical image transmission is given in Figure 2.

In cloud computing, the medical datasets are stored with a third-party service provider, which raises privacy concerns for the user. Due to these privacy-related problems, the application of cloud sharing platforms is limited in the medical field. For secure data storage in the cloud database, a BP neural network- and chaotic system-based encryption algorithm is proposed in this system.

4.1.1. Medical Image Encryption

The reason for implementing an encryption-decryption system is privacy. As information travels over the Internet, it is necessary to restrict access by unauthorized organisations or individuals. Hence, the data are encrypted to reduce data loss and theft. Encryption is the process in which information is converted into a secret code that conceals the actual information. The encryption and decryption processes are collectively called cryptography. The algorithms utilised for encoding and decoding data are known as ciphers or encryption algorithms, while the unencrypted data are called plaintext and the encrypted data are known as ciphertext. The encryption process converts the plaintext into an alternate form called ciphertext. A variable known as a key is required for decrypting the encrypted images. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients, but unauthorized users cannot. Encryption is commonly used to protect data in transit and data at rest and has been a longstanding way of protecting sensitive information. In modern times, encryption is used to protect data stored on computers and storage devices, as well as data in transit over networks. The time and difficulty involved in guessing the information are what make encryption such a valuable security tool.

(1) Preprocessing. For preprocessing, an input image of size m × n is used. The image is decomposed into subimage blocks. To accelerate the convergence of network training, it is necessary to normalise the generated subimage blocks. Here, mean distribution pretreatment is used for normalising the grayscale range of the image into the transformation domain [0, 1]. The pixel value used for processing is obtained as given in the following equation:

By equation (1), the pixel values of the plain image are mapped into the range [0, 1], through which the preprocessing of the training samples takes place.
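
As a rough illustration of this preprocessing step, the following Python sketch decomposes a grayscale image into subimage blocks and normalises the pixel values into [0, 1]. The 4 × 4 block size and the simple division by 255 are assumptions made for illustration; the exact mean distribution pretreatment is defined by equation (1).

import numpy as np

def preprocess(image, block=4):
    # Split a grayscale image into block x block subimages and normalise
    # pixel values into [0, 1]. The block size and the x/255 normalisation
    # are illustrative assumptions.
    h, w = image.shape
    h, w = h - h % block, w - w % block          # crop to a whole number of blocks
    norm = image[:h, :w].astype(np.float64) / 255.0
    blocks = (norm.reshape(h // block, block, w // block, block)
                  .swapaxes(1, 2)
                  .reshape(-1, block * block))
    return blocks                                # one row per subimage block

plain = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in plain image
blocks = preprocess(plain)
print(blocks.shape, blocks.min(), blocks.max())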

(2) Image Compression. After preprocessing, image compression is applied to the medical images by employing the BP neural network. During compression, the image data samples are given as the input. The number of hidden nodes in the network and the number of nodes in the input layer determine the compression rate t of the BP neural network, and the association among them is given as follows:

During neural network training, the network coupling weights remain unaltered during the compression process. Thus, the newff function is employed for generating a trainable feed-forward network, and the compressed image is obtained by training this network. The transfer functions are then calculated for the network.
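
As an illustrative sketch of this compression step, the following NumPy code trains a tiny feed-forward network whose hidden layer is smaller than its input layer, so the hidden activations act as the compressed representation. The network shape, learning rate, and the assumption that the compression rate equals the ratio of hidden to input nodes are all illustrative; the paper itself builds the equivalent network with MATLAB’s newff function.

import numpy as np

def train_compressor(blocks, hidden=8, epochs=2000, lr=0.1, seed=0):
    # Minimal BP-trained compressor: tanh hidden layer, linear output,
    # mean-squared reconstruction error. hidden / input_nodes plays the
    # role of the compression rate (an assumption).
    rng = np.random.default_rng(seed)
    n_in = blocks.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, hidden))      # encoder weights
    W2 = rng.normal(0, 0.1, (hidden, n_in))      # decoder weights
    for _ in range(epochs):
        H = np.tanh(blocks @ W1)                 # compressed representation
        Y = H @ W2                               # reconstruction
        err = (Y - blocks) / len(blocks)         # gradient of the MSE loss
        gW2 = H.T @ err
        gW1 = blocks.T @ (err @ W2.T * (1 - H ** 2))
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

blocks = np.random.rand(256, 16)                 # normalised 4 x 4 blocks (stand-in)
W1, W2 = train_compressor(blocks)
compressed = np.tanh(blocks @ W1)                # 8 values per block instead of 16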

(3) Image Scrambling Using Extended Zigzag Confusion. The extended zigzag confusion algorithm is utilised for image encryption and scrambles the image data trained by the BP neural network. It is an extended form of zigzag confusion designed to eliminate the drawbacks of zigzag confusion. The scrambling process in zigzag confusion begins at the upper left corner of the matrix. During zigzag confusion, the values in certain positions are not altered even after scrambling numerous times, and thus, it can be cracked easily. Also, the zigzag transformation is only suitable for square matrices and not for nonsquare matrices. Hence, extended zigzag confusion is used, which overcomes the limitations of regular zigzag confusion. In extended zigzag confusion, the scrambling process can begin from any of the four corners, chosen by a random number created by the chaotic system. The extended zigzag confusion is also suitable for nonsquare matrices. Figure 3 shows the zigzag confusion for a square matrix and the extended zigzag confusion for a nonsquare matrix.
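
A minimal sketch of the extended zigzag scan is given below, assuming that the starting corner is encoded as an integer in {0, 1, 2, 3} obtained from the chaotic sequence; the corner encoding and the use of a plain zigzag scan as the underlying traversal are assumptions for illustration.

import numpy as np

def zigzag_order(rows, cols):
    # Visiting order of a zigzag scan over an arbitrary rows x cols matrix.
    order = []
    for s in range(rows + cols - 1):                      # anti-diagonals
        diag = [(i, s - i) for i in range(rows) if 0 <= s - i < cols]
        order.extend(diag if s % 2 else diag[::-1])
    return order

def extended_zigzag(matrix, corner):
    # Extended zigzag confusion sketch: the scan may start from any of the
    # four corners, selected by a chaotic random number (here the integer
    # 'corner'), and works for nonsquare matrices.
    m = np.array(matrix)
    if corner in (1, 3):                                  # start at a right corner
        m = m[:, ::-1]
    if corner in (2, 3):                                  # start at a bottom corner
        m = m[::-1, :]
    scanned = np.array([m[i, j] for i, j in zigzag_order(*m.shape)])
    return scanned.reshape(np.shape(matrix))              # scrambled matrix

block = np.arange(12).reshape(3, 4)                       # nonsquare example
print(extended_zigzag(block, corner=2))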

For extended zigzag confusion, the initial values and parameters of the fractional-order chaotic system are set, and the chaotic system is then iterated (m + P × Q) times. For improving the sensitivity to the initial values and parameters of the chaotic system, the first m values are discarded and three chaotic series are obtained. By combining these three series, two pseudorandom series R1 and R2 are formed. These series R1 and R2 are utilised for diffusing the pixel values. Equations (5) and (6) give the XOR diffusion algorithm.

Positive algorithm:

Reverse algorithm:

where E is denoted as the ciphertext, which is a one-dimensional vector; R is denoted as the password vector; and Q is denoted as the expanded form of the confusion image. Based on equations (5) and (6), the ciphertext vector E is formed, and the encrypted image is attained.
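
Since equations (5) and (6) are not reproduced here, the following sketch shows one typical form of chained XOR diffusion consistent with the description above: each ciphertext element depends on the confused pixel, the chaotic password vector, and the previously produced ciphertext element. The exact form used in the paper may differ.

import numpy as np

def xor_diffuse(Q, R):
    # Forward ("positive") XOR diffusion over the flattened confusion image Q
    # using the pseudorandom password vector R.
    E = np.zeros_like(Q)
    prev = np.uint8(0)
    for i in range(len(Q)):
        E[i] = Q[i] ^ R[i] ^ prev
        prev = E[i]
    return E

def xor_undiffuse(E, R):
    # Inverse diffusion: recovers Q from the ciphertext vector E.
    Q = np.zeros_like(E)
    prev = np.uint8(0)
    for i in range(len(E)):
        Q[i] = E[i] ^ R[i] ^ prev
        prev = E[i]
    return Q

Q = np.random.randint(0, 256, 1024, dtype=np.uint8)   # confused image, flattened
R = np.random.randint(0, 256, 1024, dtype=np.uint8)   # chaotic password vector
E = xor_diffuse(Q, R)
assert np.array_equal(xor_undiffuse(E, R), Q)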

4.1.2. Medical Image Decryption

Generally, data are encrypted to make it difficult for someone to steal the information. The conversion of encrypted data into its original form is called decryption. It is the reverse process of encryption, which recovers the original image from the encrypted image. It transforms data that have been rendered unreadable through encryption back into their unencrypted form. Decryption decodes the encrypted information so that only an authorized user can decrypt the data, because decryption requires a secret key or password. If a decryption key or password is not available, special software may be needed to decrypt the data by cracking the decryption algorithm to make the data readable. In decryption, the system extracts and converts the encrypted data and transforms it into text and images that are easily understandable by both the reader and the system. Decryption may be accomplished manually or automatically and may also be performed with a set of keys or passwords.

The decryption algorithm comprises inverting the pixel value diffusion, inverting the pixel position confusion, and reconstructing the image with the BP neural network. For the decryption process, the encrypted image is given as input. Then, the inverse vectors S1 and S2 are obtained, and the inverse diffusion algorithms are expressed in equations (7) and (8).

Positive inverse algorithm:

Reverse inverse algorithm:

Then, using the extended inverse zigzag algorithm, the scrambled pixels are recovered. The pixel values are dequantized, the pixel blocks are retrieved, and the vectors are reshaped into subimage blocks. Finally, the decrypted image is formed by combining all the subimages. Algorithm 1 illustrates the encryption and decryption processes for medical images.

Input: image of size m × n, number of hidden layers (L), number of hidden nodes (h), compression rate (C)
For n = 1: number of pixels
Partition the image
H = decompose the input image
End
Normalise all the subimage block matrices
NET =  create backpropagation neural network (L, h, C)
Get compressed data
Scrambled image = extended zigzag algorithm (input image, NET)
Generate chaotic sequences.
S1, S2 = chaotic random sequences
Diffuse pixel values by xor algorithm
C1 = encrypted image
//Decryption
Input = C1
S1, S2 = chaotic random sequences
Diffuse pixel matrices
Inverse zigzag algorithm
Recovery of pixel blocks
I = decrypted image
4.2. Medical Image Processing

Medical image processing plays a major role in the diagnosis of diseases and makes the treatment process more efficient. The medical image processing in this system is carried out using the fuzzy convolutional neural network. The medical images are generally collected as high dimensional type. Due to this high dimensional data, the functioning of the learning process in the fuzzy CNN can be weakened. Hence, before the implementation of fuzzy CNN, there is a requirement for dimensionality reduction and segmentation for improving classification performance.

4.2.1. Improved Principal Component Analysis (IPCA)

Dimensionality reduction is very efficient for processing high dimensional images. The principal component analysis (PCA) is widely utilised for decreasing the dimensions of the original data. It utilises the linear transformation to create a simplified dataset while preserving the important features of original dataset. During principal component analysis, the output variables of eigenvectors are exposed through a variance and it has a huge impact in the analysis. This drawback in the analysis is improved using improved principal component analysis (IPCA). The IPCA requires lesser training time, and it experiences a smooth dimension reduction process.

(1) Covariance Matrix Formation. In covariance matrix formation, Xn is assumed to be the training vector. The method for improved principal component analysis is illustrated as follows, where the sample size is denoted as n, the dimension number is denoted as m, and the training vectors are normalised accordingly. In this process, there is no loss of information, and the correlation matrix is unaffected. The covariance matrix Co is formed as follows:

(2) Calculation of Eigenvalues. For decomposing the covariance matrix Co, the eigenvalues are calculated, with the rank of the matrix considered as y. The sub-eigenvalues of the vectors in the orthogonal basis and the diagonal matrix formed from the eigenvalues complete the decomposition.

(3) Principal Components Calculation. After setting up the covariance matrix Co and normalization, the original vector is transformed into an uncorrelated vector, which is illustrated as

The cumulative contribution rate gives the proportion of information contained in the leading principal components. The threshold value of the cumulative contribution rate is taken as 85% for collecting the necessary information. The contribution rate and the cumulative contribution rate are determined from the following equations:

After estimating the required number of principal components, the principal components of the sample are determined accordingly.
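
As an illustrative sketch, the following Python code performs the PCA-style reduction described above: it normalises the samples, builds the covariance matrix, orders the eigenvectors by eigenvalue, and retains the leading components whose cumulative contribution rate reaches the 85% threshold. The additional refinements of the improved PCA are not reproduced here.

import numpy as np

def pca_reduce(X, threshold=0.85):
    # Normalise, build the covariance matrix, and keep the leading
    # principal components up to the cumulative contribution threshold.
    Xc = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    Co = np.cov(Xc, rowvar=False)                      # covariance matrix
    eigval, eigvec = np.linalg.eigh(Co)                # eigen decomposition
    order = np.argsort(eigval)[::-1]                   # largest eigenvalues first
    eigval, eigvec = eigval[order], eigvec[:, order]
    contrib = eigval / eigval.sum()                    # contribution rates
    k = int(np.searchsorted(np.cumsum(contrib), threshold)) + 1
    return Xc @ eigvec[:, :k]                          # principal components

X = np.random.rand(200, 32)                            # 200 samples, 32 features
Z = pca_reduce(X)
print(Z.shape)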

4.2.2. Image Segmentation Based on K-Means Clustering

Image segmentation is utilised efficiently in many medical imaging practices such as recognising tumours in the brain and determining the size of a tumour and its response to treatment. A good segmentation process helps clinicians and patients, as it provides necessary information for 3D visualization, surgical planning, and disease detection. K-means clustering is a widely utilised method for image segmentation. It is an unsupervised learning algorithm with relatively low computational complexity. Also, it is the simplest clustering method, which divides a set of data into a precise number of clusters. Generally, the K-means algorithm identifies the number of clusters (K) in particular regions of images, and thus, it is suitable for medical image segmentation [40].

K-means clustering is a data partitioning algorithm in which observations are iteratively assigned to exactly one of K clusters defined by centroids, where K is selected before the initiation of the algorithm. Here, a set of cluster centres and a set of data points are given, and the data points are clustered around the cluster centres. The objective function of the K-means clustering algorithm is given as J = Σ_j Σ_i ||x_i − c_j||², where the inner sum runs over the data points x_i assigned to cluster centre c_j. By minimising the objective function, the optimal centres can be attained. The initial cluster centres greatly influence the K-means clustering results. Thus, the cluster centres must be selected carefully. Algorithm 2 illustrates the process of the K-means clustering algorithm.

(1) Select K cluster centres in a random manner.
(2) Compute the distance from each data point to each centre.
(3) Assign each data point to the cluster centre for which its distance is the lowest of all cluster centres.
(4) Update each cluster centre by utilising the equation c_j = (1/N_j) Σ x_i,
where x_i is a data point allotted to cluster centre c_j, N_j is the total number of data points allotted to that cluster centre, and c_j is the cluster centre.
(5) Repeat steps 2 to 4 until the objective function is minimised.
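
A minimal Python sketch of Algorithm 2 is given below; the random data and the choice of K = 3 are only for illustration.

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    # Steps of Algorithm 2: random centres, assignment of each point to its
    # nearest centre, and centre update as the mean of the assigned points,
    # repeated until the centres stop moving.
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(max_iter):
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dist.argmin(axis=1)                   # step (3): nearest centre
        new_centres = centres.copy()
        for j in range(k):                             # step (4): update centres
            if np.any(labels == j):
                new_centres[j] = X[labels == j].mean(axis=0)
        if np.allclose(new_centres, centres):          # step (5): converged
            break
        centres = new_centres
    return labels, centres

pixels = np.random.rand(64 * 64, 1)                    # intensities of a toy image
labels, centres = kmeans(pixels, k=3)
segmented = labels.reshape(64, 64)                     # segmented label map
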
4.2.3. Computational Complexity Analysis

Computational complexity measures the cost of an algorithm, particularly its running time. The space (memory) occupied by the algorithm can also be used for measuring complexity, but time is generally the more useful measure. To reflect the intrinsic nature of the algorithm, the complexity measure must summarize performance on all problem instances and must not reflect insignificant implementation details. The computational complexity of the K-means clustering algorithm is analysed from the following steps:
(i) Input: the number of clusters, the maximum number of iterations, the data points allotted to the cluster centres, the total number of data points allotted to each cluster centre, and the cluster centres.
(ii) Ensure: the best cluster centres.
(iii) Initialize the iteration counter.
(iv) Randomly select the cluster centres.
(v) Calculate the distance from each data point to each centre.
(vi) Assign each data point to the cluster centre whose distance from the data point is the least of all cluster centres.
(vii) Evaluate the objective function.
(viii) Update the locations of the cluster centres.
(ix) Repeat the distance calculation, assignment, evaluation, and update steps until the objective function is minimised.
(x) Increment the iteration counter.
Each iteration therefore computes the distance from each of the n data points to each of the K centres, so the running time grows in proportion to n, K, the data dimension, and the number of iterations.

Input: assume input image Xi = 0, 1, 2, …, n, training epoch number E, batch number B, number of convolutional layers CL, ith fuzzy rule Rk.
Output: assign outputs Yi = 0, 1, 2, …, m, f (.) = activation function, trained parameters.
Initialisation: randomly initialise weight Wt and membership function Mx, My, Mz.
//Compute bias and kernel maps by minimising the loss function.
for e = 1 to E do
for b = 1 to B do
//input membership function
Rk: if Xi is , where F is the fuzzy set with ith input and kth fuzzy rule.
fuzzification (Xi) is //fuzzy inputs
then Yi is
fuzzification (Wt) is //rule evaluation
for Cl = 1 to CL do
end
//output membership function
defuzzification is Y;
fully connected (Y) is //calculate sensitivities
cross entropy is CE;
update ; //membership function chosen fuzzy rule.
end
end
4.2.4. Fuzzy Convolutional Neural Network (FCNN)

The image processing is complex in medical images because it has high resolution images. Thus, an effective classification algorithm like fuzzy convolutional neural network algorithm is utilised for medical image classification. The FCNN algorithm utilises both neural and fuzzy systems for the classification process. The information collected through CNN and fuzzy is merged together for creating the entire system for classification. In FCNN, fuzzy minimises the uncertainties and the neural network reduces the noises in the original data. The structure of FCNN is presented in Figure 4.

The structure of the proposed FCNN consists of convolutional layers and a fully connected layer. Each convolutional layer undergoes three stages: the convolution stage, the nonlinearity stage, and the pooling stage. In the initial stage, the input data undergo processing in the fuzzification layer to create a fuzzy logic representation. Accordingly, the fuzzy representation is convolved in the fuzzy convolutional stage, which contains fuzzified convolution kernels. Fuzzy convolution is the min-max composition of the fuzzified kernels, which obtains higher level fuzzy features from its input spatial features. In the defuzzification layer, crisp values are produced from the features obtained by pooling. In the final stage, a fully connected layer functions as the output classifier of the FCNN.

To perform fuzzy inference, fuzzy rules need to be generated. For a multi-input method, every input is graded against each fuzzy set for its degree of membership. Next, inputs and outputs are assigned and the fuzzy rule is generated as follows, where F is denoted as the fuzzy set associated with the ith input and kth fuzzy rule. Based on the membership functions, multiple linguistic labels are assigned to each element in the input matrix. The membership of an input node to a certain fuzzy set is described as a grade, which is calculated by the fuzzy membership function. The fuzzy set F is illustrated with the centre of the input fuzzy membership function denoted as Mx and the input matrix denoted as Xi.
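
For illustration, the sketch below grades an input matrix against several linguistic labels using Gaussian membership functions centred at Mx-style centres; the Gaussian shape and the width parameter are assumptions, since the paper's exact membership function is given by its own equation.

import numpy as np

def gaussian_membership(x, centre, sigma=1.0):
    # Membership grade of x in a fuzzy set with the given centre.
    return np.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

def fuzzify(X, centres, sigma=0.25):
    # Fuzzification layer sketch: grade every element of the input matrix
    # against each linguistic label (one centre per label).
    return np.stack([gaussian_membership(X, c, sigma) for c in centres])

X = np.random.rand(4, 4)                   # a small input feature map
centres = np.array([0.0, 0.5, 1.0])        # e.g. "low", "medium", "high"
mu = fuzzify(X, centres)
print(mu.shape)                            # (3, 4, 4) membership grades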

The processing phases in the fuzzy convolutional layer include fuzzy convolutional phase, nonlinearity phase, and pooling phase. In the fuzzy convolutional phase, the fuzzy convolutional filter is applied to the original data, which is given as

The fuzzy convolutional filter is calculated from the original convolution filter as follows:

Equation (21) gives the nonlinear transformation of the output gained from the fuzzy convolutional phase. The final phase in the fuzzy convolutional neural network layer is the max pooling phase. Through this phase, the size of the input to the next fuzzy convolution layer is reduced:

In the convolution phase, the activation function is denoted as f(.). The fully connected phase of the fuzzy convolutional neural network works as a classifier with input features. Through defuzzification with the centre of gravity, the crisp value is calculated as y = Σ_i (μ_i × M_i) / Σ_i μ_i, where M_i denotes the centre of the ith defuzzification membership function and μ_i denotes the corresponding membership grade. The output of the FCNN, obtained from the fully connected layer, is represented as Y.
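
A short sketch of the centre-of-gravity defuzzification described above is given below; the membership degrees and centres are illustrative values only.

import numpy as np

def defuzzify_cog(mu, centres):
    # Crisp value as the membership-weighted average of the output
    # membership-function centres (centre of gravity).
    mu = np.asarray(mu, dtype=float)
    centres = np.asarray(centres, dtype=float)
    return float((mu * centres).sum() / (mu.sum() + 1e-12))

mu = np.array([0.1, 0.7, 0.2])             # pooled fuzzy feature grades
centres = np.array([0.0, 0.5, 1.0])        # output membership centres
print(defuzzify_cog(mu, centres))          # crisp value, here 0.55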

In the fully connected layer, the weight matrix is denoted as Wt. For evaluating the output error, cross entropy is utilised. Equation (24) illustrates the estimation of the cross entropy function, where the output of the classifier is given as Y, the target is given as z, and the number of samples is denoted as N. The model parameters are trained using the cross entropy loss function. Equation (25) gives the weight update:

Equation (26) gives the update of the defuzzification membership function centres with their corresponding learning rate:

Equation (27) gives the update of a fuzzification membership function centre with its learning rate:

Similarly, equation (28) gives the update of another fuzzification membership function centre with its learning rate:

Algorithm 3 summarizes the process of the fuzzy convolutional neural network algorithm.

5. Experimental Evaluation

5.1. Dataset Descriptions

Image classification is an important task in the proposed system, which classifies the image dataset into normal and disease-affected sets. Here, the experiments are performed on medical datasets for evaluating the proposed algorithm using parameters like specificity, sensitivity, and accuracy. In addition, existing algorithms like decision tree (DT), Naïve Bayes (NB), K-nearest neighbour (KNN), and artificial neural network (ANN) are compared with the proposed algorithm for analysing its efficiency. For evaluating the effect of the preprocessing techniques on the classification of medical images, two different types of datasets acquired through noninvasive modalities like MRI are utilised. The datasets comprise BRATS images [41] and Brain MRI [42]. They are utilised for training, testing, and validation of the proposed algorithm. Table 1 gives the description of the datasets utilised for classification.

5.2. Classification Results
5.2.1. Parameter Settings

The parameter settings of the proposed algorithm and the comparative algorithms for classification are presented in Table 2. Here, 75% of the datasets are utilised for training the classifier and 25% are utilised for testing the classifier. The proposed approach is implemented on the MATLAB platform.

5.2.2. Evaluation Parameters

The parameters specificity, accuracy, sensitivity, and F-measure are considered in the evaluation process. The evaluation parameters are determined using the number of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). The specificity, sensitivity, and accuracy are given as follows:

Specificity = TN/(TN + FP), Sensitivity = TP/(TP + FN), Accuracy = (TP + TN)/(TP + TN + FP + FN).

A true positive (TP) is a disease-affected case that is correctly predicted as disease affected, and a true negative (TN) is a normal case that is correctly predicted as normal. A false positive (FP) is a normal case that is incorrectly predicted as disease affected, and a false negative (FN) is a disease-affected case that is incorrectly predicted as normal.

Figure 5 illustrates the analysis of specificity for the proposed algorithm and other existing algorithms such as KNN, NB, DT, and ANN. The analysis is conducted for five different sets of images: 5000, 10000, 15000, 20000, and 25000. It shows that the specificity is higher for the proposed algorithm than for the other existing algorithms.

Figure 6 illustrates the analysis of sensitivity for the proposed algorithm and the other existing algorithms over the same five sets of images. It shows that the sensitivity is higher for the proposed algorithm than for KNN, NB, DT, and ANN.

Figure 7 illustrates the classification accuracy of the proposed algorithm and the other existing algorithms over the same five sets of images. It shows that the accuracy is higher for the proposed algorithm than for KNN, NB, DT, and ANN.

The F-measure is utilised for measuring the effectiveness of the classification process; the higher the F-measure value, the higher the predictive potential of the classification process. The F-measure is evaluated from the mean of recall and precision. The recall is the ratio of true positives to the sum of true positives and false negatives, whereas the precision is the ratio of true positives to the sum of true positives and false positives. The F-measure based on precision and recall is expressed as F-measure = (2 × Precision × Recall)/(Precision + Recall).
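
The evaluation metrics can be computed directly from the confusion-matrix counts, as in the short sketch below; the counts shown are illustrative and are not results from the experiments.

def classification_metrics(tp, tn, fp, fn):
    # Standard metrics derived from the confusion-matrix counts.
    sensitivity = tp / (tp + fn)                      # recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, precision, f_measure

print(classification_metrics(tp=90, tn=85, fp=15, fn=10))   # illustrative counts only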

Figure 8 illustrates the analysis of the F-measure for the proposed algorithm and the other existing algorithms (KNN, NB, DT, and ANN) over the same five sets of images. It shows that the F-measure is higher for the proposed algorithm than for the other existing algorithms.

The comparison results are given for the classification methods mentioned above. From the results, it is evident that the proposed approach has superior performance over the other classification methods. In particular, the proposed approach attains better accuracy than techniques like KNN, NB, DT, and ANN. In the case of specificity, the proposed approach acquires a higher value than KNN, NB, DT, and ANN. Similarly, the proposed approach yields better values for sensitivity and F-measure, which are comparatively higher than those of the other classification methods. Hence, it is concluded that the proposed model is highly efficient for medical image classification.

5.2.3. Computation Time for Classification

Table 3 displays the computation time of the proposed fuzzy convolutional neural network algorithm and other existing algorithms like KNN, NB, ANN, and DT. It shows that the computation time is greater for the proposed algorithm than for the other existing algorithms.

5.2.4. Statistical Analysis

Statistical analysis was performed for analysing the improvement in the performance of the proposed algorithm over the other existing algorithms. It analyses classification problems such as error rates and classification accuracy. Many tests can be conducted for such statistical analysis, including the post hoc test, Dunnett test, Tukey test, Friedman test, and ANOVA test. For concluding whether there is any statistical difference between the proposed algorithm and the other compared algorithms, a one-way analysis of variance (ANOVA) test has been conducted, which analyses the statistical correctness of the proposed algorithm over the other existing algorithms. It uses the mean and variance to determine the test statistic, which is then utilised for determining whether the groups of data are the same or different. The box plots for the ANOVA test are illustrated in Figure 9.

Table 4 gives the applications of CNN-based methods for medical image retrieval, computer-aided diagnosis, and classification tasks. Convolutional neural networks have proven to provide higher performance in medical image processing than other techniques. Thus, CNNs can be successfully applied to various tasks in medical image analysis. The results may differ based on the choice of CNN model, the number of classes, and the number of images used.

5.3. Encryption Results
5.3.1. Analysis of Information Entropy

Entropy is a significant measure of randomness, and it can be utilised for defining the degree of uncertainty of an image. Information entropy computes the uniformity of the gray pixel distribution throughout the image. The greater the information entropy value, the higher the confusion level of the encrypted image. Assume E is the information source; the information entropy of E is computed as H(E) = −Σ p(e_i) log2 p(e_i), where the sum runs over the n possible symbols e_i and p(e_i) denotes the probability of occurrence of e_i. Based on this equation, the information entropy of the encrypted image is analysed for the proposed algorithm.
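
For illustration, the information entropy of an 8-bit grayscale image can be computed from its histogram as in the sketch below; a well-encrypted 8-bit image approaches the ideal value of 8 bits.

import numpy as np

def information_entropy(image):
    # Shannon entropy of an 8-bit grayscale image from its histogram.
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                            # empty bins contribute nothing
    return float(-(p * np.log2(p)).sum())

cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in cipher image
print(information_entropy(cipher))          # close to 8 for a uniform image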

A comparative analysis is carried out in the proposed system for analysing the influence of the proposed encryption algorithm on the security performance of medical images. Different types of encryption algorithms are utilised for the comparative analysis of the proposed algorithm. The most commonly used approaches for the encryption process are the feed-forward, feed-forward backpropagation, and fitting neural network algorithms. All these algorithms have been proven highly effective in many security-related approaches. The information entropy results for the proposed algorithm and the other existing algorithms are displayed in Table 5. From the table, it is observed that the information entropy value is higher for the proposed encryption algorithm than for the other existing algorithms.

5.3.2. Correlation Coefficients

Correlation analysis is a statistical testing process which has the ability to crack an encryption algorithm, and the correlation coefficients provide an evaluation standard for testing the encryption algorithm. For plain images, the correlation between adjacent pixels is generally high. In order to avoid statistical attacks based on the correlation among adjacent pixels, the correlation coefficients of the encrypted image must be reduced. Here, the correlation coefficient of different pixels is utilised for determining the ability of the encryption algorithm to minimise the correlation between adjacent pixels. Correlation analysis of the encrypted image considers all possible adjacent situations, namely the diagonal, vertical, and horizontal directions. The following equations are utilised for assessing the correlation between adjacent pixels in the encrypted and plain images: E(x) = (1/M) Σ x_i, D(x) = (1/M) Σ (x_i − E(x))², cov(x, y) = (1/M) Σ (x_i − E(x))(y_i − E(y)), and r_xy = cov(x, y)/(sqrt(D(x)) × sqrt(D(y))), where M represents the overall number of pixels of the image and x and y denote the neighbouring pixel values. The mean values are represented as E(x) and E(y), and the covariance and variance are represented as cov(x, y) and D(x). Based on the correlation equations, 2000 sets of adjacent pixels are selected from the diagonal, vertical, and horizontal directions. During the proposed encryption process, the pixels in the image are entirely randomized so that no statistical information is leaked from the encrypted image. Hence, the proposed algorithm has the potential to withstand statistical attacks.
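
The adjacent-pixel correlation test described above can be sketched as follows, selecting random pixel pairs in the horizontal, vertical, or diagonal direction and computing their correlation coefficient; the random test image is a stand-in for the plain or encrypted image.

import numpy as np

def adjacent_correlation(image, direction="horizontal", pairs=2000, seed=0):
    # Correlation coefficient of randomly chosen adjacent pixel pairs.
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rows = rng.integers(0, h - 1, pairs)
    cols = rng.integers(0, w - 1, pairs)
    dr, dc = {"horizontal": (0, 1), "vertical": (1, 0), "diagonal": (1, 1)}[direction]
    x = image[rows, cols].astype(float)
    y = image[rows + dr, cols + dc].astype(float)
    return float(np.corrcoef(x, y)[0, 1])

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
for d in ("horizontal", "vertical", "diagonal"):
    print(d, adjacent_correlation(img, d))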

Figure 10 displays the correlation coefficients of the plain image for the proposed encryption algorithm and other existing algorithms such as feed-forward, feed-forward backpropagation, and fitting neural networks in the diagonal, vertical, and horizontal directions. From the figure, it can be observed that the correlation coefficients of the plain image are very high for the proposed encryption algorithm compared with the other existing algorithms in all directions.

Figure 11 displays the correlation coefficients of the encrypted image for the proposed algorithm and the other existing algorithms such as feed-forward, feed-forward backpropagation, and fitting neural networks in the horizontal, vertical, and diagonal directions. From the figure, it is known that the correlation coefficient of the encrypted image is comparatively lower for the proposed algorithm than for the other existing algorithms in all directions.

5.3.3. Differential Attack Analysis

Generally, a small variation in the original image should lead to a noticeable variation in the encrypted image. To measure these changes, the unified average changing intensity (UACI) and the number of pixels change rate (NPCR) are utilised. The objective of the NPCR is to calculate the exact number of altered pixel values under a differential attack; a higher NPCR value indicates a better result. Instead, the UACI focuses on the average difference between two paired encrypted images; a lower UACI value indicates a better result. NPCR and UACI quantitatively and qualitatively analyse the processed image and measure the sensitivity to the original image. They can also determine whether the images will be able to resist a differential attack. The NPCR and UACI are computed by the following equations: NPCR = (1/(M × N)) Σ D(i, j) × 100% and UACI = (1/(M × N)) Σ |C1(i, j) − C2(i, j)|/255 × 100%, where C1 and C2 represent the encrypted images before and after changing one pixel of the plain image. If C1(i, j) ≠ C2(i, j), then D(i, j) = 1; if not, D(i, j) = 0. The estimated average UACI value is 33.46%, and the NPCR value is 99.56%. If the proposed algorithm attains these values, then the algorithm will have better performance.
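
The two measures can be computed as in the sketch below; the two random images stand in for the cipher images obtained before and after changing one pixel of the plain image.

import numpy as np

def npcr_uaci(c1, c2):
    # NPCR and UACI (in percent) between two 8-bit cipher images.
    c1 = c1.astype(float)
    c2 = c2.astype(float)
    npcr = float((c1 != c2).mean() * 100.0)
    uaci = float((np.abs(c1 - c2) / 255.0).mean() * 100.0)
    return npcr, uaci

a = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
b = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(npcr_uaci(a, b))      # near the ideal values for unrelated random images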

The experimental results for NPCR and UACI are compared with other existing encryption algorithms such as feed-forward, feed-forward backpropagation, and fitting neural networks. Table 6 displays the results of UACI and NPCR for the proposed and other existing encryption algorithms. The values in the table illustrate that, for NPCR, the proposed algorithm has the highest value among the algorithms, whereas for UACI, the feed-forward neural network has the lowest value.

5.3.4. Computational Time Analysis

The time consumed by the algorithm for the encryption and decryption processes is utilised for analysing the efficiency of the proposed algorithm. However, the time consumed by the same algorithm will vary across different hardware platforms. Table 7 displays the computation time of the proposed and the other existing algorithms such as feed-forward, feed-forward backpropagation, and fitting neural networks. From the table, it is observed that the computation time required for the decryption process of the proposed encryption algorithm is lower than that of the other existing algorithms.

6. Discussion

In this study, an IoT and cloud computing-based medical image analysis using a fuzzy convolutional neural network has been proposed. IoT and cloud computing are emerging technologies and have been applied successfully in many fields. Many research works have been conducted on utilising IoT and cloud computing in the medical field, where they can be effectively utilised for storing, processing, and sharing data. Even though IoT and cloud computing have many advantages, they are not widely employed in the medical field due to security aspects. Cancer has become a deadly disease for human beings, affecting everyone from children to older generations, and its earlier prediction can considerably reduce the mortality rate. By utilising IoT and cloud computing, the time required for predicting the disease can be minimised, because the data generated from the medical devices can be directly transferred to the cloud platform through the Internet. Thus, the time required for the manual transmission of medical data is reduced. Similarly, the image processing is carried out in the cloud, and thus, the processing time is also minimised. For medical images, the accuracy of the result should be precise for effective treatment; hence, an effective image processing technique is necessary for the effective prediction of diseases. Classification is one of the major tasks during image processing, and for obtaining more accurate classification results, the fuzzy logic system is integrated with the convolutional neural network to form the fuzzy convolutional neural network. It classifies the images into disease infected and not infected, and the results are immediately transferred to the doctors and healthcare workers. Thus, it helps the doctors to analyse the results and treat the patients at the initial stage. Furthermore, in order to increase the security and privacy of the data, an encryption process based on the BP neural network is employed in the proposed study. The perspective of this study is to create an efficient IoT-based cloud computing environment for earlier and accurate prediction of diseases. The proposed system can be effectively utilised in cancer centres for earlier prediction of the disease and for analysing the progress of the treatment process. The proposed system can also be applied in large hospitals and healthcare centres with numerous patients’ data. Implementing the IoT-based cloud computing system in hospitals can greatly reduce the storage burden and ease the management of patients’ data.

7. Limitation of Proposed Approach and Future Prospects

The CNN has provided better performance for many applications in the clinical domain, but it still has some limitations. More computational power and a huge amount of training data are required for the CNN architecture. If there is any deficiency in computational power, more time is required for training, depending on the size of the training data. These limitations can be minimised by better CNN architectures, increasing the number of digitally stored medical images, improved computational power, and enhanced data storage facilities. IoT-based cloud computing can be utilised for enhancing the data storage facilities. Even though the Internet of Things and cloud computing can be of huge benefit to healthcare, there are still major challenges to tackle before the complete implementation of connected devices in healthcare. Security and privacy are the major concerns which prevent users from utilising IoT technology for medical purposes, as healthcare monitoring systems have the possibility of being hacked or breached. The disclosure of sensitive information about the patient’s health and location and interference with sensor data may cause great consequences, which would counteract the advantages of IoT. For privacy protection, encryption and decryption processes are utilised in the proposed approach. However, the encrypted image can potentially be decrypted without acquiring the key if considerable computational resources and skills are available; thus, skilled hackers may be able to decrypt the encrypted data. Also, failures or bugs in the hardware, or even power failure, can impact the performance of sensors and connected equipment, placing healthcare operations at risk. In addition, skipping a scheduled software update may be even more hazardous than skipping a doctor checkup. Moreover, regarding IoT protocols and standards, there is no consensus, so devices made by different manufacturers may not work well together. This lack of uniformity prevents full-scale integration of IoT, therefore limiting its potential effectiveness. Although IoT promises to reduce the cost of healthcare in the long term, the cost of its implementation in hospitals and of staff training is quite high.

8. Conclusion

IoT and cloud computing are considered significant techniques in big data processing since they are closely related to each other. The experimental process mainly focuses on utilising both IoT and cloud computing in healthcare procedures, especially in brain tumour prediction. The dataset required for disease diagnosis is collected and securely transferred to the cloud. Then, the fuzzy convolutional neural network algorithm is proposed for medical image diagnosis. The proposed algorithm classifies the images into normal and abnormal images, and the results from the analysis are transferred to the doctors and healthcare providers with the help of IoT for the further treatment process. From the results, it is concluded that the proposed algorithm outperforms other existing algorithms and can be effectively utilised for image diagnosis. In future, other deep learning neural network algorithms can be applied for improving the accuracy of disease prediction and for efficiently analysing large volumes of medical data by utilising the medical resources.

Data Availability

The image data used to support the findings of this study are included within the article. The BRATS dataset can be obtained by creating an account at https://www.smir.ch.

Conflicts of Interest

The authors declare that they have no conflicts of interest.