Abstract

Anomaly detection over high-dimensional, unbalanced data is a common requirement: it underpins early warning of problems or disasters and helps maintain system reliability. Detecting anomalies in sensor data is a significant research issue. Anomaly detection is essentially an unbalanced sequence binary classification problem, and data of this type exhibit large scale, high computational complexity, unbalanced class distribution, and sequential relationships among samples. This paper combines long short-term memory networks (LSTMs) with historical sequence data, integrates the synthetic minority oversampling technique (SMOTE) and k-nearest neighbors (kNN), and designs an anomaly detection network model, kNN-SMOTE-LSTM, suited to the unbalanced character of the data. A kNN discriminant classifier continuously filters the synthetic samples so that only safely generated ones are kept, which improves the performance of the model and avoids the blindness and limitations of the SMOTE algorithm when generating new samples. Experiments demonstrate that the structured kNN-SMOTE-LSTM model significantly improves the performance of unbalanced sequence binary classification.

1. Introduction

With the continuous growth of urban populations and accumulated wealth, urban security has become increasingly prominent, and cities face more and more security challenges. The rise of social accidents and terrorist threats places stricter requirements on urban disaster prevention and emergency response. Technologies such as big data, the Internet of Things, cloud computing, and artificial intelligence have been applied continuously in intelligent monitoring; they have had a far-reaching impact on smart firefighting, made daily life safer, and improved the efficiency and value of urban management. Monitoring methods are gradually shifting from manual audit to model prediction and automated decision-making.

Machine learning has proven effective in many fields; in the context of wireless sensor networks (WSNs), it has proven adequate for detecting attacks. Anomaly detection over high-dimensional, unbalanced data is common, for example, in detecting electricity theft, anomalies in wireless sensor networks, and anomalies in intrusion detection systems (IDS). Effective anomaly detection is essential for early warning of problems or disasters and for maintaining system reliability. Our main objective in this work is to improve anomaly detection for high-dimensional, unbalanced data, and anomaly detection in wireless sensor networks is a typical instance of this problem [1].

A heterogeneous wireless sensor network is a source of massive, diverse information such as light, temperature, and humidity. An important research issue in analysing sensor data is the detection of anomalies. Anomaly detection systems are usually based on expert analysis, data analysis, or a combination of the two [2, 3]. The expert analysis method tries to capture specific abnormal scenes through rules. Its accuracy depends heavily on expert knowledge, so it is subjective, and its results are unscientific and poorly interpretable. The data analysis method is based on machine learning algorithms, which improve system performance by learning the characteristics of anomalous data; the resulting model then provides the corresponding judgment in new situations. Common machine learning algorithms include logistic regression, support vector machines (SVMs), and gradient boosted decision trees (GBDTs).

A wireless sensor network (WSN) is a distributed network architecture consisting of a set of autonomously networked electronic devices (sensor nodes) that collect data such as current, voltage, power, temperature, humidity, light, and noise from the surrounding environment. The WSN market keeps growing as technology and computing advance [4]. Meanwhile, effective network management technology is necessary to cope with the complexity of the network and the large amount and variety of sensor data [5, 6]. WSNs usually connect to cloud services via the Internet; the cloud platform provides the storage and computing infrastructure necessary for archiving and processing the huge amount of data produced by sensors [7].

A challenging topic studied in this paper is sensor data analysis for automatic anomaly detection [8]. We focus on detecting abnormal changes in the sensed data, which may be caused by the sensor system itself or by the environment under scrutiny. In a wireless sensor network, anomalies may be related to devices running out of power, devices deviating from expected behavior, or devices failing outright. However, it is difficult to distinguish an anomaly in the sensor system from a real anomaly in the sensed environment. The type of wireless sensor network, the detection method, and the types of exceptions of interest therefore have a significant impact on the solution design.

This study contains the following innovative points: a basic classifier based on an LSTM network performs the anomaly detection, and a structured model integration constructs the WSN anomaly detection system.
(1) Considering that the distribution of wireless sensor data changes with time, this study adopts an LSTM-based anomaly detection network model to classify the data, which can effectively process time-domain sequence data.
(2) The distribution of wireless sensor data is unbalanced [9]: abnormal data cover only a small portion of all daily monitoring data. We use the SMOTE algorithm to amplify the data and counter the overfitting caused by unbalanced data.
(3) Because the SMOTE algorithm produces noise data that influence the determination of the classification boundary, we adopt a discriminant classifier based on the kNN algorithm and a basic classifier based on LSTM to screen out the valid samples and remove the noise samples, which effectively improves the performance and accuracy of classification.
(4) The experiments show that the model built from the basic classifier LSTM, the data generator, and the discriminant classifier, fused in a circulated organic structure, overcomes the misclassification defects of traditional methods.

2. Analysis of High-Dimensional and Unbalanced Data Anomaly Detection

2.1. Modeling of High-Dimensional and Unbalanced Data Anomaly Detection

In the wireless sensor network anomaly detection, Rassam et al. [10] presented the challenges of anomaly detection in WSN and put forward requirements for designing efficient and effective anomaly detection models. Abduvaliyev et al. [11] designed an intrusion detection system for the vital areas in wireless sensor networks. Steiger et al. [12] analysed time-series similarities for anomaly detection in sensor networks. Peña [13] designed a rule-based system to detect energy efficiency anomalies in smart buildings.

In the classification and learning of high-dimensional and unbalanced data, the problem of unbalanced multiview data classification remains underexplored in the field of network neuroscience. An intuitive approach is to obtain a balanced distribution of data through sampling, which can be oversampling, undersampling, or synthetic sampling. One advanced synthetic sampling method, SMOTE [14], augments the data with artificial examples created by interpolating neighboring data points. Following this work, safe-level SMOTE [15] proposes a weighted generation process that makes data synthesis more robust. A hybrid strategy combining multiple techniques from one or both categories is often chosen. Sun et al. [16] investigated the combination of a time-weighted resampling method and AdaBoost ensembling. A diversified ensemble learning framework, which finds the best classification algorithm for each individual subdataset, is proposed in the literature [17, 18]. Graa and Rekik [19] proposed a multiview learning-based data proliferator (MV-LEAP) that enables the classification of imbalanced multiview representations. Shi et al. [20] proposed a general multiple-distribution selection method for imbalanced data classification, proving that traditional classification methods that use a single softmax distribution are limited for modeling complex and imbalanced data. Fu et al. [21] proposed an algorithm called sssHD to achieve stable sparse feature selection and applied it to complicated class-imbalanced data.

In machine learning and data analysis for high-dimensional anomaly detection, Bereziński et al. [22] proved that an entropy-based approach is suitable for detecting modern botnet-like malware based on anomalous network patterns. Flores et al. [23] presented a continuous hidden Markov model for network anomaly detection. Huang et al. [24] proposed a natural outlier factor (NOF) to measure outliers. Robinson and Aria [25] used a hidden Markov model (HMM) to verify the data validity of their method. Khan et al. [26] proposed a scalable, hybrid IDS based on Spark ML and a convolutional-LSTM (Conv-LSTM) network. Ergen and Kozat [27] studied unsupervised anomaly detection with LSTM neural networks. Dainius and Goranin [28] analysed the applicability of complex dual-flow deep learning methods, such as the long short-term memory fully convolutional network (LSTM-FCN) and the gated recurrent unit FCN (GRU-FCN), to the attack-caused Windows OS system call trace dataset (AWSCTD) and compared them with vanilla single-flow convolutional neural network (CNN) models.

Compared with these methods, our proposed model can be applied not only to traditional machine learning models but also to the training of neural networks. Our model can be combined with sampling methods and can be incorporated with various hybrid strategies.

Wireless sensor network anomaly detection is essentially an unbalanced sequence binary classification and a challenging machine learning issue [29]. Data for this problem have three characteristics:
(1) The distribution of the data changes with time
(2) The distribution of the data is unbalanced, and abnormal data cover only a small portion of all monitoring data
(3) Anomaly detection is essentially a continuous unbalanced classification task

In constructing a wireless sensor network anomaly detection system, feature selection is important for accurate classification. Wireless sensor data sets share a similar set of feature properties, drawn from sound-light alarm data, alarm data, electricity equipment data (daily freezing/real-time), fault data, gas data, smoke data, intelligent terminal data, water pressure data, water level data, etc.

Definition 1. Sample feature set refers to the data of the wireless sensor network; that is, the ith sample feature set is $X_i = (x_{i1}, x_{i2}, \dots, x_{im})$, $i = 1, 2, \dots, n$, which means there are n samples and m characteristics, and $x_{im}$ denotes the mth feature of the ith sample in the data set $X$.

Definition 2. Sample classification refers to whether a sample belongs to the abnormal data of the wireless sensor network: $y_i = 1$ indicates abnormal data and $y_i = 0$ indicates normal data, with $y_i \in \{0, 1\}$, $i = 1, 2, \dots, n$.

Definition 3. Let the historical data be $V$ and the real-time data at time $t$ be $v_t$. The anomaly detection of the wireless sensor network is then to determine whether $v_t$ is abnormal according to $V$; that is,

$$y_t = F(V, v_t) = \begin{cases} 1, & v_t \text{ is abnormal,} \\ 0, & \text{otherwise.} \end{cases}$$

Thus, anomaly detection in a wireless sensor network is an unbalanced binary classification issue in which the sample size is large, the computational complexity is high, the data distribution is unbalanced, and there are sequence relationships among the data.

2.2. LSTM Modeling

A recurrent neural network (RNN) is a deep learning model for processing time series data [30]. Its special network structure feeds each neuron's output back as part of its input at the next moment, so the network output is the result of the interaction between the current input and the states of all previous moments, achieving sequence modeling. It can learn features and long-term dependencies from sequence and time series data. The recurrent neural network with a sigmoid activation function was proved Turing-complete by Schafer and Zimmermann in 2006, meaning that, given the correct weights, an RNN can perform the same calculation as any computable program [31]. However, the gradient at the current moment can propagate back only a finite number of steps toward the historical moments, so a plain RNN cannot solve the problem of long-term dependence.

In 1997, Hochreiter and Schmidhuber proposed long short-term memory networks (LSTMs) based on the RNN, introducing the constant error carousel (CEC) unit to address the gradient explosion and vanishing problems of backpropagation through time (BPTT) [32]. In 2001, Felix Gers further improved the LSTM network structure by adding the forget gate and peephole connections [33]. In 2012, Alex Graves proposed the connectionist temporal classification (CTC) training criterion for LSTM [34]. In 2014, Chung proposed the gated recurrent unit (GRU), which merges the input gate and forget gate of the LSTM into a single update gate to reduce parameters and speed up training [35].

The LSTM model preserves long-term memory through its unique gates, the forget gate, the input gate, and the output gate, together with the memory unit, denoted $c_t$. Figure 1 presents the schematic diagram of the LSTM cell structure.

The state of the LSTM depends on the current input and the previous state [37], while the latter in turn depends on still earlier inputs and states. Suppose the hidden unit uses the sigmoid function; then, for each time step from t = 1 to t = r, the following update equations are applied:

$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f),$$
$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i),$$
$$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c),$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,$$
$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o),$$
$$h_t = o_t \odot \tanh(c_t).$$

In these update equations, $f_t$ and $i_t$ represent the forget gate and the input gate, respectively, obtained by a linear transformation of the input $x_t$ and the previous hidden-layer output $h_{t-1}$ followed by the activation function $\sigma$. $h_t$ represents the output of the current hidden layer. At every moment, the forget gate controls how much of the previous memory is forgotten, and the input gate controls how much of the new candidate memory $\tilde{c}_t$ enters long-term memory. The transition from the previous memory-unit state $c_{t-1}$ to the current state $c_t$ therefore does not depend entirely on the state computed by the activation function but is also controlled by the forget gate and the input gate. $o_t$ stands for the output gate, which controls how the short-term memory is affected by long-term memory. $W$ denotes the weight matrix of each neuron node in the network: $W_i$ is the weight of the input gate $i_t$, and $W_f$ is the weight of the forget gate $f_t$. $b$ is the bias of the neuron, and $b_f$ and $b_i$ represent the biases of the forget gate and the input gate, respectively.

In a well-trained LSTM model, when there is no important information in the input sequence, the value of the forget gate stays close to 1 and the value of the input gate close to 0, so the past memory is preserved, realizing the long-term memory function. When important information appears in the input sequence, indicating that the previous memory is no longer important, the value of the input gate moves close to 1 and the value of the forget gate close to 0, so the old memory is forgotten and the new important information is stored. With this network design, the entire model can more easily learn long-term dependencies between sequences.
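To make the gate dynamics concrete, here is a minimal NumPy sketch of a single LSTM time step following the update equations above; the weight names and toy dimensions are illustrative assumptions, not the trained network used later in this paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step following the update equations above."""
    f_t = sigmoid(p["W_f"] @ x_t + p["U_f"] @ h_prev + p["b_f"])    # forget gate
    i_t = sigmoid(p["W_i"] @ x_t + p["U_i"] @ h_prev + p["b_i"])    # input gate
    c_hat = np.tanh(p["W_c"] @ x_t + p["U_c"] @ h_prev + p["b_c"])  # candidate memory
    c_t = f_t * c_prev + i_t * c_hat                                # memory cell update
    o_t = sigmoid(p["W_o"] @ x_t + p["U_o"] @ h_prev + p["b_o"])    # output gate
    h_t = o_t * np.tanh(c_t)                                        # hidden state
    return h_t, c_t

# Toy dimensions: hidden size m = 3, input size d = 4.
rng = np.random.default_rng(0)
m, d = 3, 4
p = {}
for g in ("f", "i", "c", "o"):
    p[f"W_{g}"] = rng.normal(scale=0.1, size=(m, d))
    p[f"U_{g}"] = rng.normal(scale=0.1, size=(m, m))
    p[f"b_{g}"] = np.zeros(m)

h, c = np.zeros(m), np.zeros(m)
for t in range(5):                       # run a short random input sequence
    h, c = lstm_step(rng.normal(size=d), h, c, p)
```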

2.3. SMOTE Algorithm

Synthetic minority oversampling technique (SMOTE) algorithm is an improved scheme based on the random oversampling method [14]. It synthesizes new samples for the minority classes based on “linear interpolation.”

The SMOTE algorithm takes a subset of data from the minority class as examples and creates similar new synthetic examples, which are then added to the original data set. In this process, each new sample is generated on the line segment between a minority-class sample and one of its neighbors. The new data set serves as a training sample for the classification model, which effectively mitigates the overfitting caused by simple random oversampling. Figure 2 illustrates the schematic diagram of the synthetic instances of the SMOTE algorithm.

Algorithm 1 shows the steps of the SMOTE algorithm.

Input: T, the training sample; N, the oversampling rate; K, the k-nearest-neighbor parameter; and n, the number of minority-class samples ($x_i \in T$, $i = 1, 2, \dots, n$)
Output: S, the new training sample
Step 1: calculate the k-nearest neighbors of each minority sample $x_i$, $i = 1, 2, \dots, n$ (Euclidean distance is adopted in this paper)
Step 2: randomly select a sample $\hat{x}_i$ from the k-nearest neighbors
Step 3: generate a random number ζ between 0 and 1 and synthesize a new sample
$x_{new} = x_i + \zeta \cdot (\hat{x}_i - x_i)$
Step 4: repeat Step 2 and Step 3 according to the oversampling rate N
Step 5: obtain the new training sample S
Step 6: classify the new training sample S with a classifier
Step 7: output the classification results
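As an illustration of Steps 1–4, the following is a minimal Python sketch of the SMOTE synthesis loop; the function name smote_oversample and its defaults are our own shorthand, not part of the original SMOTE reference implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_oversample(X_min, N, k=5, seed=0):
    """Minimal sketch of SMOTE synthesis (Steps 1-4 of Algorithm 1).

    X_min: (n, m) array of minority-class samples.
    N:     oversampling rate, i.e. synthetic samples per original sample.
    """
    rng = np.random.default_rng(seed)
    # k+1 neighbors because each point is its own nearest neighbor (Step 1).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    synthetic = []
    for i, x_i in enumerate(X_min):
        for _ in range(N):
            x_hat = X_min[rng.choice(idx[i][1:])]          # random neighbor (Step 2)
            zeta = rng.random()                            # zeta in (0, 1)   (Step 3)
            synthetic.append(x_i + zeta * (x_hat - x_i))   # linear interpolation
    return np.vstack(synthetic)
```

In practice, the imbalanced-learn package offers a tested implementation, imblearn.over_sampling.SMOTE, whose fit_resample method returns the augmented training set directly.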

Further, a new oversampling method, the borderline-SMOTE algorithm, was proposed in the literature [38] based on SMOTE. Its main idea is to generate new samples along the boundary of the minority class, as shown in Figure 3. Bunkhumpornpat et al. [15] proposed the safe-level-SMOTE algorithm, which oversamples minority samples at a safer level by adjusting a safety ratio.

3. Anomaly Detection Model Based on KNN-SMOTE-LSTM

The wireless sensor network (WSN) anomaly detection model based on kNN-SMOTE-LSTM is an LSTM detection network improved with SMOTE, in which a kNN discriminant classifier continuously screens the safely generated samples to improve the performance of the model. Figure 4 presents the process of the model.

Considering that the distribution of the wireless sensor data will change with time and new abnormal situation may appear at any time, we adopt the LSTM-based anomaly detection network model to effectively cope with this kind of time-domain sequence data. The unbalanced data distribution of wireless sensor data, which means that abnormal data are only a small portion of all daily monitoring data, leads to the application of the SMOTE algorithm to amplify the data to solve the problem of overfitting caused by unbalanced data. As the SMOTE algorithm would produce noise data, influencing the determination of classification boundary, we adopt the discriminant classifier based on kNN algorithm and the basic classifier based on LSTM to screen out the valid samples and remove the noise samples, which can effectively improve the performance and accuracy of classification.

Before constructing the kNN-SMOTE-LSTM model, the data must be processed according to their characteristics: irrelevant features are removed, characteristics whose positive and negative sample distributions are particularly close are eliminated, and only characteristics that are highly correlated with the label and differ substantially between positive and negative samples are retained. After that, new wireless sensor network data can be input to judge whether the situation is anomalous.

We set the parameters as follows: X is the real data set after data preprocessing, in which $X_{maj}$ is the normal (majority-class) sample set and $X_{min}$ is the abnormal (minority-class) sample set. The data generator is Z, and $X_{gen}$ is the data set generated by the SMOTE algorithm. The discriminant classifier is D, for which the kNN algorithm is adopted in this paper. The basic classifier is G, for which we adopt the LSTM anomaly detection network model. The steps of the kNN-SMOTE-LSTM anomaly detection network model are as follows:
(i) Step 1: the kNN algorithm is trained on the real data set X to construct the kNN discriminant classifier D.
(ii) Step 2: by training on the real data set X as in formula (2) and adjusting network parameters such as the number of layers and nodes, activation function, loss function, time step, learning rate, and dropout rate, the LSTM anomaly detection basic classifier G is constructed.
(iii) Step 3: calculate the k-nearest neighbors of each $x_i$ in the minority sample set (Euclidean distance is adopted in this paper) and randomly select a sample $\hat{x}_i$ from the k-nearest neighbors.
(iv) Step 4: generate a random number ζ between 0 and 1 and synthesize a new sample $x_{new} = x_i + \zeta \cdot (\hat{x}_i - x_i)$.
(v) Step 5: the discriminant classifier D and the basic classifier G determine whether the classification label of $x_{new}$ is consistent with that of $x_i$. If consistent, $x_{new}$ is a valid generated sample and becomes part of the generated data set $X_{gen}$; if not, it is discarded.
(vi) Step 6: this is the most critical step. According to the oversampling rate N, iterate t = 1, 2, …, M. The real data set X and the generated data set $X_{gen}$ constitute the training data set X + $X_{gen}$. By generating $X_{gen}$, the basic classifier G is continuously retrained; in turn, the basic classifier judges whether the data in the generated data set are true, that is, whether each sample is validly generated.
(vii) Step 7: after the iterations, the two classes of data ($X_{maj}$ and $X_{min}$ + $X_{gen}$) approach equilibrium, and the final classifier $G^{*}$ is obtained.
(viii) Step 8: test and evaluate the test data with $G^{*}$ and output the final prediction results.
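A compact sketch of Steps 1–7, reusing the smote_oversample helper from Section 2.3 and assuming a basic classifier G that exposes scikit-learn-style fit/predict methods (the actual model is the Keras LSTM described in Section 4); the iteration budget M and stopping rule are our reading of the steps above, not code from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_smote_lstm(X, y, G, M=10, N=1, k=5):
    """Sketch of the kNN-SMOTE-LSTM training loop (Steps 1-7)."""
    D = KNeighborsClassifier(n_neighbors=k).fit(X, y)       # Step 1: discriminant classifier
    G.fit(X, y)                                             # Step 2: basic classifier
    X_gen = np.empty((0, X.shape[1]))
    y_gen = np.empty(0)
    for t in range(M):                                      # Step 6: iterate
        X_min = np.vstack([X[y == 1], X_gen])               # current minority pool
        cand = smote_oversample(X_min, N, k)                # Steps 3-4: synthesize
        keep = (D.predict(cand) == 1) & (G.predict(cand) == 1)  # Step 5: screen
        X_gen = np.vstack([X_gen, cand[keep]])
        y_gen = np.concatenate([y_gen, np.ones(keep.sum())])
        # Retrain the basic classifier on the real plus generated data (X + X_gen).
        G.fit(np.vstack([X, X_gen]), np.concatenate([y, y_gen]))
        if len(y_gen) >= (y == 0).sum() - (y == 1).sum():   # Step 7: classes near balance
            break
    return G                                                # the final classifier G*
```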

In this paper, Step 1 is the experimental step of the discriminant classifier based on kNN, Step 2 is the experimental step of the basic classifier based on LSTM, and Steps 3 and 4 are the experimental steps of the data generator based on SMOTE. Steps 5–7 form the iteration cycle of the kNN-SMOTE-LSTM wireless sensor network anomaly detection model, and Step 8 produces the final classifier for testing and evaluation.

4. Experimental Analysis and Evaluation

In this section, the real data of a wireless sensor network is mainly used for modeling and analysis. Data sources, data attributes, data preprocessing, model training, and experimental results comparison together make up the analysis. Table 1 presents the experimental environment of this paper.

4.1. Data Sources and Attributes

The data set contains real-time wireless sensor data for a scenario that occurred in August 2019, with 369 recorded exceptions among 213,608 records. The data set is highly lopsided: abnormal data account for only 0.173% of all records. The project aims to improve the performance of the existing anomaly detection process, increase the accuracy of anomaly detection, better interpret the anomaly patterns, and prevent anomalies through techniques within the existing data-driven strategies.

Because of confidentiality, the original characteristics and further background information about the data were not available. After extraction by principal component analysis (PCA), the data were transformed into fields V1, V2, …, V28. "Time" and "Class" are the only fields not produced by the PCA transformation. The field "Time" holds the number of seconds between each status record and the initial status record in the data set. "Class" is the category label: 1 means abnormal and 0 means normal. The data sample is large in scale, computationally complex, and has unobservable features; better classification therefore requires feature selection through data preprocessing and the elimination of features with similar distributions.
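As a quick illustration, a pandas snippet such as the following reproduces the imbalance figure; the file name is a placeholder, since the real data set is confidential.

```python
import pandas as pd

df = pd.read_csv("sensor_data.csv")        # placeholder path for the WSN data set
counts = df["Class"].value_counts()        # 1 = abnormal, 0 = normal
print(counts)                              # expect 369 abnormal of 213,608 records
print(f"abnormal share: {counts[1] / len(df):.3%}")   # ~0.173%
```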

4.2. Data Preprocessing

Data quality depends on a number of factors, including integrity, accuracy, consistency, timeliness, interpretability, and credibility. Real-world data, however, are easily affected by noise, missing values, and inconsistency, and low-quality original data lead to low-quality data mining results. Therefore, the original data must be preprocessed.

Data preprocessing mainly includes data cleaning, data integration, data reduction, and data transformation. Effective preprocessing can significantly improve the quality of the data, thereby improving the effect of the model and reducing the time consumed in the actual modeling process. The raw data here are all processed structured data.

We analyze the distributions of V1, V2, …, V28 and apply factor analysis, discrepancy analysis, and comparison of means to the fields. We find that the fields V22, V23, and V25 overlap seriously between classes and show no significant difference in the independent-sample T test/F test, so these characteristics are eliminated. Table 2 shows the differences in the measurement data by F test.
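A minimal sketch of this screening, assuming the data frame df from the snippet above; the per-field independent-samples test and the 0.05 significance threshold are illustrative choices, not the paper's exact procedure.

```python
from scipy import stats

drop = []
for col in [f"V{i}" for i in range(1, 29)]:
    pos = df.loc[df["Class"] == 1, col]
    neg = df.loc[df["Class"] == 0, col]
    _, p = stats.ttest_ind(pos, neg, equal_var=False)  # independent-samples test
    if p > 0.05:                                       # distributions overlap heavily
        drop.append(col)                               # e.g. V22, V23, V25 in the paper

X = df.drop(columns=drop + ["Time", "Class"]).values   # retained feature matrix
y = df["Class"].values                                 # anomaly labels
```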

4.3. The Training Model

In this paper, we have trained such basic classification models as Gaussian Naive Bayes (Gaussian NB), logistic regression, k-nearest neighbor classifier (kNN), BP neural network, support vector machine (SVM), AdaBoost classifier, gradient boosted decision tree (GBDT), random forest (RF), and LSTM.

We evaluate the sample data set with 10-fold cross-validation, which partitions the data set T into 10 mutually exclusive subsets of similar size, each maintaining the consistency of the data distribution. The union of 9 subsets serves as the training data set in turn, and the remaining subset serves as the test data set. The final evaluation result is the average of the evaluation results of the 10 tests. Ten-fold cross-validation makes the test evaluation results more stable and accurate.
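In scikit-learn, such stratified 10-fold evaluation can be sketched as follows, assuming the preprocessed X and y from Section 4.2; the random forest is only a stand-in for any of the base classifiers.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Stratified folds preserve the class ratio in every subset, as described above.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(RandomForestClassifier(random_state=42), X, y,
                         cv=cv, scoring="f1")
print(scores.mean())  # the reported result is the average over the 10 test folds
```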

The evaluation indexes mainly cover accuracy, precision, recall, F-score, and AUC (area under the ROC curve) [39, 40]. Importantly, on a highly unbalanced data set, accuracy by default reports a deceptively high baseline. For binary classification, a more rigorous method such as the confusion matrix should be applied, which provides the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Table 3 shows how the experimental results are judged by the confusion matrix formed from the real category and the category predicted by the classifier.

From these counts, two other key metrics can be formulated: precision, the fraction of moments classified as abnormal that truly are abnormal, and recall, the fraction of abnormal moments that are correctly classified:

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}.$$

The area under the precision-recall (P-R) curve (AUPR) distinguishes classification algorithms more effectively than accuracy alone. The P-R curve takes precision as the vertical axis and recall as the horizontal axis. The ROC curve (receiver operating characteristic curve) takes the true positive rate (recall) as the vertical axis and the false positive rate as the horizontal axis. Both curves assess the classification performance and generalization ability of a classifier. AUC refers to the area under the ROC curve.
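These quantities can all be computed with scikit-learn's metrics module; the following sketch uses synthetic placeholder predictions, since the paper's own model outputs are not reproduced here.

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             f1_score, roc_auc_score, precision_recall_curve)

# Placeholder labels and scores for illustration; in the paper these come
# from a trained classifier evaluated on the held-out fold.
rng = np.random.default_rng(0)
y_test = rng.integers(0, 2, 1000)
y_score = np.clip(y_test * 0.7 + rng.random(1000) * 0.5, 0, 1)  # noisy scores
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"precision = {precision_score(y_test, y_pred):.3f}")  # TP / (TP + FP)
print(f"recall    = {recall_score(y_test, y_pred):.3f}")     # TP / (TP + FN)
print(f"F-score   = {f1_score(y_test, y_pred):.3f}")
print(f"AUC       = {roc_auc_score(y_test, y_score):.3f}")   # area under ROC
prec, rec, _ = precision_recall_curve(y_test, y_score)       # P-R curve points
```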

Scikit-learn (sklearn) is a Python machine learning library that implements common tasks such as data preprocessing, dimensionality reduction, classification, regression, and unsupervised learning. In this paper, sklearn is used to train Gaussian naive Bayes (GaussianNB), logistic regression, the k-nearest neighbor classifier (kNN), the BP neural network, the support vector machine (SVM), the AdaBoost classifier, the gradient boosted decision tree (GBDT), and the random forest (RF) [41]. Keras, a Python deep learning library, is used to train the LSTM model. Table 4 presents the parameters of each base classification model.
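For reference, a minimal Keras sketch of an LSTM base classifier of the kind trained here, assuming the X and y arrays from the preprocessing sketch; the hidden size, dropout rate, time step of 1, and training schedule are illustrative assumptions (Table 4 holds the actual parameters).

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

n_features = X.shape[1]
# Keras LSTMs expect (samples, time steps, features); one step per record here.
X_seq = X.reshape(-1, 1, n_features)

model = Sequential([
    LSTM(64, input_shape=(1, n_features)),   # illustrative hidden size
    Dropout(0.2),                            # illustrative dropout rate
    Dense(1, activation="sigmoid"),          # binary anomaly label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y, epochs=10, batch_size=256, verbose=0)
```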

The classification report provided by scikit-learn v0.18.1 also evaluates algorithm performance, providing precision, recall, and F-score values for both classes. This information is valuable for determining the extent to which the algorithm produces false positives and false negatives.

Considering the randomness of the algorithms applied in this paper, we adopt the method of "fixed pseudorandom parameters + cross-validation," which makes the detection results more stable and reliable even though randomness still exists. Table 5 shows the detection results obtained as the models were trained by each base classifier. The F-score of the Gaussian naive Bayes model is the lowest at only 0.18, because the Gaussian naive Bayes classifier assumes conditional independence. As simple classification algorithms, logistic regression, the AdaBoost classifier, and the k-nearest neighbor classifier have similar evaluation results, but they are slightly inferior to effective classification algorithms such as the BP neural network, GBDT, and SVM. The random forest (RF) model and the LSTM model show better comprehensive classification performance than the above models.

A further analysis of Table 5 shows that the random forest model and the LSTM model share the highest accuracy of 99.95%, but accuracy is not conclusive because the test data are unbalanced. Excluding the Gaussian naive Bayes model (whose conditional-independence assumption, when violated, lets characteristics interact and significantly reduces precision), the random forest model has the highest F-score and the LSTM model has the highest AUC. The LSTM also outperforms the commonly used SVM classification algorithm; the antinoise capability of the LSTM model may exceed that of the SVM model, which constructs an optimal hyperplane to solve for a global optimum. The LSTM model is thus better suited to this kind of unbalanced sequence classification task and can effectively handle time-domain sequence data.

In summary, the LSTM model works as the base classifier in the anomaly detection model of wireless sensor network.

To further verify the effectiveness of the LSTM model as the basic classifier, the P-R curve and ROC curve are drawn as references; Figure 5 pictures the detection results. Both curves estimate the classification performance and generalization capability of machine learning algorithms on a given data set. According to Figure 5, all classification models performed well except Gaussian naive Bayes. Among the P-R curves, the random forest model and the k-nearest neighbor classifier performed best, while the LSTM model had the largest area under the ROC curve, indicating that the LSTM model could be a better base classifier.

Sampling methods compensate for the imbalance of a data set by bringing the classes to comparable sizes. Undersampling and oversampling are two roughly equivalent and opposite techniques that use a bias to achieve this purpose. More complex algorithms such as the synthetic minority oversampling technique (SMOTE) and the adaptive synthetic sampling approach (ADASYN) create new data points based on known samples and their features rather than simply replicating the minority class [42].
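Both resamplers are available in the imbalanced-learn package; a short sketch, assuming the preprocessed X and y from above:

```python
from imblearn.over_sampling import SMOTE, ADASYN

for sampler in (SMOTE(random_state=0), ADASYN(random_state=0)):
    X_res, y_res = sampler.fit_resample(X, y)   # synthesize minority samples
    print(type(sampler).__name__, (y_res == 1).sum(), "abnormal after resampling")
```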

In summary, we adopt the LSTM model as the basic classifier for time-domain sequence data such as wireless sensor data; for the unbalanced data distribution, we take the SMOTE algorithm as the data generator; and, to counter the noise that SMOTE adds near the classification boundary, the discriminant classifier based on kNN and the basic classifier based on LSTM screen out the valid samples. The kNN-SMOTE-LSTM model is then constructed to carry out experiments on anomaly detection in the wireless sensor network.

The parameters of kNN and LSTM remain the same as before, and Table 6 lists the parameters of the SMOTE algorithm.

4.4. Model Verification and Experimental Results Analysis

After data preprocessing and model training of basic classifier, data generator, and discriminant classifier, the kNN-SMOTE-LSTM anomaly detection model has been constructed. Model verification is mainly to test the stability and generalization ability of the kNN-SMOTE-LSTM model.

The kNN-SMOTE-LSTM model updates the sampling ratio in each loop iteration; the sampling ratio is the proportion of minority samples to majority samples. Figure 6 shows the P-R and ROC curves of kNN-SMOTE-LSTM under different sampling ratios; the curves for the different ratios are very close. Combined with the AUC values shown in Figure 7, the kNN-SMOTE-LSTM model performs best at sampling ratios of 0.7 and 1.
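Such a ratio sweep can be sketched with imbalanced-learn, whose sampling_strategy parameter takes the desired minority-to-majority proportion as a float; the training split names are placeholders.

```python
from imblearn.over_sampling import SMOTE

for ratio in (0.1, 0.3, 0.5, 0.7, 1.0):
    X_res, y_res = SMOTE(sampling_strategy=ratio,
                         random_state=0).fit_resample(X_train, y_train)
    # Retrain the kNN-SMOTE-LSTM pipeline on (X_res, y_res) and record the AUC,
    # as in Figure 7; the paper finds ratios 0.7 and 1.0 perform best.
```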

To further verify the effectiveness of the kNN-SMOTE-LSTM model, we compare its experimental results with those of unbalanced oversampling algorithms, namely ADASYN, SMOTE, borderline-SMOTE, SVM SMOTE, SMOTEENN, and SMOTETomek, each combined with the LSTM model [43]. Table 7 presents the detection results; the kNN-SMOTE-LSTM model shows excellent comprehensive performance.

On the real data, most of the oversampling algorithms give poor classification performance when used without the integration of the constructed model or with only two of its components: their F-scores fall below that of the plain LSTM (0.8483), with the SMOTE + LSTM model reaching an F-score of only 0.2276, because a sampling algorithm alone adds much unnecessary noise and biases the final test toward the training samples. The kNN-SMOTE-LSTM model proposed in this paper integrates the basic classifier, the data generator, and the discriminant classifier through the structural fusion of the model, achieving an F-score of 0.9167 and an AUC of 0.9296, which demonstrate comprehensive classification performance and generalization ability. It improves classification performance through a rigorously designed structured network model and the organic fusion of its components, continuously iterating the sampling ratio and screening the valid samples.

To verify this, the confusion matrices [44] of each model are shown in Figures 8–15: LSTM in Figure 8, ADASYN + LSTM in Figure 9, SMOTE + LSTM in Figure 10, borderline-SMOTE + LSTM in Figure 11, SVM SMOTE + LSTM in Figure 12, SMOTEENN + LSTM in Figure 13, SMOTETomek + LSTM in Figure 14, and kNN-SMOTE-LSTM in Figure 15.

The experimental results show that a directly applied sampling algorithm can improve the accuracy on minority-class samples, but it causes serious misclassification of majority-class samples and raises the FP rate. The FP rate of the base LSTM classifier is 0.00021, corresponding to 18 samples; the FP rate of the ADASYN + LSTM model is 0.043, corresponding to 3,706 samples; and the FP rate of the SMOTE + LSTM model is 0.011. As Figure 15 shows, the kNN-SMOTE-LSTM model improves this situation: it not only enhances the accuracy on minority samples but also solves the misclassification of majority samples, whose misclassification rate falls to 0.000082, showing better classification performance.

To sum up, the kNN-SMOTE-LSTM anomaly detection network model proposed in this paper is an effective method for the anomaly problem of wireless sensor networks. Through circulated organic structural fusion, it can effectively handle unbalanced data such as wireless sensor network anomalies. The experimental results also show that a classifier based on a deep learning model is more applicable to complex nonlinear problems such as wireless sensor network anomaly detection.

5. Conclusion and Future Works

In this paper, we focus on anomaly detection, a challenging machine learning issue. We present a WSN anomaly detection model based on kNN-SMOTE-LSTM, composed of the basic classifier, the discriminant classifier, and the data generator, drawing on research in unbalanced classification, data mining, and deep learning. Experiments demonstrate that, compared with other methods of anomaly detection in wireless sensor networks, this model overcomes the misclassification of unbalanced data that afflicts traditional methods. The accuracy of the model reaches 99.97% and the AUC value reaches 0.9296, which improves the efficiency of identifying anomalies in wireless sensor networks and provides a significant warning effect.

Considering that the distribution of wireless sensor data changes with time and that new abnormal situations may appear at any moment, the paper adopts the LSTM-based anomaly detection network model to classify this kind of time-domain sequence data. As the distribution of wireless sensor data is unbalanced, abnormal data being only a small portion of all daily monitoring data, it applies the SMOTE algorithm to amplify the data and counter the overfitting caused by the imbalance. As the SMOTE algorithm produces noise data that influence the determination of the classification boundary, we adopt the discriminant classifier based on kNN and the basic classifier based on LSTM to screen out the valid samples and remove the noise samples, which effectively improves the performance and accuracy of classification.

In the empirical study, we analyse the wireless sensor network (WSN) anomaly detection model based on kNN-SMOTE-LSTM. First, features with closely overlapping class distributions are eliminated by data preprocessing. Then, the feasibility, advantages, and disadvantages of each basic classification algorithm are compared by training the basic classifiers. Training the kNN-SMOTE-LSTM anomaly detection network model and comparing it with the unbalanced oversampling algorithms ADASYN, SMOTE, borderline-SMOTE, SVM SMOTE, SMOTEENN, and SMOTETomek combined with the LSTM model demonstrates that the kNN-SMOTE-LSTM model, built on the basic classifier LSTM, the data generator, and the discriminant classifier, overcomes the misclassification defects of traditional methods through circulated organic structural fusion. The project aims to improve the existing anomaly detection process, increase the accuracy of anomaly detection, better interpret anomaly patterns, and prevent anomalies through techniques within data-driven strategies.

Data Availability

The Oracle data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This research was funded by the Natural Science Foundation of Zhejiang Province (Grant no. LQ20G010002) and the National Science Foundation of China (71571162). The authors also gratefully acknowledge the helpful comments and suggestions of Fang Yi, Geyao Li, and Yihao Jiang, which have improved the presentation.