1 Introduction

With the rapid development of Internet of Things (IoT) technology, more and more researchers are entering this field. As one of the supporting technologies of the IoT, wireless sensor networks have attracted considerable attention. An important application of wireless sensor networks is monitoring environmental quantities such as temperature, humidity, and illumination. A wireless sensor network is usually composed of a large number of sensor nodes; each node collects a large amount of data, which then reaches the central node through multi-hop routing. This process consumes substantial storage space and energy. Because the computing, power, and storage capabilities of sensor nodes are limited, efficient models for data acquisition and transmission are needed to maximize sensor lifetime and reduce the cost of information acquisition. The main goal of data collection is therefore to collect the most accurate data at the least cost. Traditional methods such as distributed source coding [1], cooperative wavelet transform [2], and data clustering [3] can be used to reduce data traffic. For example, to improve the efficiency of wireless data transmission and reduce energy consumption, Shih et al. studied the selection of the modulation method for wireless signal transmission and proposed a low-power physical-layer coding mode [4]. Singh et al. proposed a low-energy signal sampling method for wireless networks to increase the lifetime of sampling nodes [5]. Other researchers have established multi-step prediction models for sensor data in wireless sensor networks to reduce network traffic and correspondingly increase network lifetime [6,7,8]. Exploiting the temporal or spatial structure of wireless sensor network data, the authors of [9] use the Fourier transform, discrete cosine transform, and wavelet transform to establish sparse bases, generate sparse representations of signals, and then sample the sparse data, which greatly reduces the time and space cost of sampling. Sparse reconstruction algorithms can achieve accurate data reconstruction with lower energy consumption [10]. These methods exploit the spatial correlation of the detected data to compress and encode it, but they cannot effectively handle abnormal event data, and their computational complexity is high.

The theory of compressed sensing proposed in recent years provides a new way of data acquisition for wireless sensor networks [11,12,13]. According to this theory, a sparse signal can be accurately reconstructed from fewer samples, and sampling can be performed by linearly projecting the detected data. This allows sensor nodes to acquire data in compressed form without additional computational overhead. Although wireless sensor networks are easy to deploy, highly adaptable, and efficient in transmission, they are limited in several respects, such as energy supply, sensor life cycle, delay, bandwidth, signal distortion, and transmission cost. Nodes in wireless sensor networks also require independent energy supplies, so energy consumption is a key factor determining the life cycle of sensor nodes. Combining compressed sensing theory with wireless sensor networks provides an effective way to address these problems [14] and can optimize node energy consumption [15]. Compressed sensing enables sparse signals in wireless sensor networks to be accurately reconstructed from fewer samples [16]. In essence, compressed sensing provides a mathematically constrained optimization framework for recovering sparse information.

When combining compressed sensing theory with wireless sensor networks, the influence of noise on the signal in the wireless sensor network environment must be considered [17]. Compressed sampling projects the signal onto a sparse basis to generate a sparse representation, senses it with a measurement matrix, and thereby obtains the sample values. Unlike sampling under the Shannon-Nyquist theorem, this process does not collect complete signal information; it is a form of undersampling [18]. The "incompleteness" of signal acquisition makes the sample values more sensitive to noise than "complete" sampling. Reducing the impact of noise on this "incomplete" sampling is the key to making compressed sensing theory effective in wireless sensor networks.

To evaluate the proposed approach, we simulate a network of 100 sensor nodes randomly distributed in a 100 × 100 area with the central node at its center, and we further build a real wireless sensor network of 30 temperature sensor nodes; the full experimental setup is described in Section 6.

The main contributions of this paper are as follows:

  1.

    The multi-path channel transmission model and the compressed sampling model of the wireless sensor network are presented, together with the mathematical representation of the network's sampling matrix, and the matrix is shown to satisfy the Restricted Isometry Property (RIP).

  2.

    In view of the fact that wireless sensor networks are susceptible to noise interference, a noise reduction algorithm for compressed sensing recovery is proposed. The algorithm uses approximate gradient iteration to solve the convex optimization problem of signal recovery step by step, approaching the optimal solution so that the signal can be reconstructed accurately. Experimental results show that the algorithm has good robustness and reconstruction accuracy in noisy environments.

  3.

    Through experiments, we analyze the performance of the proposed sensor signal reconstruction algorithm against other algorithms with respect to the number of iterations and noise interference. Furthermore, a temperature-sensing wireless sensor network testbed is constructed, and the test results show that our method achieves higher reconstruction accuracy.

The rest of this paper is organized as follows. In Section 2, we briefly discuss the basic theory of compressed sensing and the restricted isometry property. In Section 3, we introduce the working structure of wireless sensor networks, give the multi-path channel transmission model of the sensor network, and show how the compressed sampling matrix of a wireless sensor network is constructed and why it satisfies the RIP. In Section 4, we discuss the reconstruction of the sensed signal and propose an approximate gradient descent algorithm for signal reconstruction in noisy environments. In Section 5, the overall process of signal acquisition and reconstruction in a wireless sensor network based on compressed sensing is described in detail. In Section 6, the experimental environment used for performance analysis is introduced, and the experimental results are discussed. Section 7 concludes the paper and outlines further research.

2 Basic theory of compressed sensing

If a discrete signal has only k non-zero elements, the signal is said to be k-sparse. For a non-sparse discrete signal U, a sparse or nearly sparse representation can be obtained under an appropriate sparse basis Ψ ∈ R^{N × L}:

$$ \mathrm{U}=\Psi x $$
(1)

U is an N-dimensional signal, Ψ ∈ R^{N × L} is the sparse basis matrix of U, and x ∈ R^{L × 1} is the sparse or nearly sparse representation of U. Under the theory of compressed sensing, sampling a discrete signal can be described as follows: projecting a signal u of length N onto the sensing matrix Φ with rows {Φ_i, i = 1, 2, …, M} yields its compressed samples \( {y}_i={\Phi}_i^Tu \), i = 1, 2, …, M, where M is the number of samples taken of the signal. To improve sampling efficiency, the number of samples should be as small as possible, usually M < N. The length of y is therefore smaller than the length of u, which is why this is called compressed sensing. Unlike the traditional pipeline of data collection, compression, transmission, and decompression, compressed sensing does not need to acquire the complete signal or a high-resolution image; it only collects the information that best represents the data characteristics, which greatly saves storage space and reduces transmission cost. The biggest difference between compressed sensing and traditional data sampling is that compressed sensing compresses during data collection and reconstructs later at the point of use, whereas the traditional method first collects the complete data and then compresses it for storage and transmission [19]. Compressed sensing is therefore an under-acquisition method that can acquire information at a rate lower than the Nyquist rate. Its mathematical model is expressed as follows:

For a signal u ∈ R^{N × 1}, find a linear measurement matrix Φ ∈ R^{M × N} (M < N) and perform the projection operation:

$$ y=\Phi u $$
(2)

where \( \Phi =\left[\begin{array}{l}{\Phi}_1^T\\ {}{\Phi}_2^T\\ {}\dots \\ {}{\Phi}_M^T\end{array}\right],\kern0.5em u=\left[\begin{array}{l}{u}_1\\ {}{u}_2\\ {}\dots \\ {}{u}_N\end{array}\right],\kern1em \mathrm{and}\kern1em y=\left[\begin{array}{l}{y}_1\\ {}{y}_2\\ {}\dots \\ {}{y}_M\end{array}\right] \), and y is the collected signal. The key problem now is to recover u from the signal y. Because Φ is not a square matrix (M < N), this means solving an underdetermined system, which admits many solutions. The theory of compressed sensing shows that, under certain conditions, u has a unique solution, namely the reconstruction from the compressed samples y by the recovery algorithm. To illustrate the projection of the signal onto the measurement matrix, we give a numerical example: Φ is the measurement matrix (Table 1), u is the original signal (Table 2), and y is the signal sampled by formula (2) (Table 3).

Table 1 Φ is the measurement matrix
Table 2 u is the original signal
Table 3 y is the signal sampled by formula (2)
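To make the projection concrete, the following sketch reproduces the flavor of Tables 1-3 with small random data; the sizes, the Gaussian choice of Φ, and the sparsity of the test signal are illustrative assumptions, not the exact values used in the tables.

```python
# Minimal sketch of the sampling step of Eq. (2): an M x N measurement
# matrix Phi projects the length-N signal u onto M < N compressed samples.
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 12                          # signal length and sample count, M < N

u = np.zeros(N)                        # a sparse test signal, in the spirit of Table 5
u[rng.choice(N, 7, replace=False)] = rng.standard_normal(7)

Phi = rng.standard_normal((M, N))      # measurement matrix, Phi in R^{M x N}
y = Phi @ u                            # compressed samples: y_i = Phi_i^T u

print(y.shape)                         # (12,): shorter than u, hence "compressed"
```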

Equation (2) shows the sampling mode of the signal. The theory of compressed sensing shows that solving (2) requires x to be sparse, in which case it can be solved by L0-norm minimization. In the real environment, most signals are not sparse. Existing theory shows that when a signal is projected onto an orthogonal transform basis, the absolute values of most transform coefficients are very small, so the resulting transform vector is sparse or nearly sparse and can be regarded as a concise expression of the original signal. This is a precondition of compressed sensing: the signal must be sparse under some transform. A sparse transform basis Ψ can therefore be established to obtain a sparse representation of a non-sparse signal according to formula (1) (Table 5). Combining formulas (1) and (2), the compressed sampling of signal U can be described as follows: the signal U is compressively sampled through formula (2) to obtain y, the sparse solution x is then obtained according to formula (3), and finally the signal U is reconstructed by the sparse inverse transform of x. The numerical calculation shows that the sparse signal x recovered from y is consistent with the projection of u on the sparse basis Ψ (Table 4 shows Ψ; Table 5 shows x). That is, Table 5 is the sparse representation recovered from Table 3, which further shows that the signal can be restored from low-rate sampling through a sparse transform.

$$ y=\Phi \Psi x=\Theta x $$
(3)
Table 4 Ψ is the sparse basis
Table 5 x is the projection of the original signal u on the sparse basis, and the sparsity of u is 7

where Θ = ΦΨ. This is still an underdetermined equation, but under certain constraints, x can be obtained from y. Of course, if the signal itself is sparse, no sparse transform is needed, and Θ = Φ. Besides the requirement that the signal have a sparse representation, the other important constraint in compressed sensing is that Θ satisfies the Restricted Isometry Property (RIP).

Definition 1 [19] For a matrix Θ, the restricted isometry constant δs is the smallest value for which the following inequality holds:

$$ \left(1-{\delta}_s\right){\left\Vert {x}_s\right\Vert}_2^2\le {\left\Vert \Theta {x}_s\right\Vert}_2^2\le \left(1+{\delta}_s\right){\left\Vert {x}_s\right\Vert}_2^2 $$
(4)

Here, s = 1, 2, … is an arbitrary integer, and xs is an arbitrary s-sparse vector. If a matrix Θ satisfies (4), then Θ has the restricted isometry property. If δs is not too close to 1, we say, somewhat loosely, that Θ satisfies the s-order restricted isometry property. When this property holds, the matrix approximately preserves the Euclidean distance of s-sparse signals, which in turn implies that s-sparse vectors cannot lie in the null space of Θ.
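Definition 1 can be probed numerically: for random s-sparse vectors, the ratio ‖Θx_s‖₂²/‖x_s‖₂² should stay within [1 − δ_s, 1 + δ_s]. The sketch below estimates δ_s by Monte-Carlo sampling; since the true δ_s is a worst-case constant over all s-sparse vectors, random trials only give a lower estimate, and the matrix sizes are illustrative assumptions.

```python
# Rough Monte-Carlo look at the restricted isometry constant of Eq. (4).
import numpy as np

rng = np.random.default_rng(1)
M, N, s = 64, 256, 8
Theta = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. entries, variance 1/M

ratios = []
for _ in range(2000):
    x = np.zeros(N)
    x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
    ratios.append(np.sum((Theta @ x) ** 2) / np.sum(x ** 2))

# delta_s is the smallest constant satisfying Eq. (4); estimate it from the trials
delta_est = max(1 - min(ratios), max(ratios) - 1)
print(f"empirical lower estimate of delta_{s}: {delta_est:.3f}")
```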

In practice, we care not only about the recovery of sparse signals, but also about nearly sparse signals (signal vectors that have some small values in addition to a few large elements). For a nearly sparse signal vector \( \hat{x} \) with k large element values, we denote by \( {\hat{x}}_k \) the vector that keeps the k largest elements and sets the remaining elements to zero.

Theorem 1 Assume the 2k-order RIP constant δ2k of the matrix Θ satisfies \( {\delta}_{2k}<\sqrt{2}-1 \). Then for measurements y = Θ\( \hat{x} \), the solution x* of the L1-minimization problem satisfies the following bound:

$$ {\left\Vert {x}^{\ast }-\hat{x}\right\Vert}_2\le {C}_0{k}^{-1/2}{\left\Vert \hat{x}-{\hat{x}}_k\right\Vert}_1 $$
(5)

C0 is a constant. In fact, if \( \hat{x} \) is exactly k-sparse, then \( \hat{x} \) can be recovered from y exactly, and a nearly sparse signal can be recovered up to the error bound of Eq. (5).

3 System model

3.1 Working structure of wireless sensor network

A wireless sensor network is composed of many autonomous sensor nodes that can detect the physical state of the surrounding environment. Each physical node consists of four parts: a sensor unit, a processing unit, a communication unit, and an energy supply unit. Wireless sensor nodes usually collect environmental data such as temperature, pressure, flow rate, humidity, and location, and then send these data wirelessly to the central node (sink), which forwards them over other transmission media. Every node other than the sink collects information within its monitoring range and sends it toward the sink, so the large volume of data converging at the sink may cause transmission congestion. Introducing compressed sensing into the wireless sensor network, with compressed sampling performed during data acquisition, can greatly reduce the amount of data transmitted and the energy consumed.

The wireless sensor system based on compressed sensing works as follows: each target transmits a signal periodically with period T; the targets are independent of each other and require no synchronization. Each sensor collects the signal with the same period T. At the end of each period, every sensor sends its result to the central node, which recovers the data using the sensing matrix, forwards it, and finally the data are analyzed at the processing end. In Fig. 1, the solid dots represent sensor nodes and the square represents the central node.

Fig. 1 Communication structure of wireless sensor network

Suppose N sensors are randomly distributed in the detection area and can detect the event signals generated there. K denotes the number of transmissions a sensor node makes during the period T. In a wireless sensor network, transmitting a large amount of data over a long interval consumes more energy, while frequent transmissions over short intervals drain node energy too quickly, so the choice of K plays an important role in balancing node energy consumption. Each node periodically detects the event signals in the region and obtains a vector sequence x, which is usually non-sparse, so a sparse transform is needed. Choosing an appropriate sparse transform basis can improve the robustness of the transmitted signal and the accuracy of the reconstructed signal.

3.2 Data sampling in wireless sensor networks with compressed sensing

Generally, a wireless sensor network consists of a large number of sensor nodes that have acquisition, processing, communication, and control capabilities and can monitor the real environment. Consider a wireless sensor network with n nodes, where the data collected by node i in one cycle is x_i, i = 1, 2, …, n. Here, x_i is a scalar, so in one period the data of the whole wireless sensor network constitute a vector, expressed as:

$$ \mathrm{X}={\left[{x}_1,{x}_2,\dots, {x}_n\right]}^T $$
(6)

In general, obtaining complete information from a wireless sensor network requires all n samples of the signal X, whereas compressed sensing can recover the complete wireless sensor signal by acquiring only the transform coefficients β of the signal, where β contains the non-zero coefficients and ||β||_0 << N.

In wireless sensor networks, the data vector X is usually large and may comprise data from hundreds of thousands of wireless sensor nodes. Compressed sensing can reduce the amount of information collected. For a signal X, if there is a sparse basis Ψ under which X has a p-sparse representation, the sparse basis is expressed as:

$$ \Psi ={\left[{\Psi}_1,{\Psi}_2,\dots {\Psi}_p\right]}^T $$
(7)

Therefore, the data sampling of the wireless sensor network can be expressed as:

$$ X=\sum \limits_{i=1}^N{S}_i{\psi}_i\kern0.5em \mathrm{or}\kern0.5em X=\Psi S $$
(8)

where S is a sparse representation of X. The vector data X generated by the N wireless sensor nodes in one cycle can thus be represented by a vector S having p non-zero coefficients (p << N). Conventional compression requires determining in advance the positions of all non-zero coefficients of the length-N signal X. Compressive sensing does not need to determine the non-zero coefficients in advance and can directly compress and sample the signal. By compressed sampling, the wireless sensor network only needs to obtain vector data of length M (p < M << N) to fully express the information sampled by the whole network in one period and to reconstruct the original signal. The amount of data processed by the sensor network is therefore reduced from N to M, saving processing space and time. Compressed sensing uses the sampling matrix Φ to sample the data on the sensing nodes directly. Considering the sparse representation X = ΨS, the signal Y obtained by compressed sampling is expressed as:

$$ Y=\Phi X=\Phi \Psi S $$
(9)

where Φ = {ϕ_{j,i}} is the sampling matrix, also called the sensing matrix; its elements are independent and identically distributed with variance 1/M. The size of Y obtained by compressed sampling is therefore much smaller than the original signal, making it easier to store, transmit, and process. Formula (9) can be expanded as:

$$ \left[\begin{array}{c}{y}_1\\ {}{y}_2\\ {}\dots \\ {}{y}_M\end{array}\right]=\left[\begin{array}{cccc}{\varphi}_{1,1}& {\varphi}_{1,2}& \dots & {\varphi}_{1,N}\\ {}{\varphi}_{2,1}& {\varphi}_{2,2}& \dots & {\varphi}_{2,N}\\ {}\dots & \dots & \dots & \dots \\ {}{\varphi}_{M,1}& {\varphi}_{M,2}& \dots & {\varphi}_{M,N}\end{array}\right]\left[\begin{array}{c}{x}_1\\ {}{x}_2\\ {}\dots \\ {}{x}_N\end{array}\right] $$
(10)
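The sketch below assembles the pieces of Eqs. (8)-(10): an orthonormal DCT sparse basis, a p-sparse coefficient vector, and an i.i.d. Gaussian sampling matrix with entry variance 1/M. The sizes and the DCT choice are illustrative assumptions.

```python
# Sketch of one period of compressed sampling in the network, Eq. (9): Y = Phi Psi S.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(2)
N, M, p = 128, 40, 5

Psi = dct(np.eye(N), norm="ortho")       # orthonormal sparse basis (DCT)
S = np.zeros(N)                          # p-sparse coefficient vector
S[rng.choice(N, p, replace=False)] = rng.standard_normal(p)
X = Psi @ S                              # data of the N nodes in one period, Eq. (8)

Phi = rng.standard_normal((M, N)) * np.sqrt(1.0 / M)  # i.i.d. entries, variance 1/M
Y = Phi @ X                              # M compressed samples, M << N
```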

In order to achieve perfect recovery after compressed sampling, the number of samples M must satisfy:

$$ M\ge c\cdot p\log \left(N/p\right) $$
(11)

We performed a numerical analysis of the sampling length M under different signal lengths N and sparsity levels p. As shown in Table 6, the lower the sparsity (i.e., the fewer non-zero elements the signal has), the smaller the required sampling length M; and for the same sparsity, the longer the signal, the longer the required sampling length.

Table 6 The influence of sparsity and signal length on sampling length
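The bound of Eq. (11) can be tabulated directly, in the spirit of Table 6. The constant c is not fixed in the text; the value c = 4 below follows the m ≥ 4r rule of thumb quoted in Section 6 and is an assumption.

```python
# Minimal sampling length suggested by Eq. (11): M >= c * p * log(N / p).
import math

def min_samples(N: int, p: int, c: float = 4.0) -> int:
    return math.ceil(c * p * math.log(N / p))

for N in (256, 1024):
    for p in (5, 10, 20):
        print(f"N={N:5d} p={p:3d} -> M >= {min_samples(N, p)}")
```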

Here c is a constant [20]. To ensure complete recovery of the sensing signal from the under-sampled information Y, we further impose four limiting conditions [21]. First, in a wireless sensor network with N nodes, to avoid congestion, the common rate R of the sensor nodes is set as:

$$ R\ge \sqrt{\frac{\log N}{\pi N}} $$
(12)

Second, when the central node receives the signal, the arrival rate ξ must satisfy:

$$ \xi \ge \frac{4 WN}{\sigma M\log N} $$
(13)

where W is the bandwidth of the transmitted signal, and σ > 0 is a small constant. Third, to reduce channel contention from sensor nodes to the central node, we set the service rate μ as:

$$ \mu =\frac{1+ W\lambda}{W} $$
(14)

Finally, for the N-node wireless sensor network, when node n_i sends information to node n_j, to ensure transmission efficiency, the distance between n_i and n_j is generally required not to exceed the common rate R:

$$ \left\Vert {n}_i-{n}_j\right\Vert \le R $$
(15)
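A small sketch that evaluates the four operating conditions (12)-(15) for a candidate configuration follows. All numeric values (W, σ, λ, node positions) are illustrative assumptions, and the rate formulas implicitly place the nodes in a unit square.

```python
# Checking the operating constraints of Eqs. (12)-(15).
import math

N = 100                  # number of sensor nodes
M = 40                   # compressed sample length
W = 2.0e6                # transmission bandwidth, assumed
sigma, lam = 0.1, 1e-7   # small constant and rate parameter, assumed

R = math.sqrt(math.log(N) / (math.pi * N))      # Eq. (12): common rate
xi = 4 * W * N / (sigma * M * math.log(N))      # Eq. (13): arrival-rate bound
mu = (1 + W * lam) / W                          # Eq. (14): service rate

# Eq. (15): nodes n_i and n_j may link only if their distance is <= R
def can_link(pos_i, pos_j):
    return math.dist(pos_i, pos_j) <= R

print(R, xi, mu, can_link((0.0, 0.0), (0.05, 0.05)))
```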

3.3 Multipath channel transmission model

In a wireless sensor network, a source may send information to multiple sensor nodes, and a sensor node may receive signals from multiple sources. The signal transmission of the whole network can therefore be modeled as the sensors S1, S2, …, Sn receiving from the sources R1, …, Rm with energies r1, …, rm, where c_ij denotes the unit energy consumed when Sj receives from Ri. The multipath channel transmission model must therefore find an optimal transmission scheme that minimizes energy consumption.

The mathematical description of this problem is as follows: let the amount transmitted from Ri to Sj be x_ij, so the total consumption is:

$$ S=\sum \limits_{i=1}^m\sum \limits_{j=1}^n{c}_{ij}{x}_{ij} $$
(16)

where x_ij satisfies

$$ \left\{\begin{array}{l}\sum \limits_{j=1}^n{x}_{ij}={r}_i\kern2em i=1,2,\dots, m\\ {}\sum \limits_{i=1}^m{x}_{ij}={s}_j\kern2em j=1,2,\dots, n\\ {}{x}_{ij}\ge 0\end{array}\right. $$
(17)

and

$$ \sum \limits_{i=1}^m{r}_i=\sum \limits_{j=1}^n{s}_j $$
(18)

Therefore, in the sensor network determined by R and S, finding the best transmission channel is transformed into finding a set of values x_ij satisfying formula (17) that makes formula (16) attain its minimum.
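Formulas (16)-(18) define a classical balanced transportation linear program, so the optimal x_ij can be found with an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog; the cost matrix and the source/sensor totals are illustrative assumptions satisfying the balance condition (18).

```python
# Minimal-energy transmission as a transportation LP, Eqs. (16)-(17).
import numpy as np
from scipy.optimize import linprog

m, n = 3, 4                               # m sources R_i, n sensors S_j
c = np.array([[4., 6., 3., 5.],
              [2., 8., 7., 4.],
              [5., 3., 6., 2.]])          # c_ij: unit energy cost from R_i to S_j
r = np.array([30., 50., 20.])             # row sums (source totals)
s = np.array([25., 25., 30., 20.])        # column sums; sum(r) == sum(s), Eq. (18)

A_eq = np.zeros((m + n, m * n))           # equality constraints of Eq. (17)
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0      # sum_j x_ij = r_i
for j in range(n):
    A_eq[m + j, j::n] = 1.0               # sum_i x_ij = s_j
b_eq = np.concatenate([r, s])

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.fun)                            # minimal total consumption, Eq. (16)
print(res.x.reshape(m, n))                # optimal flows x_ij
```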

Definition 2 Suppose R = [r1 r2 … rm]^T and S = [s1 s2 … sn]^T are two positive vectors satisfying

$$ \sum \limits_{i=1}^m{r}_i=\sum \limits_{j=1}^n{s}_j>0. $$
(19)

Define ℋ(R, S) = {A ∈ R^{m × n} | A ≥ 0, A has R as its row sum vector and S^T as its column sum vector}.

That is, for given positive vectors R and S, ℋ(R, S) is the set of all m × n nonnegative matrices with R as row sum vector and S^T as column sum vector. Such problems are called nonnegative matrix problems with given row and column sums.

Definition 3 Suppose q1, q2, …, qr are non-negative real numbers satisfying \( \sum \limits_{i=1}^r{q}_i=1 \). The combination \( \sum \limits_{i=1}^r{q}_i{x}_i \) is called a convex combination of the elements x1, x2, …, xr. For a set X, the collection of all convex combinations of finitely many elements of X is called the convex hull of X. If every convex combination of finitely many elements of X remains in X, then X is a convex set; if a point P of a convex set X is not a convex combination of other points of X, then P is called a pole (extreme point) of X.

Suppose A is an m × n non-negative matrix and b is an m-dimensional non-negative vector. Then the set

$$ \Omega =\left\{y\in {R}^n\left| Ay\le b\right.\right\} $$
(20)

is a convex set.

By the Krein-Milman theorem, a bounded convex set is the convex combination of its poles. From linear algebra, a point y ∈ Ω is a pole if and only if the columns of A corresponding to the non-zero coordinates of y form a linearly independent subset of the column vectors of A.

Since the nonnegative matrix problem with given row and column sums is closely related to the signal transmission problem of the sensor network, and the extremum of the transmission problem must be attained at a pole of the feasible region, the poles of ℋ(R, S) correspond to the minimum energy consumption of the sensor network.

The pole problem of ℋ(R, S) is analyzed as follows:

Lemma 1 Suppose R = [r1 r2 … rm]^T and S = [s1 s2 … sn]^T are two positive vectors satisfying \( \sum \limits_{i=1}^m{r}_i=\sum \limits_{j=1}^n{s}_j \). Then A is a pole of ℋ(R, S) if and only if A is the only matrix in ℋ(R, S) that has the same zero pattern as A.

Proof:

Suppose A ∈ ℋ(R, S). It is easy to see that A = [a_ij] is a matrix in ℋ(R, S) if and only if the a_ij solve the system of equations:

$$ \left\{\begin{array}{l}\sum \limits_{j=1}^n{x}_{ij}={r}_i\kern1em i=1,2,\dots, m\\ {}\sum \limits_{i=1}^m{x}_{ij}={s}_j\kern1em j=1,2,\dots, n\\ {}\sum \limits_{i=1}^m{r}_i=\sum \limits_{j=1}^n{s}_j\end{array}\right. $$
(21)

Written out in full, Eq. (21) becomes

$$ \left\{\begin{array}{l}{x}_{11}+{x}_{12}+\dots +{x}_{1n}={r}_1\\ {}{x}_{21}+{x}_{22}+\dots +{x}_{2n}={r}_2\\ {}\kern4em \dots \\ {}{x}_{m1}+{x}_{m2}+\dots +{x}_{mn}={r}_m\\ {}{x}_{11}+{x}_{21}+\dots +{x}_{m1}={s}_1\\ {}{x}_{12}+{x}_{22}+\dots +{x}_{m2}={s}_2\\ {}\kern4em \dots \\ {}{x}_{1n}+{x}_{2n}+\dots +{x}_{mn}={s}_n\end{array}\right. $$
(22)

Due to the condition \( \sum \limits_{i=1}^m{r}_i=\sum \limits_{j=1}^n{s}_j \), Eq. (22) is a consistent system of rank m + n − 1. From the solutions of Eq. (22) and convex set theory, A = [a_ij] is a pole of ℋ(R, S).

After the poles are found, the transmission channels of the sensor network can be established. The sensing nodes receive signals through these channels, and the sampling matrix is constructed as detailed in Section 3.4.

3.4 Compressed sampling matrix of wireless sensor network

In actual measurement, the active sensor nodes capture the event signal, but two problems arise. First, if all events happen at the same time, each sensor receives mutually interfering signals. Second, under propagation loss and thermal noise, the signal is severely distorted. To further analyze the signal acquisition process of the wireless sensor network, the vector expression of the signal received by the sensors under noise is given as [22]:

$$ {Y}_{M\times 1}={G}_{M\times N}{X}_{N\times 1}+{\omega}_{M\times 1} $$
(23)

Here, X represents the original signal (to simplify the description we assume X is a sparse vector; the actual signal may not be sparse, but a sparse basis constructed with the DCT or similar methods can be used to sparsify it), Y represents the sensed signal, and ω represents thermal noise and interference, which follows an independent Gaussian distribution with zero mean and variance σ². G_{M × N} is the channel sampling matrix, whose entries are:

$$ {G}_{i,j}={\left({d}_{i,j}\right)}^{-\frac{\alpha }{2}}\left|{h}_{i,j}\right|,\kern1em i=1,\dots, M,\kern0.5em j=1,\dots, N $$
(24)

where d_{i,j} is the distance from the j-th source to the i-th sensor, α is the propagation loss factor, and h_{i,j} is the Rayleigh fading parameter derived from Gaussian noise with zero mean and variance σ². The compressed sensing process of the wireless sensor network can therefore be expressed as:

$$ \left[\begin{array}{c}{y}_1\\ {}{y}_2\\ {}\dots \\ {}{y}_M\end{array}\right]=\left[\begin{array}{cccc}{G}_{1,1}& {G}_{1,2}& \dots & {G}_{1,N}\\ {}{G}_{2,1}& {G}_{2,2}& \dots & {G}_{2,N}\\ {}\dots & \dots & \dots & \dots \\ {}{G}_{M,1}& {G}_{M,2}& \dots & {G}_{M,N}\end{array}\right]\left[\begin{array}{c}{x}_1\\ {}{x}_2\\ {}\dots \\ {}{x}_N\end{array}\right]+\left[\begin{array}{c}{\omega}_1\\ {}{\omega}_2\\ {}\dots \\ {}{\omega}_M\end{array}\right] $$
(25)

To explain the compressed sampling process in detail, a numerical calculation was carried out. The distances between nodes in the channel sampling matrix were generated randomly, and the matrix G was obtained according to formula (24). The signal x and the interference term ω were sampled according to Eq. (25). The Fourier orthogonal transform matrix was then used as the sparse basis to recover the signal. Five sets of numerical experiments were performed to compare the original and recovered signals using the relative error |recovered signal − original signal| / |original signal| × 100%. As shown in Table 7, the signal can be recovered after compressed sampling, but the recovery accuracy is insufficient; this depends on the signal reconstruction algorithm, which the remainder of this paper studies.

Table 7 The difference between the original signal and the recovery signal
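The numerical experiment of Table 7 can be mimicked as follows: build G from random geometry according to Eq. (24) and sample a sparse event signal under noise according to Eq. (25). The node positions, loss factor α, and noise level are illustrative assumptions.

```python
# Sketch of the channel sampling matrix, Eq. (24), and noisy sampling, Eq. (25).
import numpy as np

rng = np.random.default_rng(3)
M, N, alpha, sigma = 40, 100, 2.0, 0.05

rx = rng.uniform(0, 100, size=(M, 2))     # sensor positions
tx = rng.uniform(0, 100, size=(N, 2))     # source positions
d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)   # distances d_ij

h = rng.normal(0.0, sigma, size=(M, N))   # fading amplitudes |h_ij| taken below
G = d ** (-alpha / 2) * np.abs(h)         # Eq. (24)

x = np.zeros(N)                           # sparse event signal
x[rng.choice(N, 7, replace=False)] = rng.standard_normal(7)
w = rng.normal(0.0, sigma, size=M)        # thermal noise and interference
y = G @ x + w                             # Eq. (25)
```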

RIP is the constraint that every sampling matrix must obey under the theory of compressed sensing. Therefore, we further demonstrate that the sampling matrix G satisfies the restricted isometry property.

According to the Johnson-Lindenstrauss theorem [23], when a matrix Φ satisfies the RIP condition, the following concentration inequality holds:

$$ {\displaystyle \begin{array}{c}\Pr \left(\left|{\left\Vert \Phi x\right\Vert}_2-{\left\Vert x\right\Vert}_2\right|\ge \varepsilon {\left\Vert x\right\Vert}_2\right)\le 2{e}^{-{nc}_0\left(\varepsilon \right)}\\ {}0<\varepsilon <1\end{array}} $$
(26)

Pr(•) denotes the probability of the event. c0(ε) is a constant depending on ε that is greater than zero.

We now discuss how to use the concentration inequality (26) to prove the RIP property of the matrix G. First, we consider the sampling matrix G on a fixed k-dimensional subspace. In particular, we fix an index set T with |T| ≤ k, and let X_T denote the set of vectors in R^N whose non-zero entries are confined to T. This is a k-dimensional linear space on which the L2 norm can be computed.

The general way to handle such a linear space is to build a finite set of points in the k-dimensional subspace that satisfies the uniform constraint (26), and then extend the result to all k-dimensional signals. This is a common covering-set construction, also used in the proof of Dvoretsky's theorem [24]. For the L2 norm of the matrix G on the finite-dimensional space, we cannot obtain the appropriate bound immediately, so we proceed by gradual refinement.

Theorem 2 Suppose Φ(w), w ∈ Ω^{nN}, is a random matrix of size n × N satisfying inequality (26). Then for any set T with |T| = k < n and any 0 < δ < 1,

$$ \left(1-\delta \right){\left\Vert \mathrm{x}\right\Vert}_2\le {\left\Vert \Phi (w)\mathrm{x}\right\Vert}_2\le \left(1+\delta \right){\left\Vert \mathrm{x}\right\Vert}_2,\kern1em \mathrm{x}\in {\mathrm{X}}_T $$
(27)

holds with probability at least \( 1-2{\left(12/\delta \right)}^k{e}^{-{c}_0\left(\delta /2\right)n} \).

First, it suffices to prove that formula (27) holds under the normalization ‖x‖₂ = 1, because Φ is linear. Next, we choose a finite set of points Q_T ⊆ X_T with ‖q‖₂ ≤ 1 for all q ∈ Q_T such that, for all x ∈ X_T with ‖x‖₂ ≤ 1,

$$ \underset{q\in {Q}_T}{\min }{\left\Vert x-q\right\Vert}_2\le \delta /4 $$
(28)

Further, such a covering set can be chosen with |Q_T| ≤ (12/δ)^k. Next, the concentration inequality (26) with ε = δ/2 is applied to every point of Q_T via the union bound, so with probability greater than \( 1-2{\left(12/\delta \right)}^k{e}^{-{c}_0\left(\delta /2\right)n} \),

$$ \left(1-\delta /2\right){\left\Vert q\right\Vert}_2\le {\left\Vert \Phi q\right\Vert}_2\le \left(1+\delta /2\right){\left\Vert q\right\Vert}_2\;q\in {Q}_T $$
(29)

Now let A be the smallest value such that

$$ {\left\Vert \Phi x\right\Vert}_2\le \left(1+\mathrm{A}\right){\left\Vert x\right\Vert}_2,x\in {X}_T,{\left\Vert x\right\Vert}_2\le 1 $$
(30)

Our goal is to show that A ≤ δ. For any x ∈ X_T with ‖x‖₂ ≤ 1, we can choose q ∈ Q_T so that ‖x − q‖₂ ≤ δ/4; in this case, we get:

$$ {\left\Vert \Phi x\right\Vert}_2\le {\left\Vert \Phi q\right\Vert}_2+{\left\Vert \Phi \left(x-q\right)\right\Vert}_2\le 1+\delta /2+\left(1+\mathrm{A}\right)\delta /4 $$
(31)

Since A is the smallest value satisfying formula (30), we get A ≤ δ/2 + (1 + A)δ/4, hence \( \mathrm{A}\le \frac{3\delta /4}{1-\delta /4}\le \delta \). This establishes the upper bound of inequality (27); the lower bound is proved similarly:

$$ {\displaystyle \begin{array}{l}{\left\Vert \Phi x\right\Vert}_2\ge {\left\Vert \Phi q\right\Vert}_2-{\left\Vert \Phi \left(x-q\right)\right\Vert}_2\\ {}\kern2em \ge 1-\delta /2-\left(1+\delta \right)\delta /4\ge 1-\delta \end{array}} $$
(32)

so the lower bound of (27) also holds.

Having demonstrated that the sampling matrix G satisfies the restricted isometry property, another factor affecting sampling efficiency and recovery accuracy is the number of samples. What remains to be determined is the number of times K each sensor node transmits in the period T, the number M of sensors acquiring the signal, and the total number N of sensors, with K < M < N. The final measured signal vector Y is a compressed representation of the event; from another point of view, Y captures the features of X with a lower number of samples (M). Since noise interference in the wireless sensor network directly affects the accuracy of the compressed samples, it strongly influences the signal reconstruction result. We therefore adopt an approximate gradient descent algorithm that can reconstruct the compressed samples in a noisy environment and recover the original signal with higher accuracy.

4 Approximate gradient descent algorithm

To effectively recover the original signal in the wireless sensor network, signal recovery in a noisy environment must be considered. The noisy compressed sensing model is therefore established as follows [11]:

An unknown signal X ∈ R^N is related to a known sampling matrix Φ ∈ R^{M × N} (M << N) and the linear measurements Y ∈ R^M by:

$$ {Y}_{M\times 1}={\Phi}_{M\times N}{X}_{N\times 1}+{\omega}_{M\times 1} $$
(33)

To reconstruct X, we solve the constrained denoising model proposed by Candes et al. [25]:

$$ {\min}_X{\left\Vert X\right\Vert}_1\kern0.5em \mathrm{subject}\ \mathrm{to}\kern0.5em {\left\Vert \Phi X-Y\right\Vert}_2^2\le \varepsilon $$
(34)

The solution of formula (34) is a convex optimization process. Here, we give a general description of the problem. The unconstrained convex optimization problem has the form:

$$ \underset{x}{\min }F(x),\kern1em F(x)=f(x)+g(x) $$
(35)

The objective function F(x) is a composite convex function, where g(x) is a continuous convex function that may be non-smooth, and f(x) is a smooth convex function whose first derivative is Lipschitz continuous.

Definition 4 The function f(x) has a Lipschitz continuous gradient if and only if:

$$ {\left\Vert \nabla f(x)-\nabla f(y)\right\Vert}_2\le L(f)\left\Vert x-y\right\Vert $$
(36)

where L(f) > 0 is the Lipschitz constant.

For a general optimization problem, a smooth function f(x) is convex if and only if the tangent to the function lies below the function curve; mathematically:

$$ f(x)\ge f(y)+<\nabla f(y),x-y> $$
(37)

If f(x) is convex and its first-order derivative is Lipschitz continuous, then f(x) is locally bounded by a quadratic function whose Hessian matrix is L(f) ⋅ I, and it satisfies the following conditions:

$$ f(x)\ge f(y)+<\nabla f(y),x-y> $$
$$ f(x)\le f(y)+<\nabla f(y),x-y>+\frac{L(f)}{2}{\left\Vert x-y\right\Vert}_2^2 $$

The proof is as follows:

$$ {\displaystyle \begin{array}{l}f(x)=f(y)+\underset{0}{\overset{1}{\int }}\left\langle \nabla f\left(y+\tau \left(x-y\right)\right),x-y\right\rangle d\tau \\ {}=f(y)+\left\langle \nabla f(y),x-y\right\rangle +\underset{0}{\overset{1}{\int }}\left\langle \nabla f\left(y+\tau \left(x-y\right)\right)-\nabla f(y),x-y\right\rangle d\tau \end{array}} $$
$$ {\displaystyle \begin{array}{l}\left|f(x)-f(y)-\left\langle \nabla f(y),x-y\right\rangle \right|\\ {}=\left|\underset{0}{\overset{1}{\int }}\left\langle \nabla f\left(y+\tau \left(x-y\right)\right)-\nabla f(y),x-y\right\rangle d\tau \right|\\ {}\le \underset{0}{\overset{1}{\int }}\left|\left\langle \nabla f\left(y+\tau \left(x-y\right)\right)-\nabla f(y),x-y\right\rangle \right| d\tau \\ {}\le \underset{0}{\overset{1}{\int }}{\left\Vert \nabla f\left(y+\tau \left(x-y\right)\right)-\nabla f(y)\right\Vert}_2\cdot {\left\Vert x-y\right\Vert}_2 d\tau \\ {}\le \underset{0}{\overset{1}{\int }}\tau L(f){\left\Vert x-y\right\Vert}_2^2 d\tau =\frac{L(f)}{2}{\left\Vert x-y\right\Vert}_2^2\end{array}} $$

4.1 Solving the signal reconstruction model

To reconstruct the original signal from the compressed samples, consider the mathematical optimization model of signal reconstruction in compressed sensing, i.e.,

$$ \operatorname{Minimize}\kern0.24em {\left\Vert X\right\Vert}_1\kern0.5em \mathrm{subject}\ \mathrm{to}\kern0.5em {\left\Vert \Phi X-y\right\Vert}_2^2\le \varepsilon $$
(38)

where x and y are vectors and Φ is a matrix. In optimization, this problem is usually expressed as:

$$ \operatorname{Minimize}\kern0.24em {\left\Vert \Phi x-y\right\Vert}_2^2+\lambda {\left\Vert x\right\Vert}_1 $$
(39)

where λ is the weight balancing the sparsity of x against the measurement error; its value depends on the degree to which each of the two parts of Eq. (39) dominates the optimization. Clearly, Eq. (39) is an instance of Eq. (35), with the correspondence:

$$ f(x)={\left\Vert \Phi x-y\right\Vert}_2^2,\kern1em g(x)=\lambda {\left\Vert x\right\Vert}_1 $$
(40)

If the variable in the convex problem (35) is a matrix rather than a vector, the formulation extends to digital signal processing in two-dimensional space, such as image processing, where the minimum total-variation distortion model can be used to filter compressively sensed images [26]. For problem (39), we propose an approximate gradient descent algorithm to find the optimal solution.

Suppose f(x) is a smooth convex function with Lipschitz continuous gradient. Using the gradient method, the k-th iteration is:

$$ {X}_k={X}_{k-1}-{t}_k\nabla f\left({X}_{k-1}\right) $$
(41)

where tk > 0 is a scalar step size. The iteration from x_{k−1} to x_k can be written as the minimization of a quadratic model:

$$ {X}_k=\arg \min \left\{f\left({X}_{k-1}\right)+<\left(X-{X}_{k-1}\right),\nabla f\left({X}_{k-1}\right)>+\frac{1}{2{t}_k}{\left\Vert X-{X}_{k-1}\right\Vert}_2^2\right\} $$
(42)

Ignoring the constant terms, the same iteration applied to (35) yields:

$$ {X}_k=\arg \min \left\{\frac{1}{2{t}_k}{\left\Vert X-\left({X}_{k-1}-{t}_k\nabla f\left({X}_{k-1}\right)\right)\right\Vert}_2^2+g(X)\right\} $$
(43)

For compressed sensing in a noisy environment, the problem is to minimize \( F(x)={\left\Vert \Phi x-y\right\Vert}_2^2+\lambda {\left\Vert x\right\Vert}_1 \).

Substituting f and g from (40) into formula (43), the iteration at each step becomes:

$$ {X}_k=\arg \min \left\{\frac{1}{2{t}_k}{\left\Vert X-\left({X}_{k-1}-{t}_k\nabla f\left({X}_{k-1}\right)\right)\right\Vert}_2^2+\lambda {\left\Vert X\right\Vert}_1\right\} $$
(44)

Computing the gradient of \( f(X)={\left\Vert \Phi X-y\right\Vert}_2^2 \), formula (44) is equivalent to:

$$ {X}_k=\arg \min \left\{\frac{1}{2{t}_k}{\left\Vert X-{X}_{k-1}+2{t}_k{\Phi}^T\left(\Phi {X}_{k-1}-y\right)\right\Vert}_2^2+\lambda {\left\Vert X\right\Vert}_1\right\} $$
(45)

so that x_k is computed iteratively with this linear shrinkage step.

Let \( {E}_k={X}_{k-1}-2{t}_k{\Phi}^T\left(\Phi {X}_{k-1}-y\right) \); then (45) is transformed into:

$$ {X}_k=\arg \min \left\{\frac{1}{2{t}_k}{\left\Vert X-{E}_k\right\Vert}_2^2+\lambda {\left\Vert X\right\Vert}_1\right\} $$
(46)

For Eq. (46), we first consider the simple form under one-dimensional conditions:

$$ \underset{x\in \Re }{\min }Q(x)=\lambda \left|x\right|+{\left(x-f\right)}^2 $$
(47)

The solution of this formula is \( x= shrink\left(f,\frac{\lambda }{2}\right) \)

Definition 5 The shrink operator expression is as follows:

$$ shrink\left(f,\frac{\lambda }{2}\right)=\left\{\begin{array}{ll}f-\frac{\lambda }{2}& if\kern0.5em f>\frac{\lambda }{2}\\ {}0& if\kern1em -\frac{\lambda }{2}\le f\le \frac{\lambda }{2}\\ {}f+\frac{\lambda }{2}& if\kern0.5em f<-\frac{\lambda }{2}\end{array}\right. $$
(48)
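The shrink operator is the elementwise soft-thresholding map; a direct, vectorized transcription of Eq. (48):

```python
# Soft-thresholding shrink operator of Definition 5; shrink(f, lam / 2)
# solves the scalar problem of Eq. (47) and applies elementwise to vectors.
import numpy as np

def shrink(f: np.ndarray, threshold: float) -> np.ndarray:
    # Move every entry toward zero by `threshold`; zero out the middle band
    return np.sign(f) * np.maximum(np.abs(f) - threshold, 0.0)
```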

Eq. (46) can be decomposed into multiple one-dimensional optimization problems. For the i-th coordinate, we fix the other elements of the vector X; E_ki denotes the i-th element of the vector E_k. According to Definition 5, we get:

$$ {X}_k=\lambda \times shrink\left({\beta}_j,{t}_k\lambda \right) $$
(49)

where

$$ {\beta}_j={\sum}_i{E}_{ki}-{\sum}_{k\ne j}{X}_k $$

Iterating formula (46) keeps x_k approaching the optimal value. As long as the number of iterations is properly controlled, the original signal can be reconstructed and the noise effectively filtered.

Algorithm 1 PRG signal reconstruction (a runnable sketch follows)
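The following is a runnable sketch of the PRG iteration described by Eqs. (45)-(46): a gradient step on ‖Φx − y‖₂² followed by the shrink operator. The fixed step size (chosen within the Lipschitz bound) and the fixed iteration count are simplifications; the adaptive step t_k and convergence threshold ε = 0.015 of Section 5 could be substituted.

```python
# Approximate gradient descent (PRG) reconstruction sketch, Eqs. (45)-(46).
import numpy as np

def shrink(f, t):
    return np.sign(f) * np.maximum(np.abs(f) - t, 0.0)

def prg_reconstruct(Phi, y, lam=0.35, n_iter=10):
    M, N = Phi.shape
    x = np.ones(N)                                 # unit initial vector (Section 5)
    t = 1.0 / (2 * np.linalg.norm(Phi, 2) ** 2)    # step size within Lipschitz bound
    for _ in range(n_iter):
        E = x - 2 * t * Phi.T @ (Phi @ x - y)      # gradient step, Eq. (45)
        x = shrink(E, t * lam)                     # proximal shrink step, Eq. (46)
    return x
```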

5 Reconstruction of wireless sensor signal based on compressed sensing

Using the approximate gradient descent method as the signal reconstruction algorithm, the overall process of signal acquisition in a compressed sensing wireless sensor network is as follows:

  1.

    In the wireless sensor network, all sensor nodes are first time-synchronized. Assuming an event occurs over a period of time, each active node detects the event signal with period T, and the resulting signal is represented by the vector X. To sparsify the signal, a discrete cosine transform is used to construct the sparse basis matrix Ψ; in each period T, every sensing node projects its signal vector onto this matrix, which sparsifies the signal. This step is a prerequisite for compressed sensing of wireless signals.

  2.

    Each sensor node constructs a sampling matrix according to Eq. (24). The sparsified signal vector is then projected under the sampling matrix to obtain Y, completing the sampling of the signal. Since the sampling matrix is not square, this is an undersampling of the signal.

  3.

    The sensor node transmits the compressed samples to the central node of the sensor network, together with the sampling matrix (if all sensing nodes use the same sampling matrix, only one node needs to transmit it to the central node). After receiving the signal, the central node uses the approximate gradient algorithm to recover the sparse form of the signal and applies the inverse discrete cosine transform to restore the signal, completing the signal fusion processing. The entire system flow is shown in Fig. 2.

Fig. 2 Flow chart of the wireless sensor network based on compressed sensing. The sensor node acquires the signal, sparsifies it, samples the sparse signal with the sampling matrix, and transmits it to the central node for recovery

The central node receives the compressed signal Y and then uses the PRG algorithm to approximate the exact solution step by step. At the start, we construct a unit vector of the same length as the original signal vector as the initial vector. During execution, the choice of the convergence threshold ε determines the running time and accuracy of the algorithm; here we use the optimal convergence threshold ε = 0.015 obtained by T. Blumensath et al. [27]. When computing the k-th approximate solution with the linear shrinkage operator, the step size t_k depends on the previous step via \( {t}_k={\left({\left({t}_{k-1}-1\right)}^2+\varepsilon \right)}^{p/2-1} \), where p = 0.21 [28]. The algorithm iterates until the convergence threshold condition is satisfied.

6 Simulation experiment

In the experimental design, 100 sensor nodes are randomly distributed in a 100 × 100 area, with the central node at the center of the area. The target signals (sources) to be detected are randomly distributed in the region. The experiment assumes that each sensor collects the signal over a period of time, sparsifies it, compresses the samples, and then transmits them to the central node.

When a sensor node acquires the signal, a weak signal at one node may be a strong signal at another, so a signal strength threshold can be set and signals below the threshold are not acquired; to this extent, weaker signals are filtered out as noise during the recovery phase. To verify the performance of the PRG algorithm in wireless sensor networks, we introduce the orthogonal matching pursuit (OMP) [29], basis pursuit (BP) [30], subspace pursuit (SP) [32], and compressive sampling matching pursuit (CoSaMP) [33] algorithms and compare the reconstruction accuracy of the different algorithms. According to the theory proposed by Candes et al., the number of compressed samples, i.e., the number of rows m of the sampling matrix, must satisfy m ≥ C ⋅ μ²(Φ, Ψ) ⋅ r ⋅ log n, where r is the sparsity of the signal after sparse projection, n is the signal length, Φ is the sensing matrix, and Ψ is the sparse basis matrix. If Φ and Ψ are incoherent [34], ideally the coherence factor μ(Φ, Ψ) = 1 and m ≥ C ⋅ r ⋅ log n; most experimental results show that m ≥ 4r works well. In our experiments, the sparse basis matrix is a 1024 × 1024 discrete cosine transform matrix, so the sensors segment the received signal into blocks of length 1024. The sampling matrix is constructed using (24). To show the recovery ability of the PRG algorithm and the other algorithms more clearly, we use the signal-to-noise ratio (SNR) between the reconstructed signal and the original signal to represent the recovery effect, defined as follows:

$$ SNR\left({X}_{true},{X}_{rec}\right)=20\log \frac{{\left\Vert {X}_{true}\right\Vert}_2}{{\left\Vert {X}_{true}-{X}_{rec}\right\Vert}_2} $$
(50)

where X_true represents the original signal from the source, and X_rec represents the signal reconstructed after compression.
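For reference, Eq. (50) translates directly into code:

```python
# Reconstruction quality metric of Eq. (50), in decibels.
import numpy as np

def snr_db(x_true: np.ndarray, x_rec: np.ndarray) -> float:
    return 20 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_true - x_rec))
```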

7 Results and discussion

Signal reconstruction by the PRG algorithm is a process of successively approximating the optimal solution. To quantitatively analyze the performance of the PRG algorithm, the experiment is performed at a sampling rate of 400, with the parameter λ in Eq. (46) set to 0.35 [35]. As seen in Fig. 3, the reconstruction quality of the PRG algorithm stabilizes after 10 iterations; further increasing the number of iterations improves the reconstruction little. Therefore, the number of iterations of the PRG algorithm is fixed at 10 in subsequent experiments.

Fig. 3 PRG algorithm iteration count versus reconstruction quality: the SNR of the wireless sensor signal reconstructed by PRG for 5, 10, 15, 20, 25, 30, and 35 iterations

Figure 4 shows the recovery ability of the OMP, SP, BP, CoSaMP, and PRG algorithms in a wireless sensor network without noise interference. For the sampling matrix constructed by formula (24), submatrices are formed by randomly selecting row vectors for undersampling; the number of row vectors is the sampling rate.

Fig. 4 Reconstruction of sensor signals without noise: the SNR of the wireless sensor signal reconstructed by PRG, OMP, SP, BP, and CoSaMP at sampling rates of 100, 150, 200, 250, 300, 350, and 400

Figure 4 shows that the recovery abilities of the different algorithms do not differ much in a noise-free environment. At low sampling rates, none of the algorithms recover well, because a low sampling rate cannot capture the main characteristic information of the sensed signal, so perfect reconstruction is difficult and the SNR is low. As the sampling rate increases, the SNR rises and then stabilizes, i.e., the signal can be reconstructed accurately.

To further analyze the influence of noise on the signal collected by the sensors, we add Gaussian white noise, and sinusoidal signal plus narrowband Gaussian noise, at signal-to-noise ratios of 10, 20, …, 100 to the original signal; these are two common kinds of noise in wireless sensor networks. We then compare the reconstruction abilities of the five algorithms OMP, SP, BP, CoSaMP, and PRG. In this experiment, the number of samples is 400, the discrete cosine transform is used to construct the basis matrix, and the SNR between signal and noise is gradually increased. Figures 5 and 6 show the recovery ability of the algorithms under Gaussian white noise and under sinusoidal signal plus narrowband Gaussian noise. A low SNR means that the noise energy is comparable to the signal energy; reducing the influence of such noise places high demands on the reconstruction ability of an algorithm, and indeed when the noise is strong most algorithms have difficulty reconstructing the signal perfectly. As the SNR increases, the situation improves. Under both Gaussian white noise and sinusoidal signal plus narrowband Gaussian noise, the PRG algorithm shows better recovery than the other algorithms for SNR values of 40-90; at SNR 90 the signal energy is large relative to the noise, so most algorithms reconstruct the signal well. Moreover, compared with Gaussian white noise, the PRG algorithm recovers especially well under sinusoidal signal plus narrowband Gaussian noise. We conclude that the PRG algorithm exhibits better reconstruction performance under non-strong noise interference and can effectively restore the original signal.

Fig. 5 Sensor signal reconstruction under Gaussian white noise: the SNR of the wireless sensor signal reconstructed by PRG, OMP, SP, BP, and CoSaMP for noise SNRs of 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100

Fig. 6 Sensor signal reconstruction under sinusoidal signal plus narrowband Gaussian noise: the SNR of the wireless sensor signal reconstructed by PRG, OMP, SP, BP, and CoSaMP for noise SNRs of 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100

Further, we built a real wireless sensor network system composed of 30 temperature sensor nodes. Each node supports 802.11 in the 2.4 GHz band. The wireless sensor nodes are spaced 5 m apart, and a PC serves directly as the central node. A stable heat source was placed at random in the experiment, and its temperature was measured. Since hardware-based sensing matrix designs are still immature, we added a module to each sensing node that implements the sparsification and compressed sampling in software and transmits the compressed temperature samples to the central node, where the temperature signal is reconstructed. The experiment again uses the discrete cosine transform to construct the basis matrix, and the other parameters are the same as in the previous experiments. We randomly placed the heat source at 10 locations and repeated the experiment 10 times to compare the reconstruction abilities of the OMP, SP, BP, CoSaMP, and PRG algorithms, as shown in Fig. 7. The relative error between the actual temperature of the heat source and the temperature computed at the central node represents the reconstruction accuracy.

Fig. 7 Temperature sensing reconstruction error based on compressed sensing

From Fig. 7, it can be observed that under the same conditions the reconstruction accuracy of the PRG algorithm is generally better than that of the other algorithms, although its reconstruction performance is not as stable as that of BP and OMP. In the experiment, we further found that the reconstruction time of the PRG algorithm on the temperature sensing data is comparable to that of OMP and lower than that of SP, CoSaMP, and BP; fast reconstruction is also important for reducing the energy consumption of the wireless sensor network.

To study the time complexity of reconstruction, we compare the time overhead of the algorithms for reconstructing the heat source signal. In theory, an algorithm's reconstruction time increases with the number of iterations. In the experiment, we measured the reconstruction time for iteration counts in the interval [1, 12] and noise SNR values of 20, 40, 60, and 80. As shown in Figs. 8 and 9, under both Gaussian white noise and sinusoidal signal plus narrowband Gaussian noise, the time grows as the number of iterations increases; the iterative computation clearly adds time overhead. The noise also has a significant impact on reconstruction time: the stronger the noise, the longer it takes to reconstruct the signal, because the noise introduces extra, non-sparse data that compresses poorly, increasing the overall computation of the algorithm and hence its time complexity. The PRG algorithm proposed in this paper has lower time overhead than the other algorithms; in particular, at SNR = 60 its time cost is significantly lower. Under sinusoidal signal plus narrowband Gaussian noise, we further find that the recovery time of the heat source signal is shorter than under Gaussian white noise.

Fig. 8 Reconstruction time of heat source signals under Gaussian white noise

Fig. 9 Reconstruction time of heat source signals under sinusoidal signal plus narrowband Gaussian noise

8 Conclusion

The advantage of compressed sampling is that it acquires the complete signal at a lower cost, which is exactly what wireless sensor networks need. Since wireless sensor networks are susceptible to noise, signal reconstruction from undersampled data becomes difficult. Based on a multi-path channel transmission model for wireless sensor networks, an approximate gradient descent algorithm is proposed to recover the compressed signal under noise. The algorithm obtains the optimal solution of the constrained equation through stepwise iterative approximation and then restores the original signal. Compared with the OMP, SP, BP, and CoSaMP algorithms, the PRG algorithm shows better reconstruction performance in noisy environments. In the test on the temperature sensing network, the results show that the PRG algorithm has clear advantages in both reconstruction accuracy and time. However, the following limitations of the PRG algorithm need further study:

  1.

    Although the overall convergence time of the PRG algorithm is short, the experiments show that the convergence time of a single PRG iteration exceeds that of the other algorithms. In follow-up research, we need to further optimize the linear shrinkage step model to reduce the time complexity of the algorithm, the reconstruction time, and the energy consumption of the sensor network.

  2.

    The weight λ between the sparsity of the signal x and the error is determined by a weight selection method based on the fast shrinkage-thresholding algorithm proposed by Beck et al. Its adaptability to the reconstruction of wireless sensor signals is not yet clear, so further mathematical analysis is needed.