1 Introduction

In clinical practice, imaging is widely used in medical diagnosis and treatment because of its high safety and strong real-time performance. With the rapid development of medical technology, a large number of high-resolution imaging modalities have emerged, such as computed tomography (CT), positron emission tomography (PET), magnetic resonance imaging (MRI), diffusion weighted imaging (DWI), and single photon emission computed tomography (SPECT). These imaging techniques provide a variety of functional and anatomical information at different spatial and temporal resolutions [1]. High-resolution medical images play an important role in life science research, medical diagnosis, and clinical treatment. Super-resolution restoration improves the resolution of medical images, enables the detection and recognition of more subtle lesions, and provides doctors with more accurate information to reduce the misdiagnosis rate.

However, original medical images often fall short of these requirements, so it is of great significance to restore the acquired images, which play an important role in image recognition, detection, and classification. Obtaining satisfactory high-resolution images by improving physical hardware is difficult and expensive. Therefore, restoration techniques that improve image resolution through software have become a research hotspot [2].

For the restoration of a single image with traditional methods, only one input image is used, so the achievable improvement in resolution is greatly limited [3]. Therefore, current research focuses on super-resolution image restoration based on image sequences, under the assumption that the imaging conditions (illumination, lens focal length, etc.) at different times are identical and that no major changes (such as large deformation or occlusion) occur within a single acquisition [4]. However, these conditions are difficult to satisfy fully, which heavily affects the registration accuracy that is the key to super-resolution recovery from image sequences. Super-resolution (SR) refers to recovering a high-resolution (HR) image with richer detail from multiple low-resolution (LR) frames, and it has quickly become a hotspot in medical imaging [5]. The SR technique is based on prior knowledge of image degradation and reduces to classic image restoration when the impulse response of the system (the point spread function) is known. However, in most cases the point spread function is difficult to determine. Super-resolution image restoration therefore extracts the degradation information from the observed images in a specific way in order to determine the restoration method.

Current research on super-resolution has made some achievements in image restoration algorithms, although its application range still has certain limitations. Jeong et al. assumed that the point spread function (PSF) was circular and used a cyclic square matrix as the degradation matrix, applying a fuzzy system to super-resolution image estimation [6]. Their method iteratively identifies the PSF and estimates the real image: in each iteration, one of the PSF and the real image is fixed at its current estimate while the other is estimated by a generalized cross-validation criterion. Meanwhile, Alqadah et al. treated the PSF under linear spatially invariant (LSI) blur conditions and estimated the blur parameters with a Gaussian quadratic criterion and the Lanczos algorithm [7]. They also improved the handling of the quadratic and cyclic requirements on the PSF matrix, increasing the robustness and effectiveness of the algorithm. Recently, an adaptive regularized blind reconstruction algorithm was also proposed, which applies regularization based on local smooth features of images [8].

In order to improve the resolution of medical images, this paper constrains the super-resolution solution space for LR images by fusing fuzzy similarity with multi-scale information preservation, and studies the regularized super-resolution restoration of medical images to improve restoration effectiveness. Section 2 provides the materials and methods used in this paper; Section 3 presents the experimental results of our method; Section 4 discusses the results; and Section 5 concludes the paper.

2 Materials and methods

2.1 Super resolution analysis

In general, the degradation function of a medical image is unknown. Super-resolution image restoration assumes that the LR sequence is an LSI degradation of the HR image [9]. Blurring, down-sampling, and displacement of the image are simulated separately. Since down-sampling and blurring are unchanged during the formation of the LR sequence, the k-th LR frame is given by Eq. 1.

$$ {y}_k= DB{M}_kx+{n}_k={H}_kx+{n}_k,\kern1em 1\le k\le p $$
(1)

where x, yk and nk are the vector of the restored HR medical image, the k-th frame of the degraded image, and its additive noise, respectively; the matrices D, B and Mk are the down-sampling operator of the imaging system, the blur operator, and the displacement operator of the k-th frame, respectively; and Hk is the degradation matrix of the k-th frame of the medical image. In Eq. 1, D is determined by the down-sampling rate, Mk is estimated from the inter-frame displacement, and the blur operator B is related to the PSF. B and nk are generally not directly available and need to be estimated from the LR sequence.

A general super-resolution algorithm adds detail to the medical image and improves resolution by fusing all the information in the LR sequence [10]. Assume that the size of each LR medical image is M × N and that the resolution of the medical image is to be enhanced by a factor of r. Eq. 2 is obtained by stacking Eq. 1 for k from 1 to p.

$$ y= Hx+n $$
(2)

where y ∈ RpMN × 1, \( H\in {R}^{pMN\times {r}^2 MN} \), n ∈ RpMN × 1, \( x\in {R}^{r^2 MN\times 1} \). Therefore, a high-dimensional calculation is required to recover HR medical images from Eq. 2 even for a medium-scale LR image sequence.

Since the blur operator B is a discrete matrix form of a PSF with finite support, the matrix B is degenerate (rank-deficient) [11]. Furthermore, in order to fully recover a true HR medical image, an image sequence containing r2 frames with all non-redundant complementary information is needed. However, the number of LR frames in practical applications is often limited, which makes Eq. 2 an ill-conditioned equation. Therefore, regularization is used to solve the problem in this paper.

It is assumed that the motion in the medical image has been accurately estimated for SR and that the image blur is unknown. In order to simplify the estimation of the image blur, we assume that the structure of the PSF is known and its parameters are unknown. The blur matrix B is then re-expressed as a function of a parameter σ, as in Eq. 3.

$$ y=H\left(\sigma \right)x+n $$
(3)

In this way, SR restoration of a medical image estimates the HR medical image from the p LR frames by Eq. 3.
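As an illustration of the degradation model in Eqs. 1–3, the following sketch simulates one LR frame from an HR image, assuming a Gaussian PSF, an integer down-sampling factor, and a translational shift; the function and parameter names are illustrative and not part of the original formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def degrade(hr, sigma=1.0, r=2, dx=0.0, dy=0.0, noise_std=0.01, rng=None):
    """Simulate one LR frame y_k = D B M_k x + n_k (cf. Eq. 1).

    hr        : HR image x (2-D array)
    sigma     : width of the assumed Gaussian PSF (blur operator B)
    r         : down-sampling factor (operator D)
    dx, dy    : sub-pixel displacement of this frame (operator M_k)
    noise_std : standard deviation of the additive noise n_k
    """
    rng = np.random.default_rng() if rng is None else rng
    shifted = shift(hr, (dy, dx), order=1, mode='nearest')   # M_k x
    blurred = gaussian_filter(shifted, sigma)                # B M_k x
    lr = blurred[::r, ::r]                                   # D B M_k x
    return lr + rng.normal(0.0, noise_std, lr.shape)         # + n_k

# Example: p = 6 LR frames with random sub-pixel shifts
# frames = [degrade(hr, dx=np.random.rand(), dy=np.random.rand()) for _ in range(6)]
```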

2.2 Fuzzy similarity fusion strategy based on local feature information

2.2.1 Fuzzy similarity

If the pixel matrix of a fused image is regarded as a two-dimensional fuzzy set, then according to fuzzy mathematical theory, the similarity between two fuzzy sets is measured by their closeness [12, 13]. In a general data fusion process, it is assumed that N sensors measure the same feature indicator in the same information collection system. Let xi denote the data measured by the i-th sensor at a certain time, and let the random variable Xi denote the observation corresponding to xi. Due to the randomness of external disturbance factors, the true value of Xi can only be inferred from the information hidden in the data x1, x2, ⋯, xN. The authenticity of xi is high if the similarity between xi and the remaining data is high [14].

A similarity function from fuzzy mathematics is then used to quantify the similarity of the observations of the sensors at the same time [15]. Although there are several ways to define the similarity function, the following two definitions are selected because of their low computational complexity. At time k, the similarity of the data measured by the N sensors is:

$$ {\alpha}_{1,2,\cdots, N}(k)=\frac{\min \left\{{x}_1(k),{x}_2(k),\cdots, {x}_N(k)\right\}}{\max \left\{{x}_1(k),{x}_2(k),\cdots, {x}_N(k)\right\}} $$
(4)

or

$$ {\alpha}_{1,2,\cdots, N}(k)=\frac{N\min \left\{{x}_1(k),{x}_2(k),\cdots, {x}_N(k)\right\}}{\sum \limits_{i=1}^{N}{x}_i(k)} $$
(5)

where xi(k) is the result of the k-th sampling of the i-th sensor, and α1, 2, ⋯, N ∈ (0, 1]. When the value is 1, the k-th sampling results of the N sensors are identical and the similarity is maximal. When the value is close to 0, the k-th sampling results of the N sensors differ greatly and the similarity is minimal.
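For concreteness, a minimal sketch of the similarity measure in Eq. 4 is given below. It assumes non-negative sensor readings (which holds for the regional features used later); the small constant guarding against division by zero is an implementation detail and not part of Eq. 4.

```python
import numpy as np

def similarity(values, eps=1e-12):
    """Fuzzy similarity of N sensor readings at one sampling instant (Eq. 4).

    Returns a value in (0, 1]; 1 means all readings agree exactly.
    """
    values = np.asarray(values, dtype=float)
    return float(values.min() / (values.max() + eps))
```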

2.2.2 Regional features

In order to decide whether a pixel of the fused medical image is extracted directly from one source image or obtained by weighted averaging, Eqs. 4–5 are used to describe the degree of relation of the regional features around each pixel. Since the source image is obtained through a certain pre-processing algorithm, it contains little noise and has a good visual effect. Therefore, the key to improving the fusion effect is preserving abundant detail in the fused image. Accordingly, the regional variance, regional gradient, and regional mean are used as regional features [16]. The larger these indicators are, the more detail the medical image contains and the better the fusion effect.

Centered at a pixel point I(i, j), the regional variance, regional gradient, and regional mean of the point I(i, j) are defined as:

$$ V\left(i,j\right)=\frac{1}{d}\sum \limits_{m=i-k}^{i+k}\sum \limits_{n=j-k}^{j+k}{\left[I\left(m,n\right)-\overline{I}\left(m,n\right)\right]}^2 $$
(6)
$$ G\left(i,j\right)=\frac{1}{d}\sum \limits_{m=i-k}^{i+k}\sum \limits_{n=j-k}^{j+k}{\left[{G}_x^2\left(m,n\right)+{G}_y^2\left(m,n\right)\right]}^{\frac{1}{2}} $$
(7)
$$ M\left(i,j\right)=\frac{1}{d}\sum \limits_{m=i-k}^{i+k}\sum \limits_{n=j-k}^{j+k}I\left(m,n\right) $$
(8)

where d is the number of pixels in the area of size (2k + 1) × (2k + 1) centered on the point I(i, j), and Gx(m, n) and Gy(m, n) represent the first-order derivatives of the medical image in the horizontal and vertical directions at the point (m, n), respectively. \( \overline{I}\left(m,n\right) \) denotes the average of the pixels in the area, given by Eq. 9.

$$ \overline{I}\left(m,n\right)=\frac{1}{d}\sum \limits_{m=i-k}^{i+k}\sum \limits_{n=j-k}^{j+k}I\left(m,n\right) $$
(9)

Therefore, the regional feature c(i, j) is used as the measurement data of a sensor [17]. The similarity α1, 2, ⋯, N of this pixel point across the source images to be fused is obtained by using Eq. 4 or Eq. 5, and a corresponding extraction rule is then selected to obtain the value of the pixel point. In practice, when the similarity α1, 2, ⋯, N is lower than a defined threshold Tα (Tα > 0), the regional features c1(i, j), c2(i, j), ⋯, cN(i, j) are considered to be completely different. A typical value of Tα is between 0.3 and 0.5.
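The regional features of Eqs. 6–9 can be computed for every pixel with sliding-window filters, as in the following sketch. It takes the regional mean as the plain window average of Eq. 9, uses Sobel derivatives for Gx and Gy, and measures the deviation in Eq. 6 from the local window mean; these are common approximations and the names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def regional_features(img, k=1):
    """Regional variance, gradient and mean in a (2k+1)x(2k+1) window
    around every pixel (cf. Eqs. 6-9)."""
    size = 2 * k + 1
    local_mean = uniform_filter(img, size)                     # Eq. 9
    local_var = uniform_filter((img - local_mean) ** 2, size)  # Eq. 6 (approx.)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)            # first derivatives
    grad = uniform_filter(np.sqrt(gx ** 2 + gy ** 2), size)    # Eq. 7
    return local_var, grad, local_mean
```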

2.2.3 Rules of fusion decision

Determining the decision rules is the core of the fusion strategy; it specifies how information is selected from the source images and added to the fused image [18]. The simplest method is to take the mean of the source image coefficients as the coefficients of the fused image. However, this fusion strategy reduces the contrast of the result. From the optical imaging principle, the high-frequency component of a sharp image is much larger than that of a blurred image [19]. The larger the gradient, variance, and mean of a region, the more high-frequency information it contains and the richer the image information. Therefore, the choice of pixels is guided by the magnitudes of the regional gradient, variance, and mean. This decision rule is called "maximum selection" [20]. The "maximum selection" rule gives the best result if only one source image provides useful information at the corresponding position. However, not all fusion source images satisfy this assumption in reality. Therefore, the following decision rules are used:

$$ I\left(i,j\right)=\left\{\begin{array}{ll}{I}_k\left(i,j\right), & k=\arg \max \left\{{c}_1\left(i,j\right),{c}_2\left(i,j\right),\cdots, {c}_N\left(i,j\right)\right\},\kern1em {\alpha}_{1,2,\cdots, N}\le {T}_{\alpha}\\ {}\sum \limits_{k=1}^{N}{\omega}_k\left(i,j\right){I}_k\left(i,j\right), & {\alpha}_{1,2,\cdots, N}>{T}_{\alpha}\end{array}\right. $$
(10)
$$ {\omega}_k\left(i,j\right)=\frac{c_k\left(i,j\right)}{c_1\left(i,j\right)+{c}_2\left(i,j\right)+\cdots +{c}_N\left(i,j\right)}\kern1.4em k=1,2,\cdots, \mathrm{N} $$
(11)

where Tα is the threshold and I is the fused image. The similarity α1, 2, ⋯, N plays the decision-making role in the fusion. When the similarity is small, it is considered that only one image provides useful information. In this case the extraction strategy is "take one from multiple": the pixel with the best fusion quality according to the regional features is taken as the final pixel of the fused image, and the pixels at the same position in the other images are discarded. When the similarity is large, the extraction strategy is "multiple fusion": the pixel values of the multiple medical images are weighted and averaged based on the regional features to obtain the pixel of the fused image.

The innovation of this decision rule is as follows.

For multiple medical images with a certain degree of relevance, a weighted combination of the images is used as the fused image, with weight coefficients determined by the regional features at each pixel [21, 22]. This reduces noise and ensures the stability of the fusion when the source images contain similar information, while the salient information in the source images is preserved when no similar information is present.
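A minimal sketch of the pixel-wise decision rule of Eqs. 10–11 is given below. It assumes the N source images are co-registered, uses a single regional feature map per image (e.g. the regional variance) as the sensor measurement of Eq. 4, and the threshold value and names are illustrative.

```python
import numpy as np

def fuse(images, features, t_alpha=0.4):
    """Pixel-wise fusion of N co-registered images (cf. Eqs. 10-11).

    images   : list of N 2-D arrays I_k
    features : list of N 2-D arrays c_k (e.g. regional variance of each I_k)
    t_alpha  : similarity threshold T_alpha (typically 0.3-0.5)
    """
    I = np.stack(images)                                        # N x H x W
    C = np.stack(features)
    alpha = C.min(axis=0) / np.maximum(C.max(axis=0), 1e-12)    # Eq. 4 per pixel
    # "take one from multiple": pixel of the image with the largest feature
    best = np.take_along_axis(I, C.argmax(axis=0)[None], axis=0)[0]
    # "multiple fusion": weights proportional to the regional features (Eq. 11)
    w = C / np.maximum(C.sum(axis=0, keepdims=True), 1e-12)
    weighted = (w * I).sum(axis=0)
    return np.where(alpha <= t_alpha, best, weighted)           # Eq. 10
```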

Through the above process, multiple medical images are fused based on fuzzy similarity to obtain a single fused medical image. This fused image is then restored using a regularized super-resolution restoration algorithm.

2.3 Adaptive regularization super-resolution restoration

Super-resolution image restoration is essentially an ill-posed problem that needs to be solved by a regularization algorithm:

$$ L(x)=\sum \limits_{k=1}^K{\left({y}_k-{H}_kx\right)}^T\left({y}_k-{H}_kx\right)+\alpha {\left\Vert Cx\right\Vert}^2 $$
(12)

Considering the penalty effect of similarity in the cost function, we have

$$ L(x)=\sum \limits_{k=1}^K{L}_k(x)=\sum \limits_{k=1}^K\left[{\left({y}_k-{H}_kx\right)}^T{S}_k\left({y}_k-{H}_kx\right)+{\alpha}_k(x){\left\Vert Cx\right\Vert}^2\right] $$
(13)

where αk(x) is a regularization parameter and C is a smoothing operator, generally taken as the Laplace operator.
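The cost of Eq. 13 can be evaluated as in the following sketch, which assumes Sk is the identity, takes C as the Laplace operator, and represents each Hk by a user-supplied function that applies shift, blur, and down-sampling; these interface choices are illustrative.

```python
import numpy as np
from scipy.ndimage import laplace

def cost_L(x, frames, apply_Hk, alphas):
    """Regularized cost L(x) of Eq. 13, assuming S_k = I and C = Laplacian.

    frames   : list of LR frames y_k
    apply_Hk : list of callables, each computing H_k x for the HR estimate x
    alphas   : list of regularization parameters alpha_k
    """
    reg = np.sum(laplace(x) ** 2)               # ||Cx||^2
    total = 0.0
    for y_k, Hk, a_k in zip(frames, apply_Hk, alphas):
        r = y_k - Hk(x)                         # data-fit residual y_k - H_k x
        total += np.sum(r ** 2) + a_k * reg
    return float(total)
```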

2.3.1 Selection of regular parameters

A regularization parameter is selected so that the nonlinear function L(x) has a global minimum. In fact, L(x) has a global minimum if L(x) is a convex function [23, 24]. Moreover, if each Lk(x) is convex, then L(x) is convex by the properties of convex functions. Therefore, in order for the overall function L(x) to have a global minimum, it is only necessary to ensure that each Lk(x) is a convex function.

Proposition 1. If the regularization parameters satisfy the following two conditions, Lk(x) is a convex function.

(1) αk(x) is a linear, monotonically increasing function of Lk(x):

$$ {\alpha}_k(x)=f\left({L}_k(x)\right)={\gamma}_k{L}_k(x) $$
(14)

that is

$$ {\alpha}_k(x)=\frac{{\left({y}_k-{H}_kx\right)}^T{S}_k\left({y}_k-{H}_kx\right)}{\frac{1}{\gamma_k}-{\left\Vert Cx\right\Vert}^2} $$
(15)

(2)

$$ \frac{\partial f\left({L}_k\right)}{\partial {L}_k}<\frac{1}{{\left\Vert Cx\right\Vert}^2} $$
(16)

Proof. Let

$$ {L}_k(x)={L}_k\left(p,q\right)=p+f\left({L}_k\left(p,q\right)\right)q $$
(17)

where p = (yk − Hkx)TSk(yk − Hkx), q = ‖Cx2.

For Lk(x) to be a convex function, for all z1, z2 and 0 < λ < 1 we require

$$ {L}_k\left(\lambda {z}_1+\left(1-\lambda \right){z}_2\right)\le \lambda {L}_k\left({z}_1\right)+\left(1-\lambda \right){L}_k\left({z}_2\right) $$
(18)

where z1 = (p1, q1)T, z2 = (p2, q2)T, that is

$$ {\displaystyle \begin{array}{l}{L}_k\left(\lambda {p}_1+\left(1-\lambda \right){p}_2,\lambda {q}_1+\left(1-\lambda \right){q}_2\right)\\ {}\kern1.9em \le \lambda {L}_k\left({p}_1,{q}_1\right)+\left(1-\lambda \right){L}_k\left({p}_2,{q}_2\right)\end{array}} $$
(19)

According to Eq. 17, Eq. 19 is converted into:

$$ {\displaystyle \begin{array}{l}\lambda {q}_1f\left({L}_k\left(\lambda {p}_1+\left(1-\lambda \right){p}_2,\lambda {q}_1+\left(1-\lambda \right){q}_2\right)\right)+\\ {}\kern2.6em \left(1-\lambda \right){q}_2\kern0.2em f\left({L}_k\left(\lambda {p}_1+\left(1-\lambda \right){p}_2,\lambda {q}_1+\left(1-\lambda \right){q}_2\right)\right)\kern0.7em \\ {}\kern2.4em \le \lambda {q}_1f\left({L}_k\left({p}_1,{q}_1\right)\right)+\left(1-\lambda \right){q}_2f\left({L}_k\left({p}_2,{q}_2\right)\right)\end{array}} $$
(20)

Since αk(x) satisfies Eq. 14, Eq. 20 is further transformed into:

$$ {\displaystyle \begin{array}{l}\lambda {q}_1{L}_k\left(\lambda {p}_1+\left(1-\lambda \right){p}_2,\lambda {q}_1+\left(1-\lambda \right){q}_2\right)+\\ {}\kern2.1em \left(1-\lambda \right){q}_2{L}_k\left(\lambda {p}_1+\left(1-\lambda \right){p}_2,\lambda {q}_1+\left(1-\lambda \right){q}_2\right)\\ {}\kern2em \le \lambda {q}_1{L}_k\left({p}_1,{q}_1\right)+\left(1-\lambda \right){q}_2{L}_k\left({p}_2,{q}_2\right)\end{array}} $$
(21)

If Lk(x) is a convex function, then

$$ {\displaystyle \begin{array}{l}\lambda {q}_1{L}_k\left(\lambda {p}_1+\left(1-\lambda \right){p}_2,\lambda {q}_1+\left(1-\lambda \right){q}_2\right)+\\ {}\kern1.5em \left(1-\lambda \right){q}_2{L}_k\left(\lambda {p}_1+\left(1-\lambda \right){p}_2,\lambda {q}_1+\left(1-\lambda \right){q}_2\right)\le \\ {}\kern1.4em \lambda {q}_1\left[\lambda {L}_k\left({p}_1,{q}_1\right)+\left(1-\lambda \right){L}_k\left({p}_2,{q}_2\right)\right]+\\ {}\kern1.6em \left(1-\lambda \right){q}_2\left[\lambda {L}_k\left({p}_1,{q}_1\right)+\left(1-\lambda \right){L}_k\left({p}_2,{q}_2\right)\right]=\\ {}\kern1.5em {\lambda}^2{q}_1{L}_k\left({p}_1,{q}_1\right)+{\left(1-\lambda \right)}^2{q}_2{L}_k\left({p}_2,{q}_2\right)+\lambda \left(1-\lambda \right)\\ {}\kern1.4em \left[{q}_1{L}_k\left({p}_2,{q}_2\right)+{q}_2{L}_k\left({p}_1,{q}_1\right)\right]\end{array}} $$
(22)

Combining this with Eq. 22, Eq. 21 holds if the following expression in Eq. 23 is non-negative.

$$ {\displaystyle \begin{array}{l}\lambda {q}_1{L}_k\left({p}_1,{q}_1\right)+\left(1-\lambda \right){q}_2{L}_k\left({p}_2,{q}_2\right)-\\ {}\kern1.4em {\lambda}^2{q}_1{L}_k\left({p}_1,{q}_1\right)-{\left(1-\lambda \right)}^2{q}_2{L}_k\left({p}_2,{q}_2\right)-\\ {}\kern1.5em \lambda \left(1-\lambda \right)\left[{q}_1{L}_k\left({p}_2,{q}_2\right)+{q}_2{L}_k\left({p}_1,{q}_1\right)\right]=\\ {}\kern1.4em \lambda \left(1-\lambda \right)\left({q}_1-{q}_2\right)\left[{L}_k\left({p}_1,{q}_1\right)-{L}_k\left({p}_2,{q}_2\right)\right]\end{array}} $$
(23)

Because λ(1 − λ) > 0, the non-negativity of Eq. 23 is equivalent to Eq. 24.

$$ \left({q}_1\hbox{-} \kern0.3em {q}_2\right)\left[{L}_k\left({p}_1,{q}_1\right)-{L}_k\left({p}_2,{q}_2\right)\right]\ge 0 $$
(24)

By Eq. 24, Lk(p, q) is a monotonically increasing function of q, and therefore Eq. 25 holds [25].

$$ \frac{\partial {L}_k}{\partial q}\ge 0 $$
(25)

Because \( {L}_k=p+f\left({L}_k\right)q \), we have \( \frac{\partial {L}_k}{\partial q}=\frac{\partial f}{\partial {L}_k}\bullet \frac{\partial {L}_k}{\partial q}\bullet q+f\left({L}_k\right), \) which is equivalent to

$$ \frac{\partial {L}_k}{\partial q}=\frac{f\left({L}_k\right)}{1-q\frac{\partial f}{\partial {L}_k}}\ge 0 $$
(26)

Since f(Lk) ≥ 0, it follows that

$$ \frac{\partial f}{\partial {L}_k}<\frac{1}{q}=\frac{1}{{\left\Vert Cx\right\Vert}^2} $$
(27)

Proposition 1 is proved.■

2.3.2 Iterative solution

The selection of the regularization parameters makes L(x) a convex function. Thus, L(x) has a global minimum solution \( \hat{x} \), so \( {\varDelta}_xL\left(\hat{x}\right)=0 \) and \( {\varDelta}_x\sum \limits_{k=1}^K{\alpha}_k\left(\hat{x}\right)=0 \).

Because

$$ {\varDelta}_x{L}_k(x)=-2{H}_k^T{S}_k\left({y}_k-{H}_kx\right)+{\left\Vert Cx\right\Vert}^2{\varDelta}_x{\alpha}_k(x)+2{\alpha}_k(x){C}^T Cx $$
(28)

Then,

$$ {\displaystyle \begin{array}{l}{\varDelta}_xL(x)=2\sum \limits_{k=1}^K\left\{\left({H}_k^T{S}_k{H}_k+{\alpha}_k(x){C}^TC\right)x-{H}_k^T{S}_k{y}_k\right\}\\ {}\kern4.199998em +\sum \limits_{k=1}^K{\left\Vert Cx\right\Vert}^2{\varDelta}_x{\alpha}_k(x)\end{array}} $$
(29)

Thus, an iterative formula for x is obtained, as given in Eq. 30.

$$ {x}_{n+1}={x}_n-\varepsilon \sum \limits_{k=1}^K\left\{\left({H}_k^T{S}_k{H}_k+{\alpha}_k(x){C}^TC\right){x}_n-{H}_k^T{S}_k{y}_k\right\} $$
(30)
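A sketch of the resulting procedure, combining the adaptive parameter of Eq. 15 with the update of Eq. 30, is given below. It again assumes Sk = I and C equal to the Laplace operator (treated as self-adjoint), requires γk to be small enough that 1/γk > ‖Cx‖2 (cf. Eq. 16), and the step size ε and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import laplace

def alpha_k(x, y_k, Hk, gamma_k=1e-4):
    """Adaptive regularization parameter of Eq. 15 (with S_k = I)."""
    r = y_k - Hk(x)
    cx2 = np.sum(laplace(x) ** 2)               # ||Cx||^2
    return float(np.sum(r ** 2) / (1.0 / gamma_k - cx2))

def sr_step(x, frames, apply_Hk, apply_HkT, eps=1e-3, gamma_k=1e-4):
    """One gradient-descent update of Eq. 30 (with S_k = I, C = Laplacian)."""
    grad = np.zeros_like(x)
    for y_k, Hk, HkT in zip(frames, apply_Hk, apply_HkT):
        a_k = alpha_k(x, y_k, Hk, gamma_k)
        # H_k^T (H_k x - y_k) + alpha_k C^T C x
        grad += HkT(Hk(x) - y_k) + a_k * laplace(laplace(x))
    return x - eps * grad
```

In practice, x would be initialized with an interpolated reference frame and sr_step iterated until the change in x falls below a tolerance.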

3 Results

Experimental comparison of the two similarity definitions shows that the improved signal-to-noise ratio (ISNR) of the fused image is almost the same in both cases, with nearly identical results. Therefore, this paper uses Eq. 4 as the similarity measure.

3.1 Recovery analysis of medical images

A medical image of 256 × 256 pixels is used as the test image in our experiment. The test image is convolved with the point spread function of a 5 × 5 Gaussian matrix. It is then down-sampled by a factor of 2 in the horizontal and vertical directions to produce a 128 × 128 LR image, which is used as the reference frame. Moreover, 5 further LR frames of size 128 × 128 are generated with fuzzy similarity matching, giving a total of 6 LR frames. Finally, numerical experiments are carried out with the proposed algorithm, and the restoration results are compared with Ref. [3] and Ref. [5]. The results are shown in Fig. 1 and Table 1.

Fig. 1

Comparison of restoration results of different restoration methods. a Original medical image (256 × 256). b LR image with Gaussian blur. c Algorithm in this paper. d Liang’s algorithm. e Wu’s algorithm

Table 1 Comparison of the effects of different methods on a single medical image

Evaluation of the results is generally divided into two types. One is subjective evaluation, which considers human visual perception of the details of the restored image. The other is objective evaluation, which generally uses MSE and PSNR. Assuming x is the original image, \( \hat{x} \) is the restored image, and N is the number of pixels, MSE and PSNR are given in Eqs. 31–32.

$$ MSE=\frac{1}{N}\sum \limits_{i=0}^{N-1}{\left({x}_i-{\hat{x}}_i\right)}^2 $$
(31)
$$ PSNR=10\lg \frac{255^2}{\frac{1}{N}\sum \limits_{i=0}^{N-1}{\left({x}_i-{\hat{x}}_i\right)}^2} $$
(32)
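For reference, the two objective criteria can be computed as in this short sketch, assuming 8-bit images with a peak value of 255.

```python
import numpy as np

def mse_psnr(x, x_hat, peak=255.0):
    """MSE and PSNR of a restored image against the original (Eqs. 31-32)."""
    err = np.mean((np.asarray(x, float) - np.asarray(x_hat, float)) ** 2)
    return err, 10.0 * np.log10(peak ** 2 / err)
```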

Figure 1 shows the restoration results of the different algorithms, and Table 1 compares their recovery performance. Compared with Liang’s algorithm, the mean square error and peak signal-to-noise ratio of the reconstructed image are significantly improved. Compared with Wu’s algorithm, the peak signal-to-noise ratio and mean square error of the restored image are only slightly improved; however, in terms of visual quality the result is significantly better than Wu’s algorithm. In particular, the image restored by the proposed algorithm shows a clear reduction of the glitch artifacts at local edges, which is due to the proposed fuzzy similarity fusion algorithm.

3.2 Fusion analysis of fuzzy similarity

Standard medical image 1 (512 × 512) and image 2 (512 × 512) are selected for testing. First, the gray values of the real images are normalized. Each image is then subjected to blur convolution and noise addition to obtain a contaminated image. The PSF of the blur convolution is a Gaussian function with a standard deviation of 10, and the additive noise is Gaussian white noise with a variance of 0.001. The quality of the fused image is measured in terms of ISNR.

Experiment 1: images restored with different wavelet bases are used as fusion source images, and image fusion is then performed by the proposed algorithm. The performance of the individual restored images is compared with that of the fused image; the results are shown in Tables 2 and 3.

Table 2 Performance analysis of wavelet-based image fusion algorithms
Table 3 Performance analysis of the algorithm in this paper

Tables 2 and 3 show that the ISNR of the fused image produced by the proposed algorithm is significantly improved. For example, in the fourth group of experiments for image 2 in Table 3, the source images are images restored with the wavelets db3, sym8, and sym10, whose ISNR values are 7.49, 7.53, and 7.54, respectively. The ISNR values after fusion are 8.01, 8.01, and 8.06, about 0.5 dB higher than before fusion, so the fusion effect is obvious.

Experiment 2: images restored with different Contourlet bases and images restored with different wavelet bases are used as fusion source images, and the performance of a single Contourlet basis (Table 4) is compared with that of the fused image (Table 5).

Table 4 Performance comparison of Contourlet-based restoration algorithms
Table 5 Performance analysis of the algorithm in this paper

Tables 4 and 5 show that the ISNR of the image after fusion is significantly improved. For example, in the first group of experiments in Table 5, for image 1, the ISNR of the images restored with (9/7, 9/7), (sym8, 9/7), sym8, and coif5 is 4.29, 4.35, 4.72, and 4.74, respectively. The ISNR of the fused images processed by the proposed algorithm is 4.85 and 4.82, an improvement of 0.1–0.6 dB, so the effect is obvious. Combined with the results of the first set of experiments, the proposed algorithm achieves a good fusion effect.

3.3 Stability analysis of the algorithm

This experiment tests the stability of the algorithm. For medical images of different sizes, image restoration is performed using the three different restoration algorithms, and the results are compared in Table 6.

Table 6 Comparisons of stability of different algorithms

Table 6 shows that when the medical image size is 1 MB, the differences in MSE and PSNR among the three algorithms are small. As the image size increases, the MSE and PSNR values obtained by the proposed algorithm show only gradual upward and downward trends, whereas the results of Ref. [3] and Ref. [5] show upward and downward trends with much more significant volatility. The experimental results therefore show that the stability of the proposed algorithm is better than that of the traditional algorithms.

3.4 Comparative analysis of image restoration integrity

Taking the integrity of image restoration as an index, the proposed algorithm in this paper is compared with Ref. [3] and Ref. [5]. The results are shown in Fig. 2:

Fig. 2

Integrity comparison of image restoration

Figure 2 shows that the proposed algorithm achieves the highest restoration integrity for medical images under different pixel conditions; in particular, when the pixel dimension is 300, its restoration integrity reaches a maximum of 90%. Liang’s algorithm has the lowest restoration integrity of the three algorithms, with a maximum of 41%, and the restoration integrity of Wu’s algorithm reaches at most 50%. The high restoration integrity of the proposed algorithm indicates that the regularized super-resolution restoration algorithm in this paper is highly feasible.

3.5 Comparative analysis of image restoration efficiency

Taking the image restoration efficiency as an index, the algorithm in this paper is compared with Ref. [3] and Ref. [5]. The results are shown in Fig. 3.

Fig. 3

Comparative analysis of image restoration efficiency

According to Fig. 3, the overall image restoration efficiency of the proposed algorithm is high, with an average recovery efficiency of up to 80%. The restoration efficiency of the algorithms in Ref. [3] and Ref. [5] first shows an upward trend and then begins to decrease after the pixel dimension reaches 300. The average recovery efficiency of Liang’s algorithm is 40% and that of Wu’s algorithm is 30%, which shows the superiority of the proposed algorithm.

3.6 Image recovery time-consuming comparison

Taking the image recovery time-consuming as an index, the algorithm in this paper is compared with Ref. [3] and Ref. [5]. The results are shown in Table 7:

Table 7 Image recovery time-consuming comparison

Table 7 shows that the proposed algorithm takes about 7.9 s to recover a single medical image, whereas the average recovery time of Liang’s algorithm is about 31 s and that of Wu’s algorithm is also about 31 s. The proposed algorithm therefore has obvious advantages in terms of time consumption and can quickly complete the regularized super-resolution restoration of single medical images.

4 Discussion

This paper focuses on a restoration algorithm for single medical images. The following conclusions are obtained from the experiments. The MSE and PSNR of the proposed method are significantly improved compared with Liang’s algorithm. Although the MSE and PSNR are only slightly improved compared with Wu’s algorithm, the restored image shows an obvious reduction of the burr phenomenon at local edges. The ISNR is enhanced when the primitive images are fused by the proposed algorithm. As the size of the medical images increases, the MSE and PSNR values of the proposed algorithm vary the most gradually, which means that the proposed algorithm has the strongest stability.

The reason that the proposed algorithm has achieved such a good restoration effect is mainly because it improves the basic idea of image fusion. In view of the fact that different restoration algorithms have different advantages for different features of an image (texture information, boundary information, smooth regions, sharp regions, etc.), multiple images are merged into one image. It allows one image to be restored more completely and accurately by using complementarity of fuzzy similarity fusion.

Research on medical image restoration algorithms is still in its infancy. It mainly focuses on the case where the degradation model is linear and noise-free. Systematic analysis methods and filter design methods have not yet been established, and the basic theory and applied research are far from mature. Future research directions can be roughly divided into the following aspects.

(1) Research on the improvement of algorithms

As discussed above, the existing algorithms still have many shortcomings, and further improvement is necessary. For example, how to choose the initial conditions to ensure convergence of the algorithm, how to choose a termination condition that guarantees restoration quality, and how to choose the noise parameters of the filter to reduce the influence of noise are all topics for future work.

(2) Research on denoising processing algorithms

The presence of additive noise makes image restoration a challenge. Since the general assumption is that only the statistical properties of the noise are known, it is impossible to remove the noise from a degraded image completely, and the restoration effect suffers accordingly. Research on image restoration algorithms combined with noise reduction therefore has very practical significance. In most algorithms, the noise is modeled as Gaussian noise, which has great limitations in practical applications. Denoising algorithms based on higher-order statistical characteristics of the noise for non-Gaussian cases are also an important research direction.

(3) Real-time processing algorithms

The complexity of an algorithm is an important factor restricting its application. In related work, a regularized discrete periodic Radon transform has been proposed to transform two-dimensional convolution into one-dimensional processing and improve algorithm speed, and real-time processing algorithms based on neural networks have also been studied. Real-time performance is a prerequisite for practical application.

(4) Applied research

The application of algorithms is a driving force behind research. Although image restoration algorithms have found wide application in astronomy, medical science, and remote sensing, much work remains to be done to apply them to real-time industrial image inspection, machine vision, image transmission in network environments, and criminal investigation.

5 Conclusion

In most electronic imaging systems, people are eager to obtain high-resolution images because such images provide more details, especially in the medical research field. However, during the imaging process, many factors degrade image quality, such as optical system aberrations, air disturbance, motion, defocusing, discrete sampling, and system noise, all of which cause image blur and distortion. Super-resolution image processing has emerged as a practical solution to this degradation problem; it provides many different methods to address the above problems and has very important application value and broad application prospects in many fields such as medicine.

In this paper, a regularized super-resolution restoration algorithm for single medical images is proposed based on fuzzy similarity fusion. The fuzzy similarity fusion algorithm is used to obtain the fused image, and regularized super-resolution restoration is then performed on the fused image, which greatly improves the restoration effect.

In future work, we will focus on improving the algorithm, including strategies for choosing initial parameters that ensure convergence and for choosing algorithm termination conditions.