BY 4.0 license Open Access Published by De Gruyter Open Access July 9, 2020

Subpixel matching method for remote sensing image of ground features based on geographic information

  • Chen Chen
From the journal Open Physics

Abstract

In order to solve the problems of large subpixel matching error and poor filtering performance in traditional methods, a subpixel matching method based on geographic information is proposed. First, the quality of the remote sensing image is enhanced by an image enhancement method based on light energy allocation. Then, boundary geographic information is extracted from the enhanced remote sensing image of ground features by an improved thresholding segmentation algorithm based on the histogram exponential convex hull. Based on the extracted geographic information, the boundary images are matched with the pair-function measurement method to obtain the center coordinates of the image blocks in the measured map and the reference submap that achieve the best match. According to the corresponding geometric transformation relationship between the measured image and the reference image, subpixel matching of the measured remote sensing image against the reference image is then carried out under the least-square-error criterion. The experimental results show that the enhancement and noise-filtering performance of the proposed method are better than those of methods of the same type, the matching residual is very small, the matching accuracy is high, and the application value is significant.

1 Introduction

As a core topic of computer vision and image processing, image matching research has important theoretical and practical significance [1]. Owing to various unpredictable factors in the imaging process, the image matching problem has not been fully solved, but great progress has been made, many new methods have been proposed, and research continues. Image matching has been a hot and difficult research topic for decades. Its goal is to find one or more transformations that bring two or more images acquired at different times, by different sensors, or from different viewpoints into spatial alignment, and it has been applied in many fields. For example, the terminal guidance of a cruise missile uses scene matching to determine the precise position of the missile. When images from multiple sensors are fused, the first step is to match them. To obtain the depth map of a scene, it is necessary to find corresponding pairs of the same scene point in two images, which is also part of image matching research. Image matching technology is also widely used in target recognition and tracking. Because of changes in shooting time, shooting angle, and natural environment, the use of various sensors, and the defects of the sensors themselves, the captured image is not only affected by noise but also suffers severe gray-scale distortion and geometric distortion [2]. Under these conditions, a matching algorithm with high accuracy, high speed, strong robustness, and anti-interference capability that can run in parallel becomes the goal that researchers pursue.

In the past few decades, various image matching algorithms have emerged, and by combining many mathematical theories and methods, researchers have constantly proposed new matching approaches. However, research on subpixel matching of remote sensing images of ground features remains scarce. In reference [3], an image matching algorithm based on coupling geometric error constraint rules with a point-line projection model is proposed. A fast detector locates the feature points of the image accurately and quickly, and a point-line projection model is constructed from pixel gradient values to obtain the main direction of each feature point. Polar coordinates are constructed with the feature point as the origin, the neighborhood of the feature point is segmented, and feature vectors are obtained by combining the distribution histograms of the segmented regions to form feature descriptors. On this basis, the Euclidean distance is used to compute the ratio of the nearest neighbor to the second-nearest neighbor of the feature points and complete the pixel matching. The experimental results show that images matched by this algorithm have a good filtering effect, but the pixel matching error is large [3]. In reference [4], a local multifeature hashing method is proposed. According to the flight-course overlap rate, a prediction area is constructed, feature points are detected, and a multifeature description is carried out in the prediction area. With tens of thousands of existing aerial images as training samples, the high-dimensional feature description vector is mapped to a compact binary hash code by a hash function, and fast matching of feature points is realized via the Hamming distance. The experimental results show that the matching error of this algorithm is small, but the filtering effect is poor [4].

The purpose of image enhancement is to make the image clearer, facilitate human interpretation and understanding, or provide better input for other automatic image processing techniques. In remote sensing applications, a correctly enhanced high-resolution image can promote the application of remote sensing [5]. Therefore, this paper first uses an image enhancement method based on light energy allocation to improve the quality of the remote sensing image of ground features, and then realizes subpixel matching of the remote sensing image through the geographic-information-based subpixel matching method [6].

2 Materials and methods

2.1 Image enhancement method based on light energy allocation

Figure 1 is a flowchart of an image enhancement method based on light energy allocation.

Figure 1: Flow chart of the image enhancement method based on light energy allocation.

As shown in Figure 1, a Fourier transform is first applied to the original remote sensing image to obtain its spectrum image; the low-frequency components are then suppressed to obtain the low-frequency-suppressed spectrum, and an inverse Fourier transform yields the low-frequency-suppressed image. Finally, Gamma correction is applied to the low-frequency-suppressed image so that its mean matches the mean of the original input image, producing the final enhanced result. The purpose of low-frequency suppression is to reduce the energy of the low-frequency information, and the purpose of the mean adjustment is to reallocate the reduced low-frequency energy to the high-frequency information while keeping the total energy of the image unchanged.

2.1.1 Low-frequency suppression

Because most of the energy of a remote sensing image is concentrated in the low-frequency part (the center of the spectrum image), the amplitude at the low-frequency position is larger than at other positions in the spectrum image. Let f be the original remote sensing image, F the spectrum image obtained by the Fourier transform, and |F| the amplitude of F. Assuming that the point (x, y) is the coordinate of the maximum value in |F|, that is, the coordinate of the low-frequency position, the suppression function is taken as follows:

(1) $F' = \begin{cases} kF(x, y), & \text{at the low-frequency position } (x, y) \\ F, & \text{otherwise} \end{cases}$

where 0 < k < 1 is the suppression constant. A smaller k indicates a greater degree of low-frequency suppression and vice versa. If k is too large (close to 1), the enhancement effect is not obvious; if k is too small (close to 0), the enhancement result will be distorted. The value of k can be determined from the modulation transfer function (MTF) value at the Nyquist frequency of the imaging system. When the MTF value at the Nyquist frequency is small, the loss of high-frequency energy after image degradation is severe, so k should take a smaller value, and vice versa [7].
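As an illustrative sketch (the function name and the use of NumPy are ours, not part of the original method), the suppression rule of Eq. (1) can be written as follows, assuming a grayscale image stored as a float array:

```python
import numpy as np

def suppress_low_frequency(f, k=0.5):
    """Sketch of Eq. (1): scale the peak spectral amplitude (the low-frequency
    position) by the suppression constant k, 0 < k < 1, then transform back."""
    F = np.fft.fftshift(np.fft.fft2(f))                      # spectrum, DC centered
    x, y = np.unravel_index(np.argmax(np.abs(F)), F.shape)   # coordinate of max |F|
    F[x, y] *= k                                             # suppress the low frequency
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))        # low-frequency-suppressed image g
```

For a constant image, all spectral energy sits at the low-frequency position, so the output is simply scaled by k; for real images, only the dominant low-frequency term is attenuated while the high-frequency content is preserved.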

2.1.2 Gamma correction

The spectrum image F′ after low-frequency suppression is transformed by the inverse Fourier transform to obtain the low-frequency-suppressed image g, and the final enhanced image r is then obtained by Gamma correction of g as follows:

(2) $r = 255\,(g')^{\gamma}$

where g′ is the normalized low-frequency-suppressed image and γ is the Gamma correction factor. To keep the energy of the enhanced image consistent with that of the original image, take

(3) $\gamma = \dfrac{255\, m_g}{m_f} + \tau$

In the formula, m_g and m_f are the mean gray values of g′ and f, respectively, and τ is a constant used to adjust the overall brightness of the image. If the overall brightness of the original remote sensing image is dimmed owing to a decrease in the remote sensor gain, τ is taken as a negative constant to bring γ closer to 0 and thereby raise the overall brightness of the enhanced remote sensing image r. Assume τ = 0. When the mean gray value of the normalized low-frequency-suppressed image is greater than that of the original remote sensing image, γ > 1; after correction with this Gamma factor, the overall gray level of g′ is reduced toward the mean gray value of the original image, keeping the energy before and after enhancement as close as possible. When the mean gray value of the normalized low-frequency-suppressed image is smaller than that of the original remote sensing image, γ < 1; after Gamma correction, the overall gray level of g′ is raised toward the gray level of the original image [8].
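A minimal sketch of the Gamma correction step of Eqs. (2)-(3); the min-max normalization used to obtain g′ and the function name are our assumptions:

```python
import numpy as np

def gamma_correct(g, f, tau=0.0):
    """Sketch of Eqs. (2)-(3): Gamma-correct the low-frequency-suppressed
    image g so its overall brightness approaches that of the original f."""
    g_norm = (g - g.min()) / (g.max() - g.min() + 1e-12)  # normalized image g'
    gamma = 255.0 * g_norm.mean() / f.mean() + tau        # Eq. (3)
    return 255.0 * g_norm ** gamma                        # Eq. (2), enhanced image r
```

Since g′ lies in [0, 1], an exponent γ > 1 darkens the image and γ < 1 brightens it, which matches the brightness-balancing behavior described above.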

2.2 Subpixel matching of remote sensing image

2.2.1 Boundary geographic information extraction in multisensor image

In this paper, an improved thresholding algorithm based on the histogram exponential convex hull is used to automatically extract boundary geographic information from the enhanced remote sensing image r. The boundary geographic information extracted here is mainly the boundary information of coastlines, lakes, and other features with obvious geographical signatures in the remote sensing image [9].

Let h(k), k ∈ [0, n], be the histogram function of the enhanced remote sensing image r, where n is the maximum gray level, and let h̄(k) be the polygon convex hull of h(k) (that is, the minimum convex polygon containing h(k)). From the polygon convex hull h̄(k), the histogram exponential convex hull h̄_e(k) can be defined. It is obtained as follows: first compute the polygon convex hull of ln h(k), denoted ln h̄(k), and then obtain h̄_e(k) = exp[ln h̄(k)]. Assuming that the minimum and maximum gray levels with h(k) ≠ 0 are L_min and L_max, respectively, then for h(k), k ∈ [L_i, L_{i+1}] with L_min ≤ L_i ≤ L_{i+1} ≤ L_max, the exponential convex hull h̄_e(k; L_i, L_{i+1}) within the gray level range [L_i, L_{i+1}] can be calculated as follows:

(4) $\bar{h}_e(k; L_i, L_{i+1}) = \begin{cases} \max\Big[h(k),\ \max\limits_{p,q}\Big\{h(p)\exp\Big\{\dfrac{k-p}{q-p}\,[\ln h(q) - \ln h(p)]\Big\}\Big\}\Big], & L_i \le p < k < q \le L_{i+1},\ h(p) \ne 0,\ h(q) \ne 0 \\ h(k), & \text{if } k = L_i, L_{i+1} \text{ or } h(k) = 0 \end{cases}$

Let the residual of the exponential convex hull in the gray level range [L_i, L_{i+1}] be

(5) $r_e(L_i, L_{i+1}) = \sum_{k=L_i}^{L_{i+1}} \left[\bar{h}_e(k; L_i, L_{i+1}) - h(k)\right]$

It describes how closely the exponential convex hull h̄_e(k; L_i, L_{i+1}) approximates the histogram h(k) of the enhanced remote sensing image r within the gray level range [L_i, L_{i+1}], and it also reflects the convexity and concavity of h(k) in that range. Compared with the polygon convex hull h̄(k), the exponential convex hull h̄_e(k; L_i, L_{i+1}) approximates the original histogram h(k) more closely, which benefits threshold selection in thresholding segmentation. Therefore, this paper selects the threshold with a thresholding segmentation algorithm based on the histogram exponential convex hull.

In the thresholding segmentation algorithm based on the histogram exponential convex hull, suppose the gray level range [L_min, L_max] is divided into N classes with selected thresholds t_1, t_2, ..., t_{N−1}; the overall residual of the exponential convex hull is then defined as follows:

(6) $R_N(L_{\min}, t_1, t_2, \ldots, t_{N-1}, L_{\max}) = r_e(L_{\min}, t_1 - 1) + r_e(t_1, t_2 - 1) + \cdots + r_e(t_{N-1}, L_{\max})$

Then, the selected thresholds t_1, t_2, ..., t_{N−1} shall be such that

(7) $R_N(L_{\min}, t_1, t_2, \ldots, t_{N-1}, L_{\max}) = \min$

In the boundary geographic information extraction of this paper, the enhanced remote sensing image r needs to be divided into only two classes (i.e., N = 2), so it is sufficient to determine the single threshold t_1 such that:

(8) $R_2(L_{\min}, t_1, L_{\max}) = r_e(L_{\min}, t_1 - 1) + r_e(t_1, L_{\max}) = \min$

Then, the image can be divided into two classes by the threshold t_1, on which basis the boundary geographic information can be further extracted.
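The hull construction and threshold search above can be sketched as follows (our own illustrative implementation: the upper convex hull of the points (k, ln h(k)) over nonzero bins plays the role of Eq. (4), r_e implements Eq. (5), and best_threshold minimizes the two-class residual of Eq. (8)):

```python
import numpy as np

def exp_hull(h, lo, hi):
    """Exponential convex hull of h on [lo, hi] (sketch of Eq. (4)): the upper
    convex hull of the points (k, ln h(k)) over nonzero bins, exponentiated back."""
    ks = [k for k in range(lo, hi + 1) if h[k] > 0]
    if len(ks) < 2:
        return {k: float(h[k]) for k in ks}
    hull = []                                   # upper hull via a monotone chain
    for k in ks:
        p = (k, np.log(h[k]))
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            # pop a if it lies on or below the chord o->p (not on the upper hull)
            if (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0]) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    xs, ys = zip(*hull)
    return {k: float(np.exp(np.interp(k, xs, ys))) for k in ks}

def r_e(h, lo, hi):
    """Residual of Eq. (5): total gap between the exponential hull and h."""
    he = exp_hull(h, lo, hi)
    return sum(he[k] - h[k] for k in he)

def best_threshold(h):
    """Two-class threshold t1 minimizing R_2 of Eq. (8) by exhaustive search."""
    nz = np.flatnonzero(h)
    lmin, lmax = int(nz[0]), int(nz[-1])
    return min(range(lmin + 1, lmax + 1),
               key=lambda t: r_e(h, lmin, t - 1) + r_e(h, t, lmax))
```

For a bimodal histogram, the residual is smallest when the threshold falls in the valley between the two peaks, since each sub-range is then nearly log-convex on its own.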

When thresholding actual Landsat MSS and TM images, we find that the above method of selecting the threshold from the histogram exponential convex hull is often very sensitive to noise; sometimes it leads to an improper threshold, such that almost all pixels of the histogram fall on one side of the threshold while almost none fall on the other. Therefore, the histogram must be preprocessed before threshold segmentation [2].

To this end, the concept of population is introduced, and the influence of noise is eliminated by a population test. The population P_i at gray value i is defined as

(9) $P_i = \sum_{j=L_{\min}}^{i-1} h(j) \cdot \sum_{j=i}^{L_{\max}} h(j), \quad L_{\min} \le i \le L_{\max}$

The population P_i reflects the numbers of pixels on the two sides of a candidate threshold in the remote sensing image r. Before computing the histogram exponential convex hull, the gray levels with a very small population are removed, and the histogram is then rebalanced accordingly. The advantage of this step is that it eliminates the influence of noise in the high-brightness (large gray value) and low-brightness (small gray value) areas of the image, so that the selected threshold does not fall in a spurious region created by noise.
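A small sketch of the population test of Eq. (9); the absolute cutoff min_pop and the interpretation of "removal" as zeroing the affected bins are our assumptions, not values from the paper:

```python
import numpy as np

def prune_small_population(h, min_pop):
    """Sketch of the population test (Eq. (9)): P_i is the product of the pixel
    counts strictly below and at-or-above gray level i. Levels with a very small
    population are zeroed so that noise at the extreme gray values cannot
    attract the segmentation threshold. The cutoff min_pop is illustrative."""
    c = np.cumsum(h)
    left = np.concatenate(([0], c[:-1]))   # sum of h(j) for j < i
    right = c[-1] - left                   # sum of h(j) for j >= i
    pop = left.astype(float) * right       # population P_i of Eq. (9)
    h2 = h.copy()
    h2[pop < min_pop] = 0
    return h2
```

Because P_i is a product, it is small exactly when nearly all pixels lie on one side of level i, which is the degenerate-threshold situation described above.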

2.2.2 Boundary image matching

Boundary image matching is a kind of geographical boundary matching between remote sensing images. The basic principle of remote sensing image matching is to find the location with the largest similarity measure value (or the smallest difference) between the two remote sensing images (measured map and benchmark map) according to a similarity measure of the geographic information in the two images; common similarity measures include the minimum distance measure, the maximum correlation measure, and the invariant moment measure. Because the boundary image obtained after threshold segmentation is a binary image, this paper adopts a simple and intuitive similarity measure, the pair-function measurement method, which has been applied to automatic boundary image matching with good results [10]. Compared with several common similarity measures, the pair-function measurement method is more suitable for binary boundary image matching and has the advantages of simplicity, speed, and strong denoising performance. The method is introduced as follows:

Suppose the measured remote sensing image has size N_1 × N_2, the reference image has size M_1 × M_2 (usually M_1 > N_1, M_2 > N_2), and the gray values at coordinate (x, y) of the measured image and the reference image are g(x, y) and f(x, y), respectively. Then the measured image can be expressed as an (N_1 N_2) × 1-dimensional image vector, recorded as G, i.e., G = [g(0, 0), g(0, 1), ..., g(1, 0), g(1, 1), ..., g(N_1 − 1, N_2 − 1)]^T. Similarly, with starting position (u, v), the reference subimage of the same size as the measured image is also expressed as an (N_1 N_2) × 1-dimensional vector F_{u,v}:

(10) $F_{u,v} = [f(u, v), f(u, v+1), \ldots, f(u+1, v), f(u+1, v+1), \ldots, f(u+N_1-1, v+N_2-1)]^{\mathrm{T}}, \quad 0 \le u \le M_1 - N_1,\ 0 \le v \le M_2 - N_2$

If the gray level range of the pixels in the measured remote sensing image and the reference subimage is [0, n], then a pair function N_ws(u, v) can be established between any gray level w in the measured image and any gray level s in the reference subimage with starting position (u, v). The pair function N_ws(u, v) is defined as the number of pixel pairs formed by a pixel of gray level w in the measured image and the pixel of gray level s at the corresponding point of the reference subimage with starting position (u, v), that is,

(11) $N_{ws}(u, v) = \mathrm{NUM}\left(\left\{(x, y) \mid g(x, y) = w \ \wedge\ f(x+u, y+v) = s,\ x \in [0, N_1 - 1],\ y \in [0, N_2 - 1]\right\}\right)$

where NUM(·) counts the number of elements in the set in brackets, and w, s ∈ [0, n].

If the measured image and the reference image are binary boundary images, then the pair function can be defined by analogy with the gray image case and is recorded as N_ws(u, v), w, s ∈ [0, 1], where w, s = 1 represents a boundary pixel.

Several kinds of measurement algorithms can be defined from the pair function. In this paper, the product measure of the pair function is adopted. For a binary image, it can be expressed as follows:

(12) $\begin{cases} R(u, v) = \dfrac{N_{00}(u, v)}{N_{00}(u, v) + N_{01}(u, v)} \times \dfrac{N_{11}(u, v)}{N_{10}(u, v) + N_{11}(u, v)} \\ \max\limits_{u,v} R(u, v), \quad 0 \le u \le M_1 - N_1,\ 0 \le v \le M_2 - N_2 \end{cases}$

where R(u, v) is the product of the proportions of correctly matched background-pixel pairs and correctly matched boundary-pixel pairs. The larger R(u, v) is, the better the similarity.
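For binary images, the pair function and the product measure of Eqs. (11)-(12) can be sketched as follows (an exhaustive-search illustration with names of our choosing; we assume the measured image g contains both boundary and background pixels, so the denominators are nonzero):

```python
import numpy as np

def pair_counts(g, f, u, v):
    """Pair function N_ws(u, v) of Eq. (11) for binary images: how often the
    measured image g holds value w where the reference subimage at (u, v) holds s."""
    n1, n2 = g.shape
    sub = f[u:u + n1, v:v + n2]
    return {(w, s): int(np.count_nonzero((g == w) & (sub == s)))
            for w in (0, 1) for s in (0, 1)}

def product_measure(g, f, u, v):
    """Product measure R(u, v) of Eq. (12); larger means better similarity."""
    n = pair_counts(g, f, u, v)
    return (n[0, 0] / (n[0, 0] + n[0, 1])) * (n[1, 1] / (n[1, 0] + n[1, 1]))

def match(g, f):
    """Best offset (u, v): exhaustive maximization of R(u, v)."""
    m1, m2 = f.shape
    n1, n2 = g.shape
    return max(((u, v) for u in range(m1 - n1 + 1) for v in range(m2 - n2 + 1)),
               key=lambda uv: product_measure(g, f, *uv))
```

R(u, v) reaches its maximum of 1 exactly when the reference subimage coincides with the measured image, i.e., when N_01 = N_10 = 0.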

When the product measure of the pair function is used for binary boundary image matching, if two or more test points in the reference image share the same maximum correlation value R(u, v), this paper uses the boundary barycenters to find the best matching position (u′, v′): the boundary barycenter positions (x̄_g, ȳ_g) of the measured image and (x̄_f, ȳ_f) of the reference subimage at test point (u, v) are computed as follows:

(13) $\begin{cases} \bar{x}_g = \dfrac{\sum_{i=0}^{N_1-1}\sum_{j=0}^{N_2-1} i\, g(i, j)}{\sum_{i=0}^{N_1-1}\sum_{j=0}^{N_2-1} g(i, j)} \\ \bar{y}_g = \dfrac{\sum_{i=0}^{N_1-1}\sum_{j=0}^{N_2-1} j\, g(i, j)}{\sum_{i=0}^{N_1-1}\sum_{j=0}^{N_2-1} g(i, j)} \end{cases}$

(14) $\begin{cases} \bar{x}_f = \dfrac{\sum_{i=0}^{N_1-1}\sum_{j=0}^{N_2-1} i\, f(i+u, j+v)}{\sum_{i=0}^{N_1-1}\sum_{j=0}^{N_2-1} f(i+u, j+v)} \\ \bar{y}_f = \dfrac{\sum_{i=0}^{N_1-1}\sum_{j=0}^{N_2-1} j\, f(i+u, j+v)}{\sum_{i=0}^{N_1-1}\sum_{j=0}^{N_2-1} f(i+u, j+v)} \end{cases}$

The best matching point (u′, v′) is then obtained by minimizing the barycenter distance D(u, v):

(15) $\begin{cases} D(u, v) = \sqrt{(\bar{x}_g - \bar{x}_f)^2 + (\bar{y}_g - \bar{y}_f)^2} \\ \min\limits_{u,v} D(u, v), \quad 0 \le u \le M_1 - N_1,\ 0 \le v \le M_2 - N_2 \end{cases}$
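The barycenter tie-break of Eqs. (13)-(15) can be sketched as follows (a minimal illustration; the candidate list is assumed to contain the tied offsets found by the product measure, and the function names are ours):

```python
import numpy as np

def barycenter(img):
    """Boundary barycenter (x̄, ȳ) of a binary image, Eqs. (13)-(14)."""
    xs, ys = np.nonzero(img)
    return xs.mean(), ys.mean()

def break_tie(g, f, candidates):
    """Among tied offsets, pick the one minimizing the barycenter distance
    D(u, v) of Eq. (15)."""
    n1, n2 = g.shape
    xg, yg = barycenter(g)

    def dist(uv):
        u, v = uv
        xf, yf = barycenter(f[u:u + n1, v:v + n2])
        return np.hypot(xg - xf, yg - yf)

    return min(candidates, key=dist)
```

Note that Eq. (14) uses the local coordinates (i, j) of the reference subimage, so the barycenter of the cropped window is computed in the window's own frame, directly comparable with that of the measured image.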

2.2.3 Image matching and positioning based on least squares

After the boundary image matching based on geographic information, we obtain a series of center coordinates (x_i, y_i) and (X_i, Z_i) of the image blocks in the measured image and the reference subimage that achieve the best matching under the pair-function product measure. These serve as control block pairs between the measured image and the reference image. According to the corresponding geometric transformation relationship between the measured image and the reference image established above, matching and positioning of the measured remote sensing image against the reference image can be carried out under the least-square-error criterion, also known as least-squares matching and positioning [11,12]. Assume that the corresponding geometric transformation between the measured image and the reference image is expressed as a bivariate quadratic polynomial, namely,

(16) $\begin{cases} X = a_{00} + a_{10} x + a_{01} y + a_{11} x y + a_{20} x^2 + a_{02} y^2 \\ Z = b_{00} + b_{10} x + b_{01} y + b_{11} x y + b_{20} x^2 + b_{02} y^2 \end{cases}$

where a_{ij} and b_{ij} are the transformation coefficients. The problem of matching and positioning between the measured image and the reference image is thus transformed into finding the linear least-squares solutions A and B of the equations WA = U and WB = V, where

(17) $A = [a_{00}\ a_{10}\ a_{01}\ a_{11}\ a_{20}\ a_{02}]^{\mathrm{T}}$, $B = [b_{00}\ b_{10}\ b_{01}\ b_{11}\ b_{20}\ b_{02}]^{\mathrm{T}}$, $U = [X_1\ X_2\ \cdots\ X_M]^{\mathrm{T}}$, $V = [Z_1\ Z_2\ \cdots\ Z_M]^{\mathrm{T}}$, $W = \begin{bmatrix} 1 & x_1 & y_1 & x_1 y_1 & x_1^2 & y_1^2 \\ 1 & x_2 & y_2 & x_2 y_2 & x_2^2 & y_2^2 \\ \vdots & & & & & \vdots \\ 1 & x_M & y_M & x_M y_M & x_M^2 & y_M^2 \end{bmatrix}$

Then, the least-squares solutions of A and B are

(18) $\begin{cases} A = (W^{\mathrm{T}} W)^{-1} W^{\mathrm{T}} U \\ B = (W^{\mathrm{T}} W)^{-1} W^{\mathrm{T}} V \end{cases}$

where U and V collect the reference-image coordinates of the control block pairs. It should be noted that in the least-squares image matching and positioning of this paper, the corresponding geometric transformation between the measured image and the reference image is assumed to be approximated by bivariate quadratic and bivariate first-order polynomials [13,14,15,16]. The basic idea is to use these correction polynomials to directly model the geometric distortion of the measured image: the geometric distortion of the measured remote sensing image is regarded as the combined effect of translation, scaling, rotation, affine transformation, skew, bending, and the basic deformations of a quadratic polynomial [17,18,19,20,21,22]. In this way, we avoid modeling the complex imaging geometry of the measured remote sensing image when establishing the geometric transformation between the measured image and the reference image, which reduces mismatches and improves the subpixel matching accuracy of the image [23,24,25,26].
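The least-squares fit of Eqs. (16)-(18) can be sketched with NumPy (the function names are ours, and we use np.linalg.lstsq rather than forming (WᵀW)⁻¹ explicitly, which is numerically safer but mathematically equivalent for full-rank W):

```python
import numpy as np

def design_matrix(x, y):
    """Rows [1, x, y, xy, x^2, y^2] of the matrix W in Eq. (17)."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_quadratic_mapping(pts_meas, pts_ref):
    """Least-squares coefficients A, B of Eqs. (16)-(18) from matched
    control-block centers (x_i, y_i) -> (X_i, Z_i)."""
    x, y = np.asarray(pts_meas, dtype=float).T
    W = design_matrix(x, y)
    U, V = np.asarray(pts_ref, dtype=float).T
    A = np.linalg.lstsq(W, U, rcond=None)[0]   # A = (W^T W)^{-1} W^T U
    B = np.linalg.lstsq(W, V, rcond=None)[0]   # B = (W^T W)^{-1} W^T V
    return A, B

def apply_mapping(A, B, x, y):
    """Map a measured-image point (x, y) to reference coordinates (X, Z)."""
    w = np.array([1.0, x, y, x * y, x**2, y**2])
    return float(w @ A), float(w @ B)
```

At least six well-distributed control block pairs (not all lying on one conic) are needed for W to have full column rank; with more pairs, the residual of the fit indicates the quality of the matching.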

3 Results

3.1 Enhancement effect

Taking the satellite remote sensing image of the Wenchuan earthquake as an example, Figure 2 shows the image before processing by the proposed method, and Figure 3 shows it after processing:

Figure 2: Before enhancement.

Figure 3: After enhancement.

As the comparison between Figures 2 and 3 shows, the resolution of the satellite remote sensing image of the Wenchuan earthquake enhanced by the method in this paper is improved, and the geographic information in the image is more prominent.

3.2 Noise filtering effect

Noise points were added to Figure 2 to produce Figure 4; Figure 4 is then filtered by the method of this paper and the methods of references [3] and [4]. The results are shown in Figures 5–7:

Figure 4: Remote sensing image of the Wenchuan earthquake with noise.

Figure 5: After processing by the proposed method.

Figure 6: After processing by the method of reference [3].

Figure 7: After processing by the method of reference [4].

As Figures 4–7 show, the method of this paper has excellent noise resistance and can effectively filter the noise in Figure 4. The result of the method in reference [3] shows that the contrast of the processed image is damaged, so it is not as clear as that of the proposed method; the result of the method in reference [4] shows that the processed image is distorted. Hence, the method of this paper has the best filtering performance.

3.3 Matching effect

Taking Figures 8 and 9 as the experimental objects, image subpixel matching is performed, with the results shown in Figure 10:

Figure 8: Subject 1.

Figure 9: Subject 2.

Figure 10: Matching effect of the proposed method.

As the figure shows, the proposed method matches the subpixels of the images in Figures 8 and 9 well and realizes the matching of the remote sensing image of the ground features. Then, a genetic algorithm is used for comparison matching with the methods of references [3] and [4], and the matching residuals of the three methods are tested repeatedly. The results are shown in Table 1:

Table 1

Comparison results of matching residuals of three methods

Test times Article method Reference [3] method Reference [4] method
1 0.0012 0.0254 0.0987
2 0.0012 0.0358 0.0899
3 0.0013 0.0354 0.0101
4 0.0012 0.0365 0.0987
5 0.0012 0.0355 0.0684

In Table 1, the maximum matching residual of the method in this paper is 0.0013, while the maximum matching residuals of the methods in references [3] and [4] are 0.0365 and 0.0987, respectively; thus, the matching error of the proposed method is the smallest and its matching accuracy is the highest.

4 Discussion

After decades of development, image matching algorithms have made great progress, and various algorithms have been proposed one after another. However, owing to the complex and changeable shooting environment, the existing algorithms each have shortcomings in some respects, and no single algorithm can solve all image matching problems. This paper believes that the following aspects are worth further study:

  1. All image matching methods have their advantages and disadvantages. If their advantages can be fully exploited, better matching results will be obtained; a combination of multiple matching methods outperforms any single method.

  2. At present, most algorithms are based on image features and assume a rigid transformation between images; however, global features of an object are generally difficult to obtain, and global features extracted under occlusion are unreliable. Local features handle this problem better, so it is necessary to study local image features and matching algorithms for the case of nonlinear transformations between images.

  3. Researchers have studied gray-image matching algorithms extensively and deeply, but color images much less. Image matching is widely used in image retrieval, three-dimensional reconstruction, target recognition and tracking, face recognition, and other fields. A survey of the literature shows that most research on color image matching concerns color-feature-based image retrieval, while research on its other applications is scarce, so this is also a problem worthy of attention.

  4. Real-time performance is crucial in cruise-guidance applications, and most existing algorithms cannot meet this requirement. The author believes this is mainly because the algorithms are too complex and implemented serially; we should therefore improve the algorithms, parallelize their implementation, and combine parallel computers and neural network methods to further increase the speed of image matching.

  5. In the field of computer vision, traditional boundary detection and image matching algorithms adopt an independent bottom-up process that does not depend on high-level knowledge, so errors in low-level processing propagate to higher levels without a chance of correction. In addition, many feature-based image matching methods assume that the image segmentation problem has been solved or does not arise, but these assumptions are unreasonable. Model-based methods provide a new idea for boundary detection, image segmentation, and image matching, and existing results show their advantages, but the research is not yet deep enough: for example, deformable models are sensitive to noise, the selection of the initial contour and template is difficult, the optimization easily falls into local minima, and the computation is heavy. These problems need further study.

The starting point of the proposed method is to preprocess the image first and enhance its quality, thereby greatly reducing the difficulty of subsequent feature extraction, and then to perform matching at the subpixel level to improve the accuracy of image matching.

5 Conclusions

In recent decades, image matching technology has been a focus and difficulty of research. It is a very important technology in the fields of image processing and computer vision. Research on image matching centers on its algorithms, including gray-level-based and feature-based image matching algorithms. In view of the shortcomings of gray-level matching algorithms, such as sensitivity to gray changes and high computational complexity, this paper focuses on feature-based matching, proposes a subpixel matching method based on geographic information for remote sensing images of ground features, and verifies its effectiveness through experiments. From the experiments, the following conclusions are drawn:

  1. The resolution of the satellite remote sensing image of the Wenchuan earthquake enhanced by the proposed method is improved, and the enhancement effect is better.

  2. The method has excellent denoising performance, and there is no distortion in the processed remote sensing image.

  3. The maximum matching residual of the method is 0.0013, so it can be used for subpixel matching of remote sensing images.

References

[1] Ban Z, Liu J, Li C. Superpixel segmentation using Gaussian mixture model. IEEE Trans Image Process. 2018;27:4105–17. doi:10.1109/TIP.2018.2836306.

[2] Yoshiki T, Noriyuki K, Takaya Y. Evaluation of the performance of deformable image registration between planning CT and CBCT images for the pelvic region: comparison between hybrid and intensity-based DIR. J Radiat Res. 2017;58:1–5.

[3] Yu Z, He LJ, Wang ZF. Image matching algorithm based on point line projection model and geometric error restriction rule. J Electron Meas Instrum. 2018;32(4):87–94.

[4] Su TT, Guo ZY, Zhang YY. Matching method of large array CCD aerial images based on local hashing learning. J Appl Opt. 2019;40:2.

[5] Tran TH, Otepka J, Di W. Classification of image matching point clouds over an urban area. Int J Remote Sens. 2018;39:4145–69. doi:10.1080/01431161.2018.1452069.

[6] Igoe DP, Parisi AV, Amar A. Atmospheric total ozone column evaluation with a smartphone image sensor. Int J Remote Sens. 2018;39:2766–83. doi:10.1080/01431161.2018.1433895.

[7] Munyati C. The potential for integrating Sentinel 2 MSI with SPOT 5 HRG and Landsat 8 OLI imagery for monitoring semi-arid savannah woody cover. Int J Remote Sens. 2017;38:4888–913. doi:10.1080/01431161.2017.1331057.

[8] Sara MA, Hamed AA. Mapping of groundwater prospective zones integrating remote sensing, geographic information systems and geophysical techniques in El-Qaà Plain area, Egypt. Hydrogeology J. 2017;25:1–22.

[9] Lu XB. Simulation of error correction in stereo image registration. Computer Simul. 2017;12:353–5.

[10] Neiva MB, Guidotti P, Bruno OM. Enhancing LBP by preprocessing via anisotropic diffusion. Int J Mod Phys C. 2018;29:61–70.

[11] Pucuo CR, Hong JC. Research on remote sensing image detection based on deep convolution neural network and significant image. Autom & Instrum. 2018;230:56–59+63.

[12] Davies B, Romanowska I, Harris K, et al. Combining geographic information systems and agent-based models in archaeology: part 2 of 3. Adv Archaeol Pract. 2019;7:185–93. doi:10.1017/aap.2019.5.

[13] Degbelo A, Kuhn W. Spatial and temporal resolution of geographic information: an observation-based theory. Open Geospatial Data Softw Stand. 2018;3:12. doi:10.1186/s40965-018-0053-8.

[14] Muhammad A, Butt SAM, Hassan Raza SM. GeoWebEX: an open-source online system for synchronous collaboration on geographic information. Appl Geomat. 2018;10:123–45. doi:10.1007/s12518-018-0204-8.

[15] Kaur R, Jain AK, Singh H. Development of village information system for Faridkot district using remote sensing and geographic information system. Int J Inf Technol. 2018;4:1–6. doi:10.1007/s41870-018-0180-6.

[16] Akyildiz FT, Vajravelu K. Galerkin-Chebyshev pseudo spectral method and a split step new approach for a class of two dimensional semi-linear parabolic equations of second order. Appl Math Nonlinear Sci. 2018;3:255–64. doi:10.21042/AMNS.2018.1.00019.

[17] Devaki P, Sreenadh S, Vajravelu K, Prasad KV, Vaidya H. Wall properties and slip consequences on peristaltic transport of a Casson liquid in a flexible channel with heat transfer. Appl Math Nonlinear Sci. 2018;3:277–90. doi:10.21042/AMNS.2018.1.00021.

[18] Gao W, Wang W. A tight neighborhood union condition on fractional (G, F, N′, M)-critical deleted graphs. Colloq Math. 2017;149:291–8. doi:10.4064/cm6959-8-2016.

[19] Gao W, Wang W. New isolated toughness condition for fractional (G, F, N)-critical graph. Colloq Math. 2017;147:55–65. doi:10.4064/cm6713-8-2016.

[20] Lokesha V, Shruti R, Deepika T. Reckoning of the dissimilar topological indices of human liver. Appl Math Nonlinear Sci. 2018;3:265–76. doi:10.21042/AMNS.2018.1.00020.

[21] Vajravelu K, Li R, Dewasurendra M, Benarroch J, Ossi N, Zhang Y, et al. Effects of second-order slip and drag reduction in boundary layer flows. Appl Math Nonlinear Sci. 2018;3:291–302. doi:10.21042/AMNS.2018.1.00022.

[22] Guariglia E. Harmonic Sierpinski gasket and applications. Entropy. 2018;20(9):714. doi:10.3390/e20090714.

[23] Frongillo M, Gennarelli G, Riccio G. Diffraction by a structure composed of metallic and dielectric 90° blocks. IEEE Antennas Wirel Propag Lett. 2018;17(5):881–5. doi:10.1109/LAWP.2018.2820738.

[24] Guariglia E. Entropy and fractal antennas. Entropy. 2016;18(3):84. doi:10.3390/e18030084.

[25] Frongillo M, Gennarelli G, Riccio G. Plane wave diffraction by arbitrary-angled lossless wedges: high-frequency and time-domain solutions. IEEE Trans Antennas Propag. 2018;66:6646–53. doi:10.1109/TAP.2018.2876602.

[26] Guariglia E. Primality, fractality, and image analysis. Entropy. 2019. doi:10.3390/e21030304.

Received: 2019-12-17
Revised: 2020-02-28
Accepted: 2020-03-18
Published Online: 2020-07-09

© 2020 Chen Chen, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
