Article

Research on Remote Sensing Image Matching with Special Texture Background

Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin 150080, China
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(8), 1380; https://doi.org/10.3390/sym13081380
Submission received: 4 June 2021 / Revised: 19 July 2021 / Accepted: 26 July 2021 / Published: 29 July 2021
(This article belongs to the Special Issue Symmetry in Computer Vision and Its Applications)

Abstract

The purpose of image registration is to find the symmetry between the reference image and the image to be registered. To improve the registration of unmanned aerial vehicle (UAV) remote sensing imagery with a special texture background, this paper proposes an improved scale-invariant feature transform (SIFT) algorithm that combines image color and exposure information with an adaptive quantization strategy (AQCE-SIFT). By using the color and exposure information of the image, this method enhances the contrast between the textures of an image with a special texture background, which makes feature extraction easier. The algorithm descriptor was constructed through an adaptive quantization strategy, so that remote sensing images with large geometric distortion or affine changes achieve a higher correct matching rate during registration. The experimental results showed that the AQCE-SIFT algorithm proposed in this paper produced a more reasonable distribution of extracted feature points than the traditional SIFT algorithm. Under image geometric distortions of 0, 30, and 60 degrees, when the remote sensing image contained a texture scarcity region, the number of matching points increased by 21.3%, 45.5%, and 28.6%, respectively, and the correct matching rate increased by 0%, 6.0%, and 52.4%, respectively. When the remote sensing image contained a large number of similar repetitive texture regions, the number of matching points changed by 30.4%, 30.9%, and −11.1%, respectively, and the correct matching rate increased by 1.2%, 0.8%, and 20.8%, respectively. When processing remote sensing images with special texture backgrounds, the AQCE-SIFT algorithm also has advantages over common existing algorithms such as color SIFT (CSIFT), gradient location and orientation histogram (GLOH), and speeded-up robust features (SURF) in searching for the symmetry of features between images.

1. Introduction

With the rapid development of modern technology, new aerospace platforms have begun to emerge [1,2]. Remote sensing technology is not only limited to the surveying and mapping of topographic maps, but increasingly plays an extremely important role in fields such as environmental protection [3], disaster monitoring [4], cultivated land surveys [5], smart cities [6], and detection and tracking [7]. Compared with the traditional aerospace platform, unmanned aerial vehicles (UAVs) have the advantages of low cost, flexible take-off and landing, and low requirements for meteorological conditions. Therefore, UAV remote sensing technology has become another important source of geographic remote sensing information in addition to satellite remote sensing and ordinary aerial remote sensing, especially in some areas where field conditions are difficult and hard to reach by humans such as snowy areas, jungles, deserts, and other areas. Using UAV to capture remote sensing images can not only save a lot of material consumption and labor costs, but also greatly improve work efficiency.
Image registration is a key link in modern remote sensing technology, as its results directly affect subsequent scientific research and practical applications. At present, image registration algorithms can be roughly divided into three types: those based on pixel grayscale, those based on image features, and those based on the transform domain.
Because a UAV is affected by wind, terrain, and other factors during low-altitude remote sensing operation, deviations in course and attitude angle arise, so the acquired images often differ in translation, illumination, rotation, angle of view, zoom, and so on. Registration algorithms based on image features [8] can better overcome these differences and find the symmetry of features between the reference image and the image to be registered, improving the registration result, so they have been widely used in the field of remote sensing. When shooting farmland, woodland, desert, snowy areas, and similar terrain, the image texture appears either as a large number of similar repetitions or as scarce; that is, the images have a special texture background. Remote sensing images with a special texture background often exhibit low grayscale contrast, irregular local grayscale texture, and significant local geometric distortion. In response to these problems, Schmid, C. [9] used Harris-Laplace to match low-altitude images with small scale changes; Bay, H. [10] proposed a more elaborate image matching method combining regions and line segments; the KAZE algorithm proposed by Alcantarilla, P.F. [11] can enhance texture details in remote sensing images; Sun, Y. [12] proposed a line matching method constrained by the homography of line segments, which performs well in regions with gentle landforms; and Yuan, X. [13] proposed a graph-theoretic method to match remote sensing images. On the basis of comparing the performance of the most commonly used keypoint detectors and descriptors [14], Moghimi, A. [15] proposed a novel framework to radiometrically correct unregistered multisensor image pairs, which achieved good results. However, most current algorithms address only a single problem, and in the face of a variety of complex situations, their performance is often not ideal.
The scale-invariant feature transform (SIFT) algorithm is currently one of the most representative algorithms in the field of feature registration [16,17]. It is widely used in computer science, medical imaging, remote sensing, and other fields because of its good distinctiveness, high speed, and extensibility [18,19,20]. However, problems arise when the SIFT algorithm is applied to remote sensing image registration with special texture backgrounds. When the image has a texture scarcity region, feature points cannot be extracted there. When the image has a large number of similar repetitive texture regions, the extracted feature points become ambiguous, producing a large number of mismatches during matching. In addition, camera jitter caused by the surrounding environment introduces significant local geometric distortion, which reduces the distinguishability of the extracted feature points and results in a low matching rate.
To address these problems of the SIFT algorithm, an improved SIFT algorithm combining image color and exposure information based on an adaptive quantization strategy (AQCE-SIFT) is proposed. First, the color and exposure information is used to enhance the gray contrast of the image, so that when processing remote sensing images with special texture backgrounds, more feature points are extracted, their distribution is more uniform, and the number of matching points increases. Second, the adaptive quantization strategy is used to improve the algorithm's descriptor, making the extracted feature points more distinctive when the image contains geometric distortions or affine changes and thereby improving the correct matching rate. The experimental results showed that the proposed method can better capture the symmetry information between the images to be registered.
The rest of this paper is organized as follows. First, Section 1 introduces the research background; Section 2 gives a detailed introduction to the proposed AQCE-SIFT algorithm; Section 3 introduces the experimental environment and evaluation methods; and Section 4 gives the experimental results and analysis. Finally, Section 5 presents our conclusions.

2. Method

The traditional SIFT algorithm includes the following five steps: (1) grayscale transformation; (2) extreme value detection; (3) feature point localization; (4) main direction assignment for the feature point; and (5) feature point description. The AQCE-SIFT algorithm proposed in this paper mainly improves the grayscale transformation in step 1 and the feature point description in step 5. Figure 1 shows the flow chart of the proposed algorithm.
To better describe the proposed algorithm, the rest of this section is divided into three parts: (1) feature extraction combining image color and exposure information; (2) detection area division and direction determination; and (3) descriptor construction based on an adaptive quantization strategy.

2.1. Feature Extraction Combining Image Color and Exposure Information

For remote sensing images with special texture backgrounds, ignoring the color and exposure information causes the image contrast to decrease. The SIFT algorithm relies only on the gray information of the image to extract features, so it is not well suited to images with special texture backgrounds.
In order to improve the accuracy of feature point extraction for special texture backgrounds without increasing the dimension of the gray image, the color offset of the color image was integrated into the gray image to improve contrast, and the image exposure offset was added to make the texture edges more prominent.

2.1.1. Using Color Offset to Enhance Gray Contrast of Image

The color offset of the image was obtained by reducing the dimension of the three-dimensional color space; it was calculated in the $YC_BC_R$ color space.
First, as shown in Equation (1), the original color image in the $RGB$ space was converted to the $YC_BC_R$ space.
$$
\begin{aligned}
Y   &= 0.299R + 0.587G + 0.114B \\
C_B &= -0.169R - 0.331G + 0.500B + 128 \\
C_R &= 0.500R - 0.419G - 0.081B + 128
\end{aligned}
\tag{1}
$$
where $R$, $G$, and $B$ represent the red, green, and blue components of the color image, respectively, and $Y$ represents the brightness component; $C_B$ and $C_R$ denote the blue and red chromaticity components, respectively.
Next, the color image was mapped from the three-dimensional $YC_BC_R$ space to the two-dimensional $C_BC_R$ plane by Equation (2), and the color offset $Y_C$ can be calculated by Equation (3) [21]. Equation (3) shows the functional characteristics of the color offset: when $C_R = C_B$, the color characteristic is neutral and $Y_C = 0$, so $Y_C$ is a nonlinear function passing through the origin. In Equation (3), $m_R$ and $m_B$ represent the average values of $C_R$ and $C_B$. By comparing $m_R$ and $m_B$, we can obtain more information about the color characteristics of the input image: if $m_B$ is greater than $m_R$, the blue feature has a larger proportion in the input image; otherwise, the red feature does. $k$ is the contrast parameter and $\alpha$ is the range parameter; both are positive real numbers, generally $1 \le k \le 4$ and $0.4 \le \alpha \le 0.6$.
$$
\begin{bmatrix} C_B \\ C_R \end{bmatrix} =
\begin{bmatrix} -0.169 & -0.331 & 0.500 \\ 0.500 & -0.419 & -0.081 \end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
+ \begin{bmatrix} 128 \\ 128 \end{bmatrix}
\tag{2}
$$
$$
Y_C = k \times \operatorname{sgn}(m_R - m_B) \times \operatorname{sgn}(C_R - C_B) \times \left| C_R - C_B \right|^{\alpha}
\tag{3}
$$
Finally, the one-dimensional grayscale image with color offset, generated by projecting the color image down from the three-dimensional space, can be represented by the sum of $Y$ and $Y_C$:
$$
P = Y + Y_C = Y + k \times \operatorname{sgn}(m_R - m_B) \times \operatorname{sgn}(C_R - C_B) \times \left| C_R - C_B \right|^{\alpha}
\tag{4}
$$
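As a concrete illustration, the following is a minimal NumPy sketch of Equations (1)–(4). The function name and the default values of $k$ and $\alpha$ are our own choices within the ranges stated above, not values fixed by the paper.

```python
import numpy as np

def color_offset_gray(rgb, k=2.0, alpha=0.5):
    """Grayscale projection with color offset, Eqs. (1)-(4) (a sketch)."""
    rgb = rgb.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Eq. (1): RGB -> YCbCr (BT.601, offset form)
    Y  = 0.299 * R + 0.587 * G + 0.114 * B
    Cb = -0.169 * R - 0.331 * G + 0.500 * B + 128
    Cr =  0.500 * R - 0.419 * G - 0.081 * B + 128
    mR, mB = Cr.mean(), Cb.mean()        # average chromaticities m_R, m_B
    # Eq. (3): nonlinear color offset; zero (neutral color) when Cr == Cb
    Yc = k * np.sign(mR - mB) * np.sign(Cr - Cb) * np.abs(Cr - Cb) ** alpha
    return Y + Yc                        # Eq. (4): P = Y + Y_C
```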

2.1.2. Image Exposure Adjustment

The closer the average brightness component is to the intermediate value of 128, the clearer the image details [22]. Therefore, in order to show more detail in the image and enlarge the differences between pixels, the adjustment intensity should be largest for pixels with gray values near 0 or 255, while in the above-mentioned $P$, the adjustment intensity of pixels with gray values near 128 should be smallest; we used a Gaussian weighting function to realize this idea. The exposure offset $Y_E$ can be calculated from Equation (5).
$$
Y_E = (128 - m_P) \times \exp\left( -\frac{(P_0 - 0.5)^2}{2\sigma^2} \right)
\tag{5}
$$
where $m_P$ is the average value of $P$, so $128 - m_P$ was used to adjust the exposure; $P_0$ is the normalized value of $P$, with $P_0 = P/255$; and $\sigma$ controls the width of the Gaussian weighting.
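A matching sketch of Equation (5) follows. The value of $\sigma$ is not given in the text, so the default below is purely an assumption.

```python
import numpy as np

def exposure_offset(P, sigma=0.25):
    """Exposure offset Y_E of Eq. (5); sigma is an assumed Gaussian width."""
    mP = P.mean()                 # average value m_P of P
    P0 = P / 255.0                # normalized value of P
    # (128 - mP) pulls the average brightness toward the midtone value 128
    return (128 - mP) * np.exp(-(P0 - 0.5) ** 2 / (2 * sigma ** 2))
```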

2.1.3. Image Grayscale Method Combining Color and Exposure Information

After the above steps, the color offset $Y_C$, exposure offset $Y_E$, and brightness component $Y$ can be obtained, and the improved grayscale image $I_g$ combining image color and exposure information can be calculated from Equation (6).
$$
I_g = Y + Y_C + Y_E
\tag{6}
$$
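Putting the two sketches above together, the improved grayscale image of Equation (6) can be obtained as follows (again a sketch; the clipping to the 8-bit range is our addition):

```python
import numpy as np

def aqce_grayscale(rgb, k=2.0, alpha=0.5, sigma=0.25):
    """Improved grayscale image I_g of Eq. (6), reusing the helpers above."""
    P = color_offset_gray(rgb, k, alpha)   # P = Y + Y_C, Eq. (4)
    Ig = P + exposure_offset(P, sigma)     # I_g = Y + Y_C + Y_E, Eq. (6)
    return np.clip(Ig, 0, 255).astype(np.uint8)
```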

2.2. Detection Area Division and Direction Determination

In the grayscale image $I_g$, a circular region $R_c$ with a constant radius, within a window of 41 × 41 pixels, was determined with the feature point as the center. Then, the main direction of the feature point was determined, and the local area $R_0$, obtained by normalizing $R_c$, was rotated along the main direction of the feature point.
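The following is a minimal sketch of this normalization step using OpenCV. The keypoint interface (x, y, dominant angle in degrees) is hypothetical, and border handling is omitted.

```python
import numpy as np
import cv2

def normalized_patch(gray, kp, size=41):
    """Crop the 41x41 neighborhood of a keypoint and rotate it to the
    keypoint's main direction, giving the normalized local area R_0.
    `kp` is assumed to be (x, y, angle_deg); keypoints are assumed to lie
    at least size//2 pixels away from the image border."""
    x, y, angle = int(round(kp[0])), int(round(kp[1])), kp[2]
    r = size // 2
    patch = gray[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
    M = cv2.getRotationMatrix2D((r, r), angle, 1.0)  # undo the main direction
    return cv2.warpAffine(patch, M, (size, size))
```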

2.3. Descriptor Construction Based on Adaptive Quantization Strategy

Improving the saliency and robustness of descriptors is the main research goal in local image description. At present, the most widely used descriptor is the distribution-based SIFT descriptor, which adopts a four-quadrant structure, as shown in Figure 2a.
The SIFT descriptor has good stability under slight viewing-angle and affine transformations. However, in order to obtain a larger field of view, remote sensing images are often acquired obliquely rather than orthographically. In addition, affected by wind and other factors, the acquired image will have significant affine changes and local geometric distortion. The descriptor of the SIFT algorithm uses fixed sub-regions to divide the neighborhood of a feature point, and the gradient histogram in each sub-region has the same number of directions. This gives sub-regions at different positions the same proportions when they are combined into the descriptor. Therefore, the SIFT algorithm cannot reduce the influence of distortion on the uniqueness of the descriptors, which leads to a lower matching success rate.
In order to improve the saliency of the descriptors, this section proposes a method of constructing feature descriptors using an adaptive quantization strategy. First, a circular log-polar coordinate structure similar to that of the gradient location and orientation histogram (GLOH) descriptor [23] is used to construct the descriptor. Second, in the descriptor structure, the inner circles closer to the feature point use a smaller angular quantization number than the outer circles. Third, in order to improve the robustness of the descriptor, the sub-regions far from the feature point use a smaller histogram direction quantization number to divide the directions of the gradient histogram. The construction process of the descriptor is as follows:
First, the normalized local area $R_0$ is divided into $n$ non-overlapping rings (including the middle circular sub-region) centered on the feature point (i.e., $R_0^1, R_0^2, \ldots, R_0^n$, where $n$ is the radial quantization number). In this paper, we used the GLOH segmentation method and set $n = 3$; the radii of $R_0^1$, $R_0^2$, and $R_0^3$ were 6, 11, and 15, respectively;
Then, we let $U = \{u_1, \ldots, u_n\}$ be the angular quantization set of the different rings and divide each ring $R_0^i$ evenly into $u_i$ sub-regions by angle. A sub-region is denoted $R_0^{i,j}$, where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, u_i$. This step divides the $n$ non-overlapping rings (including the middle circular sub-region) into different sub-regions.
Next, we let $V = \{v_1, \ldots, v_n\}$ be the quantization set of the histogram directions of each ring. According to the direction quantization number $v_i$, the gradient histogram $H_{i,j}$ of each sub-region $R_0^{i,j}$ is divided into $v_i$ directions.
Finally, the constructed descriptor vector D can be expressed as:
$$
D = \left( H_{1,1}, \ldots, H_{i,j}, \ldots, H_{n,u_n} \right), \qquad i = 1, 2, \ldots, n, \quad j = 1, 2, \ldots, u_i
\tag{7}
$$
The dimension $d$ of the proposed descriptor can be obtained by Equation (8):
$$
d = \sum_{i=1}^{n} u_i v_i
\tag{8}
$$
The higher the dimension of the descriptor, the more time-consuming the algorithm. In order to maintain the accuracy of the algorithm while avoiding the time cost of a high dimension, the value of $d$ should be kept around 128.
Table 1 shows the specific parameters used to construct feature descriptors based on the adaptive quantization strategy proposed in this section; how these parameters were determined is discussed in Section 4.2. Figure 2b shows the descriptor structure corresponding to the parameters in Table 1.
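To make the construction concrete, the following is a simplified sketch of Equations (7) and (8) under the Table 1 parameters. It omits the interpolation and Gaussian weighting that production SIFT/GLOH implementations apply, so it illustrates the quantization scheme rather than reproducing the authors' exact implementation.

```python
import numpy as np

def aqce_descriptor(R0, radii=(6, 11, 15), U=(5, 8, 10), V=(10, 6, 4)):
    """Adaptive-quantization descriptor, Eqs. (7)-(8) (simplified sketch).

    Each pixel of the normalized patch R0 votes with its gradient magnitude
    into the histogram of its (ring, sector) sub-region; ring i uses U[i]
    angular sectors and V[i] orientation bins.  With the Table 1 parameters
    the dimension is d = 5*10 + 8*6 + 10*4 = 138.
    """
    h, w = R0.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    gy, gx = np.gradient(R0.astype(np.float64))
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), 2 * np.pi)          # gradient direction
    hists = [np.zeros((u, v)) for u, v in zip(U, V)]     # one H_{i,j} per sector
    for y in range(h):
        for x in range(w):
            rad = np.hypot(x - cx, y - cy)
            ring = next((i for i, R in enumerate(radii) if rad <= R), None)
            if ring is None:                             # outside R_0^3
                continue
            ang = np.mod(np.arctan2(y - cy, x - cx), 2 * np.pi)
            sector = min(int(ang / (2 * np.pi) * U[ring]), U[ring] - 1)
            obin = min(int(ori[y, x] / (2 * np.pi) * V[ring]), V[ring] - 1)
            hists[ring][sector, obin] += mag[y, x]
    D = np.concatenate([hist.ravel() for hist in hists]) # Eq. (7), d = 138
    return D / (np.linalg.norm(D) + 1e-12)               # L2-normalized
```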

3. Experiment

3.1. Experimental Platform and Data Source

1. The software and hardware environment of the experiments in this paper was as follows:
  • Development platform: Matlab 2016a
  • CPU: Intel(R) Core(TM) i5-8300H
  • Graphics card: NVIDIA(R) GeForce GTX(R) 1060 6GB
2. The image data sources in this paper were as follows:
  • The image data used to verify the performance of the algorithm come from a public database [24]; and
  • The image data used for panoramic stitching come from the self-built database of the research group.

3.2. Evaluation Method

In order to verify the applicability of the AQCE-SIFT algorithm to remote sensing images with special texture backgrounds, we selected the representative SIFT, color SIFT (CSIFT) [25], GLOH, and speeded-up robust features (SURF) [26] algorithms for comparison with the proposed AQCE-SIFT algorithm. The experimental results were analyzed qualitatively and quantitatively in terms of the number of extracted feature points and the correct image matching rate.
The number of extracted feature points is an important indicator in evaluating image feature extraction.
The correct matching rate is another main indicator for evaluating image matching performance [27], which can be expressed by Equation (9).
$$
MR = \frac{P_C}{P_T} \times 100\%
\tag{9}
$$
where $MR$ is the correct matching rate, $P_T$ is the total number of matching points, and $P_C$ is the number of correct matching points; $P_C$ is obtained by using the random sample consensus (RANSAC) algorithm to remove the mismatched points from $P_T$.
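In practice, this evaluation can be sketched with OpenCV's RANSAC-based homography estimation, counting the inliers as $P_C$. The 3-pixel reprojection threshold below is our assumption, not a value from the paper.

```python
import numpy as np
import cv2

def correct_matching_rate(src_pts, dst_pts, thresh=3.0):
    """MR of Eq. (9): inlier ratio of the putative matches under a
    RANSAC-estimated homography (threshold in pixels is assumed)."""
    src = np.float32(src_pts).reshape(-1, 1, 2)
    dst = np.float32(dst_pts).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, thresh)
    P_T = len(src_pts)                                # total matching points
    P_C = int(mask.sum()) if mask is not None else 0  # correct matching points
    return 100.0 * P_C / P_T
```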

4. Result Analysis

4.1. Comparative Experiment and Analysis of Feature Point Extraction

As shown in Figure 3, snowy area and farmland images have special texture backgrounds. Feature point extraction experiments were performed on these two images.

4.1.1. Feature Point Extraction in Texture Scarcity Region

Figure 3a shows the remote sensing image of the snowy area. Due to the characteristics of snow, there was no obvious structure in the snowy area and the surface texture features were lacking. Moreover, the background color of the area also tended to be single, which has a great impact on the extraction of image features.
Comparative experiments were performed by our proposed AQCE-SIFT algorithm and some representative algorithms such as SIFT, CSIFT, SURF, and GLOH. The experimental results are shown in Figure 4, and the numbers of extracted feature points are shown in Figure 5.
From Figure 4d, we can see that the SURF algorithm could not extract feature points in the snow cover at the upper right of the image, which affects the subsequent registration work. From Figure 4b,e, we can see that the feature points extracted by the SIFT and GLOH algorithms were mostly concentrated in the lower left of the image, with few feature points in the upper right. As shown in Figure 4c, the number of feature points extracted by the CSIFT algorithm was the highest among the five algorithms; however, too many feature points were extracted on the edges of the snow mountains and clouds, which affects the real-time performance and the correct matching rate. As shown in Figure 4a, the number of feature points extracted by the proposed AQCE-SIFT algorithm was slightly greater than that of the SIFT algorithm, and more feature points were extracted in the upper right of the image than by the other algorithms. The distribution of the feature points was wider than with the other algorithms, and the number of feature points extracted on the edges of the snow mountains and clouds was more reasonable. Therefore, the proposed algorithm is more suitable for feature extraction when remote sensing images contain texture scarcity regions.

4.1.2. Feature Point Extraction in Texture Similarity Region

Figure 3b shows the remote sensing image of farmland. The texture of the farmland has regular geometric shapes. Generally speaking, the same crop will be planted in the same area, which makes the region show obvious texture repetition. Different areas will show different textures and colors due to different types of crops or crops at different growth stages.
We used our proposed AQCE-SIFT algorithm and the other representative algorithms to extract the feature points of Figure 3b. The results are shown in Figure 6, and the comparison of the number of feature points is shown in Figure 7.
As shown in Figure 6d and Figure 7, the SURF algorithm could not extract a sufficient number of feature points from the farmland image. From Figure 6b,e, we can see that the feature points extracted by the SIFT and GLOH algorithms were mostly concentrated on the edges between different crop areas, with few feature points within the same crop area. As shown in Figure 6c and Figure 7, the number of feature points extracted by the CSIFT algorithm was about three times that of the SIFT algorithm, and a certain number of feature points could be extracted within the same crop area. However, the CSIFT algorithm extracted too many feature points on the edges between different crop areas, and their distribution was too dense, which affects the real-time performance of the algorithm. From Figure 6a and Figure 7, we can see that the number of feature points extracted by the proposed AQCE-SIFT algorithm was twice that of the SIFT algorithm, and a reasonable number of feature points could be extracted both within the same crop area and on the edges between different crop areas. Therefore, when a remote sensing image contains a large number of similar repetitive texture regions, the algorithm proposed in this paper is more suitable for feature extraction.

4.2. Comparative Experiment and Analysis of Image Matching

Remote sensing images with special texture backgrounds are difficult to process, especially under large affine changes and local geometric distortion. In order to verify the performance of the proposed AQCE-SIFT algorithm, a group of snow images with few texture features and a group of farmland images with mostly similar texture features were selected for matching. Figure 3a,b shows the benchmark images of the snowy area and farmland, respectively, and Figure 8 and Figure 9 show the corresponding images to be matched. Figure 8b,c and Figure 9b,c are simulated distortion images at different angles generated by model simulation based on projection transformation.
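The paper does not spell out its simulation model beyond "projection transformation"; one common way to generate such tilted views is the rotation-induced homography of a pinhole camera, sketched below with an assumed focal length.

```python
import numpy as np
import cv2

def simulate_tilt(img, angle_deg, f=1000.0):
    """Simulate projective distortion: rotate the camera about the image's
    vertical axis by angle_deg under a pinhole model (focal length f is an
    assumption).  A pure rotation induces the homography H = K R K^-1."""
    h, w = img.shape[:2]
    a = np.deg2rad(angle_deg)
    K = np.array([[f, 0, w / 2.0],
                  [0, f, h / 2.0],
                  [0, 0, 1.0]])
    R = np.array([[np.cos(a), 0, -np.sin(a)],
                  [0, 1, 0],
                  [np.sin(a), 0, np.cos(a)]])
    H = K @ R @ np.linalg.inv(K)
    return cv2.warpPerspective(img, H, (w, h))
```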
The adaptive quantization strategy descriptor of the proposed AQCE-SIFT algorithm contains three key parameters, namely the radial quantization number $n$, the angular quantization set $U$, and the histogram direction quantization set $V$. In order to ensure the accuracy of the descriptor without affecting the speed of the algorithm, the dimension of the descriptor should be between 120 and 140. Therefore, we let $n = 3$ in this paper and, as shown in Table 2, set different values of $U$ and $V$ to generate algorithm descriptors. The values of $U$ and $V$ were selected from the sets $\{2, 4, 5, 6, 8, 10, 12\}$ and $\{4, 6, 8, 10\}$, respectively.
From the principles introduced in Section 2.3, the larger the histogram direction quantization number $v_i$ of the inner rings and the greater the number of sub-regions (angular quantization number $u_i$) of the outer rings, the higher the correct matching rate $MR$. However, the data in Table 2 show that when $u_i$ exceeds 5, the correct matching rate begins to decline. The descriptor designed with $U = \{5, 8, 10\}$, $V = \{10, 6, 4\}$ matched slightly better, while the descriptor designed with $U = \{5, 8, 10\}$, $V = \{8, 6, 4\}$ had a lower dimensionality and a shorter matching time. Affected by the environment and equipment, remote sensing images captured by a UAV suffer from geometric distortion or affine changes. In order to obtain a higher correct matching rate in these cases, $n = 3$, $U = \{5, 8, 10\}$, $V = \{10, 6, 4\}$ were set as the optimal values of the AQCE-SIFT descriptor parameters in this paper.
In this paper, Figure 3a,b shows the remote sensing images with special texture backgrounds, and Figure 8 and Figure 9 are geometric distortion images of different angles. Image matching experiments were performed on Figure 3a,b, Figure 8, and Figure 9 with the proposed AQCE-SIFT algorithm and the selected representative algorithms. Figure 10, Figure 11 and Figure 12 show the matching results of various algorithms in the case of snow remote sensing images with different geometric distortion angles. Figure 13 shows the matching points and correct matching rate of different algorithms in snow area image matching. Figure 14, Figure 15 and Figure 16 show the matching results of various algorithms in the case of farmland remote sensing images with different geometric distortion angles. Figure 17 shows the matching points and correct matching rate of different algorithms in farmland image matching.
As shown in Figure 10, Figure 11, Figure 12 and Figure 13, with the increase in the geometric distortion angle of the image, the number of matching points of each algorithm showed a downward trend. When the image has a large geometric distortion angle, the correct matching rate will be greatly reduced.
When the SURF algorithm was used for snow image registration, the number of matching points was too small, so it is not suitable for texture-scarce remote sensing images. The SIFT and GLOH algorithms produced fewer matching points than the AQCE-SIFT algorithm, with especially few at the top right of the snow image and an uneven distribution, so they are also not suitable for texture-scarce remote sensing images. For the CSIFT algorithm, most of the matching points were concentrated on the edges of the snowy mountains and clouds, and their distribution was relatively concentrated. As shown in Figure 13, the AQCE-SIFT algorithm outperformed the other algorithms in the correct matching rate; therefore, the AQCE-SIFT algorithm proposed in this paper is more suitable for texture-scarce remote sensing images.
As shown in Figure 14, Figure 15, Figure 16 and Figure 17, with the increase in the geometric distortion angle of the image, the number of matching points of each algorithm showed a downward trend. When the SURF algorithm was used for farmland remote sensing image registration, its matching points were concentrated on the edges between different crop areas, and their number was too small when the image distortion reached 60 degrees, so image matching could not be performed. Therefore, the SURF algorithm is not suitable for images with a large number of similar repetitive texture regions. As the same figures show, the AQCE-SIFT algorithm was better than the other algorithms in the correct matching rate, and the distribution of its matching points was more reasonable. Therefore, the AQCE-SIFT algorithm proposed in this paper is more suitable for remote sensing images with a large number of similar repetitive texture regions.

4.3. Algorithm Performance Verification

The SIFT algorithm has good stability when processing images with changes in scale, rotation, viewpoint, and brightness. In order to fully understand the performance of the algorithm proposed in this paper, we used the public dataset to evaluate and compare the AQCE-SIFT algorithm with other representative algorithms. Figure 18 shows an example of an image pair used for evaluation.
Through the analysis of the experimental process and the data in Figure 19, we can see that the algorithm proposed in this paper achieved a certain improvement in the number of matching points and the correct matching rate. In terms of matching time, our algorithm was much faster than the CSIFT algorithm but still slower than the SIFT, GLOH, and SURF algorithms. Therefore, the AQCE-SIFT algorithm proposed in this paper is more suitable for scenes that do not require high real-time performance but do have requirements on the number of matching points and the correct matching rate.

4.4. Practical Application Value

In order to verify the stability and practical application value of the AQCE-SIFT algorithm studied in this paper, in this section, UAV remote sensing images containing special texture backgrounds are applied to panoramic mosaicking.
The data source used in the experiment was the self-built database of our research group; the images were taken by a DJI UAV equipped with a SONY ILCE-6000 camera.
In order to better present the overall mosaic effect of the panorama, the weighted average fusion method was used in this section to fuse the matched remote sensing images.
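The paper does not detail the weighting it uses; the sketch below shows one common realization of weighted-average fusion, feathering each color image by the distance to its own valid-pixel border so the seam in the overlap fades smoothly.

```python
import numpy as np
import cv2

def weighted_average_fusion(img1, img2, mask1, mask2):
    """Weighted-average fusion of two registered color images (H, W, 3).
    mask1/mask2 are nonzero where each image has valid pixels; the weights
    are distances to each image's own border (a feathering choice of ours)."""
    w1 = cv2.distanceTransform(mask1.astype(np.uint8), cv2.DIST_L2, 3)
    w2 = cv2.distanceTransform(mask2.astype(np.uint8), cv2.DIST_L2, 3)
    wsum = w1 + w2
    wsum[wsum == 0] = 1.0                    # avoid division by zero
    out = img1 * (w1 / wsum)[..., None] + img2 * (w2 / wsum)[..., None]
    return out.astype(np.uint8)
```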
Figure 20a shows a panoramic mosaic image of a field (the image contains a large number of texture similarity regions), where the panorama was composed of 10 UAV aerial images with significant distortion. Figure 20b shows a panoramic mosaic image of a water area (the image contains texture scarcity regions), where the panorama was composed of eight UAV aerial images with significant distortion.
The final results show that the overall effect and the image details of the panoramic mosaics of the UAV remote sensing images were very good, demonstrating the strong performance and applicability of the proposed algorithm when processing remote sensing images with special texture backgrounds.

5. Conclusions

For remote sensing images with special texture backgrounds, it is difficult to achieve accurate feature point extraction. To address this problem, this paper proposed an improved SIFT algorithm combining image color and exposure information based on an adaptive quantization strategy (AQCE-SIFT).
First, in order to highlight the texture information of remote sensing images with special texture backgrounds, the color offset and exposure offset of the image were integrated into the grayscale image. This gives the grayscale image a contrast similar to that of the color image without increasing the image dimension, making it easier to extract features from remote sensing images with special texture backgrounds. Then, the adaptive quantization strategy was used to improve the descriptor of the SIFT algorithm, which improves the correct matching rate when dealing with remote sensing images with significant geometric distortion or affine changes. Experimental results showed that the AQCE-SIFT algorithm is more suitable for registering remote sensing images with special texture backgrounds.

Author Contributions

Data curation, W.Z.; Investigation, C.W.; Methodology, X.S.; Software, P.L.; Validation, K.X.; Writing–original draft, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Universities in Heilongjiang Province, grant number 2018-KYYWF-1681; the University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province, grant number UNPYSCT-2017086; and National Natural Science Foundation of China, grant number 61671190, 61571168.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Obanawa, H.; Shibata, H. Applications of UAV remote sensing to topographic and vegetation surveys. Unmanned Aer. Veh. Appl. Agric. Environ. 2020, 131–142. [Google Scholar] [CrossRef]
  2. Yao, H.; Qin, R.; Chen, X. Unmanned aerial vehicle for remote sensing applications—A review. Remote Sens. 2019, 11, 1443. [Google Scholar] [CrossRef] [Green Version]
  3. Wan, Y.; Hu, X.; Zhong, Y.; Ma, A.; Wei, L.; Zhang, L. Tailings reservoir disaster and environmental monitoring using the UAV-ground hyperspectral joint observation and processing: A case of study in Xinjiang, the Belt and Road. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium IEEE, Yokohama, Japan, 28 July–2 August 2019; pp. 9713–9716. [Google Scholar]
  4. Nho, H.; Shin, D.Y.; Kim, S.S. UAV image fast geocoding method for disaster scene monitoring. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 44, 107–110. [Google Scholar] [CrossRef]
  5. Gao, M.; Luo, Y.; Shi, L.; Shang, C. Analysis on the characteristics of cultivated land change in Dianchi Lake Basin based on remote sensing image processing technology in the past 20 years. IOP Conf. Ser. Earth Environ. Sci. 2021, 658, 012006. [Google Scholar] [CrossRef]
  6. Tang, C.Y.; Wu, Y.L.; Hor, M.K. Modified sift descriptor for image matching under interference. In Proceedings of the 2008 International Conference on Machine Learning and Cybernetics, Kunming, China, 12–15 July 2008; pp. 3294–3300. [Google Scholar]
  7. Yavariabdi, A.; Kusetogullari, H.; Celik, T.; Cicek, H. FastUAV-NET: A multi-uav detection algorithm for embedded platforms. Electronics 2021, 10, 724. [Google Scholar] [CrossRef]
  8. Yasein, M.S.; Agathoklis, P. A feature-based image registration technique for images of different scale. In Proceedings of the IEEE International Symposium on Circuits & Systems, Seattle, WA, USA, 18–21 May 2008; pp. 3558–3561. [Google Scholar]
  9. Schmid, C.; Mohr, R.; Bauckhage, C. Evaluation of interest point detectors. Int. J. Comput. Vis. 2000, 37, 151–172. [Google Scholar] [CrossRef] [Green Version]
  10. Bay, H.; Ferrari, V.; Gool, L.V. Wide-baseline stereo matching with line segments. In Proceedings of the IEEE Computer Society, San Diego, CA, USA, 20–25 June 2005; pp. 329–336. [Google Scholar]
  11. Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE Features; Computer Vision-ECCV; Springer: Florence, Italy, 2012; pp. 214–227. [Google Scholar]
  12. Sun, Y.; Zhao, L.; Huang, S. Line matching based on planar homography for stereo aerial images. ISPRS J. Photogramm. Remote Sens. 2015, 104, 1–17. [Google Scholar] [CrossRef] [Green Version]
  13. Yuan, X.; Chen, S.; Yuan, W.; Cai, Y. Poor textural image tie point matching via graph theory. ISPRS J. Photogramm. Remote Sens. 2017, 129, 21–31. [Google Scholar] [CrossRef]
  14. Moghimi, A.; Celik, T.; Mohammadzadeh, A.; Kusetogullari, H. Comparison of keypoint detectors and descriptors for relative radiometric normalization of bitemporal remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 4063–4073. [Google Scholar]
  15. Moghimi, A.; Sarmadian, A.; Mohammadzadeh, A.; Celik, T.; Amani, M.; Kusetogullari, H. Distortion robust relative radiometric normalization of multitemporal and multisensor remote sensing images using image features. IEEE Trans. Geosci. Remote Sens. 2021, 99, 1–20. [Google Scholar] [CrossRef]
  16. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra (Corfu), Greece, 20–25 September 1999; pp. 1150–1157. [Google Scholar]
  17. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  18. Wang, Y.; Yuan, Y.; Lei, Z. Fast SIFT feature matching algorithm based on geometric transformation. IEEE Access 2020, 8, 88133–88140. [Google Scholar] [CrossRef]
  19. Cui, S.; Jiang, H.; Wang, Z.; Shen, C. Application of neural network based on SIFT local feature extraction in medical image classification. In Proceedings of the 2017 2nd International Conference on Image, Vision and Computing (ICIVC) IEEE, Chengdu, China, 2–4 June 2017; pp. 92–97. [Google Scholar]
  20. Zhao, J.; Zhang, X.; Gao, C.; Qiu, X.; Tian, Y.; Zhu, Y.; Cao, W. Rapid mosaicking of unmanned aerial vehicle (UAV) images for crop growth monitoring using the SIFT algorithm. Remote Sens. 2019, 11, 1226. [Google Scholar] [CrossRef] [Green Version]
  21. Lee, T.H.; Kim, B.K.; Song, W.J. Converting color images to grayscale images by reducing dimensions. Opt. Eng. 2010, 49, 057006. [Google Scholar] [CrossRef] [Green Version]
  22. Wu, T.R.; Toet, A. Color-to-grayscale conversion through weighted multiresolution channel fusion. J. Electron. Imaging 2014, 23, 1–6. [Google Scholar] [CrossRef]
  23. Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1615–1630. [Google Scholar] [CrossRef] [Green Version]
  24. Nister, D.; Stewenius, H. Scalable recognition with a vocabulary tree. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 2161–2168. [Google Scholar]
  25. Abdel-Hakim, A.E. CSIFT: A SIFT descriptor with color invariant characteristics. In Proceedings of the CVPR’06, New York, NY, USA, 17–22 June 2006; p. 2. [Google Scholar]
  26. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  27. Kwon, O.S.; Ha, Y.J. Panoramic video using scale-invariant feature transform with embedded color-invariant values. IEEE Trans. Consum. Electron. 2010, 56, 792–798. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the AQCE-SIFT algorithm.
Figure 2. Descriptor structure diagram. (a) SIFT descriptor. (b) Feature descriptor structure based on adaptive quantization strategy.
Figure 3. Remote sensing images with special texture backgrounds.
Figure 4. Comparison experiment of the feature points extracted by each algorithm.
Figure 5. The number of feature points extracted by each algorithm.
Figure 6. Comparison of feature points extracted by each algorithm.
Figure 7. The number of feature points extracted by each algorithm.
Figure 8. The snow image to be matched with different geometric distortion angles.
Figure 9. The farmland image to be matched with different geometric distortion angles.
Figure 10. The matching result of the snowy area image with 0 degree geometric distortion.
Figure 11. The matching result of the snowy area image with 30 degree geometric distortion.
Figure 12. The matching result of the snowy area image with 60 degree geometric distortion.
Figure 13. Comparison of matching points and correct matching rate of different algorithms. (a) Comparison of matching points of different algorithms. (b) Comparison of correct matching rate of different algorithms.
Figure 14. The matching result of the farmland image with 0 degree geometric distortion.
Figure 15. The matching result of the farmland image with 30 degree geometric distortion.
Figure 16. The matching result of the farmland image with 60 degree geometric distortion.
Figure 17. Comparison of matching points and correct matching rate of different algorithms. (a) Comparison of matching points of different algorithms. (b) Comparison of correct matching rate of different algorithms.
Figure 18. Image pair example.
Figure 19. Comparison of the performance by each algorithm. (a) Comparison of the number of matching points by each algorithm. (b) Comparison of correct matching rate by each algorithm. (c) Comparison of matching time by each algorithm.
Figure 20. The panoramic mosaic image.
Table 1. Feature descriptor parameters based on adaptive quantization strategy.

| Parameter Name | Adaptive Parameters |
|---|---|
| Radial quantization number $n$ | $n = 3$ |
| Angle quantization set $U$ | $U = \{5, 8, 10\}$ |
| Histogram direction quantization set $V$ | $V = \{10, 6, 4\}$ |
| Dimension $d$ | $d = 5 \times 10 + 8 \times 6 + 10 \times 4 = 138$ |
Table 2. Performance comparison of descriptors under different parameter settings.

| Parameter Setting | Descriptor Dimension | $MR$, 0° (Snowy Area) | $MR$, 0° (Farmland) | $MR$, 30° (Snowy Area) | $MR$, 30° (Farmland) | $MR$, 60° (Snowy Area) | $MR$, 60° (Farmland) |
|---|---|---|---|---|---|---|---|
| $U = \{2, 6, 10\}$, $V = \{10, 8, 6\}$ | 128 | 97.8% | 93.3% | 90.5% | 89.8% | 37.5% | 58.3% |
| $U = \{4, 8, 12\}$, $V = \{8, 6, 4\}$ | 128 | 98.2% | 93.9% | 92.0% | 90.6% | 50.0% | 66.7% |
| $U = \{4, 6, 8\}$, $V = \{10, 8, 6\}$ | 136 | 100% | 94.6% | 93.7% | 90.5% | 55.6% | 75.0% |
| $U = \{5, 8, 10\}$, $V = \{8, 6, 4\}$ | 128 | 100% | 95.1% | 96.8% | 91.2% | 66.7% | 87.5% |
| $U = \{5, 8, 10\}$, $V = \{10, 6, 4\}$ | 138 | 100% | 95.8% | 96.9% | 91.7% | 66.7% | 87.5% |
| $U = \{6, 8, 10\}$, $V = \{8, 6, 4\}$ | 136 | 100% | 95.1% | 94.1% | 90.3% | 63.6% | 77.8% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

