Line and point matching based on the maximum number of consecutive matching edge segment pairs for large viewpoint changing images

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

For images with large viewpoint changes, traditional line matching methods suffer from inaccurate endpoint localization, few matches, and long running times, while current point matching techniques for the same type of images still struggle to improve both the number and the accuracy of matches. To address these problems, a feature matching method based on the maximum number of consecutive matching edge segment pairs (MNCME) is proposed. First, structural edges are extracted from the two images to be compared and segmented at edge corners. Then, disambiguating geometric attributes are used to describe each edge segment, and the similarity of two edge segments is measured by the sum of the differences between their geometric attributes. Finally, edge correspondences are established according to the maximum number of consecutive matching edge segment pairs. The resulting matches contain both lines and points. By combining MNCME with SIFT, the number of point matches increases substantially. To verify its performance, the method is compared with state-of-the-art feature matching techniques on a series of high-definition image pairs with large viewpoint changes and abundant structural information. The line matching results show that MNCME achieves better matching accuracy and time efficiency than related line matching techniques, and the point matching results show that MNCME + SIFT outperforms other related algorithms in the number, speed, and correctness of matches.
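
To make the matching criterion concrete, the following minimal sketch (Python, not the authors' code) illustrates how a segment-pair dissimilarity based on the sum of attribute differences and the length of the longest run of consecutive matching segment pairs along two edges could be computed. The attribute vector (relative length, orientation, turning angle), the threshold max_diff, and the function names are assumptions introduced here for illustration only.

```python
import numpy as np

def segment_dissimilarity(attrs_a, attrs_b):
    """Sum of absolute differences between the geometric attribute
    vectors of two edge segments (smaller means more similar)."""
    return float(np.sum(np.abs(np.asarray(attrs_a, dtype=float)
                               - np.asarray(attrs_b, dtype=float))))

def longest_consecutive_match(edge_a, edge_b, max_diff=0.5):
    """Length of the longest run of consecutive segment pairs along two
    edges whose attribute difference stays below max_diff; the edge pair
    with the largest run would be kept as the correspondence."""
    best = run = 0
    for seg_a, seg_b in zip(edge_a, edge_b):
        if segment_dissimilarity(seg_a, seg_b) <= max_diff:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

# Hypothetical per-segment attributes: [relative length, orientation (rad),
# turning angle to the next segment (rad)] -- illustrative values only.
edge_left  = [[0.40, 0.10, 0.02], [0.35, 0.15, 0.05], [0.50, 0.90, 0.40]]
edge_right = [[0.41, 0.12, 0.03], [0.34, 0.14, 0.04], [0.20, 1.60, 0.90]]
print(longest_consecutive_match(edge_left, edge_right))  # -> 2
```

A full implementation would additionally extract structural edges, split them at edge corners, and, for each edge in one image, keep the candidate edge in the other image that yields the largest such run as its correspondence, as described in the abstract.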

Data Availability Statement

The Graffiti and Boat image sets are taken from [18]; the Magazine, 3D model, and PC box image sets are taken from [17], [19], and [20], respectively.

References

  1. Wang, Z., Wu, F., Hu, Z.: MSLD: a robust descriptor for line matching. Pattern Recognit. 42(5), 941–953 (2009)

  2. Zhang, L., Koch, R.: An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency. J. Vis. Commun. Image Represent. 24(7), 794–805 (2013)

  3. Wang, L., Neumann, U., You, S.: Wide-baseline image matching using Line Signatures. In: IEEE International Conference on Computer Vision (ICCV) (2010)

  4. Zhang, L., Koch, R.: Line matching using appearance similarities and geometric constraints. In: Pattern Recognition. Springer, Berlin, Heidelberg (2012)

  5. Sagues, C.: Robust line matching in image pairs of scenes with dominant planes. Opt. Eng. 45(6), 1–12 (2006)

  6. Schmid, C., Zisserman, A.: Automatic line matching across views. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 666–671 (1997)

  7. Mokhtarian, F., Mackworth, A.K.: Curvature Scale Space Representation: Theory, Applications, and MPEG-7 Standardization. Springer, Netherlands (2002)

  8. Lowe, D.G.: Distinctive image features from scale-invariant key-points. Int. J. Comput. Vis. 60(2), 91–110 (2004)

  9. Mikolajczyk, K., Schmid, C.: An affine invariant interest point detector. In: European Conference on Computer Vision (ECCV). Springer, Berlin, Heidelberg (2002)

  10. Mikolajczyk, K., Schmid, C.: Scale and Affine invariant interest point detectors. Int. J. Comput. Vis. 60(1), 63–86 (2004)

  11. Matas, J., Chum, O., Urban, M., Pajdla, T.: Robust wide-baseline stereo from maximally stable extremal regions. Image Vis. Comput. 22(10), 761–767 (2004)

  12. Li, Y., Li, Q., Liu, Y., Xie, W.: A spatial-spectral SIFT for hyperspectral image matching and classification. Pattern Recognit. Lett. 127, 18–26 (2018)

  13. Leutenegger, S., Chli, M., Siegwart, R.: BRISK: binary robust invariant scalable keypoints. In: IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011. IEEE (2011)

  14. Dou, J., Qin, Q., Tu, Z.: Robust image matching based on the information of SIFT. Optik. 171, 850–861 (2018)

  15. He, Y., Deng, G., Wang, Y., Wei, L., Yang, J., Li, X., Zhang, Y.: Optimization of SIFT algorithm for fast-image feature extraction in line-scanning ophthalmoscope. Optik. 152, 21–28 (2018)

  16. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24, 381–395 (1981)

  17. http://demo.ipol.im/demo/my_affine_sift

  18. http://www.robots.ox.ac.uk/

  19. http://perso.lcpc.fr/tarel.jean-philippe/syntim/paires.html

  20. Tuytelaars, T., Van Gool, L.: Matching widely separated views based on affine invariant regions. Int. J. Comput. Vis. 59(1), 61–85 (2004)

  21. Li, K., Yao, J., Lu, X.: Hierarchical line matching based on Line-Junction-Line structure descriptor and local homography estimation. Neurocomputing 184, 207–220 (2016)

  22. https://github.com/MasteringOpenCV/code

  23. http://www.pudn.com/Download/item/id/3273695.html

  24. http://www.pudn.com/Download/item/id/2373728.html

  25. http://cvrs.whu.edu.cn/projects/ljlLineMatcher/

  26. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 27, 1615–1630 (2005)

  27. Heinly, J., Dunn, E., Frahm, J.M.: Comparative evaluation of binary features. In: European Conference on Computer Vision (ECCV), pp. 759–773. Springer, Berlin, Heidelberg (2012)

  28. https://www.ymcn.org/d-cV6h.html

  29. https://github.com/SmallMunich/

Funding

Not applicable

Author information

Contributions

L.W. and Y.Q. took part in conceptualization; L.W. was involved in methodology, software, writing and preparation of the original draft, and visualization; L.W., Y.Q., and X.K. contributed to validation and formal analysis; X.K. took part in investigation, resources, and data curation; Y.Q. carried out writing, reviewing, editing, supervision, and project administration. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Yunsheng Qian.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Code availability

Not applicable

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Wang, L., Qian, Y. & Kong, X. Line and point matching based on the maximum number of consecutive matching edge segment pairs for large viewpoint changing images. SIViP 16, 11–18 (2022). https://doi.org/10.1007/s11760-021-01959-6
