Point2CN: Progressive Two-View Correspondence Learning via Information Fusion
Signal Processing (IF 3.4), Pub Date: 2021-08-28, DOI: 10.1016/j.sigpro.2021.108304
Xin Liu, Guobao Xiao, Zuoyong Li, Riqing Chen

Finding reliable correspondences between two views is crucial for solving feature-matching-based tasks. Given initial correspondences of feature points, recent studies propose an end-to-end permutation-equivariant classification network based on the multi-layer perceptron to label each initial correspondence as an inlier or an outlier and then regress the camera pose. However, they use the PointCN block as the network backbone, which cannot gather sufficient contextual information for network learning due to its single sequential structure. In this paper, we propose a new modified block, called the Point2CN block, based on multi-subset information fusion, for feature matching. Specifically, the Point2CN block connects feature map subsets in a hierarchical residual-like manner, then fuses their mutual information through a weighted addition operation to improve feature representation capacity. The simple yet effective Point2CN block can significantly improve the performance of current learning-based feature matching methods with a negligible increase in network parameters. Nor does the Point2CN block incur expensive computational overhead, since it involves only a few lightweight information fusion operations. Extensive experiments on several challenging datasets for outlier removal and camera pose estimation consistently demonstrate that our Point2CN block achieves substantial performance improvements over existing state-of-the-art methods.
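The abstract describes the Point2CN block only at a high level: split the feature map into channel subsets, connect the subsets in a hierarchical residual-like manner, and fuse them by weighted addition. The sketch below illustrates that idea in NumPy. It is a minimal illustration under stated assumptions, not the authors' implementation: the subset transform (a linear map plus ReLU standing in for a shared convolution), the Res2Net-style hierarchical connection, and the fusion weights `alphas` are all illustrative guesses inferred from the abstract's wording.

```python
import numpy as np

def point2cn_block_sketch(x, num_subsets=4, rng=None):
    """Illustrative sketch of the Point2CN idea described in the abstract.

    x: (N, C) array of per-correspondence features, C divisible by num_subsets.
    Returns an array of the same shape. All internal details (the per-subset
    transform, the hierarchical connection, the fusion weights) are assumptions.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, c = x.shape
    assert c % num_subsets == 0, "channels must split evenly into subsets"
    w = c // num_subsets

    # Split the feature map into channel subsets.
    subsets = np.split(x, num_subsets, axis=1)

    # Hierarchical residual-like connection (assumption): each subset is
    # transformed after adding the previous subset's output, so later
    # subsets see progressively larger context.
    transforms = [rng.standard_normal((w, w)) * 0.1 for _ in range(num_subsets)]
    outs = []
    prev = np.zeros((n, w))
    for s, t in zip(subsets, transforms):
        y = np.maximum((s + prev) @ t, 0.0)  # linear map + ReLU as a conv stand-in
        outs.append(y)
        prev = y

    # Fuse the subsets' mutual information by weighted addition
    # (uniform weights here; learnable scalars in a real network).
    alphas = np.full(num_subsets, 1.0 / num_subsets)
    fused = sum(a * o for a, o in zip(alphas, outs))  # (n, w)

    # Broadcast the fused features across subsets and add residually,
    # keeping the block's input/output shapes identical.
    return x + np.tile(fused, (1, num_subsets))
```

Because the output shape matches the input, such a block can replace a PointCN block in an existing backbone without changing the surrounding layers, which is consistent with the abstract's claim of a negligible parameter increase.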




Updated: 2021-08-29