A content-based late fusion approach applied to pedestrian detection
Journal of Visual Communication and Image Representation ( IF 2.6 ) Pub Date : 2021-03-27 , DOI: 10.1016/j.jvcir.2021.103091
Jessica Sena , Artur Jordão , William Robson Schwartz

The diversity of pedestrian detectors proposed in recent years has encouraged several works to fuse them to achieve more accurate detection. The intuition is to combine the detectors based on their spatial consensus: a location pointed to by multiple detectors has a high probability of actually belonging to a pedestrian, whereas false positive regions find little consensus among the detectors (small support), which allows the false positives in these regions to be discarded. We propose a novel method, called Content-Based Spatial Consensus (CSBC), which, in addition to relying on spatial consensus, considers the content of the detection windows to learn a weighted fusion of pedestrian detectors. The result is a reduction in false alarms and an improvement in detection. We also demonstrate that the feature used to learn the content of each detector's windows has only a small influence, which enables our method to be efficient even when employing simple features. CSBC outperforms state-of-the-art fusion methods on the ETH and Caltech datasets. Moreover, our method is more efficient, since fewer detectors are necessary to achieve expressive results.
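The spatial-consensus idea described above can be illustrated with a minimal sketch: detections from several detectors are grouped by bounding-box overlap (IoU), groups supported by too few detectors are discarded as likely false positives, and the remaining groups are scored by a weighted sum of detector scores. This is only an illustrative simplification, not the authors' CSBC algorithm; in particular, the per-detector `weights` here are hypothetical placeholders, whereas CSBC learns the weighting from the content of the detection windows.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def fuse(detections, weights, iou_thr=0.5, min_support=2):
    """Spatial-consensus late fusion (illustrative sketch).

    detections: list of (detector_id, box, score) tuples.
    weights:    hypothetical per-detector weights (in CSBC these would be
                learned from the detection-window content).
    Returns fused (box, score) pairs, keeping only locations on which at
    least `min_support` distinct detectors agree.
    """
    groups = []  # each group is a list of overlapping detections
    for det in detections:
        for g in groups:
            if iou(det[1], g[0][1]) >= iou_thr:
                g.append(det)
                break
        else:
            groups.append([det])

    fused = []
    for g in groups:
        support = {det_id for det_id, _, _ in g}
        if len(support) < min_support:
            continue  # little consensus -> likely a false positive
        score = sum(weights[det_id] * s for det_id, _, s in g)
        fused.append((g[0][1], score))
    return fused
```

For example, two detectors agreeing on roughly the same box survive fusion with a combined score, while a box reported by only one detector is discarded for lack of spatial support.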




Updated: 2021-04-04