Image and Vision Computing (IF 4.7). Pub Date: 2021-06-16. DOI: 10.1016/j.imavis.2021.104242. Yinfeng Xia, Yuqiang He, Sifan Peng, Qianqian Yang, Baoqun Yin
Long skip connections and encoder-decoder networks for crowd counting have proven effective at generating high-resolution density maps. However, simple and coarse feature fusion ignores the disharmony between features, namely spatial misalignment and semantic inconsistency, which weakens feature representation and degrades network performance. In this paper, we propose an end-to-end trainable architecture called the Coordinated Feature Fusion Network (CFFNet) to tackle these problems. The proposed model contains a powerful baseline network and embeds two primary modules: a Spatial Alignment Module (SAM) and a Semantic Consistency Module (SCM). Specifically, the SAM learns per-pixel transformation offsets to alleviate the spatial misalignment caused by differences in feature resolution, while the SCM, based on a multi-scale attention mechanism, captures pixel-wise weights to alleviate the semantic inconsistency caused by the gap between feature levels. Extensive experiments on four benchmark crowd datasets (ShanghaiTech, UCF-QNRF, JHU-CROWD++, and NWPU-Crowd) indicate that CFFNet achieves state-of-the-art counting performance and high robustness.
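The two operations the abstract describes can be illustrated at a high level: warping a feature map by learned per-pixel offsets (the alignment idea behind SAM) and fusing two feature maps with per-pixel softmax attention weights (the weighting idea behind SCM). The sketch below is a minimal NumPy illustration of these two operations only; the function names, nearest-neighbor sampling, and two-branch fusion are simplifying assumptions and not the paper's actual implementation, which learns offsets and weights with convolutional layers.

```python
import numpy as np

def warp_with_offsets(feat, offsets):
    """Resample feat at offset-shifted locations (nearest-neighbor).

    feat:    (H, W, C) feature map
    offsets: (H, W, 2) per-pixel (dy, dx) shifts, as SAM would predict
    """
    H, W, _ = feat.shape
    out = np.empty_like(feat)
    for y in range(H):
        for x in range(W):
            dy, dx = offsets[y, x]
            sy = int(np.clip(round(y + dy), 0, H - 1))  # clamp to image bounds
            sx = int(np.clip(round(x + dx), 0, W - 1))
            out[y, x] = feat[sy, sx]
    return out

def pixelwise_fuse(feat_a, feat_b, logits_a, logits_b):
    """Fuse two aligned feature maps with per-pixel softmax weights.

    logits_a/logits_b: (H, W) unnormalized scores, as SCM's attention
    branch would predict; softmax is taken across the two branches.
    """
    w = np.exp(np.stack([logits_a, logits_b], axis=0))
    w /= w.sum(axis=0, keepdims=True)  # per-pixel softmax over branches
    return w[0][..., None] * feat_a + w[1][..., None] * feat_b
```

With zero offsets the warp is an identity, and with equal logits the fusion reduces to a per-pixel average, which makes the behavior easy to sanity-check.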
CFFNet: A Coordinated Feature Fusion Network for Crowd Counting