True-Color and Grayscale Video Person Re-Identification
IEEE Transactions on Information Forensics and Security (IF 6.3), Pub Date: 2019-05-15, DOI: 10.1109/tifs.2019.2917160
Fei Ma, Xiao-Yuan Jing, Xiao Zhu, Zhenmin Tang, Zhiping Peng

Person re-identification is an important task in forensics applications. Most existing person re-identification methods focus on matching persons captured by different true-color cameras. In practice, the captured pedestrian videos may be grayscale in some cases, due to camera malfunction or a deliberate switch to gray mode. In these cases, person re-identification between true-color and grayscale pedestrian videos, which we call color-to-gray video person re-identification (CGVPR), is needed. Since the color information that is very important for representing a pedestrian is reduced to monochrome intensity information in grayscale videos, the CGVPR problem is very challenging. To relieve the difficulties of CGVPR, we propose an asymmetric within-video projection based semi-coupled dictionary pair learning (SDPL) approach. SDPL simultaneously learns two within-video projection matrices, a pair of true-color and grayscale dictionaries, and a semi-coupled mapping matrix. The learned within-video projection matrices make each video (true-color or grayscale) more compact. The learned dictionary pair and the mapping matrix work together to bridge the gap between the features of true-color and grayscale videos. Since no true-color and grayscale pedestrian video dataset exists to date, we contribute a new one, called the true-color and grayscale video person re-identification dataset (CGVID). Our dataset is collected in a real-world scenario and consists of over 50K frames. Extensive evaluations demonstrate that the collected CGVID dataset is very challenging and can be used for further research on person re-identification. The experimental results show that our approach outperforms the compared methods on the CGVPR task.
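The abstract does not state the objective function, but a plausible form of such a semi-coupled dictionary pair learning objective, sketched here only to make the learned components concrete, is the following; the symbols X_c, X_g (frame-level features of a true-color and a grayscale video), P_c, P_g (within-video projections), D_c, D_g (dictionaries), A_c, A_g (sparse codes), W (semi-coupled mapping), and the weights \lambda_i are assumed notation for this sketch, not taken from the paper:

\min_{P_c, P_g, D_c, D_g, W, A_c, A_g} \; \|P_c X_c - D_c A_c\|_F^2 + \|P_g X_g - D_g A_g\|_F^2 + \lambda_1 \|A_g - W A_c\|_F^2 + \lambda_2 \left( \|A_c\|_1 + \|A_g\|_1 \right) + \lambda_3 \|W\|_F^2,

typically subject to unit-norm dictionary atoms. Objectives of this form are usually solved by alternating minimization over the codes, the dictionaries, the projections, and the mapping matrix.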
