Continuous and Unified Person Re-Identification
IEEE Signal Processing Letters ( IF 3.2 ) Pub Date : 2022-09-09 , DOI: 10.1109/lsp.2022.3205473
Zhu Mao, Xiao Wang, Xin Xu, Zheng Wang, Chia-Wen Lin

Person re-identification (ReID) aims to match pedestrian images across disjoint cameras. Mainstream ReID methods train a model once on all available data, which becomes limiting in real-world scenarios where training data tends to arrive in stages. To match scenarios where training data becomes available incrementally, some works have begun to explore ReID tasks that make efficient use of piecemeal new data. However, due to limitations in their training and testing setups, these efforts remain preliminary explorations. In this letter, we explore a novel and harder setting, Continuous and Unified ReID (CUReID), which requires a model not only to continuously learn discriminative knowledge from data streams with style differences, but also to have its discriminative capability evaluated uniformly on all data, both seen and unseen. Furthermore, we propose a novel Generalized Feature Decoupled Learning (GFDL) framework for CUReID, characterized by alternate training with extra images to resolve the optimization divergence between the regularisation task (learning new knowledge) and the generalization task (preventing forgetting of old knowledge). Under our newly proposed benchmark setup, GFDL achieves state-of-the-art performance.
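To illustrate the "alternate training with extra images" idea mentioned above, the following is a minimal, hypothetical sketch in a PyTorch-style setup: each batch from the newly arrived data stream is followed by a batch of extra images that protects previously learned knowledge. All names (`reid_model`, `new_domain_loader`, `extra_image_loader`) are assumptions for illustration; the abstract does not specify GFDL's actual losses or how its extra images are constructed.

```python
import torch.nn.functional as F

def alternate_training_epoch(reid_model, new_domain_loader,
                             extra_image_loader, optimizer):
    """One epoch alternating the two objectives batch by batch (sketch)."""
    extra_iter = iter(extra_image_loader)
    for images, pids in new_domain_loader:
        # Step A: regularisation task -- learn new knowledge from the
        # currently arriving domain.
        optimizer.zero_grad()
        F.cross_entropy(reid_model(images), pids).backward()
        optimizer.step()

        # Step B: generalization task -- revisit extra images to curb
        # forgetting of old knowledge.
        try:
            extra_images, extra_pids = next(extra_iter)
        except StopIteration:
            extra_iter = iter(extra_image_loader)
            extra_images, extra_pids = next(extra_iter)
        optimizer.zero_grad()
        F.cross_entropy(reid_model(extra_images), extra_pids).backward()
        optimizer.step()
```

Alternating the two gradient steps, rather than summing both losses into one objective, is one simple way to keep the two tasks from pulling a shared update in conflicting directions; it is shown here only as a plausible reading of the abstract.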

Updated: 2022-09-09