Entropy Optimized Deep Feature Compression
IEEE Signal Processing Letters (IF 3.9), Pub Date: 2021-01-18, DOI: 10.1109/lsp.2021.3052097
Benben Niu, Xiaoran Cao, Ziwei Wei, Yun He

This letter focuses on the compression of deep features. With the rapid growth of deep feature data produced by CNN-based analysis and processing tasks, the demand for efficient compression continues to increase. Product quantization (PQ) is widely used for the compact representation of features: in the quantization process, feature vectors are mapped to fixed-length codes based on a pre-trained codebook. However, PQ is not specifically designed for data compression, and fixed-length codes are not well suited to further compression such as entropy coding. In this letter, we propose an entropy-optimized compression scheme for deep features. By introducing an entropy term into the loss function used to train the quantizer, the quantization and entropy coding modules are jointly optimized to minimize the total coding cost. We evaluate the proposed method on retrieval tasks. Compared with fixed-length coding, the proposed scheme can be combined with PQ and its extensions and consistently achieves better compression performance.
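
The core idea, a rate-distortion style training objective in which the rate term is the entropy of codeword usage, can be sketched roughly as follows. This is a minimal illustration under assumed details (soft assignments as a differentiable proxy for hard quantization, a toy random feature set, and a hypothetical trade-off weight `lambda_rate`), not the authors' implementation.

```python
# Minimal sketch: product quantization trained with an added entropy (rate) term.
# All dimensions, names, and the optimizer choice are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

D, M, K = 64, 8, 256        # feature dim, number of sub-vectors, codewords per sub-codebook
d = D // M                  # dimension of each sub-vector
lambda_rate = 0.1           # trade-off between distortion and expected code length

features = torch.randn(1024, D)                        # stand-in for CNN deep features
codebooks = torch.randn(M, K, d, requires_grad=True)   # one sub-codebook per sub-vector

optimizer = torch.optim.Adam([codebooks], lr=1e-2)

for step in range(100):
    x = features.view(-1, M, d)                        # split each feature into M sub-vectors
    # squared distances from every sub-vector to every codeword: (N, M, K)
    dists = ((x.unsqueeze(2) - codebooks.unsqueeze(0)) ** 2).sum(-1)
    # soft assignments keep the code-usage distribution differentiable
    soft_assign = F.softmax(-dists, dim=-1)            # (N, M, K)

    # distortion: expected quantization error under the soft assignment
    distortion = (soft_assign * dists).sum(-1).mean()

    # rate: entropy (bits) of the codeword-usage distribution per sub-codebook,
    # a differentiable proxy for the entropy-coded code length
    usage = soft_assign.mean(0)                        # (M, K)
    entropy_bits = -(usage * (usage + 1e-12).log2()).sum(-1).mean()

    loss = distortion + lambda_rate * entropy_bits
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"distortion={distortion.item():.3f}, rate≈{entropy_bits.item():.2f} bits/sub-vector")
```

With `lambda_rate = 0`, the objective reduces to ordinary codebook training for PQ; increasing it skews codeword usage toward a lower-entropy distribution, so the resulting indices compress better under a subsequent entropy coder, at the cost of some additional quantization distortion.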

Last updated: 2021-02-12