Rate Distortion Via Deep Learning
IEEE Transactions on Communications (IF 8.3), Pub Date: 2020-01-01, DOI: 10.1109/tcomm.2019.2950714
Qing Li, Yang Chen

We explore the connections between rate distortion/lossy source coding and two deep learning models, Restricted Boltzmann Machines (RBMs) and Deep Belief Networks (DBNs). We show that rate distortion can be expressed in terms of the RBM log partition function, and that an RBM/DBN can be used to learn the rate-distortion-approaching posterior, as in the Blahut-Arimoto algorithm. We propose an algorithm for lossy compression of binary sources. The algorithm consists of two stages: a training stage that learns the posterior from training data of the same class as the source, and a compression/reproduction stage that comprises a lossless compression and a lossless reproduction. Theoretical results show that the proposed algorithm asymptotically achieves the optimal rate-distortion function for stationary ergodic sources. Numerical experiments show that the proposed algorithm outperforms the best reported results.
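For orientation, the Blahut-Arimoto iteration referenced above alternates between updating the posterior q(x_hat|x) and the reproduction marginal q(x_hat) to trace out the rate-distortion curve of a discrete memoryless source. The sketch below shows this classical iteration only, not the paper's RBM/DBN-based learner; the function name blahut_arimoto_rd and the Bernoulli/Hamming test case are illustrative assumptions, not taken from the paper.

import numpy as np

def blahut_arimoto_rd(p_x, dist, beta, tol=1e-9, max_iter=10_000):
    """One point on the rate-distortion curve via Blahut-Arimoto.

    p_x  : (n,) source distribution p(x)
    dist : (n, m) distortion matrix d(x, x_hat)
    beta : Lagrange (slope) parameter; larger beta -> lower distortion
    Returns (rate_in_bits, distortion) at the stationary posterior.
    """
    n, m = dist.shape
    q_xhat = np.full(m, 1.0 / m)      # reproduction marginal q(x_hat)
    A = np.exp(-beta * dist)          # exp(-beta * d(x, x_hat))

    for _ in range(max_iter):
        # Posterior update: q(x_hat | x) proportional to q(x_hat) exp(-beta d)
        post = q_xhat[None, :] * A
        post /= post.sum(axis=1, keepdims=True)
        # Marginal update: q(x_hat) = sum_x p(x) q(x_hat | x)
        q_new = p_x @ post
        if np.max(np.abs(q_new - q_xhat)) < tol:
            q_xhat = q_new
            break
        q_xhat = q_new

    distortion = float(np.sum(p_x[:, None] * post * dist))
    ratio = np.where(post > 0, post / q_xhat[None, :], 1.0)
    rate = float(np.sum(p_x[:, None] * post * np.log2(ratio)))
    return rate, distortion

# Example: Bernoulli(1/2) source with Hamming distortion, where the
# analytic curve is R(D) = 1 - H_b(D), e.g. R(0.11) is about 0.5 bits.
p_x = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
for beta in (1.0, 2.0, 4.0):
    r, d = blahut_arimoto_rd(p_x, dist, beta)
    print(f"beta={beta}: R~{r:.3f} bits, D~{d:.3f}")

In the paper's setting, the role of this fixed-point posterior is played by a posterior learned by the RBM/DBN from training data of the same class as the source.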

Updated: 2020-01-01