COIN: COmpression with Implicit Neural representations
arXiv - CS - Machine Learning. Pub Date: 2021-03-03, DOI: arxiv-2103.03123
Emilien Dupont, Adam Goliński, Milad Alizadeh, Yee Whye Teh, Arnaud Doucet

We propose a simple new approach to image compression: instead of storing the RGB values for each pixel of an image, we store the weights of a neural network overfitted to that image. Specifically, to encode an image, we fit an MLP that maps pixel locations to RGB values. We then quantize and store the weights of this MLP as a code for the image. To decode the image, we simply evaluate the MLP at every pixel location. We find that this simple approach outperforms JPEG at low bit-rates, even without entropy coding or learning a distribution over weights. While our framework is not yet competitive with state-of-the-art compression methods, we show that it has various attractive properties which could make it a viable alternative to other neural data compression approaches.
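To make the fit / quantize / evaluate pipeline from the abstract concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation: the network width and depth, the SIREN-style sine activations, the 16-bit weight quantization, and the training hyperparameters are assumptions chosen for illustration only.

```python
# Minimal COIN-style sketch (assumptions, not the authors' code):
# overfit a small coordinate MLP to one image, store its half-precision
# weights as the code, and decode by evaluating the MLP at every pixel.
import torch
import torch.nn as nn


class Sine(nn.Module):
    """SIREN-style sine activation (frequency scale is an assumption)."""
    def forward(self, x):
        return torch.sin(30.0 * x)


def make_mlp(hidden=64, layers=4):
    # MLP mapping (x, y) coordinates to (r, g, b) values.
    blocks, in_dim = [], 2
    for _ in range(layers):
        blocks += [nn.Linear(in_dim, hidden), Sine()]
        in_dim = hidden
    blocks += [nn.Linear(in_dim, 3)]
    return nn.Sequential(*blocks)


def pixel_coords(h, w):
    # Normalized pixel locations in [-1, 1] x [-1, 1].
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    return torch.stack([xs, ys], dim=-1).reshape(-1, 2)


def encode(image, steps=10000, lr=2e-4):
    """Overfit the MLP to a single image of shape (H, W, 3) in [0, 1]."""
    h, w, _ = image.shape
    coords, targets = pixel_coords(h, w), image.reshape(-1, 3)
    mlp = make_mlp()
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((mlp(coords) - targets) ** 2).mean()  # per-pixel MSE
        loss.backward()
        opt.step()
    # Quantize weights to 16 bits; this state dict is the stored code.
    return {k: v.half() for k, v in mlp.state_dict().items()}


def decode(code, h, w):
    """Reconstruct the image by evaluating the MLP at every pixel."""
    mlp = make_mlp()
    mlp.load_state_dict({k: v.float() for k, v in code.items()})
    with torch.no_grad():
        return mlp(pixel_coords(h, w)).clamp(0, 1).reshape(h, w, 3)
```

In this sketch the bit-rate is simply the number of MLP parameters times 16 bits divided by the number of pixels, which is why smaller networks give lower rates at the cost of reconstruction quality.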

Updated: 2021-03-05