Lossless Compression of Structured Convolutional Models via Lifting
arXiv - CS - Artificial Intelligence Pub Date : 2020-07-13 , DOI: arxiv-2007.06567 Gustav Sourek, Filip Zelezny
Lifting is an efficient technique for scaling up graphical models generalized to relational domains by exploiting the underlying symmetries. Concurrently, neural models are continuously expanding from grid-like tensor data to structured representations, such as various attributed graphs and relational databases. To address the irregular structure of the data, these models typically extrapolate on the idea of convolution, effectively introducing parameter sharing in their dynamically unfolded computation graphs. The computation graphs themselves then reflect the symmetries of the underlying data, similarly to lifted graphical models. Inspired by lifting, we introduce a simple and efficient technique to detect these symmetries and compress the neural models without any loss of information. We demonstrate through experiments that such compression can lead to significant speedups of structured convolutional models, such as various Graph Neural Networks, across tasks such as molecule classification and knowledge-base completion.
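The core idea can be sketched as follows: in a dynamically unfolded computation graph, two nodes that apply the same operation to structurally identical inputs necessarily compute the same value, so they can be merged into a single representative without changing the model's output. The snippet below is a minimal illustrative sketch of this bottom-up deduplication, not the authors' implementation; the DAG encoding and all names are assumptions for illustration.

```python
# Hypothetical sketch of lossless computation-graph compression via symmetry
# detection: merge nodes with identical (operation, input-structure) signatures.

def topo_order(nodes):
    """Return node ids ordered so every node appears after its children."""
    order, seen = [], set()
    def visit(nid):
        if nid in seen:
            return
        seen.add(nid)
        for c in nodes[nid][1]:
            visit(c)
        order.append(nid)
    for nid in nodes:
        visit(nid)
    return order

def compress(nodes):
    """nodes: {id: (op, [child ids])} -> (compressed dag, id remapping)."""
    sig_to_rep, remap, compressed = {}, {}, {}
    for nid in topo_order(nodes):
        op, children = nodes[nid]
        # Two nodes are interchangeable iff they apply the same operation
        # and their inputs map to the same representatives (a symmetry);
        # sorting models a permutation-invariant aggregation like sum.
        sig = (op, tuple(sorted(remap[c] for c in children)))
        if sig not in sig_to_rep:
            sig_to_rep[sig] = nid
            compressed[nid] = (op, sorted(remap[c] for c in children))
        remap[nid] = sig_to_rep[sig]
    return compressed, remap

# Example: two structurally identical neighbor aggregations collapse to one,
# shrinking the graph from 5 nodes to 4 while computing the same function.
dag = {
    "x1": ("input_x1", []), "x2": ("input_x2", []),
    "a": ("sum", ["x1", "x2"]),
    "b": ("sum", ["x2", "x1"]),   # same multiset of inputs as "a"
    "out": ("relu", ["a", "b"]),
}
small, remap = compress(dag)
```

With shared parameters attached to the operations (as in a GNN's convolution layers), merged nodes would also share their learned weights, so the compressed graph evaluates and trains identically to the original, only faster.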
Updated: 2020-07-15