Learning disentangled user representation with multi-view information fusion on social networks
Information Fusion (IF 18.6), Pub Date: 2021-04-03, DOI: 10.1016/j.inffus.2021.03.011
Wenyi Tang, Bei Hui, Ling Tian, Guangchun Luo, Zaobo He, Zhipeng Cai

User representation learning is a prominent and critical task in user analysis on social networks: it derives conceptual user representations that improve the inference of user intentions and behaviors. Previous efforts have shown its substantial value in a variety of real-world applications, including product recommendation, textual content modeling, and link prediction. However, existing studies either underutilize multi-view information or neglect the tight entanglement among the underlying factors that govern user intentions, and thus derive degraded representations. To overcome these shortcomings, this paper proposes an adversarial fusion framework, consisting of a generator and a discriminator, that fully exploits multi-view information for user representation. The generator learns representations with a variational autoencoder and is forced by the adversarial fusion framework to attend to informative signals from every view, thereby integrating multi-view information. Furthermore, the variational autoencoder used in the generator is specifically designed to capture and disentangle the latent factors behind user intentions. By fully utilizing multi-view information and achieving disentanglement, our model learns robust and interpretable user representations. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of our proposed model.
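To make the components named in the abstract concrete, the following is a minimal NumPy sketch of the three ingredients such a framework combines: multi-view fusion, a variational (reparameterized Gaussian) encoding of the user, and a discriminator that scores latent codes. This is an illustrative assumption-laden toy, not the paper's actual model; all dimensions, weight initializations, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 views per user (e.g. text, links, attributes),
# each 16-dimensional, fused into an 8-dimensional latent representation.
N_VIEWS, VIEW_DIM, LATENT_DIM = 3, 16, 8

def encoder(x, W_mu, W_logvar):
    """Map the fused multi-view input to the mean and log-variance
    of a diagonal Gaussian over the latent user representation."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def discriminator(z, W_d):
    """Score a latent code with a sigmoid output in (0, 1); in an
    adversarial setup this signal pushes the generator's codes toward
    the desired distribution."""
    return 1.0 / (1.0 + np.exp(-(z @ W_d)))

# One user's multi-view features, concatenated into a single fused input.
views = [rng.standard_normal(VIEW_DIM) for _ in range(N_VIEWS)]
fused = np.concatenate(views)  # shape (48,)

W_mu = rng.standard_normal((N_VIEWS * VIEW_DIM, LATENT_DIM)) * 0.1
W_logvar = rng.standard_normal((N_VIEWS * VIEW_DIM, LATENT_DIM)) * 0.1
W_d = rng.standard_normal((LATENT_DIM, 1)) * 0.1

mu, logvar = encoder(fused, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
score = discriminator(z, W_d)

# KL divergence of N(mu, diag(sigma^2)) from N(0, I) -- the per-dimension
# VAE regularizer commonly associated with disentangled latent factors.
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
```

In training, the generator's loss would combine a reconstruction term, this KL term, and the adversarial signal from the discriminator; here only the forward pass is shown.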




Updated: 2021-04-09