Learning diffusion model-free and efficient influence function for influence maximization from information cascades
Knowledge and Information Systems (IF 2.7), Pub Date: 2021-03-19, DOI: 10.1007/s10115-021-01556-6
Qi Cao, Huawei Shen, Jinhua Gao, Xueqi Cheng

When considering the problem of influence maximization from information cascades, one essential component is influence estimation. Traditional approaches to influence estimation generally follow a two-stage framework: first learn a hypothetical diffusion model from information cascades, and then compute the influence spread under the learned diffusion model via Monte Carlo simulation or heuristic approximation. The effectiveness of these approaches relies heavily on the correctness of the assumed diffusion model and thus suffers from model misspecification. Moreover, these approaches become inefficient when influence estimation requires a large number of Monte Carlo simulations. In this paper, without assuming a diffusion model a priori, we directly learn a monotone and submodular influence function from information cascades. Once the influence function is obtained, a greedy algorithm is applied to solve influence maximization efficiently. Experimental results on both synthetic and real-world datasets demonstrate the effectiveness and efficiency of the learned influence function for both influence estimation and influence maximization.
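
To illustrate the second step described in the abstract, below is a minimal sketch of greedy seed selection driven by a monotone and submodular influence function. The `influence` callable is a hypothetical stand-in for the learned estimator (the paper's actual model is not reproduced here); the toy neighborhood-coverage function in the usage example only mimics its monotone, submodular behavior.

```python
# Sketch: greedy seed selection on top of a learned influence function.
# `influence(seed_set)` maps a set of seed nodes to an estimated spread.
# For a monotone, submodular function, this greedy procedure carries the
# classic (1 - 1/e) approximation guarantee.

from typing import Callable, Iterable, Set, List


def greedy_seed_selection(
    candidates: Iterable[int],
    influence: Callable[[Set[int]], float],
    k: int,
) -> List[int]:
    """Pick k seeds, each time adding the node with the largest marginal gain."""
    seeds: Set[int] = set()
    remaining = set(candidates)
    for _ in range(k):
        base = influence(seeds)
        best_node, best_gain = None, float("-inf")
        for node in remaining:
            gain = influence(seeds | {node}) - base
            if gain > best_gain:
                best_node, best_gain = node, gain
        if best_node is None:
            break
        seeds.add(best_node)
        remaining.remove(best_node)
    return sorted(seeds)


if __name__ == "__main__":
    # Toy influence function (hypothetical): size of the union of node
    # neighborhoods, which is monotone and submodular.
    neighborhoods = {0: {0, 1, 2}, 1: {1, 3}, 2: {2, 4, 5}, 3: {3, 5}}
    toy_influence = lambda s: (
        float(len(set().union(*(neighborhoods[v] for v in s)))) if s else 0.0
    )
    print(greedy_seed_selection(neighborhoods.keys(), toy_influence, k=2))
```

In practice, the marginal-gain loop is usually accelerated with lazy evaluation (CELF), which avoids recomputing gains that cannot exceed the current best; the plain loop above is kept for clarity.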

Updated: 2021-03-21