Influence maximization in social media networks concerning dynamic user behaviors via reinforcement learning
Computational Social Networks Pub Date : 2021-02-22 , DOI: 10.1186/s40649-021-00090-3
Mengnan Chen , Qipeng P. Zheng , Vladimir Boginski , Eduardo L. Pasiliao

This study examines the influence maximization (IM) problem via information cascades within random graphs, whose topology changes dynamically due to the uncertainty of user behavior. The study leverages the discrete choice model (DCM) to calculate the probability that a directed arc exists between any two nodes. In this IM problem, the DCM provides a good description and prediction of each user's behavior in terms of following or not following a neighboring user. To find the maximal influence at the end of a finite time horizon, the study models the IM problem using multistage stochastic programming, which helps a decision-maker select the optimal seed nodes from which to broadcast messages efficiently. Since computational complexity grows exponentially with network size and time horizon, the original model is not solvable within a reasonable time. The study therefore uses two approaches to approximate the optimal decision: myopic two-stage stochastic programming and reinforcement learning via a Markov decision process. Computational experiments show that the reinforcement learning method outperforms the myopic two-stage stochastic programming method.
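To illustrate the arc-existence probability described above, the following is a minimal sketch of a binary logit discrete choice model: the probability that a user follows a neighbor is a softmax over the utilities of the two alternatives. The utility values and the function name are hypothetical illustrations, not the paper's calibration.

```python
import math

def follow_probability(utility_follow: float, utility_not_follow: float) -> float:
    """Binary logit (discrete choice) probability that a user chooses to
    follow a neighboring user, i.e. that the directed arc exists.

    The utilities are hypothetical deterministic components; in a logit
    DCM the random components are i.i.d. Gumbel, which yields this
    closed-form choice probability.
    """
    # Softmax over the two alternatives (standard binary logit).
    e_follow = math.exp(utility_follow)
    e_not = math.exp(utility_not_follow)
    return e_follow / (e_follow + e_not)
```

With equal utilities the two choices are equally likely (probability 0.5), and raising the utility of following pushes the arc-existence probability toward 1, which is how the model lets the graph topology vary stochastically from stage to stage.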
