
Deep reinforcement learning-based incentive mechanism design for short video sharing through D2D communication

Published in Peer-to-Peer Networking and Applications

Abstract

With the development of 5th generation (5G) wireless communication networks and the popularity of short video applications, short video traffic in cellular networks has grown rapidly. Device-to-device (D2D) communication-based short video sharing is considered an effective way to offload traffic from cellular networks. Because mobile user equipment (MUEs) are inherently selfish, it is critical to design an appropriate incentive mechanism that dynamically motivates MUEs to engage in short video sharing while ensuring Quality of Service. In this paper, we first analyze the rationale for dynamically setting rewards and penalties, and then define the problem of setting rewards and penalties dynamically to maximize the utility of the mobile edge computing server (RPSDMU). We prove that this problem is NP-hard. Furthermore, we formulate the dynamic incentive process as a Markov Decision Process. Given the complexity and dynamics of the problem, we design a Dynamic Incentive Mechanism algorithm for D2D-based Short Video Sharing based on Asynchronous Advantage Actor-Critic (DIM-A3C) to solve it. Simulation results show that the proposed dynamic incentive mechanism increases the utility of the mobile edge computing server by an average of 22% and 16% compared with the existing proportional incentive mechanism (PIM) and scoring-based incentive mechanism (SIM), respectively. Meanwhile, DIM-A3C achieves a higher degree of user satisfaction than PIM and SIM.
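The core idea behind an advantage actor-critic scheme such as DIM-A3C is that the actor (here, the server's reward-setting policy) is updated along the policy gradient weighted by the advantage, while a critic supplies the value baseline. The toy sketch below illustrates that update loop on a deliberately simplified, single-agent version of the incentive problem; the reward levels, the MUE sharing-probability model, and the utility function are invented placeholders for illustration only, not the paper's formulation (which is asynchronous and uses neural-network function approximators).

```python
import math
import random

random.seed(0)

# Hypothetical toy setting: each step, the MEC server offers one of three
# reward levels; an MUE then shares a video with a probability that grows
# with the reward, and the server's utility is the value of the offloaded
# traffic minus the reward paid out.
REWARDS = [0.2, 0.5, 0.8]   # candidate reward levels (discrete actions)
TRAFFIC_VALUE = 1.0         # assumed value of one offloaded video


def share_prob(r):
    # Assumed model: sharing probability rises with the offered reward
    return min(1.0, 0.1 + r)


def step(action):
    # One interaction: pay reward r if the MUE shares; utility = value - cost
    r = REWARDS[action]
    shared = random.random() < share_prob(r)
    return TRAFFIC_VALUE - r if shared else 0.0


def softmax(prefs):
    # Numerically stable softmax over action preferences
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]


def sample(probs):
    # Draw an action index from a categorical distribution
    u, c = random.random(), 0.0
    for i, p in enumerate(probs):
        c += p
        if u < c:
            return i
    return len(probs) - 1


def train(episodes=5000, alpha=0.05, beta=0.05):
    prefs = [0.0, 0.0, 0.0]   # actor: preference per reward level
    baseline = 0.0            # critic: running estimate of expected utility
    for _ in range(episodes):
        probs = softmax(prefs)
        a = sample(probs)
        utility = step(a)
        advantage = utility - baseline   # A = R - V(s)
        baseline += beta * advantage     # critic update toward observed return
        for i in range(len(prefs)):      # actor: advantage-weighted gradient
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += alpha * advantage * grad
    return prefs


prefs = train()
best = max(range(len(REWARDS)), key=lambda i: prefs[i])
print("learned reward level:", REWARDS[best])
```

In the full A3C design, many such learners run asynchronously and push gradients to shared actor and critic networks; the tabular softmax policy above stands in for those networks to keep the update rule visible.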



Acknowledgements

This research is partially supported by the National Natural Science Foundation of China (No. 61872044), the Beijing Municipal Program for Top Talent, the Beijing Municipal Program for Top Talent Cultivation (CIT & TCD201804055), and the Open Program of the Beijing Key Laboratory of Internet Culture and Digital Dissemination Research (ICDDXN001).

Author information

Corresponding author

Correspondence to Zhuo Li.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection: Special Issue on Convergence of Edge Computing and Next Generation Networking Guest Editors: Deze Zeng, Geyong Min, Qiang He, and Song Guo

About this article

Cite this article

Li, Z., Dong, W. & Chen, X. Deep reinforcement learning-based incentive mechanism design for short video sharing through D2D communication. Peer-to-Peer Netw. Appl. 14, 3946–3958 (2021). https://doi.org/10.1007/s12083-021-01146-x

