Using Markov Learning Utilization Model for Resource Allocation in Cloud of Thing Network
Wireless Personal Communications ( IF 2.2 ) Pub Date : 2020-07-29 , DOI: 10.1007/s11277-020-07591-w
Seyedeh Maedeh Mirmohseni , Chunming Tang , Amir Javadpour

The integration of the Internet of Things (IoT) with the cloud environment has led to the Cloud of Things, which introduces new challenges in the IoT area. In this paper, a Markov learning model is used to estimate the probability that each object will need resources in the near future, so that resources in the fog layer can be allocated in a way that reduces latency and maximizes network utilization. Through simulations on the CloudSim platform, we examine processor productivity, workflow overhead, physical-machine energy consumption, data locality, resource utilization, and task completion, each as a function of the number of tasks, and compare the results with the SMDP (semi-Markov decision process) and MDP methods. The results show that the proposed approach is effective and promising.
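The core idea above, learning transition probabilities from an object's observed resource-usage states and using them to predict near-future demand, can be sketched with a first-order Markov chain. This is a minimal illustrative sketch, not the paper's implementation; the class name, the discrete usage states (`"low"`/`"high"`), and the `need_probability` helper are all assumptions introduced here for illustration.

```python
from collections import defaultdict

class MarkovUtilizationModel:
    """Hypothetical sketch: learn state-transition probabilities from an
    object's observed resource-usage history and estimate the probability
    that it will need resources in the next step."""

    def __init__(self):
        # counts[s][t] = number of observed transitions from state s to t
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, history):
        # Count transitions between consecutive usage states.
        for cur, nxt in zip(history, history[1:]):
            self.counts[cur][nxt] += 1

    def need_probability(self, state, needy=("high",)):
        # Estimated probability that the next state is one in which the
        # object needs resources (here, any state listed in `needy`).
        total = sum(self.counts[state].values())
        if total == 0:
            return 0.0
        hits = sum(c for s, c in self.counts[state].items() if s in needy)
        return hits / total
```

An allocator in the fog layer could then rank objects by `need_probability` and reserve resources for those most likely to need them, which is one way the latency-reduction goal described above could be realized.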




Updated: 2020-07-30