A Reinforcement Learning-Based Framework for Crowdsourcing in Massive Health Care Internet of Things
Big Data (IF 2.6) Pub Date: 2022-04-08, DOI: 10.1089/big.2021.0058
Alaa Omran Almagrabi, Rashid Ali, Daniyal Alghazzawi, Abdullah AlBarakati, Tahir Khurshaid

Rapid advancements in the internet of things (IoT) are driving massive transformations of health care, one of the largest and most critical global industries. Recent pandemics, such as coronavirus disease 2019 (COVID-19), have increased the demand for ubiquitous, preventive, and personalized health care delivered to the public rapidly and at reduced risk and cost. Mobile crowdsourcing could potentially meet future massive health care IoT (mH-IoT) demands by enabling anytime, anywhere sensing and analysis of health-related data to tackle such pandemic situations. However, data reliability and availability are among the many challenges to realizing next-generation mH-IoT, especially during the COVID-19 epidemic. Therefore, more intelligent and robust health care frameworks are required to tackle such pandemics. Recently, reinforcement learning (RL) has proven its strength in providing intelligent data reliability and availability. The action-state learning procedure of RL-based frameworks enables the learning system to improve its use of the available information as time passes and data accumulate. In this article, we propose an RL-based crowd-to-machine (RLC2M) framework for mH-IoT, which leverages crowdsourcing and an RL model (Q-learning) to address health care information processing challenges. The simulation results show that the proposed framework converges rapidly, with accumulated rewards that reveal the state of the sensing environment.
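The abstract does not give implementation details, so the following is only a minimal sketch of tabular Q-learning, the RL model named above, showing the action-state update and the accumulated reward the abstract refers to. The toy environment (discretized sensing states, a choice among hypothetical crowd-worker groups, and a reliability-style reward) is an assumption for illustration, not the authors' RLC2M simulation setup.

import random

N_STATES = 5        # hypothetical discretized sensing-environment states
N_ACTIONS = 3       # hypothetical choices of crowd-worker group
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated long-term reward of taking each action in each state.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    # Toy transition: reward is higher when the chosen worker group "matches"
    # the current sensing state (a stand-in for reliable data being reported).
    reward = 1.0 if action == state % N_ACTIONS else 0.0
    next_state = random.randrange(N_STATES)
    return next_state, reward

def choose_action(state):
    # Epsilon-greedy selection over the Q-table.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

accumulated_reward = 0.0
state = random.randrange(N_STATES)
for t in range(1000):
    action = choose_action(state)
    next_state, reward = step(state, action)
    # Standard Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
    accumulated_reward += reward
    state = next_state

print("accumulated reward after 1000 steps:", accumulated_reward)

As learning progresses, the greedy action in each state increasingly picks the higher-reward option, so the accumulated reward grows faster over time; this is the convergence behavior the abstract reports for the proposed framework, shown here only in schematic form.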
