Remote Proximity Sensing With a Novel Q-Learning in Bluetooth Low Energy Network
IEEE Transactions on Wireless Communications (IF 10.4), Pub Date: 2022-04-05, DOI: 10.1109/twc.2022.3147411
Pai Chet Ng, James She

This paper presents a novel Q-learning method for forwarding proximity sensing information to a remote server through a low-power mesh network overlaid on Bluetooth Low Energy (BLE) technology. Although proximity sensing information can be easily monitored with pervasive smartphones, it is nearly impossible to monitor this information remotely in harsh locations without ready Internet access. In our overlay mesh network, each node that receives a packet-forwarding request must decide whether to forward the packet or continue with its own activity, so as to minimize the end-to-end packet delivery latency while maximizing the utilization of the underlying infrastructure. Reinforcement learning (RL) is employed to train each node to make this decision. Despite extensive upfront training, each node may still encounter unseen states owing to network dynamics. Our novel Q-learning handles this challenge by constructing a Q-table during online learning and then using the Q-table as input data for offline training. The experimental results indicate a substantial performance gain for our proposed approach in comparison to existing Q-learning methods.
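The per-node decision described above can be sketched as a small tabular Q-learning agent. Note this is a minimal illustrative sketch, not the paper's method: the state encoding (node status, link latency), the reward shaping, and all parameter values below are assumptions made for illustration only.

```python
import random

ACTIONS = ("forward", "continue")  # forward the packet, or keep doing own activity

class NodeAgent:
    """Tabular Q-learning agent for one mesh node (illustrative sketch)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}             # Q-table: (state, action) -> value, built online
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def choose(self, state):
        # Epsilon-greedy: explore occasionally, otherwise pick best known action.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard tabular Q-learning update. Unseen states default to 0.0,
        # so the Q-table grows online as new network conditions appear.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

def reward(action, latency_ms, node_busy):
    # Assumed reward: penalize delivery latency when forwarding, penalize
    # forwarding while the node is busy, and mildly penalize refusing to
    # forward (the packet is delayed elsewhere). Purely illustrative.
    if action == "forward":
        return -latency_ms / 100.0 - (1.0 if node_busy else 0.0)
    return -0.5

# Toy training loop: an idle node on a low-latency link should learn to forward.
agent = NodeAgent(epsilon=0.0)  # greedy for a deterministic demo
state = ("idle", "low_latency")
for _ in range(200):
    for a in ACTIONS:
        agent.update(state, a, reward(a, latency_ms=20, node_busy=False), state)

print(agent.choose(state))  # -> forward
```

The Q-table here is an ordinary dictionary, so it can be serialized and shipped to a server as the input data for a subsequent offline training round, mirroring the online/offline split the abstract describes.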

Updated: 2022-04-05