Collaborative duty cycling strategies in energy harvesting sensor networks
Computer-Aided Civil and Infrastructure Engineering ( IF 8.5 ) Pub Date : 2019-12-19 , DOI: 10.1111/mice.12522
James Long 1 , Oral Büyüköztürk 1

Energy harvesting wireless sensor networks are a promising solution for low-cost, long-lasting civil monitoring applications. However, careful management of energy consumption is critical to ensure these systems provide maximal utility. Many common civil applications of these networks are fundamentally concerned with detecting and analyzing infrequently occurring events. To conserve energy in these situations, a subset of nodes in the network can assume active duty, listening for events of interest, while the remaining nodes enter low-power sleep mode to preserve battery. However, judicious planning of the sequence of active node assignments is needed to ensure that as many nodes as possible can be reached upon the detection of an event, and that the system maintains capability during periods of low energy harvesting. In this article, we propose a novel reinforcement learning (RL) agent, which acts as a centralized power manager for this system. We develop a comprehensive simulation environment to emulate the behavior of an energy harvesting sensor network, with consideration of spatially varying energy harvesting capabilities and wireless connectivity. We then train the proposed RL agent to learn optimal node selection strategies through interaction with the simulation environment. The behavior and performance of these strategies are tested on real unseen solar energy data to demonstrate the efficacy of the method. The deep RL agent is shown to outperform baseline approaches on both seen and unseen data.
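To make the duty-cycling setup concrete, the sketch below simulates a toy version of the problem the abstract describes: at each time step a power manager selects a subset of nodes for active duty while the rest sleep, and batteries are updated from spatially varying (here, random) harvest. This is an illustrative baseline policy only — a greedy "activate the most-charged nodes" rule, not the paper's RL agent — and all names, sizes, and energy constants are assumptions chosen for the example.

```python
import random

random.seed(0)

NUM_NODES = 10     # hypothetical network size
ACTIVE_K = 3       # nodes on active duty per time step (assumed)
E_ACTIVE = 5.0     # energy cost of active listening, arbitrary units
E_SLEEP = 0.5      # energy cost of low-power sleep mode
CAPACITY = 100.0   # battery capacity per node

def harvest():
    """Toy stand-in for spatially varying solar harvest per node."""
    return [random.uniform(0.0, 4.0) for _ in range(NUM_NODES)]

def greedy_duty_cycle(batteries, k):
    """Baseline policy: put the k most-charged nodes on active duty."""
    ranked = sorted(range(len(batteries)),
                    key=lambda i: batteries[i], reverse=True)
    return set(ranked[:k])

# Run the simulation loop: select active set, then update each battery
# with its harvest gain minus its duty-dependent consumption.
batteries = [CAPACITY / 2] * NUM_NODES
for step in range(100):
    active = greedy_duty_cycle(batteries, ACTIVE_K)
    gains = harvest()
    for i in range(NUM_NODES):
        cost = E_ACTIVE if i in active else E_SLEEP
        batteries[i] = min(CAPACITY, max(0.0, batteries[i] + gains[i] - cost))
```

In the paper's framing, the RL agent would replace `greedy_duty_cycle`, learning a selection policy from interaction with a richer simulator that also models wireless connectivity and real solar traces.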
