Exploiting scene and body contexts in controlling continuous vision body cameras
Ad Hoc Networks (IF 4.8), Pub Date: 2020-12-11, DOI: 10.1016/j.adhoc.2020.102373
Shiwei Fang, Ketan Mayer-Patel, Shahriar Nirjon

Ever-increasing performance at decreasing price has fueled camera deployments in a wide variety of real-world applications, making the case stronger for battery-powered, continuous-vision camera systems. However, given state-of-the-art battery technology and embedded systems, most battery-powered mobile devices still do not support continuous vision. To reduce energy and storage requirements, there have been proposals to offload energy-demanding computations to the cloud (Naderiparizi et al., 2016 [1]), to discard uninteresting video frames (Naderiparizi et al., 2017), and to use additional sensors to detect and predict when to turn on the camera (Bahl et al., 2012 [2]). However, these proposals either require high communication bandwidth or sacrifice the capture of important events.

In this paper, we present ZenCam, an always-on body camera that exploits readily available information in the encoded video stream from the on-chip firmware to classify the dynamics of the scene. This scene context is further combined with a simple inertial measurement unit (IMU)-based activity-level context of the wearer to optimally control the camera configuration at run time and keep the device within the desired energy budget. We describe the design and implementation of ZenCam and thoroughly evaluate its performance in real-world scenarios. Our evaluation shows a 29.8%–35% reduction in energy consumption and a 48.1%–49.5% reduction in storage usage compared to a standard baseline setting of 1920×1080 at 30 fps, while maintaining competitive or better video quality with minimal computational overhead.
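To make the control idea concrete, the following is a minimal, hypothetical sketch of how a scene-dynamics level derived from encoder motion vectors and an IMU-based wearer-activity level could jointly select a camera mode under an energy budget. It is not the authors' implementation; the mode table, thresholds, and power figures are invented purely for illustration.

MODES = [(640, 480, 10, 450), (1280, 720, 20, 800), (1920, 1080, 30, 1400)]  # (width, height, fps, est. power in mW), cheapest first; values are assumed

def scene_level(motion_vector_mags):
    """Coarse scene-dynamics class (0-2) from per-frame motion-vector magnitudes."""
    avg = sum(motion_vector_mags) / max(len(motion_vector_mags), 1)
    return 0 if avg < 0.5 else (1 if avg < 2.0 else 2)

def activity_level(accel_variance):
    """Coarse wearer-activity class (0-2) from IMU acceleration variance."""
    return 0 if accel_variance < 0.1 else (1 if accel_variance < 1.0 else 2)

def choose_mode(motion_vector_mags, accel_variance, budget_mw):
    """Pick the highest-quality mode the combined context justifies, subject to the budget."""
    level = max(scene_level(motion_vector_mags), activity_level(accel_variance))
    # Fall back to a cheaper mode when the preferred one exceeds the energy budget.
    while level > 0 and MODES[level][3] > budget_mw:
        level -= 1
    return MODES[level]

# A moderately dynamic scene, a mostly still wearer, and a 1 W budget select 1280x720 at 20 fps here.
print(choose_mode([1.2, 0.8, 2.5, 1.1], 0.05, budget_mw=1000))

In a real device the scene level would come from motion vectors the encoder firmware already produces, so the classification adds little computation beyond what encoding already pays for, which is the efficiency argument the abstract makes.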



