Learning to Reconstruct HDR Images from Events, with Applications to Depth and Flow Prediction
International Journal of Computer Vision (IF 19.5) Pub Date: 2021-01-05, DOI: 10.1007/s11263-020-01410-2
Mohammad Mostafavi , Lin Wang , Kuk-Jin Yoon

Event cameras have numerous advantages over traditional cameras, such as low latency, high temporal resolution, and high dynamic range (HDR). We first investigate the potential of creating intensity images/videos from an adjustable portion of the event data stream via event-based conditional generative adversarial networks (cGANs). Using the proposed framework, we further show the versatility of our method in directly handling similar supervised tasks, such as optical flow and depth prediction. Stacks of space-time event coordinates serve as the inputs, and the framework is trained to predict intensity images, optical flow, or depth, according to the target task. We further demonstrate the unique capability of our approach in generating HDR images even under extreme illumination conditions, creating non-blurred images under rapid motion, and generating very high frame rate videos up to the temporal resolution of event cameras. The proposed framework is evaluated on a publicly available real-world dataset and on a synthetic dataset we prepared with an event camera simulator.
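The input representation described above (stacking space-time coordinates of events into frame-like tensors) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the stack count, and the choice of splitting the stream by an equal number of events per stack are all assumptions for demonstration.

```python
import numpy as np

def stack_events(events, num_stacks=3, height=180, width=240):
    """Accumulate an event stream into a fixed number of 2D stacks.

    `events` is an (N, 4) float array of (x, y, t, p) rows: pixel
    coordinates, timestamp, and polarity in {-1, +1}. The stream is
    split into `num_stacks` consecutive groups with roughly equal
    event counts (an adjustable portion of the stream), and each
    group is accumulated into one frame by summing polarities per
    pixel. The result can feed a cGAN as a multi-channel image.
    """
    events = events[np.argsort(events[:, 2])]           # sort by timestamp
    stacks = np.zeros((num_stacks, height, width), dtype=np.float32)
    for i, chunk in enumerate(np.array_split(events, num_stacks)):
        xs = chunk[:, 0].astype(int)
        ys = chunk[:, 1].astype(int)
        np.add.at(stacks[i], (ys, xs), chunk[:, 3])     # accumulate polarity
    return stacks

# Example: 1000 synthetic events on a 180x240 sensor (DAVIS-like resolution)
rng = np.random.default_rng(0)
ev = np.column_stack([
    rng.integers(0, 240, 1000),        # x
    rng.integers(0, 180, 1000),        # y
    np.sort(rng.random(1000)),         # t (already sorted here)
    rng.choice([-1.0, 1.0], 1000),     # polarity
]).astype(np.float64)
print(stack_events(ev).shape)  # (3, 180, 240)
```

Splitting by event count rather than by fixed time windows keeps each stack's information density roughly constant regardless of scene dynamics, which is one way an "adjustable portion of the event data stream" can be realized.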
