Joint Framework for Single Image Reconstruction and Super-Resolution With an Event Camera
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8), Pub Date: 2021-09-20, DOI: 10.1109/tpami.2021.3113352
Lin Wang, Tae-Kyun Kim, Kuk-Jin Yoon

Event cameras sense brightness changes at each pixel and yield asynchronous event streams instead of intensity images. They have distinct advantages over conventional cameras, such as a high dynamic range (HDR) and no motion blur. To leverage event cameras with existing image-based algorithms, a few methods have been proposed to reconstruct images from event streams. However, the output images have a low resolution (LR) and are unrealistic. These low-quality outputs hinder broader applications of event cameras, where high-quality, high-resolution (HR) images are needed. In this work, we consider the problem of reconstructing and super-resolving images from LR events when no ground-truth (GT) HR images and no degradation models are available. We propose a novel end-to-end joint framework for single image reconstruction and super-resolution from LR event data. Because no GT images are available for real inputs, our method is primarily unsupervised and deploys adversarial learning. To train our framework, we constructed an open dataset comprising simulated events and real-world images. The use of this dataset boosts network performance, and the network architectures and loss functions used in each phase further improve the quality of the resulting images. Extensive experiments show that our method surpasses state-of-the-art LR image reconstruction methods on real-world and synthetic datasets. Experiments on super-resolution (SR) image reconstruction also substantiate the effectiveness of the proposed method. We further extend our method to the more challenging problems of HDR image reconstruction, sharp image reconstruction, and color events. In addition, we demonstrate that the reconstruction and super-resolution results serve as intermediate representations of events for high-level tasks, such as semantic segmentation, object recognition, and detection. Finally, we examine how events affect the outputs of the three phases and analyze our method's efficacy through an ablation study.
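The abstract describes feeding LR event streams into a reconstruction and super-resolution network but does not state the input representation. The sketch below (not the authors' code) uses a voxel grid, a common choice in event-based reconstruction pipelines; the event format [t, x, y, polarity], the bin count, and the sensor size are illustrative assumptions, not the paper's settings.

import numpy as np

def events_to_voxel_grid(events, num_bins=5, height=180, width=240):
    """Accumulate events (N, 4) with columns [t, x, y, polarity] into a
    (num_bins, height, width) voxel grid using linear temporal interpolation."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return voxel
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = np.where(events[:, 3] > 0, 1.0, -1.0)          # map polarity to +/-1
    # normalize timestamps to [0, num_bins - 1]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    lo = np.floor(t_norm).astype(int)
    frac = t_norm - lo
    # split each event's polarity between the two neighbouring temporal bins
    np.add.at(voxel, (lo, y, x), p * (1.0 - frac))
    hi = np.clip(lo + 1, 0, num_bins - 1)
    np.add.at(voxel, (hi, y, x), p * frac)
    return voxel

# toy usage: 1000 random events on a hypothetical 240x180 sensor
rng = np.random.default_rng(0)
ev = np.stack([rng.uniform(0, 0.05, 1000),             # timestamps (s)
               rng.integers(0, 240, 1000),              # x
               rng.integers(0, 180, 1000),              # y
               rng.choice([-1, 1], 1000)], axis=1)      # polarity
grid = events_to_voxel_grid(ev)
print(grid.shape)   # (5, 180, 240)

A tensor of this shape would then serve as the LR input to the reconstruction and super-resolution stages; how those stages are built and trained is described in the paper itself.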

Last updated: 2021-09-20