Metasurface Generation of Paired Accelerating and Rotating Optical Beams for Passive Ranging and Scene Reconstruction
ACS Photonics (IF 7) Pub Date: 2020-05-22, DOI: 10.1021/acsphotonics.0c00354
Shane Colburn 1, Arka Majumdar 1,2

Depth measurements are vital for many emerging technologies with applications in augmented reality, robotics, gesture detection, and facial recognition. These applications, however, demand compact and low-power systems beyond the capabilities of many state-of-the-art depth cameras. While active illumination techniques can enable precise scene reconstruction, they increase power consumption, and systems that employ stereo require extended form factors to separate viewpoints. Here, we exploit a single, spatially multiplexed aperture of nanoscatterers to demonstrate a solution that replicates the functionality of a high-performance depth camera typically comprising a spatial light modulator, polarizer, and multiple lenses. Using cylindrical nanoscatterers that can arbitrarily modify the phase of an incident wavefront, we passively encode two complementary optical responses to depth information in a scene. The designed optical metasurfaces simultaneously generate a focused accelerating beam and a focused rotating beam that exploit wavefront propagation invariance to produce paired, adjacent images with a single camera snapshot. Compared to conventional depth-from-defocus methods, this technique simultaneously enhances both depth precision and depth of field. By decoding the captured data in software, our system produces a fully reconstructed image and transverse depth map, providing an optically passive ranging solution. In our reconstruction algorithm, we account for the field curvature of our metasurface by calculating the change in Gouy phase over the field of view, enabling a fractional ranging error of 1.7%. We demonstrate a precise, visible-wavelength, polarization-insensitive metasurface depth camera with a compact 2 mm² aperture.
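For readers who want a concrete handle on the two depth encodings described above, the sketch below illustrates one common way to realize and invert them: a cubic phase added to a lens phase as a stand-in for the focused accelerating (Airy-like) beam, and an arctan Gouy-phase defocus model for decoding the rotating beam's PSF rotation angle into object distance. Every parameter value, the cubic-phase model, and the arctan mapping are illustrative assumptions, not the paper's actual metasurface design or calibrated mapping.

```python
import numpy as np

# --- Assumed parameters for illustration (not from the paper) ---
wavelength = 633e-9   # visible design wavelength (m)
f = 1e-3              # focal length (m)
aperture = 1.4e-3     # aperture side length (m); the paper reports a 2 mm^2 aperture
n = 512               # samples across the aperture
alpha = 1e10          # cubic-phase strength (rad/m^3); sets the acceleration

x = np.linspace(-aperture / 2, aperture / 2, n)
xx, yy = np.meshgrid(x, x)
k = 2 * np.pi / wavelength

# Hyperbolic lens phase: focuses a normally incident plane wave at distance f.
lens_phase = -k * (np.sqrt(xx**2 + yy**2 + f**2) - f)

# Adding a cubic term to the lens phase is a standard way to obtain a
# focused accelerating (finite-energy Airy-like) beam, whose focal spot
# translates laterally as the image plane defocuses.
accelerating_phase = np.mod(lens_phase + alpha * (xx**3 + yy**3), 2 * np.pi)

# For the rotating beam, a common model is that the PSF rotation angle
# tracks the Gouy-phase defocus term theta = arctan(dz / z_R), where dz
# is the axial defocus and z_R the Rayleigh range of the focused beam.
def object_distance(theta, z_R, f, sensor_dist):
    """Invert a measured PSF rotation angle into an object distance
    using the arctan defocus model and the thin-lens equation.
    (Illustrative; the paper uses a calibrated mapping that also
    corrects for field curvature across the field of view.)"""
    dz = z_R * np.tan(theta)            # axial defocus of the image plane
    s_i = sensor_dist + dz              # plane where the object focuses
    return 1.0 / (1.0 / f - 1.0 / s_i)  # thin lens: 1/s_o = 1/f - 1/s_i

# Example: sensor placed slightly beyond the focal plane, so a finite
# rotation angle maps to a finite object distance.
s_o = object_distance(theta=0.1, z_R=50e-6, f=f, sensor_dist=1.05e-3)
```

Note the design intuition this sketch captures: both beams are propagation-invariant in form, so defocus shows up not as blur but as a smooth lateral shift (accelerating beam) or rotation (rotating beam) of a sharp spot, which is why the technique can improve depth precision and depth of field at once.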

Updated: 2020-05-22