All-optical synthesis of an arbitrary linear transformation using diffractive surfaces
Light: Science & Applications (IF 20.6), Pub Date: 2021-09-24, DOI: 10.1038/s41377-021-00623-5
Onur Kulce, Deniz Mengu, Yair Rivenson, Aydogan Ozcan

Spatially-engineered diffractive surfaces have emerged as a powerful framework to control light-matter interactions for statistical inference and the design of task-specific optical components. Here, we report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input (Ni) and output (No), where Ni and No represent the number of pixels at the input and output fields-of-view (FOVs), respectively. First, we consider a single diffractive surface and use a matrix pseudoinverse-based method to determine the complex-valued transmission coefficients of the diffractive features/neurons to all-optically perform a desired/target linear transformation. In addition to this data-free design approach, we also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation. We compared the all-optical transformation errors and diffraction efficiencies achieved using data-free designs as well as data-driven (deep learning-based) diffractive designs to all-optically perform (i) arbitrarily-chosen complex-valued transformations including unitary, nonunitary, and noninvertible transforms, (ii) 2D discrete Fourier transformation, (iii) arbitrary 2D permutation operations, and (iv) high-pass filtered coherent imaging. Our analyses reveal that if the total number (N) of spatially-engineered diffractive features/neurons is ≥Ni × No, both design methods succeed in all-optical implementation of the target transformation, achieving negligible error. However, compared to data-free designs, deep learning-based diffractive designs are found to achieve significantly larger diffraction efficiencies for a given N and their all-optical transformations are more accurate for N < Ni × No. 
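To make the data-free, pseudoinverse-based design idea concrete, here is a minimal numerical sketch. The propagation matrices H1 and H2, the dimensions, and the random target transform A are illustrative stand-ins (the paper's actual forward model uses free-space diffraction between physical planes); only the structure of the solve follows the abstract: the output field is linear in the N transmission coefficients, so the coefficients can be found with a matrix pseudoinverse, and an exact solution generically exists when N ≥ Ni × No.

```python
import numpy as np

rng = np.random.default_rng(0)
Ni, No, N = 4, 4, 16  # N = Ni * No, so an exact solution is expected

# Hypothetical propagation matrices (stand-ins for free-space diffraction kernels)
H1 = rng.normal(size=(N, Ni)) + 1j * rng.normal(size=(N, Ni))  # input FOV -> surface
H2 = rng.normal(size=(No, N)) + 1j * rng.normal(size=(No, N))  # surface -> output FOV

# Output = H2 @ diag(t) @ H1 @ input, i.e. linear in the N transmission coeffs t.
# Each neuron k contributes the rank-1 matrix outer(H2[:, k], H1[k, :]);
# stacking their vectorizations gives a (No*Ni, N) linear system in t.
B = np.stack([np.outer(H2[:, k], H1[k, :]).ravel() for k in range(N)], axis=1)

A = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))  # target transform
t = np.linalg.pinv(B) @ A.ravel()  # data-free, pseudoinverse-based solution

A_hat = H2 @ np.diag(t) @ H1  # transform realized by the diffractive surface
print(np.allclose(A_hat, A))  # True: target matched to numerical precision
```

Shrinking N below Ni × No makes the system under-determined, and the pseudoinverse then returns only the least-squares-optimal coefficients, which is the regime where the abstract reports data-driven designs become more accurate.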
These conclusions are generally applicable to various optical processors that employ spatially-engineered diffractive surfaces.
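The data-driven design approach can be sketched with the same kind of toy forward model. This is not the authors' deep-learning pipeline (which optimizes physical diffractive layers through a diffraction model); it is a hypothetical plain gradient-descent fit of the N transmission coefficients to input/output examples of the target transform, shown in the under-determined regime N < Ni × No. All matrices and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
Ni, No, N = 4, 4, 12  # N < Ni * No: under-determined regime
H1 = rng.normal(size=(N, Ni)) + 1j * rng.normal(size=(N, Ni))   # input -> surface
H2 = rng.normal(size=(No, N)) + 1j * rng.normal(size=(No, N))   # surface -> output
A = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))  # target transform

# "Training data": example input fields and their target outputs y = A @ x
X = rng.normal(size=(Ni, 256)) + 1j * rng.normal(size=(Ni, 256))
Y = A @ X
S = H1 @ X  # fields arriving at the diffractive surface

t = np.zeros(N, dtype=complex)  # transmission coefficients to optimize
lr = 1e-7                       # small fixed step (illustrative, not tuned)
losses = []
for _ in range(500):
    R = H2 @ (t[:, None] * S) - Y  # residual on the training examples
    losses.append(float(np.sum(np.abs(R) ** 2)))
    # Wirtinger gradient of the squared error w.r.t. conj(t)
    grad = np.diag(H2.conj().T @ R @ S.conj().T)
    t = t - lr * grad

print(losses[0] > losses[-1])  # True: the fit improves over training
```

Because the loss is quadratic in t, gradient descent with a small enough step decreases it monotonically; the point of the sketch is only that a data-driven optimizer can trade exactness for other objectives (such as diffraction efficiency) when N < Ni × No admits no exact solution.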


