Training of photonic neural networks through in situ backpropagation and gradient measurement
Optica (IF 10.4) Pub Date: 2018-07-19, DOI: 10.1364/optica.5.000864
Tyler W. Hughes, Momchil Minkov, Yu Shi, Shanhui Fan

Recently, integrated optics has gained interest as a hardware platform for implementing machine learning algorithms. Of particular interest are artificial neural networks, since matrix-vector multiplications, which are used heavily in artificial neural networks, can be done efficiently in photonic circuits. The training of an artificial neural network is a crucial step in its application. However, currently on the integrated photonics platform there is no efficient protocol for the training of these networks. In this work, we introduce a method that enables highly efficient, in situ training of a photonic neural network. We use adjoint variable methods to derive the photonic analogue of the backpropagation algorithm, which is the standard method for computing gradients of conventional neural networks. We further show how these gradients may be obtained exactly by performing intensity measurements within the device. As an application, we demonstrate the training of a numerically simulated photonic artificial neural network. Beyond the training of photonic machine learning implementations, our method may also be of broad interest to experimental sensitivity analysis of photonic systems and the optimization of reconfigurable optics platforms.
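To make the idea concrete, below is a minimal numerical sketch, not the authors' in situ protocol: a "photonic-style" network whose layers are complex matrix-vector products parameterized by phase shifters, trained with ordinary automatic differentiation. In the paper, the same phase-shifter gradients are obtained from intensity measurements inside the device via an adjoint (backpropagated) field; here jax.grad plays that role purely in simulation. All names, shapes, and the simplified layer construction are illustrative assumptions.

```python
# Hypothetical simulation sketch of training a phase-shifter-parameterized network.
import jax
import jax.numpy as jnp

N = 4          # number of waveguide modes per layer (assumed)
LAYERS = 2     # number of mesh layers (assumed)

def layer_matrix(phases):
    """Build a unitary transfer matrix from a vector of phase shifts.
    A real device would use a mesh of Mach-Zehnder interferometers; a diagonal
    phase screen followed by a fixed unitary mixing matrix (a DFT) is used here
    only to keep the sketch short."""
    mix = jnp.fft.fft(jnp.eye(N)) / jnp.sqrt(N)
    return mix @ jnp.diag(jnp.exp(1j * phases))

def forward(params, x):
    """Propagate a complex input field through the layered mesh."""
    z = x.astype(jnp.complex64)
    for phases in params:
        z = layer_matrix(phases) @ z
        # Saturable-absorber-like nonlinearity: squash the amplitude, keep the phase.
        z = z * jnp.tanh(jnp.abs(z)) / (jnp.abs(z) + 1e-9)
    return jnp.abs(z) ** 2        # detected output intensities

def loss(params, x, target):
    return jnp.sum((forward(params, x) - target) ** 2)

key = jax.random.PRNGKey(0)
params = [jax.random.uniform(k, (N,), minval=0.0, maxval=2 * jnp.pi)
          for k in jax.random.split(key, LAYERS)]
x = jnp.ones(N) / jnp.sqrt(N)
target = jnp.array([1.0, 0.0, 0.0, 0.0])

grad_fn = jax.jit(jax.grad(loss))
for step in range(200):          # plain gradient descent on the phase settings
    grads = grad_fn(params, x, target)
    params = [p - 0.1 * g for p, g in zip(params, grads)]

print("final loss:", loss(params, x, target))
```

In the experimental scheme described in the abstract, the gradient with respect to each phase shifter would instead be read out from interference intensities measured in the circuit itself, so no external simulation of the device is required during training.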

Last updated: 2018-07-21