• Featured in Physics
  • Open Access

Far-Field Subwavelength Acoustic Imaging by Deep Learning

Bakhtiyar Orazbayev and Romain Fleury
Phys. Rev. X 10, 031029 – Published 7 August 2020
See Focus story: Machine Learning Makes High-Resolution Imaging Practical

Abstract

Seeing and recognizing an object whose size is much smaller than the illumination wavelength is a challenging task for an observer placed in the far field, due to the diffraction limit. Recent advances in near- and far-field microscopy have offered several ways to overcome this limitation; however, they often use invasive markers and require intricate equipment with complicated image postprocessing. On the other hand, a simple marker-free solution for high-resolution imaging may be found by exploiting resonant metamaterial lenses that can convert the subwavelength image information contained in the near field of the object to propagating field components that can reach the far field. Unfortunately, resonant metalenses are inevitably sensitive to absorption losses, which has so far largely hindered their practical applications. Here, we solve this vexing problem and show that this limitation can be turned into an advantage when metalenses are combined with deep learning techniques. We demonstrate that combining deep learning with lossy metalenses allows recognizing and imaging largely subwavelength features directly from the far field. Our acoustic learning experiment shows that, despite being 30 times smaller than the wavelength of sound, the fine details of images can be successfully reconstructed and recognized in the far field, which is crucially favored by the presence of absorption. We envision applications in acoustic image analysis, feature detection, object classification, or as a novel noninvasive acoustic sensing tool in biomedical applications.

  • Received 18 November 2019
  • Revised 13 May 2020
  • Accepted 8 July 2020

DOI: https://doi.org/10.1103/PhysRevX.10.031029

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.


Physics Subject Headings (PhySH)

General Physics

Focus


Machine Learning Makes High-Resolution Imaging Practical

Published 7 August 2020

A new acoustic technique involving machine learning could lead to cheaper and faster high-resolution medical imaging.


Authors & Affiliations

Bakhtiyar Orazbayev and Romain Fleury*

  • Laboratory of Wave Engineering, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland

  • *romain.fleury@epfl.ch

Popular Summary

While we use sound every day to communicate, its meter-scale wavelength is so large that audible acoustic waves cannot be used to image everyday objects, whose geometrical details are typically much smaller than this characteristic sonic wavelength. While, in principle, the use of low-frequency sound could enable new applications such as through-wall monitoring or through-skull biomedical imaging, these dreams are crushed by a fundamental physical law, the diffraction limit, which states that the maximal resolution of conventional imaging systems is approximately half of the imaging wavelength. Here, we use artificial-intelligence methods to work around this limit and use airborne audible sound to image, recognize, and classify very small objects shaped as digits or letters.
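The scales involved are easy to check with back-of-the-envelope numbers. The sketch below assumes a speed of sound of 343 m/s in air and an illustrative audible frequency of 500 Hz; neither value is taken from the paper's actual experiment.

```python
# Back-of-the-envelope diffraction-limit numbers for audible sound.
# The speed of sound and the frequency are illustrative assumptions.
c = 343.0  # speed of sound in air, m/s
f = 500.0  # an audible frequency, Hz

wavelength = c / f               # ~0.69 m: meter-scale, as stated above
rayleigh_limit = wavelength / 2  # conventional resolution limit, ~0.34 m
feature_scale = wavelength / 30  # scale of the features imaged here, ~2.3 cm

print(wavelength, rayleigh_limit, feature_scale)
```

A conventional imaging system at this frequency could not resolve anything finer than roughly a third of a meter, an order of magnitude coarser than the centimeter-scale digit and letter features recognized in this work.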

There are a number of ways to bend the diffraction limit, but these generally require complex setups, extensive image processing, or tagging the target with some sort of marker (an invasive method potentially problematic in biomedical applications). The limit itself originates from evanescent waves, which scatter off small details but cannot propagate far from the target. Lenses made of metamaterials (metalenses) can recover the lost evanescent waves but suffer from absorption losses. We analyze theoretically how dissipative processes help a machine learn to recognize small features, and we demonstrate experimentally a learning metalens capable of reproducing the images of thousands of differently shaped objects whose geometrical details are 30 times smaller than the wavelength of sound.
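The learning step can be caricatured in a few lines. In the toy sketch below, a fixed random matrix stands in for the metalens transfer function, and a one-layer ridge-regression reconstructor stands in for the paper's deep network; every size, name, and noise level is an assumption chosen for illustration, not the actual experimental pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the metalens: a fixed random transfer matrix maps a
# 64-pixel "object" to 32 far-field measurements (all sizes illustrative).
n_pix, n_meas = 64, 32
A = rng.normal(size=(n_meas, n_pix)) / np.sqrt(n_meas)

def measure(x, noise=0.01):
    """Far-field response of object x, with additive measurement noise."""
    return A @ x + noise * rng.normal(size=n_meas)

# Training set: random binary objects and their far-field responses.
X_train = rng.integers(0, 2, size=(2000, n_pix)).astype(float)
Y_train = np.stack([measure(x) for x in X_train])

# "Learning" step: centered ridge regression gives a linear reconstruction
# operator W -- a one-layer stand-in for a trained neural network.
x_mean, y_mean = X_train.mean(axis=0), Y_train.mean(axis=0)
Xc, Yc = X_train - x_mean, Y_train - y_mean
lam = 1e-2
W = Xc.T @ Yc @ np.linalg.inv(Yc.T @ Yc + lam * np.eye(n_meas))

def reconstruct(y):
    """Recover an object estimate from its far-field response alone."""
    return x_mean + W @ (y - y_mean)

# Evaluate on unseen objects.
X_test = rng.integers(0, 2, size=(200, n_pix)).astype(float)
X_rec = np.stack([reconstruct(measure(x)) for x in X_test])
mse = np.mean((X_rec - X_test) ** 2)
print(mse)
```

Even with half as many far-field measurements as object pixels, the learned operator reconstructs unseen objects substantially better than the trivial prior-mean guess (per-pixel error 0.25 for these binary objects). This is the spirit of the result: a fixed scattering medium plus a trained inverse map recovers information that a conventional lens would simply discard.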

We believe that this method could be transposed to higher frequencies and other types of waves, from ultrasound to optics, and applied to novel material characterization and medical imaging strategies. For example, tiny resonators embedded at the entrance of an optical fiber may allow subwavelength details to be identified by a neural network at the other end of the fiber.


Issue

Vol. 10, Iss. 3 — July - September 2020


Reuse & Permissions

It is not necessary to obtain permission to reuse this article or its components as it is available under the terms of the Creative Commons Attribution 4.0 International license. This license permits unrestricted use, distribution, and reproduction in any medium, provided attribution to the author(s) and the published article's title, journal citation, and DOI are maintained. Please note that some figures may have been included with permission from other third parties. It is your responsibility to obtain the proper permission from the rights holder directly for these figures.
