Open Access. Published by De Gruyter, April 4, 2023

Active 3D positioning and imaging modulated by single fringe projection with compact metasurface device

  • Xiaoli Jing, Yao Li, Junjie Li, Yongtian Wang and Lingling Huang
From the journal Nanophotonics

Abstract

Three-dimensional (3D) information is vital for capturing detailed features of the physical world and is used in numerous applications such as industrial inspection, automatic navigation, and identity authentication. However, implementations of 3D imagers typically rely on bulky optics. Metasurfaces, as next-generation optics, show flexible modulation abilities and excellent performance when combined with computer vision algorithms. Here, we demonstrate an active 3D positioning and imaging method with a large field of view (FOV) by single fringe projection based on a metasurface, and solve the accurate and robust calibration problem with a depth uncertainty of 4 μm. With a compact metasurface projector, the demonstrated method achieves submillimeter positioning accuracy over an FOV of 88°, offering robust and fast 3D reconstruction of texture-less scenes owing to the modulation characteristic of the fringe. Such a scheme may accelerate engineering applications of metadevices with the continued growth of flat-optics manufacturing processes.

1 Introduction

Optical 3D sensors play a key role in booming 3D information technology and are ubiquitous in industry [1–3], medicine [4], artistry [5, 6], and virtual reality [7]. Compact size and high performance are the ultimate goals that optical devices always pursue [8]. Recently, combined with computer vision techniques, metasurfaces [9, 10] have enabled the miniaturization of optical devices, allowing flexible deployment in application scenes with low cost and complexity. A metasurface is an artificial optical surface composed of a two-dimensional array of subwavelength-scale structures, providing novel modulation of the amplitude [11, 12], phase [13, 14], polarization [15, 16] and orbital angular momentum [17, 18] of the light field. With the advantages of compact size and large numerical aperture [19], metasurfaces have already been introduced in innovative applications of holographic display, beam shaping, and optical computing, surpassing the performance of conventional lenses and diffractive optical elements (DOEs) [20, 21].

Prior metasurface-based 3D sensors generally belong to passive imaging techniques, which enable the miniaturization of 3D sensors and the achievement of 3D positioning or imaging. For example, both the bifocal metalens [22] and the point spread function (PSF)-engineered metasurface [23] are based on a designed depth-dependent response of the metasurface. Inspired by depth from defocus (DFD), the bifocal metalens simultaneously captures two differently defocused images for depth extraction. As the double-helix beam generated by a phase-coded metasurface rotates with depth, the depth information is retrieved from a captured image containing two identical scenes separated by a rotation angle. In addition, light-field-based 3D imaging techniques such as the light-field camera [24] and the metalens array [25] have been proposed. Both obtain depth information from the multiple viewpoints provided by a lens array, akin to multi-view 3D reconstruction. Since they rely on image capture from multiple viewpoints or at multiple modulated depths, these 3D imaging techniques inherit the tradeoff between spatial information and depth information. Recently, active 3D imaging methods based on dot projection [8, 26] have been proposed. The large space-bandwidth product of metasurfaces offers high-density dot projection, reaching 10k–20k dots, a potential route for active imaging methods to improve spatial resolution. However, the depth reconstruction is time-consuming, since the calculation involves a large number of matching operations.

In contrast, a metasurface can be introduced into a 3D imaging system as a coded light source to offer another information degree of freedom, which brings the extra advantages of tackling 3D imaging of texture-less objects and of working in dark conditions. We have therefore studied active 3D positioning and imaging based on a metasurface device. Here, we propose single-fringe illumination modulation to achieve frequency coding of the captured image, which is robust for single-shot depth deduction. A geometric metasurface is used to project the designed fringe, and the deformed fringe image captured from the reflection off the object can be used to reconstruct the depth information. Considering the phase ambiguity caused by the single-direction fringe and phase unwrapping, we propose a calibration model that introduces the epipolar constraint and similarity search, together with a corresponding algorithm based on the alternating direction method of multipliers (ADMM), resulting in a re-projection error of 0.2 pixel. Based on the calibration model, a 3D positioning algorithm is proposed and experimentally demonstrated with a positioning accuracy of 0.5 mm over working distances ranging from 300 mm to 400 mm. Finally, we successfully conduct a 3D facial imaging experiment, indicating the practicability of the proposed method in real scenes. In conjunction with the scalability of light-source integration [27], we believe that our method will accelerate applications in computer vision, personal authentication, artificial intelligence, and beyond, with drastically improved compactness and robust 3D positioning and imaging performance.

2 Results

Active 3D imaging copes well with the texture-less objects frequently encountered in real scenes, because the coded illumination offers extra information for depth estimation. Here we propose a metasurface device for single-shot 3D positioning and imaging, as shown in Figure 1. Fringe projection is used for robust depth extraction, leveraging the effective noise removal of encoding in Fourier space. Combined with triangulation, depth information can be calculated from the deformed fringe image, which is captured by the camera from the reflection off the object, as shown in Figure 1a.

Figure 1: Schematic of the proposed method and principle of operation. (a) Experimental setup of the 3D imaging system. (b) Calibration algorithm. The loss function of the optimization algorithm is a similarity evaluation, and the line constraint x_iW = 0 denotes that the corresponding points x_i of a certain pixel position x_0^i (red, blue, yellow, and green points in each calibrated image i) must lie along a certain line (red, blue, yellow, and green points in the reference image) in the reference image. (c) Application of 3D positioning and imaging.

Note that a single illumination fringe cannot describe spatial position without ambiguity because of its single-directional encoding, which creates difficulties for accurate calibration. Assuming the fixed pixel position x_0^i of the ith calibration image has corresponding points x_i in the reference image, as shown in Figure 1b, we solve the sub-pixel matching problem with a carefully designed algorithm that combines the geometric constraint with the similarity requirement on x_i. The relation P between the depth h and the correspondence (x_0, x_i) can then be obtained by the proposed optimization algorithm, leveraging the nonlinear mathematical relation between them based on the triangular-geometry metrology method (see the details of the look-up table in Supplementary Note 4). Herein, we propose a fast, accurate positioning method based on the prior calibration result P, which can also be applied to 3D imaging as shown in Figure 1c.

The captured image can be modeled as a deformed fringe modulated by the depth information as shown in Figure 1a, and we represent this as follows,

(1) $I(x, y) = A(x, y) + B(x, y)\cos\left[2\pi f_x x + 2\pi f_y y + \varphi(x, y)\right]$

where I(x, y) is the intensity distribution of the captured image, and A(x, y) and B(x, y) are the background intensity and object reflectivity, respectively. The undeformed fringe, with spatial frequencies (f_x, f_y) equal to 1/10 pixel⁻¹, is shown in Figure 2a. The image reflected from the object is regarded as a deformed fringe in which a phase term φ(x, y), nonlinearly related to the depth h(x, y), is added to the original fringe; this nonlinear relationship depends on system parameters including the relative position between the metasurface and the camera and the focal length of the camera. Leveraging the cosine form of the modulation shown in Figure 2a, we transform I(x, y) into Fourier space to obtain the spectrum $\tilde{I}(k_x, k_y)$,

(2) $\tilde{I}(k_x, k_y) = \tilde{A}(k_x, k_y) + Q(k_x - f_x, k_y - f_y) + Q^*(k_x + f_x, k_y + f_y)$

where Q is the Fourier spectrum of $B(x, y)\exp[j\varphi(x, y)]/2$, and Q* is the conjugate of Q. The fundamental frequency component Q carries the phase information φ(x, y) and is separable from the zero-order component $\tilde{A}(k_x, k_y)$, the Fourier spectrum of the background intensity A(x, y). Therefore, depth estimation can be achieved by phase extraction based on frequency filtering and the inverse Fourier transform (see Supplementary Note 1). For high holographic reconstruction efficiency, we design the phase hologram shown in Figure 2b, calculated by the Gerchberg–Saxton (GS) algorithm with a full FOV of 88°.
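As a concrete illustration of Eqs. (1)–(2) and the frequency-filtering step of Supplementary Note 1, the following Python sketch (our own minimal reconstruction, not the authors' code) isolates the fundamental component Q with a rectangular band-pass window and recovers the wrapped phase; the window half-width and the synthetic test fringe are illustrative assumptions.

```python
import numpy as np

def extract_phase(img, fx, fy, halfwidth=0.05):
    """Recover the wrapped phase phi(x, y) from a single deformed fringe."""
    ny, nx = img.shape
    kx = np.fft.fftfreq(nx)                  # frequencies in cycles/pixel
    ky = np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    spectrum = np.fft.fft2(img)
    # Band-pass the fundamental component Q(kx - fx, ky - fy) of Eq. (2).
    mask = (np.abs(KX - fx) < halfwidth) & (np.abs(KY - fy) < halfwidth)
    q = np.fft.ifft2(spectrum * mask)
    # Remove the carrier 2*pi*(fx*x + fy*y); the residual angle is phi.
    X, Y = np.meshgrid(np.arange(nx), np.arange(ny))
    carrier = np.exp(1j * 2 * np.pi * (fx * X + fy * Y))
    return np.angle(q * np.conj(carrier))    # wrapped to (-pi, pi]

# Synthetic check with the paper's carrier frequency of 1/10 pixel^-1:
X, Y = np.meshgrid(np.arange(256), np.arange(256))
phi_true = 2.0 * np.exp(-((X - 128) ** 2 + (Y - 128) ** 2) / 2000.0)
img = 0.5 + 0.5 * np.cos(2 * np.pi * (X + Y) / 10 + phi_true)
phi = extract_phase(img, fx=0.1, fy=0.1)     # approximates phi_true
```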

Figure 2: Metasurface design. (a) Functionality of fringe modulation. Depth information is embedded in the phase around the frequency component of the modulated fringe. (b) Phase profile provided by the metasurface. (c) Nanofin design. (d) SEM images. d1 and d2 are the top view and side view, respectively.

We choose meta-structures made of amorphous silicon to implement the phase engineering. Leveraging the broadband performance and robustness against fabrication errors of geometric metasurfaces, amorphous silicon nanopillars with different orientation angles θ(x, y) are arranged on a fused silica substrate to fulfill the desired phase profile. The output electric field E_out transmitted through the nanopillars can be described as

(3) $E_{\text{out}} = \dfrac{t_l + t_s}{2}\begin{pmatrix}1 \\ i\end{pmatrix} + \dfrac{t_l - t_s}{2}\,e^{\pm i 2\theta(x, y)}\begin{pmatrix}1 \\ \mp i\end{pmatrix}$

where t_l and t_s denote the complex transmission coefficients along the long and short axes of the rectangular nanopillars. Therefore, maximizing $\left|\left(t_l - t_s\right)/2\right|^2$ is required for high polarization conversion efficiency, achieved by optimizing structure parameters such as the height, length, and width of the nanopillars. The period and height of the nanopillars are chosen as 316 nm and 600 nm, respectively, to cover phase shifts from 0 to 2π. A rigorous coupled-wave analysis (RCWA) method is used to optimize the in-plane parameters of the nanopillars at the operating wavelength of 633 nm, and the optimized length and width are 180 nm and 80 nm (see the efficiency analysis in Supplementary Note 2). The fabricated metasurface is composed of 1578 × 1578 nanopillars, patterned by electron beam lithography and reactive ion etching; the corresponding scanning electron microscopy images (top view and side view) are shown in Figure 2d.
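To make Eq. (3) concrete, the sketch below (an illustration we added, not the authors' code) builds the rotated Jones matrix of a nanofin and projects the output onto the cross-circular polarization, whose amplitude carries the geometric phase 2θ and whose weight $|\left(t_l - t_s\right)/2|^2$ is the conversion efficiency discussed above. The coefficients t_l = 1, t_s = −1 are idealized placeholders for the RCWA-optimized values.

```python
import numpy as np

def conversion_efficiency(tl, ts, theta):
    """Fraction of incident circularly polarized light converted to the
    opposite handedness, which acquires the geometric phase 2*theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    J = R @ np.diag([tl, ts]) @ R.T            # rotated birefringent nanofin
    lcp = np.array([1, 1j]) / np.sqrt(2)       # incident circular state
    rcp = np.array([1, -1j]) / np.sqrt(2)      # converted (cross) state
    out = J @ lcp
    return np.abs(np.vdot(rcp, out)) ** 2      # equals |(tl - ts) / 2|**2

# An ideal half-wave-plate-like nanofin (tl = 1, ts = -1) converts fully:
print(conversion_efficiency(1.0, -1.0, theta=np.pi / 8))   # -> 1.0
```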

Fringe modulation performs the search for x_i by requiring the extracted phase to equal the phase at x_0^i, but the single-directional fringe leaves points along the fringe direction indistinguishable, since they share identical phase. Because the key problem in triangulation is determining the corresponding points, we introduce epipolar geometry [28] as a search constraint, as illustrated in Figure 3a. The set of spatial points {P1, P2, P3} imaged at a certain pixel position x_0 of the image plane is captured at different depths, and the corresponding projection pattern is the same as that of the points set {M1, M2, M3} in the reference plane. By this geometric property, the points in {M1, M2, M3} lie on a line, and the corresponding points {x_0, x_i} in the image plane likewise satisfy collinearity. As seen in Figure 3b, the captured projection pattern contains speckles that retain a certain degree of similarity among the calibrated images (experimental validation can be found in Supplementary Note 3). Consequently, for a given set of calibrated images, searching for the corresponding points is recast as finding the points {x_i} most similar to the given point x_0 that simultaneously fall precisely on one straight line. The correspondence matching is modeled as

(4) $(\hat{x}, \hat{y}) = \arg\min \dfrac{1}{2} \sum_{i} \sum_{\Omega_i} \left\| f_i(x_0 + \Delta x, y_0 + \Delta y) - g(x + \Delta x, y + \Delta y) \right\|^2, \quad i = 2{:}N, \quad \text{s.t. } \mathbf{u}W = 0, \ \mathbf{u} = C(x, y)$

where (x, y) are the spatial coordinates of the optimized corresponding points {x_i} across the N calibrated images, and u is the matrix formed from (x, y) by the operation C. W is the auxiliary matrix for the line constraint (see the details in Supplementary Note 4). f_i and g are the calibrated image with serial number i and the reference image, respectively. Δx and Δy ensure that the evaluated points remain within the sub-region Ω_i. The initial value of x_i can be estimated by proximal-phase search and similarity determination.
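The sketch below illustrates the structure of Eq. (4) under simplifying assumptions of our own: it alternates an integer-pixel similarity refinement of each x_i against the fixed patch at x_0 with an orthogonal projection of the point set onto its least-squares line. The authors' solver operates at sub-pixel precision via ADMM (Supplementary Note 4); this is only a schematic stand-in.

```python
import numpy as np

def refine_similarity(g, f_i, x0, est, patch=7, search=3):
    """Move est to the integer position in the reference image g whose patch
    best matches the patch of calibrated image f_i around the fixed pixel x0.
    (Assumes all points lie away from the image borders.)"""
    h = patch // 2
    rx, ry = x0
    ref = f_i[ry - h:ry + h + 1, rx - h:rx + h + 1]
    ref = ref - ref.mean()
    ex, ey = int(round(est[0])), int(round(est[1]))
    best, best_pt = np.inf, (ex, ey)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = ex + dx, ey + dy
            cand = g[y - h:y + h + 1, x - h:x + h + 1]
            cost = np.sum((cand - cand.mean() - ref) ** 2)
            if cost < best:
                best, best_pt = cost, (x, y)
    return best_pt

def project_to_line(points):
    """Fit a least-squares line through the points (via PCA) and project
    each point orthogonally onto it -- the collinearity constraint."""
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean)
    d = vt[0]                                   # dominant direction
    return mean + ((pts - mean) @ d)[:, None] * d

def match_with_line_constraint(g, calib_imgs, x0, init_pts, iters=5):
    """Alternate similarity refinement and line projection, Eq. (4)-style."""
    pts = list(init_pts)
    for _ in range(iters):
        pts = [refine_similarity(g, f, x0, p) for f, p in zip(calib_imgs, pts)]
        pts = [tuple(p) for p in project_to_line(pts)]
    return pts
```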

Figure 3: Calibration algorithm and results. (a) Epipolar geometry in the 3D imaging system. (b) The initial value of x_i. The initial corresponding coordinate is estimated by finding the most similar points from the regions with proximal phase value in the reference image. The orange line denotes the regions with proximal phase. (c) The goal of the calibration algorithm. The calibration aims to make each point x_i have a high similarity with the corresponding calibrated image f_i and lie on a line.

The phase distribution of each calibrated image can be calculated by frequency filtering and the inverse Fourier transform (see Supplementary Note 1); we can then find the possible positions according to the proximal phase value of φ0(x0, y0) in the reference image, shown as an orange line in Figure 3b. The initial estimate of x_i is determined by finding the most similar point along the orange line. Based on this initial estimate, we develop a solver for Eq. (4) using the alternating direction method of multipliers (ADMM) [29]. One of the optimization directions is the similarity match between {x_i} and the corresponding calibrated points, as shown in Figure 3c, and the other is the line constraint on {x_i} (see Supplementary Note 4). Meanwhile, we design a least-squares iteration method based on the relation between the depth h and the spatial coordinates $(\hat{x}, \hat{y})$ in the image plane,

(5) $h(x, y) = P(\hat{x}, \hat{y}; a, b)$

where P denotes the mathematical operator (see Supplementary Note 4), and a and b are related to system parameters such as the relative position between the metasurface and the camera and the focal length of the camera; thus a look-up table (a, b) can be established for fast 3D reconstruction.
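The explicit form of P is derived in Supplementary Note 4 and is not reproduced here; as a hedged stand-in, the sketch below uses the classical phase-to-height model of fringe projection, 1/h = a + b/Δφ, purely to show how a per-pixel look-up table (a, b) could be fitted from calibration planes at known depths. The model choice is our assumption, not the paper's exact operator.

```python
import numpy as np

def fit_lut(dphi_stack, heights):
    """Fit per-pixel coefficients (a, b) of the illustrative model
    1/h = a + b/dphi from K calibration planes at known depths.

    dphi_stack: (K, H, W) phase differences to the reference plane (nonzero)
    heights:    (K,) plane depths set by the translation stage (nonzero)
    """
    K, H, W = dphi_stack.shape
    a = np.empty((H, W))
    b = np.empty((H, W))
    inv_h = 1.0 / np.asarray(heights, dtype=float)
    for i in range(H):
        for j in range(W):
            # Linear least squares in (a, b): 1/h_k = a + b / dphi_k
            A = np.column_stack([np.ones(K), 1.0 / dphi_stack[:, i, j]])
            (a[i, j], b[i, j]), *_ = np.linalg.lstsq(A, inv_h, rcond=None)
    return a, b

def depth_from_lut(dphi, a, b):
    """Evaluate the stand-in for Eq. (5): h(x, y) = 1 / (a + b / dphi)."""
    return 1.0 / (a + b / dphi)
```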

To demonstrate the validity of the calibration results, the zero-normalized sum of squared differences (ZNSSD) correlation coefficient and the deviation off the line are calculated, as shown in Figure 4a and b, respectively; these directly evaluate the correctness of the corresponding points.

(6) $C_{\text{ZNSSD}} = \sum_{x=1}^{M}\sum_{y=1}^{N}\left[\dfrac{f(x, y) - f_m}{\sqrt{\sum_{x=1}^{M}\sum_{y=1}^{N}\left[f(x, y) - f_m\right]^2}} - \dfrac{g(x, y) - g_m}{\sqrt{\sum_{x=1}^{M}\sum_{y=1}^{N}\left[g(x, y) - g_m\right]^2}}\right]^2$
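A direct implementation of Eq. (6) is straightforward; the minimal sketch below (patch extraction and sub-pixel interpolation omitted) computes the score for two same-sized patches, where f_m and g_m are the patch means.

```python
import numpy as np

def znssd(f, g):
    """Eq. (6): ZNSSD between two same-sized patches f and g.
    0 is a perfect match; the score relates to the zero-normalized
    cross-correlation by C_ZNSSD = 2 * (1 - ZNCC)."""
    fz = f - f.mean()
    gz = g - g.mean()
    fz = fz / np.sqrt(np.sum(fz ** 2))   # zero mean, unit norm
    gz = gz / np.sqrt(np.sum(gz ** 2))
    return np.sum((fz - gz) ** 2)
```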
Figure 4: Calibration results and 3D positioning results. (a) ZNSSD correlation values obtained from the optimized corresponding points. (b) Deviation values off the line calculated from the optimized corresponding points. (c) PV and RMS values calculated from the optimized corresponding points and the look-up table. (d) Positioning results. The blue line represents the deviation between the average reconstructed depth and the real depth value, and the bluish area is bounded by the maximum and minimum deviations of the depth distribution in each experiment. The orange line represents the RMS (root-mean-square) value of the depth distribution in each experiment.

The results are calculated from 25 randomly selected points in each of the selected calibrated images, named Img 1, Img 2, …, Img 8. All correlation values are greater than 0.8, and the absolute deviations are smaller than 0.2 pixel. Therefore, the optimized corresponding points obtained by the proposed algorithm fulfill the similarity and epipolar-constraint requirements. We also use the optimized corresponding points and the look-up table to calculate the depth value h(x, y) of each calibrated image; the peak-to-valley (PV) and root-mean-square (RMS) values of each depth map are shown in Figure 4c. The PV value is smaller than 0.5 mm and the RMS value is smaller than 4 × 10⁻³ mm, showing sub-millimeter calibration accuracy.

We therefore design an accurate positioning algorithm based on similarity matching, the epipolar constraint, and the look-up table. First, the wrapped phase is obtained by phase extraction (see Supplementary Note 1), and a global initial search method is used to unwrap the phase image into the absolute phase (see Supplementary Note 5). Combined with the epipolar constraint, the absolute phase image determines a rough area for searching the corresponding points. The sub-pixel correspondence is then obtained with the correspondence matching algorithm (see Supplementary Note 4), yielding an accurate depth estimate via the look-up table (a, b). To quantify the positioning accuracy, we design a positioning experiment with 11 groups of object distances ranging from 300 mm to 400 mm from the metasurface. As shown in Figure 4d, the reconstructed depths agree well with the real object distances, which are set by a precision translation stage. The reconstructed depth deviations are smaller than 0.5 mm, owing to the proposed positioning algorithm (see the comparison with traditional Fourier transform profilometry [30] in Supplementary Note 5) and the accurate calibration data validated in Figure 4c. Note that the depth error increases with object distance; the main reason is the decorrelation effect (see Supplementary Note 3), which grows with the relative depth from the reference plane.
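Putting the pieces together, the sketch below strings the steps into one function, reusing extract_phase and depth_from_lut from the earlier sketches. It follows the classical Fourier-transform-profilometry flow rather than the paper's full pipeline: scikit-image's generic 2D unwrapper stands in for the global initial search of Supplementary Note 5, and the epipolar-constrained sub-pixel matching stage is omitted.

```python
from skimage.restoration import unwrap_phase   # generic 2D phase unwrapper

def reconstruct_depth(img, ref_img, fx, fy, a, b):
    """Deformed fringe image -> depth map, under the stated assumptions."""
    phi_obj = unwrap_phase(extract_phase(img, fx, fy))
    phi_ref = unwrap_phase(extract_phase(ref_img, fx, fy))
    dphi = phi_obj - phi_ref        # phase change induced by the object
    return depth_from_lut(dphi, a, b)
```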

In addition to the positioning technique, we also demonstrate 3D imaging reconstruction, as shown in Figure 5. The deformed unidirectional fringe image of a facial mask is captured by a camera; the nose, with its relatively large depth, produces a sharp twist in the projected fringe, as shown in Figure 5a. The phase information of the fringe is then obtained clearly and quickly by the phase extraction method and global initial search, shown in Figure 5b and c, which offers a rough but robust estimate of the corresponding points with respect to the reference image. The 3D shape of the facial mask is completely reconstructed, as shown in Figure 5d and e, which can be attributed to the accurate look-up table (a, b) computed by our proposed calibration algorithm for the transformation between depth and the correspondence matching. Note that the corresponding-point search for the facial mask is performed using the phase information and epipolar constraint without similarity matching, since the decorrelation effect occurs on surfaces with large height variations. Our proposed method thus achieves fast (computation time below 0.05 s) and robust 3D imaging.

Figure 5: 3D facial imaging. (a) Captured image. (b) Wrapped phase. (c) Unwrapped phase. (d) Top view of 3D image. (e) Perspective view of 3D image.

3 Conclusions

In summary, an active 3D positioning and imaging method with a compact metasurface is proposed in this work, benefiting from the flexible light field control of the metasurface and from elaborate calibration and reconstruction algorithms. Note that previous metasurface-based passive imaging, such as the metalens array, generally needs to capture multiple views on a single detector, so the depth information is obtained at the sacrifice of spatial information. Leveraging the extra information offered by the metasurface-based illumination, our proposed method not only exploits active imaging to achieve 3D imaging without wasting spatial information, but also demonstrates coded illumination as a high-performance 3D positioning and imaging implementation with advanced metasurface-based optical systems, offering another design freedom for the growing family of metasurface-based imaging systems. Furthermore, we develop a complete technical route for active 3D imaging with a metasurface-based device. We establish the calibration model and the corresponding calibration algorithm, including the correspondence matching algorithm and the look-up table, by fully leveraging the geometric relations of the system and the pattern characteristics. The calibration results experimentally demonstrate the validity of the proposed calibration method, laying the foundation for accurate 3D positioning. Eventually, the quantitative positioning accuracy is provided and 3D facial imaging is achieved successfully, indicating that the proposed method can be applied in real scenarios. Since the decorrelation effect may harm the spatial resolution of 3D imaging, multi-periodic [31] or phase-shifting [32] fringe projection based on the multiplexing techniques of metasurfaces [15, 33] has the potential to yield 3D reconstruction with high spatial resolution. We believe that the proposed method is an elegant integration of nanophotonic technology with computational photography in the field of active imaging, which may inspire further developments in endoscopic vision, industrial inspection, and autonomous vehicles.

See Supplementary Material for the complete algorithm derivation.


Corresponding author: Lingling Huang, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China, E-mail:

Funding source: China Postdoctoral Science Foundation

Award Identifier / Grant number: 2021M690389

Funding source: National Key R&D Program of China

Award Identifier / Grant number: 2021YFA1401200

Funding source: Beijing Outstanding Young Scientist Program

Award Identifier / Grant number: BJJWZYJH01201910007022

Funding source: Beijing Municipal Science & Technology Commission, Administrative Commission of Zhongguancun Science Park

Award Identifier / Grant number: Z211100004821009

Funding source: National Natural Science Foundation of China

Award Identifier / Grant number: 62105024

Award Identifier / Grant number: 92050117

Award Identifier / Grant number: U21A20140

Funding source: Fok Ying-Tong Education Foundation of China

Award Identifier / Grant number: 161009

  1. Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: We thank the financial support from the National Key R&D Program of China (2021YFA1401200), Beijing Outstanding Young Scientist Program (BJJWZYJH01201910007022), National Natural Science Foundation of China (Nos. U21A20140, 92050117, and 62105024), China Postdoctoral Science Foundation (No. 2021M690389), Fok Ying-Tong Education Foundation of China (No. 161009) and Beijing Municipal Science & Technology Commission, Administrative Commission of Zhongguancun Science Park (No. Z211100004821009). This work was also supported by the Synergic Extreme Condition User Facility, China.

  3. Conflict of interest statement: The authors declare that they have no competing interests.

References

[1] Z. Wu, W. Guo, B. Pan, Q. Kemao, and Q. Zhang, “A DIC-assisted fringe projection profilometry for high-speed 3D shape, displacement and deformation measurement of textured surfaces,” Opt. Lasers Eng., vol. 142, p. 106614, 2021. https://doi.org/10.1016/j.optlaseng.2021.106614.

[2] S. Royo and M. Ballesta-Garcia, “An overview of lidar imaging systems for autonomous vehicles,” Appl. Sci., vol. 9, p. 4093, 2019. https://doi.org/10.3390/app9194093.

[3] L. Kaul, R. Zlot, and M. Bosse, “Continuous-time three-dimensional mapping for micro aerial vehicles with a passively actuated rotating laser scanner,” J. Field Robot., vol. 33, pp. 103–132, 2016. https://doi.org/10.1002/rob.21614.

[4] A. Shamata and T. Thompson, “Using structured light three-dimensional surface scanning on living individuals: key considerations and best practice for forensic medicine,” J. Forensic Leg. Med., vol. 55, pp. 58–64, 2018. https://doi.org/10.1016/j.jflm.2018.02.017.

[5] Y. Ham, K. K. Han, J. J. Lin, and M. Goparvar-Fard, “Visual monitoring of civil infrastructure systems via camera-equipped unmanned aerial vehicles (UAVs): a review of related works,” Visual. Eng., vol. 4, pp. 1–8, 2016. https://doi.org/10.1186/s40327-015-0029-z.

[6] J. A. Christian and A. Cryan, “A survey of LiDAR technology and its use in spacecraft relative navigation,” in AIAA Guidance, Navigation and Control Conference, Boston, MA, 2013. https://doi.org/10.2514/6.2013-4641.

[7] D. Aliaga and Y. Xu, “Photogeometric structured light: a self-calibrating and multi-viewpoint framework for accurate 3D modeling,” in CVPR, 2008, pp. 1–8. https://doi.org/10.1109/CVPR.2008.4587709.

[8] X. Jing, R. Zhao, X. Li, et al., “Single-shot 3D imaging with point cloud projection based on metadevice,” Nat. Commun., vol. 13, p. 7842, 2022. https://doi.org/10.1038/s41467-022-35483-z.

[9] A. V. Kildishev, A. Boltasseva, and V. M. Shalaev, “Planar photonics with metasurfaces,” Science, vol. 339, p. 1232009, 2013. https://doi.org/10.1126/science.1232009.

[10] J. Park, B. G. Jeong, S. I. Kim, et al., “All-solid-state spatial light modulator with independent phase and amplitude control for three-dimensional LiDAR applications,” Nat. Nanotechnol., vol. 16, pp. 69–76, 2021. https://doi.org/10.1038/s41565-020-00787-y.

[11] K. Huang, H. Liu, F. J. Garcia-Vidal, et al., “Ultrahigh-capacity non-periodic photon sieves operating in visible light,” Nat. Commun., vol. 6, p. 7059, 2015. https://doi.org/10.1038/ncomms8059.

[12] J. Park, K. Lee, and Y. Park, “Ultrathin wide-angle large-area digital 3D holographic display using a non-periodic photon sieve,” Nat. Commun., vol. 10, p. 1304, 2019. https://doi.org/10.1038/s41467-019-09126-9.

[13] L. Huang, X. Chen, H. Muhlenbernd, et al., “Three-dimensional optical holography using a plasmonic metasurface,” Nat. Commun., vol. 4, p. 2808, 2013. https://doi.org/10.1038/ncomms3808.

[14] G. Zheng, H. Muhlenbernd, M. Kenney, G. Li, T. Zentgraf, and S. Zhang, “Metasurface holograms reaching 80% efficiency,” Nat. Nanotechnol., vol. 10, pp. 308–312, 2015. https://doi.org/10.1038/nnano.2015.2.

[15] X. Guo, J. Zhong, B. Li, et al., “Full-color holographic display and encryption with full-polarization degree of freedom,” Adv. Mater., vol. 34, p. 2103192, 2021. https://doi.org/10.1002/adma.202103192.

[16] A. H. Dorrah, N. A. Rubin, A. Zaidi, M. Tamagnone, and F. Capasso, “Metasurface optics for on-demand polarization transformations along the optical path,” Nat. Photon., vol. 15, pp. 287–296, 2021. https://doi.org/10.1038/s41566-020-00750-2.

[17] R. Zhao, L. Huang, and Y. Wang, “Recent advances in multi-dimensional metasurfaces holographic technologies,” PhotoniX, vol. 1, p. 20, 2020. https://doi.org/10.1186/s43074-020-00020-y.

[18] A. Forbes, M. de Oliveira, and M. R. Dennis, “Structured light,” Nat. Photon., vol. 15, pp. 253–262, 2021. https://doi.org/10.1038/s41566-021-00780-4.

[19] J. Engelberg and U. Levy, “The advantages of metalenses over diffractive lenses,” Nat. Commun., vol. 11, p. 1991, 2020. https://doi.org/10.1038/s41467-020-15972-9.

[20] K. Rastani, M. Orenstein, E. Kapon, and A. C. Von Lehmen, “Integration of planar Fresnel microlenses with vertical-cavity surface-emitting laser arrays,” Opt. Lett., vol. 16, pp. 919–921, 1991. https://doi.org/10.1364/ol.16.000919.

[21] Y. Ni, S. Chen, Y. Wang, Q. Tan, S. Xiao, and Y. Yang, “Metasurface for structured light projection over 120-degree field of view,” Nano Lett., vol. 20, pp. 6719–6724, 2020. https://doi.org/10.1021/acs.nanolett.0c02586.

[22] Q. Guo, Z. Shi, Y. W. Huang, et al., “Compact single-shot metalens depth sensors inspired by eyes of jumping spiders,” Proc. Natl. Acad. Sci. U.S.A., vol. 116, pp. 22959–22965, 2019. https://doi.org/10.1073/pnas.1912154116.

[23] C. Jin, M. Afsharnia, R. Berlich, et al., “Dielectric metasurfaces for distance measurements and three-dimensional imaging,” Adv. Photon., vol. 1, p. 036001, 2019. https://doi.org/10.1117/1.ap.1.3.036001.

[24] Q. Fan, W. Xu, X. Hu, et al., “Trilobite-inspired neural nanophotonic light-field camera with extreme depth-of-field,” Nat. Commun., vol. 13, p. 2130, 2022. https://doi.org/10.1038/s41467-022-29568-y.

[25] W. Liu, D. Ma, Z. Li, et al., “Aberration-corrected three-dimensional positioning with a single-shot metalens array,” Optica, vol. 7, p. 1706, 2020. https://doi.org/10.1364/optica.406039.

[26] G. Kim, Y. Kim, J. Yun, et al., “Metasurface-driven full-space structured light for three-dimensional imaging,” Nat. Commun., vol. 13, p. 5920, 2022. https://doi.org/10.1038/s41467-022-32117-2.

[27] Y. Xie, P. N. Ni, Q. H. Wang, et al., “Metasurface-integrated vertical cavity surface-emitting lasers for programmable directional lasing emissions,” Nat. Nanotechnol., vol. 15, pp. 125–130, 2020. https://doi.org/10.1038/s41565-019-0611-y.

[28] F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng., vol. 39, pp. 10–22, 2000. https://doi.org/10.1117/1.602438.

[29] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., vol. 3, pp. 1–122, 2011. https://doi.org/10.1561/2200000016.

[30] M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt., vol. 22, pp. 3977–3982, 1983. https://doi.org/10.1364/ao.22.003977.

[31] L. Guo, X. Su, and J. Li, “Improved Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Opt. Eng., vol. 29, pp. 1439–1444, 1990. https://doi.org/10.1117/12.55746.

[32] C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: a comparative review,” Opt. Lasers Eng., vol. 85, pp. 84–103, 2016. https://doi.org/10.1016/j.optlaseng.2016.04.022.

[33] R. Zhao, B. Sain, Q. Wei, et al., “Multichannel vectorial holographic display and encryption,” Light Sci. Appl., vol. 7, p. 95, 2018. https://doi.org/10.1038/s41377-018-0091-0.


Supplementary Material

This article contains supplementary material (https://doi.org/10.1515/nanoph-2023-0112).


Received: 2023-02-17
Accepted: 2023-03-22
Published Online: 2023-04-04

© 2023 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
