Structured-light-field 3D imaging without phase unwrapping

https://doi.org/10.1016/j.optlaseng.2020.106047

Highlights

  • A method for structured-light-field 3D imaging without phase unwrapping is proposed.

  • The wrapped phase-encoded field was processed to be locally continuous and then used for depth estimation and 3D imaging without the requirement of system pre-calibration.

  • Comparative experiments under different measurement conditions yielded consistent results.

  • The proposed method is shown to adapt to changes in the measurement environment and to enable flexible structured-light-field 3D imaging without the need for phase unwrapping.

Abstract

Directly using wrapped phases may cause phase ambiguity in fringe-projection-based three-dimensional (3D) imaging. In our previous work, the phase ambiguity was addressed by introducing light-field imaging to associate a wrapped phase-encoded field in the image space with several candidate spatial points in the object space, thereby identifying the correct fringe orders for absolute phase unwrapping. However, the candidate spatial points with absolute phases can be reconstructed only if the system is fixed and pre-calibrated, which limits flexibility and applicability. Since we also showed that the correspondence cue in an unwrapped phase-encoded field can be used for accurate light-field depth estimation, a natural question is whether light-field depth cues can provide constraints to resolve the phase ambiguity without these system requirements. To this end, a method for structured-light-field 3D imaging without phase unwrapping is proposed. The wrapped phase-encoded field is processed to be locally continuous with respect to phase in specific angular coordinates. The correspondence cue, based on the local angular variance of the processed wrapped phase-encoded field, is then used to estimate depth correctly without system pre-calibration. Furthermore, comparative experiments under different measurement conditions, including changes in the fringe period of the projected patterns, the focal length of the projector, and the relative orientation of the system components, produced consistent measurement results. Consequently, the proposed method is shown to adapt to changes in the measurement environment and to provide a flexible route to structured-light-field 3D imaging without the need for phase unwrapping.

Introduction

Fringe-projection-based three-dimensional (3D) imaging, with its advantages of non-contact, high-accuracy, and high-density 3D data acquisition, has been widely used in a broad range of scenarios, such as industry, biomedicine, and entertainment [1], [2], [3], [4]. A crucial step in fringe-projection technology is to extract the phase information from the captured fringe image(s) using either phase-shifting [5, 6] or Fourier-transform [7, 8] methods. However, because of the inverse-tangent calculation, only a phase wrapped modulo 2π can be obtained. The wrapped phase may cause phase ambiguity and must be unwrapped correctly or used under proper constraint conditions [9].
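
As a concrete illustration of the wrapping mentioned above, the sketch below computes the wrapped phase from N equally shifted fringe images with the standard N-step phase-shifting estimator. This is a generic textbook formulation, not the specific implementation used in the paper; the function and variable names are ours.

```python
import numpy as np

def wrapped_phase(images):
    """Standard N-step phase-shifting estimator: images[n] is a fringe image
    with phase shift 2*pi*n/N. The arctangent limits the result to one
    2*pi interval, i.e., the returned phase is wrapped."""
    N = len(images)
    deltas = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(-num, den)   # wrapped phase in (-pi, pi]
```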

Spatial and temporal phase unwrapping are the two classical families of methods. The former searches for phase jumps among adjacent pixels but is sensitive to sensor noise and depth discontinuities, for example when the measured surface changes abruptly in depth or the scene contains multiple isolated objects [10], [11], [12], [13]. The latter employs additional encoding information to process the wrapped phase pixel-wise [14]. This additional information can be carried in time-series signals, such as frequency-varying fringe patterns [15], [16], [17], [18], Gray code [19, 20], and pseudorandom fringes [21], or embedded in the background or modulation intensity of the original fringe patterns [22], [23], [24]. Unfortunately, temporal phase unwrapping suffers either from low measurement speed, because additional image sequences must be acquired, or from reduced robustness and accuracy, because of problematic signal separation and fringe-contrast reduction. Alternatively, multi-view geometric constraints can be used to address the phase ambiguity without temporally acquiring or embedding additional encoding information [25], [26], [27], [28], [29]. Generally, a sufficient number of views is required to handle the phase ambiguity robustly.
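
For the frequency-varying (temporal) strategy mentioned above, a minimal dual-frequency sketch is given below: the low-frequency wrapped phase is assumed to span a single fringe over the field of view (and is therefore effectively absolute) and is used to recover the fringe order of the high-frequency phase pixel-wise. This is one common realization of temporal unwrapping, not the method of any particular cited reference; the names are illustrative.

```python
import numpy as np

def unwrap_dual_frequency(phi_high, phi_low, freq_ratio):
    """Dual-frequency (hierarchical) temporal unwrapping: phi_low is a wrapped
    phase whose single fringe covers the whole field, phi_high is the wrapped
    high-frequency phase, and freq_ratio = f_high / f_low.
    Returns the absolute high-frequency phase."""
    fringe_order = np.round((freq_ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * fringe_order
```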

Recently, we presented a method for absolute phase unwrapping based on light-field imaging [30]. From the light field recorded under structured illumination, i.e., the structured-light field (SLF), a wrapped phase-encoded field (PEF) can be retrieved and resampled at possible image planes associated with several candidate spatial points in the measurement volume. The correct fringe orders can be determined by checking the phase consistency of the resampled wrapped PEF. The candidate spatial points with absolute phases can be reconstructed as long as the SLF system is structurally fixed and pre-calibrated. If the system parameters change, the SLF system must be recalibrated, which is time-consuming; moreover, frequent system calibration is impractical. More recently, we proposed a method for SLF depth estimation in which the correspondence depth cue, defined by the weighted angular variance of the unwrapped PEF, is used to estimate depth accurately [31]. This work indicates that a correspondence cue based on the local angular variance of the light field may provide constraints to resolve the phase ambiguity in the wrapped PEF.
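
In generic notation (ours, not the paper's Eqs. (4) and (5)), a weighted angular-variance correspondence response of the kind described in [31] could be written, for a spatial sample s and a refocusing (depth) parameter α, as:

```latex
\bar{\phi}_{\alpha}(s) = \frac{1}{|N_u|} \sum_{u \in N_u} \phi_{\alpha}(s,u), \qquad
C_{\alpha}(s) = \sum_{u \in N_u} w(s,u)\,\bigl(\phi_{\alpha}(s,u) - \bar{\phi}_{\alpha}(s)\bigr)^{2},
```

where φ_α(s, u) denotes the PEF resampled at the image plane associated with α, N_u is the angular neighborhood of s, and w(s, u) is an optional weight; the depth candidate minimizing C_α(s) is taken as the estimate.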

In this paper, we propose a method for SLF 3D imaging without phase unwrapping. We show that the correspondence cue can be applied to the wrapped PEF for depth estimation and 3D reconstruction, provided that the wrapped PEF is processed to be locally continuous with respect to phase in specific angular coordinates. Compared with our previous methods [30, 31], the proposed method has two major advantages: first, the wrapped PEF is processed to provide sufficient constraints for effective SLF depth estimation without the need for phase unwrapping; second, system fixation and calibration are no longer mandatory, so the SLF system has more degrees of freedom to adapt flexibly to changes in the measurement environment, which is of particular practical value.

Section snippets

Principle

In this section, the basic principle underlying the phase computation and SLF depth estimation is briefly described, followed by detailed explanations and discussions of the proposed method in the next sections.

Method

The PEF retrieved using Eq. (2) is wrapped, i.e., ϕw(s, u). Using Eq. (4) to compute the refocused phase maps of the wrapped PEF is problematic, because the angular distribution of the wrapped PEF, i.e., ϕw(s, u) for u ∈ Nu, may be discontinuous in regions of phase jumps. A common approach is to determine the fringe orders K(s, u) to obtain the absolute (unwrapped) PEF ϕu(s, u), which can then be used to compute the correspondence response with Eqs. (4) and (5). According to the method of
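
A minimal sketch of one plausible reading of this step is given below, assuming each spatial sample s already has a wrapped PEF slice phi_w[s, u] over its angular neighborhood u: every angular sample is re-referenced to the central view by wrapping the phase difference into (−π, π], which removes the local 2π jumps, and the correspondence response is then the local angular variance. The function names, array layout, and central-view referencing are our assumptions, not the paper's exact formulation, and the per-depth-candidate refocusing/resampling is omitted.

```python
import numpy as np

def locally_continuous_pef(phi_w):
    """Hedged sketch: make a wrapped PEF slice phi_w[s, u] locally continuous
    along the angular axis u by re-referencing every angular sample to the
    central view, assuming the true phase varies by less than pi within the
    angular neighborhood of one spatial sample."""
    u_c = phi_w.shape[1] // 2                    # central angular index
    diff = phi_w - phi_w[:, u_c:u_c + 1]         # difference to the central view
    diff = np.angle(np.exp(1j * diff))           # wrap the difference to (-pi, pi]
    return phi_w[:, u_c:u_c + 1] + diff          # local 2*pi jumps removed

def correspondence_response(phi_local):
    """Local angular variance of the processed PEF; smaller values indicate
    higher angular consistency for the tested depth candidate."""
    return np.var(phi_local, axis=1)
```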

Experiments and analysis

A microlens-array-based unfocused plenoptic camera (Lytro Illum) and a digital projector (Casio XJ-VC100, 1024 × 768 pixels) were combined to set up the experimental configuration of the SLF system, as shown in Fig. 4. This experimental layout was used to demonstrate and verify the proposed method. Raw images captured by the plenoptic camera were decoded into a light field with spatial and angular resolutions of 434 × 625 pixels and 15 × 15 pixels, respectively [34]. The plenoptic camera was calibrated so that
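
For orientation, a minimal sketch of how such a decoded 4D light field might be organized and queried is shown below. The array name, axis ordering, and choice of the central view index are assumptions based only on the stated resolutions, not the paper's actual data layout.

```python
import numpy as np

# Assumed axis order of the decoded light field: L[u, v, s, t], with the
# stated angular resolution 15 x 15 and spatial resolution 434 x 625.
L = np.zeros((15, 15, 434, 625), dtype=np.float32)  # placeholder for decoded data

central_view = L[7, 7]             # central sub-aperture image, 434 x 625 pixels
angular_patch = L[:, :, 200, 300]  # 15 x 15 angular samples of one spatial point
```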

Conclusion

In this paper, we proposed a method for SLF 3D imaging without phase unwrapping. The wrapped PEF was processed to make the phase distribution within each microlens unit continuous. As a result, the correspondence cue based on the local angular variance of the light field could be applied to the processed wrapped PEF for depth estimation and 3D reconstruction without the requirement of system pre-calibration. Additionally, we performed comparative experiments under various measurement

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

This work was supported by the Natural Science Foundation of Guangdong Province under grant 2018A030313831, the Sino-German Research Cooperation Group under grant GZ 1391, and the National Natural Science Foundation of China under grants 11804231 and 61875137.

References (35)

  • Y. Yin et al., Fringe projection 3D microscopy with the general imaging model, Opt Express (2015)

  • V. Srinivasan et al., Automated phase-measuring profilometry of 3-D diffuse objects, Appl Opt (1984)

  • X. Meng et al., Wavefront reconstruction and three-dimensional shape measurement by two-step dc-term-suppressed phase-shifted intensities, Opt Lett (2009)

  • M. Takeda et al., Fourier transform profilometry for the automatic measurement of 3-D object shapes, Appl Opt (1983)

  • R.M. Goldstein et al., Satellite radar interferometry: two-dimensional phase unwrapping, Radio Sci (1988)

  • D.C. Ghiglia et al., Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods, J Opt Soc Am A (1994)

  • M.D. Pritt et al., Least-squares two-dimensional phase unwrapping using FFT's, IEEE Trans Geosci Remote Sens (1994)