Research article

3D shape measurement based on structured light field imaging

  • In this paper, a three-dimensional (3D) shape measurement method based on structured light field imaging is proposed, which contributes to biomedical imaging. In general, it is challenging to accomplish accurate 3D shape measurement with light field imaging, because slope estimation based on radiance consistency is inaccurate. Taking the special modulation of the structured light field into consideration, we first substitute phase consistency for radiance consistency in the epipolar plane image (EPI). The 3D coordinates are then derived after light field calibration, but the results are coarse due to slope estimation error and need to be corrected. Therefore, a 3D coordinate refinement is performed based on the relationship between the structured light field image and the DMD image of the projector, which improves the performance of the 3D shape measurement. The necessary light field camera calibration is described to generalize its application. Finally, the effectiveness of the proposed method is demonstrated on a sculpture and compared with the results of a conventional PMP system.

    Citation: Ping Zhou, Yuting Zhang, Yunlei Yu, Weijia Cai, Guangquan Zhou. 3D shape measurement based on structured light field imaging[J]. Mathematical Biosciences and Engineering, 2020, 17(1): 654-668. doi: 10.3934/mbe.2020034



    Light field imaging (LFI) has become increasingly popular in the last decade; it records the angular and spatial distribution of rays in all directions of space simultaneously. The captured light field enables 3D shape measurement, which is applied in several areas such as robotics, automation and medicine [1]. Among these applications, an area with great potential for LFI is biomedical imaging. Building on the conventional microscope, Levoy et al. [2] inserted a micro-lens array into the optical train and reconstructed volumes of various biological specimens from focal stacks. Furthermore, Sie et al. [3] developed light field microscopy that enables observation of the 3D surface morphology of opaque microstructures. Liu et al. [4] constructed a prototype light field endoscope for minimally invasive surgery.

    Light field imaging was conventionally achieved with dense camera arrays [5] before the lens-let light field camera (LFC) appeared [6]. In 2005, Ren Ng built a hand-held lens-let LFC by introducing a micro-lens array (MLA) at the image plane of a conventional camera [7]. The angular information recorded by an LFC allows depth estimation or scene reconstruction, with or without calibration results [8]. Ng et al. [7] showed that a light field image can be refocused at various depths from sub-aperture images based on the Fourier Slice Theorem. Moreover, Tao et al. [9] combined correspondence, defocus and shading cues of sub-aperture images to accomplish depth estimation. In addition, the slopes of lines in epipolar plane images (EPI) indicate the corresponding depths in the scene, which is often exploited for depth estimation. Criminisi et al. [10] proposed an EPI volume and found a high degree of regularity with which to compute depth. Kim et al. [11] utilized high-angular-resolution EPIs and performed a confidence measurement to assess the reliability of the depth. Wanner and Goldluecke [12] applied the structure tensor operator to EPIs to calculate the slopes, and Diebold et al. [13] extended the structure tensor to heterogeneous light fields to deal with non-constant intensity in EPIs. However, due to noise, low angular resolution and textureless regions, the methods mentioned above only generate coarse initial 3D shape information once calibration is performed, and further refinement has to be executed.

    The basic idea for refining the 3D coordinates in this paper is inspired by the Markov Random Field (MRF). Tao et al. [9] applied an MRF framework to the initial depth, and Wanner and Goldluecke [12] cast depth reconstruction as a multi-label problem and employed a total variation smoothing model to achieve the optimization. The MRF framework proposed by Tao et al. [9] is based on smoothness and color consistency, which are taken as extra constraints to optimize the final depth. A structured light field makes it possible to establish more constraints: a pattern is projected onto an object, and the deformed pattern modulated by the object surface encodes the 3D shape information and is acquired by the light field camera, which enables 3D shape measurement. For a given point, the 3D coordinates are encoded not only in the LFC sub-aperture images but also in the modulated grating pattern, which provides additional constraints to accomplish the 3D shape measurement accurately.

    In this paper, a structured light field system is established, consisting of a lens-let LFC and a projector, as shown in Figure 1. The remainder of this paper starts with system calibration, including the light field camera calibration and the joint system calibration. The initial 3D coordinates are derived based on phase consistency in the EPI and the LFC calibration results. Subsequently, the 3D coordinates are refined based on the relationship between the structured light field image and the DMD image of the projector. Finally, experimental results are presented to demonstrate the 3D shape measurement performance.

    Figure 1.  The structured light field system.

    The structured light field system in this paper consists of a lens-let LFC and a projector. Therefore, the system calibration includes the LFC calibration, the projector calibration and the joint system calibration. The model of the whole structured light field (SLF) system is described in Figure 2(b), and the coordinate system settings are listed in Table 1.

    Figure 2.  (a) Mathematical model of a lens-let LFC. (b) The model and coordinates configuration of the SLF system.
    Table 1.  Coordinate system settings of SLF system description.
    Coordinate System Description
    (Ow,Xw,Yw,Zw) World coordinate system
    (Oc,Xc,Yc,Zc) Light field camera coordinate system
    (ocn,xcn,ycn) Sub-aperture image coordinate system (pixel coordinate)
    (Om,Xm,Ym) MLA plane coordinate system (physical coordinate)
    (Oij,i,j) CCD coordinate system
    (Ouv,u,v) Element image coordinate system
    (Op,Xp,Yp,Zp) Projector coordinate system
    (opn,xpn,ypn) DMD image coordinate system (pixel coordinate)


    The mathematical model of a typical lens-let LFC is shown in Figure 2(a). To clarify the recorded information, L(s,t,x,y) is simplified to the 2D L(s,x) in Figure 2(a). L(s,x) expresses a ray that passes through the main-lens plane and the MLA plane. In Figure 2(a), s is the distance from the optical center OC to the sub-region of the main-lens where the ray passes through the main-lens plane, and xm is the distance from the MLA's center to the position where the ray passes through the MLA plane.

    The projection from a scene point P(xc,zc) to the MLA plane can be expressed as:

    xm/hm = (s − xc)/zc − s/h′m (1)

    where hm is the distance from the main-lens plane to the MLA plane, h′m is the focused object depth of the main-lens, and d is the micro-lens diameter. The coordinates on the MLA plane and image plane are shown in Figure 2(b).

    In general, each sub-aperture image is the result of taking the same pixel underneath each micro-lens, at the offset corresponding to (u,v) for the sub-region of the main-lens aperture. Therefore, the distance between two adjacent sub-apertures can be expressed as D = q·hm/b, where b is the distance from the MLA plane to the image plane, as shown in Figure 2(a).

    The projection relationship between s and u is described as:

    s = q·hm·(u − u0)/b = D·(u − u0) (2)

    where q is the pixel size of image sensor. Moreover, the relationship between two coordinates in MLA plane is expressed as:

    xm = (x − x0)·d (3)

    Let sk = D·(uk − u0) and sk+1 = D·(uk+1 − u0) describe two adjacent sub-apertures, so that Eq (1) can be rewritten [14] as

    1/zc = 1/h′m + (d/(D·hm))·Δx (4)

    where Δx is the disparity value in the disparity map constructed from two adjacent sub-aperture images. Theoretically, the central sub-aperture image is the same as a picture captured by a conventional camera with the same aperture. Therefore, the main-lens parameters of the lens-let LFC can be calibrated based on Zhang's calibration method [15] using central sub-aperture images taken from different viewpoints.

    Besides the parameters of the main-lens, the other parameters in Figure 2(a) also need to be calibrated. To calibrate Eq (4), which expresses the constraint between the depth zc and the disparity Δx, zc of each feature point in the LFC images is calculated using the main-lens parameters, and Δx is acquired through the corresponding EPI. The parameters b, D and h′m are then derived from the line-fitting coefficients in Eq (4). The light field camera calibration is described in detail in our previous work [16].

    A commercial light field camera, the Lytro Illum, is used in the structured light field based 3D shape measurement system. Its pixel size q is 0.0014 mm and the micro-lens diameter d is 0.02 mm. The light field images are decoded with the toolbox provided by Dansereau [17]; the resolution of the sub-aperture images is 434×625 pixels. The calibrated parameters of the Lytro Illum are as follows: f is 40.11 mm, hm is 40.88 mm, h′m is 2116.34 mm, b is 48.38 µm, D is 1.18 mm, and the center of the MLA plane, (x0,y0), is (312.67, 225.53). The average re-projection error is 0.0685 pixels. In particular, Eq (4) is fitted as

    1/zc = (0.4725 − 0.3581·Δx) × 10⁻³ (5)
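As a quick sanity check, the fitted relation in Eq (5) can be inverted to map a disparity value to a metric depth. The sketch below is ours, not part of the paper's pipeline; the coefficients are the fitted ones reported above (in mm⁻¹) and the function name is illustrative.

```python
def depth_from_disparity(delta_x, a=0.4725e-3, b=0.3581e-3):
    """Invert the fitted Eq (5): 1/z_c = (0.4725 - 0.3581*delta_x)*1e-3.

    Returns the depth z_c in millimetres for a given EPI disparity.
    """
    inv_z = a - b * delta_x
    return 1.0 / inv_z

# Zero disparity should recover the focused depth h_m' of the main lens:
print(round(depth_from_disparity(0.0), 1))  # 2116.4, matching the calibrated 2116.34 mm
```

A positive disparity lowers 1/zc and thus places the point beyond the focused depth, consistent with the sign convention of the fit.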

    To accomplish 3D shape measurement, the structured light field system in this paper is also calibrated. The calibration methods are similar to those of a conventional structured light system equipped with a projector and a conventional camera. In our calibration, the central sub-aperture image is utilized instead of the conventional camera image.

    For the sake of completeness, the projector calibration and the conventional structured light system calibration, which have been reported in detail previously [18], are presented briefly. To calibrate the projector, the projector's digital micromirror device (DMD) image is needed. The 4-step phase-shifting algorithm and the heterodyne principle are applied to obtain the unwrapped phase map in both the horizontal and vertical directions. More details on phase shifting and the heterodyne principle are described in [19]. Although the fringe patterns are captured by the light field camera, only the unwrapped phase map of the central sub-aperture image is calculated in the system calibration. The DMD image is then formed in such a way that a univocal correspondence is established between the unwrapped phase in the central sub-aperture image and the DMD image in the horizontal and vertical directions.
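For reference, a minimal sketch of the 4-step phase-shifting computation used in the calibration. The shift ordering (0, π/2, π, 3π/2) and the function name are our assumptions; only the arctangent relation itself is the standard one.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images I_k = A + B*cos(phi + (k-1)*pi/2).

    Since I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), the background A
    cancels and phi = atan2(I4 - I2, I1 - I3), wrapped here to [0, 2*pi).
    """
    return np.mod(np.arctan2(I4 - I2, I1 - I3), 2 * np.pi)

# Synthetic single-pixel check: A = 0.5, B = 0.3, phi = 1.0
A, B, phi = 0.5, 0.3, 1.0
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(np.isclose(four_step_phase(*frames), phi))  # True
```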

    To calibrate the structured light field system, the conventional stereo calibration method is applied, where the central sub-aperture image and the DMD image are used. The intrinsic and extrinsic parameters of the light field camera and the projector are computed, as are the transform matrices between them. Eight groups of central sub-aperture images in different views are captured and used in the calibration, and the calibration results are shown in Figure 3, where the light field camera and the projector are labeled LFC and Pro, respectively. The LFC is modelled as in Figure 2(a) and the projector is treated as an inverse camera [18]. The projector is placed closer to the calibration board to match its field of view to that of the light field camera. The average re-projection error is 0.2189 pixels.

    Figure 3.  The structured light field system calibration results.

    It is essential for 3D shape measurement based on the structured light field to retrieve accurate coordinates. To improve the real-time capability, only two sinusoidal gratings with a π phase shift between them are projected onto the object in this paper, and the deformed pattern is captured by the LFC. The depth of the object is derived from the phase-consistency-based slope estimation method in the EPI according to Eq (5), and the 3D coordinates of the object are subsequently derived from the LFC calibration results. Furthermore, a 3D coordinate refinement is performed to accomplish the 3D shape measurement accurately.

    The light field is denoted L(s,t,x,y), where (s,t) is the sub-aperture position in the main-lens plane and (x,y) is the micro-lens position in the MLA plane. Each epipolar plane image (EPI) is a 2D slice of the light field image where (t,y) are fixed and (s,x) vary, and vice versa. An EPI-based depth computation approach is proposed in this paper, as expressed in Eq (5). In the EPI, the disparity is inversely proportional to the line's slope for a certain point in the scene [14], that is, Δx = 1/k, where Δx is the disparity of the point and k is the slope of the corresponding line in the EPI. Therefore, the depth of a certain point is derived from Eq (5).

    To obtain the slope k, a phase variance method is proposed in this paper instead of the previous radiance variance methods [20]. Generally, the sub-aperture images can be treated as observations of the object from different viewpoints. Based on phase measurement profilometry (PMP) principles, the phases in different sub-aperture images modulated by the same object point are almost the same, so the phase information is consistent along the corresponding line in the EPI, which is termed phase consistency in this paper. Therefore, the EPI used for slope computation based on phase consistency is not a 2D slice of the original light field, but of the structured light field after phase computation in all of the sub-aperture images.

    For a certain sub-aperture image, the captured fringe pattern image is expressed as:

    I(x,y)=A(x,y)+B(x,y)cos(ϕ) (6)

    where I(x,y) is the image intensity, A(x,y) is the background intensity, B(x,y) is the modulation intensity of a given pixel, and ϕ is the desired phase information modulated by the object depth, which is to be solved. ϕ varies in the range of 0 to 2π and is thus termed the wrapped phase. The background intensity A(x,y) is assumed to be constant but actually changes with the environment, so the conventional Fourier transform profilometry method performs poorly when the frequency bandwidth of A(x,y) is too wide to be separated. Therefore, the modified Fourier transform profilometry method is applied in this paper. Another fringe pattern image I1(x,y) with a π shift relative to I(x,y) is captured:

    I1(x,y)=A(x,y)+B(x,y)cos(ϕ+π) (7)

    Therefore, the desired ϕ can be derived by applying the Hilbert transform to the difference between I(x,y) and I1(x,y). The complex logarithm of the resulting analytic signal is then taken, and its imaginary part is the desired wrapped phase in the range of 0 to 2π, i.e., ϕ = Im[log((I − I1) + iH(I − I1))], where H(·) is the Hilbert transform operator. This equation is applied to all the sub-aperture images and the EPI is derived at last, where phase consistency is utilized to compute the slopes of the lines.
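This step can be sketched in a few lines, since scipy's `hilbert` returns exactly the analytic signal ΔI + iH(ΔI); the function and variable names below are ours, and the test signal is a synthetic carrier, not the paper's data.

```python
import numpy as np
from scipy.signal import hilbert

def phase_from_two_patterns(I, I1):
    """phi = Im[log((I - I1) + i*H(I - I1))], wrapped to [0, 2*pi).

    The difference I - I1 = 2B*cos(phi) removes the background A(x,y).
    """
    analytic = hilbert(I - I1)           # (I - I1) + i*H(I - I1)
    return np.mod(np.angle(analytic), 2 * np.pi)

# Synthetic fringe line: a carrier with 8 full periods over 512 samples
x = np.arange(512)
phi_true = 2 * np.pi * 8 * x / 512
I  = 0.5 + 0.3 * np.cos(phi_true)
I1 = 0.5 + 0.3 * np.cos(phi_true + np.pi)
phi = phase_from_two_patterns(I, I1)
# compare on the unit circle to avoid 0/2*pi wrap-around artefacts
err = np.abs(np.angle(np.exp(1j * (phi - phi_true))))
print(err.max() < 1e-6)  # True for this noise-free carrier
```

Note that Im[log(z)] is simply the argument of z, which is why `np.angle` suffices in place of an explicit complex logarithm.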

    Without loss of generality, for a pixel P in the EPI, n lines l1, …, ln are chosen as shown in Figure 4, where P is at the middle of the lines. The corresponding slopes and phase variances of the lines are denoted k1, …, kn and v1, …, vn, respectively. The optimal line li for a certain point in the scene is the one whose phase variance reaches the minimum value, i.e., i = arg min(vi), and the corresponding optimal disparity Δx is the reciprocal of the optimal slope ki. Therefore, for a certain point in the scene, its depth is derived from the optimal disparity according to Eq (5), and the other 3D coordinates are derived from the calibration parameters of the light field camera, as shown in Eq (8).

    Figure 4.  Computation of the line’s slope in EPI.
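The line search of Figure 4 can be written compactly. Everything below is an illustrative sketch: the nearest-neighbour sampling along each candidate line and the use of a circular variance for wrapped phase are our choices; the paper only specifies minimizing the phase variance.

```python
import numpy as np

def slope_by_phase_consistency(epi_phase, x0, slopes):
    """Return the candidate slope k whose EPI line x(s) = x0 + (s - s_c)/k
    has minimal phase variance; the optimal disparity is then 1/k.

    epi_phase: (n_s, n_x) array of wrapped phase (the EPI slice);
    x0: column of pixel P in the central view row s_c.
    """
    n_s, n_x = epi_phase.shape
    s = np.arange(n_s)
    s_c = n_s // 2
    best_k, best_v = None, np.inf
    for k in slopes:
        xs = np.round(x0 + (s - s_c) / k).astype(int)
        if xs.min() < 0 or xs.max() >= n_x:
            continue                      # candidate line leaves the EPI
        samples = epi_phase[s, xs]
        # circular variance, robust to the 0/2*pi wrap of the phase
        v = 1.0 - np.abs(np.mean(np.exp(1j * samples)))
        if v < best_v:
            best_k, best_v = k, v
    return best_k

# 5-view synthetic EPI whose phase pattern shifts by 0.5 px per view (k = 2)
n_s, n_x, x0 = 5, 40, 20
s, x = np.meshgrid(np.arange(n_s), np.arange(n_x), indexing="ij")
epi = 0.1 * (x - 0.5 * (s - n_s // 2))
print(slope_by_phase_consistency(epi, x0, [1, 2, 4]))  # 2
```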

    With the slope estimation in the EPI and the LFC calibration results, the 3D coordinates of the object are derived. However, the EPI only provides a coarse depth measurement due to the low angular resolution, which results in inaccurate 3D coordinates. In the structured light field imaging system in this paper, the depth is indicated not only by the lines' slopes in the EPI, but is also encoded in the phase carried by the modulated sinusoidal grating, which is relatively accurate and provides a significant constraint to refine the initial 3D shape measurement results.

    In the SLF system, an object point PW is projected onto two image planes, the LFC image plane and the DMD image plane. Therefore, the phase information recorded in the sub-aperture images and the DMD image is considered in our method. The wrapped phase in the central sub-aperture image is derived with the methods in Section 3.1. On the other hand, the unwrapped phase is computed by re-projecting the 3D coordinates reconstructed in Section 3.1 to the DMD image plane, and can also be turned into a wrapped phase by applying the modulo function. These phase cues are used to refine the 3D shape measurement results. To facilitate the description of the refinement method, the accurate unwrapped phase of the object point PW is termed ϕun(PW). PC is the corresponding point of PW on the central sub-aperture image and ϕwr(PC) is the wrapped phase of PC, as shown in Figure 5(a). The relationship between ϕun(PW) and ϕwr(PC) is expressed as ϕwr(PC) = mod(ϕun(PW), 2π). Based on the method proposed in Section 3.1, the 3D coordinates of PW are derived as follows

    1/zc = 1/h′m + (b·d/(q·hm²))·Δx,  xc = (xcn − xcn0)·d·zc/hm,  yc = (ycn − ycn0)·d·zc/hm (8)
    XW = RL⁻¹(XC − tL) (9)
    Figure 5.  (a) The wrapped phase map. (b) The wrapped phase map calculated with re-projection phase map. (c) The illustration of correcting (b) with (a).

    where XW = (xw, yw, zw)ᵀ are the 3D coordinates of PW in the world coordinate system, XC = (xc, yc, zc)ᵀ are the 3D coordinates of PW in the LFC coordinate system, (xcn, ycn) are the pixel coordinates of PC on the central sub-aperture image, (xcn0, ycn0) is the central pixel coordinate of the central sub-aperture image, RL is the rotation matrix of the LFC and tL is its translation vector.
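Eqs (8) and (9) chain together as follows. This is a sketch with our own parameter packing (a plain dictionary) and identity extrinsics; the calibrated Lytro Illum values from Section 2 are used purely as example numbers.

```python
import numpy as np

def coarse_world_coords(delta_x, x_cn, y_cn, p):
    """Eq (8): depth from disparity plus back-projection through the
    main lens; Eq (9): X_W = R_L^-1 (X_C - t_L). Units are mm."""
    inv_z = 1.0 / p["h_m_prime"] + p["b"] * p["d"] / (p["q"] * p["h_m"] ** 2) * delta_x
    z_c = 1.0 / inv_z
    x_c = (x_cn - p["x_cn0"]) * p["d"] / p["h_m"] * z_c
    y_c = (y_cn - p["y_cn0"]) * p["d"] / p["h_m"] * z_c
    X_c = np.array([x_c, y_c, z_c])
    return np.linalg.inv(p["R_L"]) @ (X_c - p["t_L"])

# Example with the calibrated values (mm) and a trivial extrinsic pose
params = {"h_m_prime": 2116.34, "h_m": 40.88, "b": 0.04838, "d": 0.02,
          "q": 0.0014, "x_cn0": 312.67, "y_cn0": 225.53,
          "R_L": np.eye(3), "t_L": np.zeros(3)}
X_w = coarse_world_coords(0.0, 312.67, 225.53, params)
# the image centre at zero disparity maps to (0, 0, h_m')
print(X_w)
```

Note that the disparity coefficient b·d/(q·hm²) in Eq (8) equals d/(D·hm) from Eq (4) after substituting D = q·hm/b.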

    As mentioned above, the 3D coordinates of PW are coarse and will be refined, which is accomplished by correcting the coarse unwrapped phase of PW. Let PD be the corresponding re-projection point of PW on the DMD image, the coordinates of PD are derived from Eq (10).

    [xpn, ypn, 1]ᵀ = MP2D·MW2P·[xw, yw, zw, 1]ᵀ (10)

    where (xpn, ypn) are the coordinates of PD, MW2P is the transform matrix from the world coordinate system to the projector coordinate system, and MP2D is the transform matrix from the projector coordinate system to the DMD image coordinate system. The unwrapped phase of PD is ϕre(PD), which is termed the re-projection phase and is derived from Eq (11)

    ϕre(PD) = 2πN·ypn/resy (11)

    where N is the number of periods of the projected fringe patterns and resy is the resolution of the projector DMD image in the y direction. The corresponding wrapped phase of ϕre(PD) is termed ϕwr(PD); the relationship between ϕwr(PD) and ϕre(PD) is similar to that between ϕwr(PC) and ϕun(PW). When the 3D shape measurement is accurate, ϕre(PD) is equal to ϕun(PW) and ϕwr(PD) is equal to ϕwr(PC). However, the re-projection phase ϕre(PD) derived from the coarse 3D coordinates of PW is inaccurate due to the disparity error, so the unwrapped phase of PW is corrected through the re-projection phase ϕre(PD).
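Eqs (10) and (11) amount to a standard pinhole projection followed by a linear phase map. The sketch below assumes MW2P is a 4×4 rigid transform and MP2D a 3×4 projection matrix, and makes the homogeneous division (implicit in Eq (10)) explicit; all names and the toy matrices are ours.

```python
import numpy as np

def reprojection_phase(X_w, M_W2P, M_P2D, N, res_y):
    """Project a world point onto the DMD plane (Eq (10)) and convert
    its y-coordinate to the re-projection phase (Eq (11))."""
    p = M_P2D @ (M_W2P @ np.append(X_w, 1.0))
    x_pn, y_pn = p[0] / p[2], p[1] / p[2]   # homogeneous division
    phi_re = 2 * np.pi * N * y_pn / res_y   # Eq (11)
    return x_pn, y_pn, phi_re

# Toy check: identity pose, unit-focal projector, point at DMD mid-height
M_W2P = np.eye(4)
M_P2D = np.hstack([np.eye(3), np.zeros((3, 1))])
_, y_pn, phi = reprojection_phase(np.array([0.0, 400.0, 1.0]),
                                  M_W2P, M_P2D, N=36, res_y=800)
print(y_pn, np.isclose(phi, 36 * np.pi))  # 400.0 True
```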

    The re-projection phase ϕre(PD) is corrected according to the relationship between the unwrapped phase and the wrapped phase, as shown in Figure 5. A point P′D is introduced, as shown in Figure 5(b) and (c), which lies in the same period as PD and whose wrapped phase ϕwr(P′D) is equal to ϕwr(PC). The period of PD is obtained by dividing ϕre(PD) by 2π.

    When the difference between ϕwr(P′D) and ϕwr(PD) is less than π, the re-projection phase ϕre(PD) of PD is corrected to be the same as that of the point P′D. When the difference is larger than π, the re-projection phase is instead corrected according to Eq (12), as shown in Figure 5(b) and (c).

    ϕre(PD) = 2π[Round(ϕre(PD)/2π) + sgn(ϕwr(PD) − ϕwr(P′D))] + ϕwr(P′D) (12)

    Consequently, the 3D coordinates of PW are refined according to the corrected re-projection phase ϕre(PD) and PC based on the PMP principles, so that accurate 3D shape measurement is accomplished.
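One compact reading of this correction rule is: replace the coarse re-projection phase by the unwrapped value that is congruent (mod 2π) to the accurate wrapped phase and lies nearest to it; the sub-π and over-π cases of Eq (12) then both fall out of a single wrapped difference. This reading and the helper names are ours, not the paper's.

```python
import numpy as np

def correct_reprojection_phase(phi_re, phi_wr_c):
    """Return the unwrapped phase congruent to phi_wr_c (mod 2*pi)
    that is nearest to the coarse re-projection phase phi_re."""
    # wrapped difference in (-pi, pi] covers both cases of Eq (12)
    delta = np.angle(np.exp(1j * (phi_wr_c - phi_re)))
    return phi_re + delta

# Coarse phase 10.0 rad, accurate wrapped phase 3.6 rad:
corrected = correct_reprojection_phase(10.0, 3.6)
print(np.isclose(np.mod(corrected, 2 * np.pi), 3.6),   # congruent to 3.6
      abs(corrected - 10.0) <= np.pi)                  # and nearest to 10.0
```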

    The light field camera used in the 3D shape measurement system is the Lytro Illum, whose angular resolution is 15×15 views and whose spatial resolution is 434×625 pixels. The projector in the system is a BenQ GP1 with a resolution of 600×800 pixels. To evaluate the 3D shape measurement performance, a sinusoidal fringe pattern with 36 periods is projected onto a sculpture surface, and the modulated fringe pattern image is captured by the light field camera, as shown in Figure 6(a).

    Figure 6.  (a) The original pattern image of a sculpture. (b) The wrapped phase from central sub-aperture image. (c) The disparity map of the sculpture from EPI. (d) Re-projection phase map from 3D coordinates. (e) Wrapped phase map from (d). (f) Unwrapped phase map after correction. (g) Wrapped phase map from (f).

    The wrapped phase map of the central sub-aperture image is computed and shown in Figure 6(b), where the background intensity and the modulation intensity have been removed by the modified Fourier transform profilometry [21]. The wrapped phase of every pixel in Figure 6(b) is in the range of 0 to 2π. Subsequently, the EPI-based method is performed and the disparity map of the sculpture is shown in Figure 6(c), where phase consistency is substituted for radiance consistency to estimate the slopes. However, the 3D shape of the sculpture reconstructed directly from the disparity map and calibration parameters is not accurate enough. Therefore, the 3D coordinates are refined with the phase information as described in Section 3.2. The re-projection phase maps before and after correction are shown in Figure 6(d) and (f), respectively. To illustrate the correction performance, the wrapped phase maps of the re-projection phase maps before and after correction are computed, as shown in Figure 6(e) and (g), respectively. For instance, the phase of the smooth surface of the object is supposed to change smoothly and continuously, as it does in area (2) of Figure 6(g), which is more accurate than area (1) of Figure 6(e).

    The 3D shape of the sculpture is reconstructed with the corrected unwrapped phase map and calibration results, as shown in Figure 7, where the lens distortions of LFC and projector are corrected according to our previous works [22].

    Figure 7.  The depth of the reconstructed sculpture.

    To evaluate the 3D shape measurement performance, the experimental result in Figure 7 is compared to that measured by a conventional PMP system. In this paper, the conventional PMP system is also equipped with a projector and a light field camera, while the central sub-aperture image is regarded as the image acquired by the conventional camera. Fringe pattern images with 36, 30 and 25 periods are projected, and the heterodyne principle is applied to obtain the unwrapped phase map, which is taken as the ground truth of the sculpture's 3D shape. Since the central sub-aperture images are used in both 3D shape measurement systems, the measurement error is entirely due to the difference between the unwrapped phase maps. The difference maps between the ground truth and the unwrapped phase maps after and before correction are computed to analyze the 3D shape measurement performance, as shown in Figure 8.

    Figure 8.  (a) The ground truth of the unwrapped phase. The error of the unwrapped phase map (b) after and (c) before correction.

    As mentioned above, the disparity map derived from the EPI-based method is not accurate enough for 3D shape measurement, so the re-projection phase map is worse than the final unwrapped phase map after correction; the re-projection phase error relative to the ground truth is shown in Figure 8(c). The error of the unwrapped phase map after correction is clearly much smaller than that before correction, so the corrected unwrapped phase refines the 3D coordinates and improves the 3D shape measurement performance. For instance, as shown in the white square frames in Figure 8(b) and (c), the accuracy has been improved remarkably, but the error cannot be corrected completely due to the steep depth gradient there. Moreover, as only two fringe patterns with different phases are projected onto the sculpture from a specific angle, the loss of viewpoints results in poor illumination in the transitional region between the right cheek, ear and neck, as shown in the red square frame in Figure 8(a) (the ground truth), where the phase accuracy decreases heavily due to the low SNR.

    In this paper, a 3D shape measurement method based on the structured light field is accomplished, where only two sinusoidal fringe patterns are projected onto the object and acquired by the light field camera. The performance of the 3D shape measurement is improved by utilizing phase information, i.e., phase consistency to derive the initial coarse 3D coordinates and unwrapped phase correction to refine them. Future work will focus on reducing the error caused by steep depth gradients and on a biomedical imaging system configuration based on the structured light field.

    This work was supported by the National Natural Science Foundation of China (NSFC) 61771130 and “the Fundamental Research Funds for the Central Universities”.



    [1] S. Van der Jeught and J. J. J. Dirckx, Real-time structured light profilometry: A review, Opt. Lasers Eng., 87 (2016), 18-31.
    [2] M. Levoy, R. Ng, A. Adams, et al., Light field microscopy, ACM Trans. Graphics, 25 (2006), 924-934.
    [3] Y. D. Sie, C. Y. Lin and S. J. Chen, 3D surface morphology imaging of opaque microstructures via light-field microscopy, Sci. Rep., 8 (2018), 10505.
    [4] J. Liu, D. Claus, T. Xu, et al., Light field endoscopy and its parametric description, Opt. Lett., 42 (2017), 1804-1807.
    [5] X. Lin, J. Wu, G. Zheng, et al., Camera array based light field microscopy, Biomed. Opt. Express, 6 (2015), 3179-3189.
    [6] E. H. Adelson and J. Y. A. Wang, Single lens stereo with a plenoptic camera, IEEE Trans. Pattern Anal., 14 (1992), 99-106.
    [7] R. Ng, M. Levoy, M. Bredif, et al., Light field photography with a hand-held plenoptic camera, Comput. Sci. Tech. Rep., 2 (2005), 1-11.
    [8] C. Hahne, A. Aggoun, V. Velisavljevic, et al., Refocusing distance of a standard plenoptic camera, Opt. Express, 24 (2016), 21521-21540.
    [9] M. T. Tao, S. Hadap, J. Malik, et al., Depth from combining defocus and correspondence using light-field cameras, Proceedings of IEEE International Conference on Computer Vision (2013), 673-680. Available from: https://www.cv-foundation.org/openaccess/content_iccv_2013/html/Tao_Depth_from_Combining_2013_ICCV_paper.html.
    [10] A. Criminisi, S. B. Kang, R. Swaminathan, et al., Extracting layers and analyzing their specular properties using epipolar-plane-image analysis, Comput. Vision Image Understanding, 97 (2005), 51-58.
    [11] C. Kim, H. Zimmer, Y. Pritch, et al., Scene reconstruction from high spatio-angular resolution light fields, ACM Trans. Graphics, 32 (2013), 73-1.
    [12] S. Wanner and B. Goldluecke, Globally consistent depth labeling of 4D light fields, 2012 IEEE Conference on Computer Vision and Pattern Recognition, 41-48. Available from: https://ieeexplore_ieee.gg363.site/abstract/document/6247656.
    [13] M. Diebold, B. Jaehne and A. Gatto, Heterogeneous light fields, 2016 IEEE Conference on Computer Vision and Pattern Recognition, 1745-1753. Available from: https://ieeexplore_ieee.gg363.site/abstract/document/7780562.
    [14] P. Yang, Z. Wang, Y. Yan, et al., Close-range photogrammetry with light field camera: From disparity map to absolute distance, Appl. Opt., 55 (2016), 7477-7486.
    [15] Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell., 22 (2000), 1330-1334.
    [16] P. Zhou, W. Cai, Y. Yu, et al., A two-step calibration method of lenslet-based light field cameras, Opt. Lasers Eng., 115 (2019), 190-196.
    [17] D. G. Dansereau, O. Pizarro and S. B. Williams, Decoding, calibration and rectification for lenselet-based plenoptic cameras, IEEE Conference on Computer Vision and Pattern Recognition (2013), 1027-1034. Available from: https://www.cv-foundation.org/openaccess/content_cvpr_2013/html/Dansereau_Decoding_Calibration_and_2013_CVPR_paper.html.
    [18] S. Zhang and P. S. Huang, Novel method for structured light system calibration, Opt. Eng., 45 (2006), 083601.
    [19] C. Zuo, L. Huang, M. Zhang, et al., Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review, Opt. Lasers Eng., 85 (2016), 84-103.
    [20] H. G. Jeon, J. Park, G. Choe, et al., Accurate depth map estimation from a lenslet light field camera, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2015, 1547-1555. Available from: http://openaccess.thecvf.com/content_cvpr_2015/html/Jeon_Accurate_Depth_Map_2015_CVPR_paper.html.
    [21] J. Yi and S. Huang, Modified Fourier transform profilometry for the measurement of 3D steep shapes, Opt. Lasers Eng., 27 (1997), 493-505.
    [22] P. Zhou, Y. Yu, W. Cai, et al., Non-iterative three dimensional reconstruction method of the structured light system based on polynomial distortion representation, Opt. Lasers Eng., 100 (2018), 216-225.
  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
