
Citation: Ping Zhou, Yuting Zhang, Yunlei Yu, Weijia Cai, Guangquan Zhou. 3D shape measurement based on structured light field imaging[J]. Mathematical Biosciences and Engineering, 2020, 17(1): 654-668. doi: 10.3934/mbe.2020034
Light field imaging (LFI) has become increasingly popular over the last decade; it simultaneously records the angular and spatial distribution of rays in all directions in space. The captured light field enables 3D shape measurement, which is applied in several areas, such as robotics, automation and medicine [1]. Among these applications, an area with great potential for LFI is biomedical imaging. Building on the conventional microscope, Levoy et al. [2] inserted a micro-lens array into the optical train and reconstructed volumes of various biological specimens from focal stacks. Furthermore, Sie et al. [3] developed light field microscopy that enables the observation of the 3D surface morphology of opaque microstructures. Liu et al. [4] constructed a prototype light field endoscope for invasive surgery.
Light field imaging was conventionally achieved with a dense camera array [5] before the lens-let light field camera (LFC) appeared [6]. In 2005, Ng built a hand-held lens-let LFC by introducing a micro-lens array (MLA) into the image plane of a conventional camera [7]. The angular information recorded by the LFC allows depth estimation or scene reconstruction, with or without calibration results [8]. Ng et al. [7] showed that a light field image can be refocused at various depths from sub-aperture images based on the Fourier Slice Theorem. Moreover, Tao et al. [9] combined correspondence cues, defocus cues and shading cues of sub-aperture images to accomplish depth estimation. In addition, the slopes of lines in epipolar-plane images (EPI) indicate the corresponding depths of the object, which is often exploited for depth estimation. Criminisi et al. [10] proposed an EPI volume and found a high degree of regularity to compute the depth. Kim et al. [11] utilized high angular resolution EPIs and performed confidence measurement to assess the reliability of the depth. Wanner and Goldluecke [12] applied the structure tensor operator to EPIs to calculate the slopes, and Diebold et al. [13] extended the structure tensor to heterogeneous light fields to deal with the non-constant intensity problem in EPIs. However, due to noise, low angular resolution and textureless regions, the methods mentioned above only generate coarse initial 3D shape information when calibration is performed, and further refinement has to be executed.
The basic idea for refining the 3D coordinates in this paper is inspired by the Markov Random Field (MRF). Tao et al. [9] applied the MRF framework to the initial depth, and Wanner and Goldluecke [12] treated depth reconstruction as a multi-label problem and employed a total variation smoothing model to achieve the optimization. The MRF framework proposed by Tao et al. [9] is based on smoothness and color consistency, which are taken as extra constraints to optimize the final depth. A structured light field makes it possible to establish more constraints. Structured light field means projecting a pattern onto an object; the deformed pattern modulated by the object surface encodes the 3D shape information and is acquired by the light field camera, which enables the 3D shape measurement. For a certain point, the 3D coordinates are encoded not only in the LFC sub-aperture images but also in the modulated grating pattern, which provides additional constraints to accomplish the 3D shape measurement accurately.
In this paper, a structured light field system is established, which consists of a lens-let LFC and a projector, as shown in Figure 1. The remainder of this paper starts with system calibration, including the light field camera calibration and the joint system calibration. The initial 3D coordinates are derived based on phase consistency in the EPI and the LFC calibration results. Subsequently, the 3D coordinates are refined based on the relationship between the structured light field image and the DMD image of the projector. Finally, experimental results are presented to demonstrate the 3D shape measurement performance.
The structured light field (SLF) system in this paper consists of a lens-let LFC and a projector. Therefore, the system calibration includes the LFC calibration, the projector calibration and the joint system calibration. The model of the whole SLF system is described in Figure 2(b) and the coordinate system settings are listed in Table 1.
| Coordinate System | Description |
| --- | --- |
| World coordinate system | |
| Light field camera coordinate system | |
| Sub-aperture image coordinate system (pixel coordinate) | |
| Sub-aperture image coordinate system (physical coordinate) | |
| CCD coordinate system | |
| Element image coordinate system | |
| Projector coordinate system | |
| DMD image coordinate system (pixel coordinate) | |
The mathematical model of a typical lens-let LFC is shown in Figure 2(a). To clarify the recorded information,
The projection from a scene point
$$\frac{x_m}{h'_m}=\frac{s-x_c}{z_c}-\frac{s}{h_m}$$ | (1)
where $(x_c, z_c)$ are the coordinates of the scene point in the light field camera frame, $s$ is the position of the micro-lens, $x_m$ is the imaging position on the CCD, and $h_m$ and $h'_m$ are the distance parameters of the LFC model shown in Figure 2(a).
In general, each sub-aperture image is the result of taking the same pixel underneath each micro-lens, at the offset corresponding to
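The sub-aperture extraction described above can be sketched as follows. This is a minimal illustration only: the decoded lenslet array `raw` (with each micro-lens covering a `U × V` pixel block, as produced by standard decoding pipelines [17]) and the argument names are assumptions of this sketch, not the paper's pipeline.

```python
import numpy as np

def sub_aperture(raw, u, v, U, V):
    """Extract the (u, v) sub-aperture image from a decoded lenslet array.

    raw is assumed to be resampled so that every micro-lens covers a
    U x V block of pixels; taking the same offset (u, v) underneath
    every micro-lens collects one sub-aperture (viewpoint) image.
    """
    return raw[u::U, v::V]
```

With a 6×6 array and 3×3 micro-lens blocks, each sub-aperture image is 2×2, one pixel per micro-lens.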
The projection relationship between
$$s=\frac{qh'_m(u-u_0)}{b}=D(u-u_0)$$ | (2)
where $u$ is the sub-aperture index, $u_0$ is its central value and $D=qh'_m/b$ is a constant of the camera.
$$x_m=-(x-x_0)d$$ | (3)
Let $\Delta x$ denote the disparity of the point between adjacent sub-aperture images. Substituting Eqs (2) and (3) into Eq (1) yields:
$$\frac{1}{z_c}=\frac{1}{h_m}-\frac{d}{Dh'_m}\Delta x$$ | (4)
where $d$, $D$, $h_m$ and $h'_m$ are the LFC parameters defined above.
Except for the parameters of the main lens, the other parameters in Figure 2(a) also need to be calibrated. To calibrate Eq (4), which expresses the constraint between the depth $z_c$ and the disparity $\Delta x$, a two-step calibration method is applied [16].
A commercial light field camera, the Lytro Illum, is used in the structured light field based 3D shape measurement system. With its pixel size and the calibrated parameters substituted into Eq (4), the depth-disparity relation becomes:
$$\frac{1}{z_c}=(0.4725-0.3581\,\Delta x)\times 10^{-3}$$ | (5)
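The calibrated relation of Eq (5) can be evaluated as below. A minimal sketch: the function name is illustrative, the two constants are the calibrated values reported above for this specific Lytro Illum, and the depth unit is assumed to be millimeters (which matches the magnitude of the constants).

```python
def depth_from_disparity(delta_x: float) -> float:
    """Depth z_c from an EPI disparity delta_x (pixels) via Eq (5).

    The constants 0.4725 and 0.3581 are the calibrated values for this
    particular system; the returned depth is assumed to be in mm.
    """
    inv_z = (0.4725 - 0.3581 * delta_x) * 1e-3  # Eq (5): 1/z_c
    return 1.0 / inv_z
```

Note that a larger disparity maps to a larger depth here, since the $-0.3581\,\Delta x$ term reduces $1/z_c$.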
To accomplish 3D shape measurement, the structured light field system in this paper is also calibrated. The calibration method is similar to that of a conventional structured light system equipped with a projector and a conventional camera. In our calibration, the central sub-aperture image is utilized instead of the conventional camera image.
For the sake of completeness, the projector calibration and the conventional structured light system calibration, which have been reported in detail previously [18], are presented briefly. To calibrate the projector, the projector's digital micromirror device (DMD) image is needed. The 4-step phase-shifting algorithm and the heterodyne principle are applied to obtain the unwrapped phase maps in both horizontal and vertical directions. More details on phase shifting and the heterodyne principle are described in [19]. Although the fringe patterns are captured by the light field camera, only the unwrapped phase map of the central sub-aperture image is calculated in the system calibration. Therefore, the DMD image is formed in such a way that a univocal correspondence is established between the unwrapped phase in the central sub-aperture image and the DMD image in the horizontal and vertical directions.
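The phase-to-DMD correspondence described above can be sketched as a linear mapping. This is an illustration under stated assumptions: the period counts `N_x`, `N_y` and resolutions `res_x`, `res_y` are placeholder parameters, and the mapping assumes the unwrapped phase grows linearly across the full width/height of the DMD image.

```python
import numpy as np

def phase_to_dmd(phi_u, phi_v, N_x, N_y, res_x, res_y):
    """Map the unwrapped horizontal/vertical phases of a camera pixel to
    DMD pixel coordinates.

    Assumes N_x (resp. N_y) fringe periods span the res_x (res_y) pixels
    of the DMD image, so a phase of 2*pi*N corresponds to the full span.
    """
    x_pn = phi_u * res_x / (2.0 * np.pi * N_x)
    y_pn = phi_v * res_y / (2.0 * np.pi * N_y)
    return x_pn, y_pn
```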
To calibrate the structured light field system, the conventional stereo calibration method is applied, where the central sub-aperture image and the DMD image are used. The intrinsic and extrinsic parameters of the light field camera and the projector are computed, as are the transformation matrices between them. Eight groups of central sub-aperture images in different views are captured and used in the calibration, and the calibration results are shown in Figure 3, where the light field camera and the projector are labeled LFC and Pro, respectively. The LFC is modelled as in Figure 2(a) and the projector is treated as an inverse camera [18]. The projector is placed closer to the calibration board to match its field of view to that of the light field camera. The average re-projection error is 0.2189 pixels.
Retrieving accurate coordinates is essential for 3D shape measurement based on the structured light field. To improve the real-time capability, only two sinusoidal gratings with a π phase shift are projected onto the object in this paper, and the deformed pattern is captured by the LFC. The depth of the object is derived from the phase-consistency-based slope estimation method in the EPI according to Eq (5), and the 3D coordinates of the object are subsequently derived from the LFC calibration results. Furthermore, 3D coordinate refinement is performed to accomplish the 3D shape measurement accurately.
The light field is denoted by the 4D function $L(x, y, u, v)$, where $(u, v)$ indexes the angular (sub-aperture) coordinates and $(x, y)$ the spatial coordinates.
To obtain the slope
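The phase-consistency idea for slope estimation can be sketched as follows. This is an illustrative implementation, not the paper's exact estimator: for each candidate slope, wrapped-phase samples are collected along the corresponding line in the phase EPI, and the slope whose samples are most consistent (largest circular mean resultant) wins; the sampling and scoring details here are assumptions.

```python
import numpy as np

def epi_slope(phase_epi, x0, slopes):
    """Estimate the EPI line slope at spatial column x0 by phase consistency.

    phase_epi: 2D array of wrapped phase, shape (n_views, n_pixels).
    slopes: candidate slopes (pixels of shift per view step).
    Returns the candidate whose samples along the line through
    (central view, x0) have the most consistent phase.
    """
    n_u, n_x = phase_epi.shape
    u0 = n_u // 2                       # central sub-aperture row
    best, best_score = slopes[0], -np.inf
    for k in slopes:
        us = np.arange(n_u)
        xs = np.round(x0 + k * (us - u0)).astype(int)
        ok = (xs >= 0) & (xs < n_x)     # discard samples leaving the EPI
        samples = phase_epi[us[ok], xs[ok]]
        # circular mean resultant length: 1.0 iff all phases agree
        score = np.abs(np.mean(np.exp(1j * samples)))
        if score > best_score:
            best, best_score = k, score
    return best
```

The circular statistic avoids wrap-around artifacts that a plain variance of wrapped phases would suffer from.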
For a certain sub-aperture image, the captured fringe pattern image is expressed as:
$$I(x,y)=A(x,y)+B(x,y)\cos(\phi)$$ | (6)
where $A(x,y)$ is the background intensity, $B(x,y)$ is the modulation intensity and $\phi$ is the phase modulated by the object surface. The second fringe pattern, shifted by π, is captured as:
$$I_1(x,y)=A(x,y)+B(x,y)\cos(\phi+\pi)$$ | (7)
Therefore, the background term is eliminated by subtraction, $I(x,y)-I_1(x,y)=2B(x,y)\cos(\phi)$, from which the wanted wrapped phase is retrieved.
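A sketch of recovering the wrapped phase from the two π-shifted patterns of Eqs (6) and (7). Assumptions: fringes vary along the last array axis with slowly varying modulation, and the FFT-based analytic signal is used here as a stand-in for the modified Fourier transform profilometry applied in the paper [21].

```python
import numpy as np

def wrapped_phase_two_step(I0, I1):
    """Wrapped phase from two pi-shifted fringe images.

    Subtracting the two images cancels the background A(x, y):
        I0 - I1 = 2 * B(x, y) * cos(phi).
    The wrapped phase is then the angle of the analytic signal
    (Hilbert-transform construction) along the fringe direction.
    """
    c = (np.asarray(I0, float) - np.asarray(I1, float)) / 2.0
    n = c.shape[-1]
    spec = np.fft.fft(c, axis=-1)
    # Build the analytic signal: zero negative frequencies,
    # double positive ones, keep DC (and Nyquist for even n).
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h, axis=-1)
    return np.angle(analytic)  # wrapped phase in (-pi, pi]
```

For a clean sinusoidal carrier with an integer number of periods this recovers the wrapped phase essentially exactly; real captures would need the windowing/filtering refinements of [21].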
Without loss of generality, for a pixel
With the slope estimation in the EPI and the LFC calibration results, the 3D coordinates of the object are derived. However, the EPI only provides a coarse depth measurement result due to the low angular resolution, which results in inaccurate 3D coordinates. In the structured light field imaging system in this paper, the depth is indicated not only by the lines' slopes in the EPI, but is also encoded in the phase carried by the modulated sinusoidal grating, which is relatively accurate and provides a significant constraint to refine the initial 3D shape measurement results.
In the SLF system, an object point
$$\begin{cases}\dfrac{1}{z_c}=\dfrac{1}{h_m}+\dfrac{bd}{qh'^{2}_m}\cdot(-\Delta x)\\[2mm] x_c=-(x_{cn}-x_{cn0})\dfrac{d}{h'_m}z_c\\[2mm] y_c=-(y_{cn}-y_{cn0})\dfrac{d}{h'_m}z_c\end{cases}$$ | (8)
$$X_W=R_L^{-1}(X_C-t_L)$$ | (9)
where $R_L$ and $t_L$ are the rotation matrix and translation vector of the light field camera obtained from the calibration, $X_C=(x_c, y_c, z_c)^T$ and $X_W=(x_w, y_w, z_w)^T$.
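The two steps of Eqs (8) and (9) can be sketched as below. A minimal illustration: the argument names (`hp_m` stands in for $h'_m$) are placeholders for the calibrated quantities of Figure 2(a), not values from the paper.

```python
import numpy as np

def camera_coords(delta_x, xcn, ycn, xcn0, ycn0, h_m, hp_m, b, q, d):
    """3D coordinates in the LFC frame from the EPI disparity, Eq (8).

    delta_x: disparity between adjacent sub-aperture images;
    (xcn, ycn): pixel coordinates, (xcn0, ycn0): principal point;
    h_m, hp_m, b, q, d: calibrated LFC model parameters.
    """
    inv_z = 1.0 / h_m + (b * d) / (q * hp_m ** 2) * (-delta_x)
    z_c = 1.0 / inv_z
    x_c = -(xcn - xcn0) * d / hp_m * z_c
    y_c = -(ycn - ycn0) * d / hp_m * z_c
    return np.array([x_c, y_c, z_c])

def to_world(X_c, R_L, t_L):
    """World coordinates from camera coordinates, Eq (9)."""
    return np.linalg.inv(R_L) @ (X_c - t_L)
```

`to_world` simply inverts the rigid transform $X_C = R_L X_W + t_L$, so a round trip through both directions recovers the original point.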
As mentioned above, the 3D coordinates of
$$\begin{bmatrix}x_{pn}\\ y_{pn}\\ 1\end{bmatrix}=M_{P2D}M_{W2P}\begin{bmatrix}x_w\\ y_w\\ z_w\\ 1\end{bmatrix}$$ | (10)
where $M_{W2P}$ is the transformation matrix from the world coordinate system to the projector coordinate system and $M_{P2D}$ is the projection matrix from the projector coordinate system to the DMD image, both obtained from the system calibration.
$$\phi_{re}(P_D)=\frac{2\pi N y_{pn}}{res_y}$$ | (11)
where $N$ is the number of fringe periods of the projected pattern and $res_y$ is the vertical resolution of the DMD image.
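Eqs (10) and (11) together give the re-projection phase of a world point, which can be sketched as below. Assumptions of this sketch: `M_w2p` is a 4×4 world-to-projector transform, `M_p2d` is a 3×4 projector intrinsic matrix, and the homogeneous projection is normalized by its third component (Eq (10) holds up to scale).

```python
import numpy as np

def reproject_phase(X_w, M_w2p, M_p2d, N, res_y):
    """Re-projection phase of a world point, following Eqs (10)-(11).

    X_w: world coordinates (x_w, y_w, z_w); N fringe periods span the
    res_y rows of the DMD image.
    """
    X_h = np.append(np.asarray(X_w, float), 1.0)  # homogeneous point
    p = M_p2d @ (M_w2p @ X_h)                     # Eq (10), up to scale
    y_pn = p[1] / p[2]                            # DMD pixel row
    return 2.0 * np.pi * N * y_pn / res_y         # Eq (11)
```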
The re-projection phase
When the difference between
$$\phi_{re}(P_D)=2\pi\cdot\left[\mathrm{Round}\left(\frac{\phi_{re}(P_D)}{2\pi}\right)+\mathrm{sgn}\big(\phi_{wr}(P'_D)-\phi_{wr}(P_D)\big)\right]+\phi_{wr}(P'_D)$$ | (12)
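The correction idea can be sketched compactly. Note this is a common equivalent single-point formulation, not a literal transcription of Eq (12): the coarse re-projection phase supplies the fringe order and the accurate wrapped phase supplies the fractional part, with rounding of the difference resolving order jumps near the wrap boundary, which Eq (12) handles with a sign term between neighboring points.

```python
import numpy as np

def correct_reprojection_phase(phi_re, phi_wr):
    """Refine a coarse re-projection phase with the measured wrapped phase.

    phi_re: coarse unwrapped phase from the EPI-based depth;
    phi_wr: accurate wrapped phase from the captured fringes, in [0, 2*pi).
    Rounding (phi_re - phi_wr) / (2*pi) picks the nearest fringe order.
    """
    order = np.round((phi_re - phi_wr) / (2.0 * np.pi))
    return 2.0 * np.pi * order + phi_wr
```

As long as the coarse phase is within half a period of the truth, the corrected phase inherits the accuracy of the wrapped phase alone.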
Consequently, the 3D coordinates of
The Lytro Illum light field camera is used in the 3D shape measurement system, whose analytic image angular resolution is
The wrapped phase map in the central sub-aperture image is computed and shown in Figure 6(b), where the background intensity and the modulation intensity have been removed by the modified Fourier transform profilometry [21]. The wrapped phase of every pixel in Figure 6(b) is in the range $[0, 2\pi)$. Subsequently, the EPI-based method is performed and the disparity map of the sculpture is shown in Figure 6(c), where the phase consistency method is substituted for radiance consistency to estimate the slopes. Unfortunately, the 3D shape of the sculpture reconstructed directly from the disparity map and calibration parameters is not accurate enough. Therefore, the 3D coordinates are refined by the phase information mentioned in Section 3.2. The re-projection phase maps before and after correction are shown in Figure 6(d) and (f), respectively. To illustrate the re-projection phase map correction performance, the wrapped phase maps of the re-projection phase maps before and after correction are computed, as shown in Figure 6(e) and (g), respectively. For instance, the phase of the smooth surface of the object is supposed to change smoothly and continuously, as in area (2) of Figure 6(g), which is more accurate than area (1) of Figure 6(e).
The 3D shape of the sculpture is reconstructed with the corrected unwrapped phase map and calibration results, as shown in Figure 7, where the lens distortions of LFC and projector are corrected according to our previous works [22].
To evaluate the 3D shape measurement performance, the experimental result in Figure 7 is compared to that measured by a conventional PMP system. In this paper, the conventional PMP system is also equipped with a projector and a light field camera, while the central sub-aperture image is treated as the image acquired by a conventional camera. Furthermore, fringe pattern images with periods of 36, 30 and 25 are projected, and the heterodyne principle is applied to obtain the unwrapped phase map, which is taken as the ground truth of the sculpture 3D shape measurement. As the central sub-aperture images are used in both 3D shape measurement systems, the measurement error is entirely due to the difference between the unwrapped phase maps. The difference maps between the ground truth and the unwrapped phase maps after and before correction are computed to analyze the 3D shape measurement performance, as shown in Figure 8.
As mentioned above, the disparity map derived from the EPI-based method is not accurate enough for 3D shape measurement, so the re-projection phase map is worse than the final unwrapped phase map after correction. The re-projection phase error relative to the ground truth is shown in Figure 8(b). The error of the unwrapped phase map after correction is much smaller than that before correction, so the corrected unwrapped phase refines the 3D coordinates and consequently improves the 3D shape measurement performance. For instance, as shown in the white square frames in Figure 8(b) and (c), the accuracy has been improved remarkably, but the error cannot be corrected completely due to the fairly high gradient of the depth. However, as only two fringe patterns with different phases are projected onto the sculpture from a specific angle, the loss of viewpoints results in poor illumination in the transitional region between the right cheek, ear and neck, as shown in the red square frame in Figure 8(a) (the ground truth), where the phase accuracy decreases heavily due to the low SNR.
In this paper, a 3D shape measurement method based on the structured light field is accomplished, where only two sinusoidal fringes are projected onto the object and acquired by the light field camera. The performance of the 3D shape measurement has been improved by utilizing phase information, i.e., phase consistency to derive the initial coarse 3D coordinates and unwrapped phase correction to refine them. Future work will focus on reducing the error caused by high depth gradients and on a biomedical imaging system configuration based on the structured light field.
This work was supported by the National Natural Science Foundation of China (NSFC) 61771130 and “the Fundamental Research Funds for the Central Universities”.
[1] | S. Van der Jeught and J. J. J. Dirckx, Real-time structured light profilometry: A review, Opt. Lasers Eng., 87 (2016), 18-31. |
[2] | M. Levoy, R. Ng, A. Adams, et al., Light field microscopy, ACM Trans. Graphics, 25 (2006), 924-934. |
[3] | Y. D. Sie, C. Y. Lin and S. J. Chen, 3D surface morphology imaging of opaque microstructures via light-field microscopy, Sci. Rep., 8 (2018), 10505. |
[4] | J. Liu, D. Claus, T. Xu, et al., Light field endoscopy and its parametric description, Opt. Lett., 42 (2017), 1804-1807. |
[5] | X. Lin, J. Wu, G. Zheng, et al., Camera array based light field microscopy, Biomed. Opt. Express, 6 (2015), 3179-3189. |
[6] | E. H. Adelson and J. Y. A. Wang, Single lens stereo with a plenoptic camera, IEEE Trans. Pattern Anal., 14 (1992), 99-106. |
[7] | R. Ng, M. Levoy, M. Bredif, et al., Light field photography with a hand-held plenoptic camera, Comput. Sci. Tech. Rep., 2 (2005), 1-11. |
[8] | C. Hahne, A. Aggoun, V. Velisavljevic, et al., Refocusing distance of a standard plenoptic camera, Opt. Express, 24 (2016), 21521-21540. |
[9] | M. T. Tao, S. Hadap, J. Malik, et al., Depth from combining defocus and correspondence using light-field cameras, Proceedings of IEEE International Conference on Computer Vision (2013), 673-680. Available from: https://www.cv-foundation.org/openaccess/content_iccv_2013/html/Tao_Depth_from_Combining_2013_ICCV_paper.html. |
[10] | A. Criminisi, S. B. Kang, R. Swaminathan, et al., Extracting layers and analyzing their specular properties using epipolar-plane-image analysis, Comput. Vision Image Understanding, 97 (2005), 51-58. |
[11] | C. Kim, H. Zimmer, Y. Pritch, et al., Scene reconstruction from high spatio-angular resolution light fields, ACM Trans. Graphics, 32 (2013), 73-1. |
[12] | S. Wanner and B. Goldluecke, Globally consistent depth labeling of 4D light fields, 2012 IEEE Conference on Computer Vision and Pattern Recognition, 41-48. Available from: https://ieeexplore_ieee.gg363.site/abstract/document/6247656. |
[13] | M. Diebold, B. Jaehne and A. Gatto, Heterogeneous light fields, 2016 IEEE Conference on Computer Vision and Pattern Recognition, 1745-1753. Available from: https://ieeexplore_ieee.gg363.site/abstract/document/7780562. |
[14] | P. Yang, Z. Wang, Y. Yan, et al., Close-range photogrammetry with light field camera: From disparity map to absolute distance, Appl. Opt., 55 (2016), 7477-7486. |
[15] | Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell., 22 (2000), 1330-1334. |
[16] | P. Zhou, W. Cai, Y. Yu, et al., A two-step calibration method of lenslet-based light field cameras, Opt. Lasers Eng., 115 (2019), 190-196. |
[17] | D. G. Dansereau, O. Pizarro and S. B. Williams, Decoding, calibration and rectification for lenselet-based plenoptic cameras, IEEE Conference on Computer Vision and Pattern Recognition (2013), 1027-1034. Available from: https://www.cv-foundation.org/openaccess/content_cvpr_2013/html/Dansereau_Decoding_Calibration_and_2013_CVPR_paper.html. |
[18] | S. Zhang and P. S. Huang, Novel method for structured light system calibration, Opt. Eng., 45 (2006), 083601. |
[19] | C. Zuo, L. Huang, M. Zhang, et al., Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review, Opt. Lasers Eng., 85 (2016), 84-103. |
[20] | H. G. Jeon, J. Park, G. Choe, et al., Accurate depth map estimation from a lenslet light field camera, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2015, 1547-1555. Available from: http://openaccess.thecvf.com/content_cvpr_2015/html/Jeon_Accurate_Depth_Map_2015_CVPR_paper.html. |
[21] | J. Yi and S. Huang, Modified Fourier transform profilometry for the measurement of 3D steep shapes, Opt. Lasers Eng., 27 (1997), 493-505. |
[22] | P. Zhou, Y. Yu, W. Cai, et al., Non-iterative three dimensional reconstruction method of the structured light system based on polynomial distortion representation, Opt. Lasers Eng., 100 (2018), 216-225. |