Mini review · Special Issues

Point cloud registration: a mini-review of current state, challenging issues and future directions

  • Received: 01 September 2022 · Revised: 30 October 2022 · Accepted: 08 December 2022 · Published: 10 January 2023
  • A point cloud is a set of data points in space. Point cloud registration is the process of aligning two or more 3D point clouds collected from different locations of the same scene. Registration transforms the point cloud data into a common coordinate system, forming an integrated dataset that represents the surveyed scene. In addition to methods that rely on targets placed in the scene before data capture, various registration methods exist that use only the captured point cloud data. Until recently, such cloud-to-cloud registration methods have generally centered on a coarse-to-fine optimization strategy. The challenges and limitations inherent in this process have shaped the development of point cloud registration and the associated software tools over the past three decades. Following the success of deep learning methods on imagery data, attempts to apply these approaches to point cloud datasets have received much attention. This study reviews and comments on recent developments in targetless point cloud registration, explores the remaining issues and, on that basis, makes recommendations for potential future studies on this topic.
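
    To make the coarse-to-fine strategy mentioned in the abstract concrete, the sketch below shows one common targetless (cloud-to-cloud) pipeline: a coarse global alignment obtained from FPFH feature matching with RANSAC, refined by point-to-plane ICP, after which the estimated rigid transformation maps the source cloud into the target's coordinate system. It is a minimal illustration only, assuming the open-source Open3D library (o3d.pipelines.registration); the file names, voxel size and RANSAC/ICP parameters are placeholders, and it is not the specific method assessed in the article.

        import open3d as o3d

        def preprocess(cloud, voxel_size):
            # Downsample, then compute normals and FPFH descriptors for coarse matching.
            down = cloud.voxel_down_sample(voxel_size)
            down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel_size, max_nn=30))
            fpfh = o3d.pipelines.registration.compute_fpfh_feature(
                down,
                o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=100))
            return down, fpfh

        def coarse_to_fine_register(source, target, voxel_size=0.05):
            src_down, src_fpfh = preprocess(source, voxel_size)
            tgt_down, tgt_fpfh = preprocess(target, voxel_size)

            # Coarse step: global alignment from FPFH correspondences, made robust with RANSAC.
            coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
                src_down, tgt_down, src_fpfh, tgt_fpfh, True,
                1.5 * voxel_size,
                o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
                3,
                [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel_size)],
                o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

            # Fine step: point-to-plane ICP initialised with the coarse estimate
            # (run on the downsampled clouds, which already carry normals).
            fine = o3d.pipelines.registration.registration_icp(
                src_down, tgt_down, 0.4 * voxel_size, coarse.transformation,
                o3d.pipelines.registration.TransformationEstimationPointToPlane())
            return fine.transformation

        if __name__ == "__main__":
            source = o3d.io.read_point_cloud("scan_a.pcd")   # placeholder file names
            target = o3d.io.read_point_cloud("scan_b.pcd")
            T = coarse_to_fine_register(source, target)       # 4x4 rigid transformation
            source.transform(T)       # source now expressed in the target's coordinate system
            merged = source + target  # integrated dataset representing the surveyed scene
            print("Estimated transformation:\n", T)

    Registering each scan to a chosen reference scan in this way brings all clouds into one common coordinate system, which is the integration step the abstract describes; the learning-based pipelines surveyed in the article typically replace the hand-crafted FPFH step with learned descriptors while retaining a broadly similar coarse-to-fine structure.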

    Citation: Nathan Brightman, Lei Fan, Yang Zhao. Point cloud registration: a mini-review of current state, challenging issues and future directions. AIMS Geosciences, 2023, 9(1): 68–85. doi: 10.3934/geosci.2023005

  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)