Research article (Special Issue)

A two-stage grasp detection method for sequential robotic grasping in stacking scenarios


  • Received: 18 October 2023 Revised: 09 January 2024 Accepted: 18 January 2024 Published: 05 February 2024
  • Dexterous grasping is essential for the fine manipulation tasks of intelligent robots; however, its application in stacking scenarios remains a challenge. In this study, we propose a two-stage grasp detection approach for sequential robotic grasping in stacking scenarios. In the first stage, a rotated-YOLOv3 (R-YOLOv3) model was designed to efficiently detect the category and position of the top-layer object, facilitating the detection of stacked objects, and a stacked-scenario dataset with only the top-level objects annotated was built for training and testing the R-YOLOv3 network. In the second stage, a G-ResNet50 model was developed to improve grasping accuracy by finding the most suitable pose for grasping the uppermost object in various stacking scenarios. Finally, a robot was directed to sequentially grasp the stacked objects. The proposed method achieved an average grasp prediction success rate of 96.60% on the Cornell grasping dataset. In 280 real-world grasping experiments conducted in stacked scenarios, the robot achieved a maximum grasping success rate of 95.00% and an average grasping success rate of 83.93% in handling tasks. These experimental results demonstrate the efficacy and competitiveness of the proposed approach for grasping tasks in complex multi-object stacked environments.
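To make the two-stage pipeline concrete, the sketch below outlines how the detection stage and the grasp-pose stage described in the abstract could be chained for sequential picking. It is a minimal illustration only: the wrapper objects, method names (detect_top_object, predict, pixel_to_robot, pick_and_place), and the grasp representation are assumptions for this sketch, not the authors' released code.

```python
# Minimal sketch of a two-stage sequential grasping loop (hypothetical interfaces):
# stage 1 plays the role of R-YOLOv3 (top-layer object detection),
# stage 2 plays the role of G-ResNet50 (grasp pose prediction).

from dataclasses import dataclass


@dataclass
class Grasp:
    x: float        # grasp center in image coordinates (px)
    y: float
    theta: float    # gripper rotation angle (rad)
    width: float    # gripper opening (px)


def grasp_stack(camera, robot, detector, grasp_net, max_objects=10):
    """Sequentially grasp stacked objects, always removing the top layer first."""
    for _ in range(max_objects):
        rgb = camera.capture()

        top = detector.detect_top_object(rgb)        # stage 1: category + rotated box
        if top is None:                              # stack is empty, stop
            break
        label, box = top

        patch = rgb.crop(box.axis_aligned())         # crop around the detection
        grasp = grasp_net.predict(patch)             # stage 2: best grasp pose

        pose = camera.pixel_to_robot(grasp.x, grasp.y, grasp.theta)
        success = robot.pick_and_place(pose, grasp.width, bin_for=label)
        if not success:
            robot.retreat()                          # retry from a fresh image
```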

    Citation: Jing Zhang, Baoqun Yin, Yu Zhong, Qiang Wei, Jia Zhao, Hazrat Bilal. A two-stage grasp detection method for sequential robotic grasping in stacking scenarios[J]. Mathematical Biosciences and Engineering, 2024, 21(2): 3448-3472. doi: 10.3934/mbe.2024152
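For context on the reported 96.60% prediction success rate, grasp predictions on the Cornell grasping dataset are conventionally scored with the rectangle metric: a prediction counts as correct when its orientation is within 30° of a ground-truth grasp rectangle and the Jaccard index (IoU) of the two rectangles exceeds 0.25. The snippet below is a minimal sketch of that standard criterion; whether the paper uses exactly these thresholds is an assumption here.

```python
# Hedged sketch of the standard Cornell "rectangle metric" (not paper-specific code).
import math
from shapely.geometry import Polygon


def rect_corners(x, y, theta, w, h):
    """Corners of a w-by-h grasp rectangle centered at (x, y), rotated by theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [(x + c * dx - s * dy, y + s * dx + c * dy)
            for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2),
                           (w / 2, h / 2), (-w / 2, h / 2)]]


def grasp_correct(pred, gt, angle_tol=math.radians(30), iou_thresh=0.25):
    """pred and gt are (x, y, theta, width, height) tuples."""
    d_theta = abs(pred[2] - gt[2]) % math.pi
    d_theta = min(d_theta, math.pi - d_theta)        # grasp angles are symmetric mod pi
    if d_theta > angle_tol:
        return False
    a, b = Polygon(rect_corners(*pred)), Polygon(rect_corners(*gt))
    iou = a.intersection(b).area / a.union(b).area
    return iou > iou_thresh
```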



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

