Cocoons have a direct impact on the quality of raw silk: mulberry cocoons must be screened before silk reeling begins in order to improve raw-silk quality, so the grading and sorting of cocoons is crucial for the silk product sector. Nonetheless, most mulberry cocoon production facilities today still sort cocoons by hand. Automatic methods can significantly improve the accuracy and efficiency of cocoon picking, so mulberry cocoons should be sorted automatically and intelligently using machine vision. We proposed an effective detection technique based on vision and terahertz spectral characteristics for distinguishing defective cocoons, including common defective cocoons and thin-shelled defective cocoons. For each defective mulberry cocoon, the spatial coordinate and deflection angle were computed so that grippers could grasp it. On our dataset of 3762 images, our approach achieved an mAP of up to 99.25%. Furthermore, the proposed model required only 8.9 GFLOPS and 5.3 M parameters, making it suitable for real-world application scenarios.
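The picking step described above requires estimating each defective cocoon's spatial coordinate and deflection angle so that a gripper can grasp it. The abstract does not give the exact computation, but a standard way to obtain both quantities from a detected region is via image moments: the centroid gives the grasp coordinate, and the orientation of the principal axis gives the deflection angle. The sketch below is a minimal illustration of that idea, assuming the detector yields the region's pixel coordinates; the helper name `centroid_and_angle` is hypothetical, not from the paper.

```python
import math

def centroid_and_angle(points):
    """Estimate the grasp point (centroid) and deflection angle (degrees)
    of a detected region from its pixel coordinates, using image moments.

    points: iterable of (x, y) pixel coordinates belonging to the region.
    """
    pts = list(points)
    n = len(pts)
    # Zeroth/first moments give the centroid.
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    # Central second moments describe the region's spread.
    mu20 = sum((x - cx) ** 2 for x, _ in pts) / n
    mu02 = sum((y - cy) ** 2 for _, y in pts) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / n
    # Orientation of the major (principal) axis.
    angle = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), math.degrees(angle)

# Example: pixels lying along a 45-degree line.
center, angle = centroid_and_angle([(0, 0), (1, 1), (2, 2), (3, 3)])
```

In practice the same result is usually obtained with a contour-analysis routine (e.g., a minimum-area bounding rectangle over the detected mask), but the moment-based form above makes the geometry explicit.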
Citation: Jun Chen, Xueqiang Guo, Taohong Zhang, Han Zheng. Efficient defective cocoon recognition based on vision data for intelligent picking[J]. Electronic Research Archive, 2024, 32(5): 3299-3312. doi: 10.3934/era.2024151