CNN models already play an important role in crop and weed classification, with accuracies above 95% reported in the literature. However, manually selecting and fine-tuning deep learning models is laborious yet indispensable in most traditional practice and research. Moreover, classic objective functions are not fully compatible with agricultural tasks: the resulting models tend to misclassify crops as weeds more often than errors occur in other deep learning application domains. In this paper, we applied automated machine learning with a new objective function to crop and weed classification, achieving higher accuracy and a lower crop killing rate (the rate at which a crop is identified as a weed). The experimental results show that our method outperforms state-of-the-art models such as ResNet and VGG19.
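The crop killing rate defined above can be computed directly from predictions. The following is a minimal sketch, not code from the paper; the function name, label encoding (crop vs. weed), and signature are assumptions for illustration:

```python
def crop_killing_rate(y_true, y_pred, crop_label="crop", weed_label="weed"):
    """Fraction of true crop samples that the model classifies as weed.

    A lower value is better: each such error corresponds to a crop
    that an automated weeding system would wrongly remove.
    """
    crop_samples = [(t, p) for t, p in zip(y_true, y_pred) if t == crop_label]
    if not crop_samples:
        return 0.0  # no crops in the evaluation set
    killed = sum(1 for _, p in crop_samples if p == weed_label)
    return killed / len(crop_samples)
```

For example, if three crop images are predicted as `["crop", "weed", "crop"]`, the rate is 1/3. Penalizing this term more heavily than ordinary misclassification is one way an objective function can be biased against crop-to-weed errors.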
Citation: Xuetao Jiang, Binbin Yong, Soheila Garshasbi, Jun Shen, Meiyu Jiang, Qingguo Zhou. Crop and weed classification based on AutoML[J]. Applied Computing and Intelligence, 2021, 1(1): 46-60. doi: 10.3934/aci.2021003