Research article

Evolving blocks by segmentation for neural architecture search

  • Received: 18 October 2023; Revised: 23 February 2024; Accepted: 27 February 2024; Published: 06 March 2024
  • Abstract: Convolutional neural networks (CNNs) play a prominent role in solving problems in various domains such as pattern recognition, image tasks, and natural language processing. In recent years, neural architecture search (NAS), the automatic design of neural network architectures by means of an optimization algorithm, has become a popular method for designing CNN architectures that meet requirements associated with the network's function. However, many NAS algorithms are characterised by a complex search space, which can negatively affect the efficiency of the search process. In other words, the representation of the neural network architecture, and thus the encoding of the resulting search space, plays a fundamental role in the performance of the designed CNN. In this paper, to make the search process more effective, we propose a novel compact representation of the search space that identifies network blocks as elementary units. The study focuses on a popular CNN called DenseNet. To perform the NAS, we use an ad hoc implementation of particle swarm optimization, denoted PSO-CNN. In addition, to reduce the size of the final model, we propose a segmentation method to cut the blocks. We also transfer the final model to different datasets, demonstrating that our proposed algorithm has good transfer performance. The proposed PSO-CNN is compared with 11 state-of-the-art algorithms on CIFAR10 and CIFAR100. Numerical results show the competitiveness of our proposed algorithm in terms of accuracy and number of parameters.
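    The abstract describes a block-level encoding of a DenseNet-style search space explored with particle swarm optimization. As a rough illustration only, the Python sketch below decodes a fixed-length particle into per-block settings (number of dense layers and growth rate) and runs the standard PSO velocity/position update; the choice lists, the decode mapping, and the toy fitness function are all assumptions made for this sketch, not the paper's PSO-CNN implementation, and in practice fitness would build and train the decoded CNN and return its validation accuracy. The block-segmentation step used to shrink the final model is not shown.

```python
# Minimal sketch of block-based PSO search over a DenseNet-style space.
# All names and choice lists here are illustrative assumptions, not the
# authors' implementation.
import random

# A particle encodes an architecture as one continuous gene in [0, 1] per
# block parameter; each block is (number of dense layers, growth rate).
LAYER_CHOICES = [4, 6, 8, 12]
GROWTH_CHOICES = [12, 24, 32, 40]
NUM_BLOCKS = 3
DIMS = 2 * NUM_BLOCKS

def random_position():
    return [random.random() for _ in range(DIMS)]

def decode(position):
    # Map each continuous gene to the nearest discrete block setting.
    arch = []
    for b in range(NUM_BLOCKS):
        layers = LAYER_CHOICES[int(position[2 * b] * len(LAYER_CHOICES)) % len(LAYER_CHOICES)]
        growth = GROWTH_CHOICES[int(position[2 * b + 1] * len(GROWTH_CHOICES)) % len(GROWTH_CHOICES)]
        arch.append({"layers": layers, "growth": growth})
    return arch

def fitness(position):
    # Toy placeholder: in the real algorithm this would train the decoded
    # CNN and return its validation accuracy. Here we just reward smaller
    # architectures so the sketch runs instantly.
    arch = decode(position)
    return -sum(block["layers"] * block["growth"] for block in arch)

def pso_search(swarm_size=10, iterations=20, w=0.7, c1=1.5, c2=1.5):
    positions = [random_position() for _ in range(swarm_size)]
    velocities = [[0.0] * DIMS for _ in range(swarm_size)]
    pbest = [list(p) for p in positions]          # per-particle best positions
    pbest_fit = [fitness(p) for p in positions]
    g = max(range(swarm_size), key=lambda i: pbest_fit[i])
    gbest_pos, gbest_fit = list(pbest[g]), pbest_fit[g]

    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(DIMS):
                r1, r2 = random.random(), random.random()
                # Standard PSO velocity update (inertia + cognitive + social).
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest[i][d] - positions[i][d])
                                    + c2 * r2 * (gbest_pos[d] - positions[i][d]))
                positions[i][d] = min(1.0, max(0.0, positions[i][d] + velocities[i][d]))
            f = fitness(positions[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = list(positions[i]), f
                if f > gbest_fit:
                    gbest_pos, gbest_fit = list(positions[i]), f
    return decode(gbest_pos)

if __name__ == "__main__":
    print(pso_search())
```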

    Citation: Xiaoping Zhao, Liwen Jiang, Adam Slowik, Zhenman Zhang, Yu Xue. Evolving blocks by segmentation for neural architecture search. Electronic Research Archive, 2024, 32(3): 2016–2032. doi: 10.3934/era.2024092



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
