Research article

Optimized pointwise convolution operation by Ghost blocks

  • Received: 19 December 2022 / Revised: 02 March 2023 / Accepted: 12 March 2023 / Published: 27 March 2023
  • In lightweight convolutional neural network models, the pointwise convolution structure accounts for most of the model's parameters and computation. Improving the pointwise convolution structure is therefore the best choice for optimizing a lightweight model. To address the problem that the pointwise convolutions in MobileNetV1 and MobileNetV2 consume too many computational resources, we designed two novel blocks, Ghost-PE and Ghost-PC. First, to optimize channel-expanding pointwise convolution (where the number of input channels is smaller than the number of output channels), Ghost-PE makes full use of the feature maps generated by the main convolution of the Ghost module, and adds global average pooling and depthwise convolution operations to enhance the information in the feature maps generated by the cheap convolution. Second, to optimize channel-compressing pointwise convolution (where the number of input channels is larger than the number of output channels), Ghost-PC adjusts the Ghost-PE block to make full use of the features generated by the cheap convolution to enhance the channel information. Finally, we optimized the MobileNetV1 and MobileNetV2 models with the Ghost-PC and Ghost-PE blocks, and tested them on the Food-101, CIFAR and Mini-ImageNet datasets. The experimental results show that, compared with other methods, Ghost-PE and Ghost-PC maintain relatively high accuracy with a small number of parameters.

    Citation: Xinzheng Xu, Yanyan Ding, Zhenhu Lv, Zhongnian Li, Renke Sun. Optimized pointwise convolution operation by Ghost blocks[J]. Electronic Research Archive, 2023, 31(6): 3187-3199. doi: 10.3934/era.2023161
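    The parameter saving that motivates replacing a pointwise convolution with a Ghost-style block can be sketched arithmetically. The following is a minimal illustration of GhostNet-style parameter accounting, which the abstract's Ghost-PE/Ghost-PC blocks build on; the ratio `s` and cheap-kernel size `d` are assumed example values, and the extra pooling/depthwise operations of Ghost-PE and Ghost-PC are not counted here.

```python
def pointwise_params(c_in: int, c_out: int) -> int:
    """A standard 1x1 convolution needs one weight per (input, output) channel pair."""
    return c_in * c_out


def ghost_params(c_in: int, c_out: int, s: int = 2, d: int = 3) -> int:
    """Ghost module: a primary 1x1 convolution produces c_out // s 'intrinsic'
    feature maps, and cheap d x d depthwise convolutions generate the remaining
    (s - 1) * (c_out // s) 'ghost' maps from them (biases ignored)."""
    intrinsic = c_out // s
    primary = c_in * intrinsic            # main 1x1 convolution
    cheap = (s - 1) * intrinsic * d * d   # cheap depthwise operations
    return primary + cheap


if __name__ == "__main__":
    c_in, c_out = 128, 256                 # illustrative channel counts
    std = pointwise_params(c_in, c_out)    # 128 * 256 = 32768
    ghost = ghost_params(c_in, c_out)      # 128 * 128 + 1 * 128 * 9 = 17536
    print(std, ghost, round(std / ghost, 2))
```

    For large channel counts the cheap depthwise term is negligible, so the compression approaches the ratio `s` (here close to 2x), which is why replacing pointwise convolutions is where a lightweight model has the most to gain.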


  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
