Research article

Co-supervised learning paradigm with conditional generative adversarial networks for sample-efficient classification


  • Received: 30 August 2022; Revised: 8 December 2022; Accepted: 9 December 2022; Published: 19 December 2022
  • Classification using supervised learning requires annotating a large amount of class-balanced data for model training and testing. This requirement practically limits the scope of supervised learning applications, particularly deep learning. To address the issues associated with limited and imbalanced data, this paper introduces a sample-efficient co-supervised learning paradigm (SEC-CGAN), in which a conditional generative adversarial network (CGAN) is trained alongside the classifier and supplements the annotated data with semantics-conditioned, confidence-aware synthesized examples during the training process. In this setting, the CGAN not only serves as a co-supervisor but also provides complementary, quality examples to aid the classifier training in an end-to-end fashion. Experiments demonstrate that the proposed SEC-CGAN outperforms the external classifier GAN (EC-GAN) and a baseline ResNet-18 classifier; for comparison, all classifiers in the above methods adopt the ResNet-18 architecture as the backbone. In particular, on the Street View House Numbers dataset, using 5% of the training data, SEC-CGAN achieves a test accuracy of 90.26%, as opposed to 88.59% by EC-GAN and 87.17% by the baseline classifier; on the highway image dataset, using 10% of the training data, SEC-CGAN achieves a test accuracy of 98.27%, compared to 97.84% by EC-GAN and 95.52% by the baseline classifier.
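    The confidence-aware selection of synthesized examples described above can be sketched as follows. This is a minimal illustrative sketch under assumed details, not the paper's implementation: the `filter_synthesized` helper, the 0.9 confidence threshold, and the check that the classifier's prediction agrees with the CGAN's conditioning label are all assumptions made for illustration.

    ```python
    import numpy as np

    def filter_synthesized(probs, cond_labels, threshold=0.9):
        """Keep a synthesized example only if the classifier's predicted class
        agrees with the label the CGAN was conditioned on, and the prediction
        confidence meets the threshold (confidence-aware selection)."""
        preds = probs.argmax(axis=1)      # classifier's predicted class per sample
        conf = probs.max(axis=1)          # confidence of that prediction
        return (preds == cond_labels) & (conf >= threshold)

    # Toy softmax outputs for four generated samples, conditioned on labels [0, 1, 1, 2]
    probs = np.array([
        [0.95, 0.03, 0.02],   # confident and matches conditioning label 0 -> kept
        [0.40, 0.55, 0.05],   # matches label 1 but below threshold -> dropped
        [0.05, 0.92, 0.03],   # confident and matches label 1 -> kept
        [0.70, 0.20, 0.10],   # predicts class 0 but conditioned on 2 -> dropped
    ])
    cond_labels = np.array([0, 1, 1, 2])
    mask = filter_synthesized(probs, cond_labels, threshold=0.9)
    print(mask.tolist())  # [True, False, True, False]
    ```

    In a training loop, the examples passing this filter would be appended to the annotated batch before the classifier update, which is how the CGAN supplies complementary examples end to end.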

    Citation: Hao Zhen, Yucheng Shi, Jidong J. Yang, Javad Mohammadpour Vehni, Co-supervised learning paradigm with conditional generative adversarial networks for sample-efficient classification, Applied Computing and Intelligence, 2023, 3(1): 13-26. https://doi.org/10.3934/aci.2023002







  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
