Research article

Robust safe semi-supervised learning framework for high-dimensional data classification

  • Received: 02 July 2024 · Revised: 18 August 2024 · Accepted: 29 August 2024 · Published: 04 September 2024
  • MSC : 68T10, 91C20

  • Abstract: In this study, we propose a robust safe semi-supervised learning framework for high-dimensional data classification. The framework comprises several pivotal components centered on symmetry. First, we introduce a risk regularization term that quantifies the uncertainty and potential risk associated with unlabeled samples in semi-supervised learning. Second, we define a new non-second-order statistical measure in the kernel space, termed the $ C_{p} $-Loss. This loss is symmetric, bounded, and non-negative, and therefore effectively suppresses the influence of noise points and outliers on model performance. Building on this learning framework, we develop a robust safe semi-supervised extreme learning machine (RS3ELM) and derive its generalization bound via Rademacher complexity. The output weight matrix of RS3ELM is solved by a fixed-point iteration method, and we provide a theoretical analysis of the algorithm's convergence and computational complexity. Empirical results on multiple benchmark datasets demonstrate the effectiveness of RS3ELM compared with several state-of-the-art semi-supervised learning models.
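    To make the fixed-point training step concrete, the sketch below shows one plausible reading of how output weights can be fitted under a bounded, correntropy-style loss via half-quadratic (fixed-point) iteration. It is a minimal illustration under stated assumptions, not the authors' implementation: the Gaussian residual weight $ e^{-e^2/2\sigma^2} $ stands in for the paper's $ C_{p} $-Loss, the risk regularization term for unlabeled samples is omitted, and all names (`train_robust_elm`, `sigma`, `lam`) are hypothetical.

```python
import numpy as np

def train_robust_elm(X, y, n_hidden=100, lam=1e-2, sigma=1.0,
                     n_iter=20, tol=1e-6, rng=None):
    """Illustrative sketch: ELM output weights fitted under a bounded,
    correntropy-style loss via fixed-point (half-quadratic) iteration.
    This is a stand-in for the paper's C_p-Loss objective, not the
    authors' exact algorithm."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Standard ELM: hidden-layer weights are drawn once and never trained.
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix, shape (n, n_hidden)

    # Initialize with ordinary ridge regression.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

    for _ in range(n_iter):
        e = y - H @ beta                     # residuals under current weights
        # Bounded-loss reweighting: large residuals (outliers) get weight ~ 0.
        w = np.exp(-e**2 / (2 * sigma**2))
        Hw = H * w[:, None]                  # row-weighted hidden outputs
        # Each fixed-point step is a weighted ridge regression.
        beta_new = np.linalg.solve(H.T @ Hw + lam * np.eye(n_hidden),
                                   Hw.T @ y)
        if np.linalg.norm(beta_new - beta) < tol * (np.linalg.norm(beta) + 1e-12):
            beta = beta_new
            break
        beta = beta_new
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

    Because each sample's weight decays with its residual, gross outliers contribute almost nothing to the normal equations at convergence; this reweighting mechanism is what gives bounded, non-second-order losses of this kind their robustness to noise points and anomalies.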

    Citation: Jun Ma, Xiaolong Zhu. Robust safe semi-supervised learning framework for high-dimensional data classification[J]. AIMS Mathematics, 2024, 9(9): 25705-25731. doi: 10.3934/math.20241256

  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)