Research article

Semi-supervised random forest regression model based on co-training and grouping with information entropy for evaluation of depression symptoms severity

  • Semi-supervised learning has long been a hot topic in machine learning: it uses large amounts of unlabeled data to improve model performance. This paper combines the co-training strategy with random forests to propose a novel semi-supervised regression algorithm, the semi-supervised random forest regression model based on co-training and grouping with information entropy (E-CoGRF), and applies it to the evaluation of depression symptom severity. The algorithm inherits the ensemble characteristics of the random forest and combines well with co-training. To balance the accuracy and diversity of the co-trained random forests, the algorithm introduces a grouping strategy for the decision trees. Moreover, information entropy is used to measure confidence, which avoids unnecessary repeated training and improves the efficiency of the model. For the practical evaluation of depression symptom severity, we collected cognitive behavioral data on emotional conflict in depressive affective disorder, and on this basis carried out feature construction and normalization preprocessing. Finally, the model was tested on 35 labeled and 80 unlabeled depression patients. The proposed algorithm obtained MAE (mean absolute error) = 3.63 and RMSE (root mean squared error) = 4.50, better than other semi-supervised regression algorithms. The proposed method effectively alleviates the modeling difficulties caused by insufficient labeled samples and has important reference value for the diagnosis of depression symptom severity.
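As a rough illustration of the entropy-based confidence idea described in the abstract, the sketch below scores each unlabeled sample by the Shannon entropy of a histogram of the individual trees' predictions and keeps the samples on which the trees agree most. The histogram binning and all function names are assumptions for illustration, not the paper's exact E-CoGRF procedure.

```python
import numpy as np

def prediction_entropy(tree_preds, n_bins=5):
    """Shannon entropy of an ensemble's per-tree predictions for one sample.

    tree_preds: 1-D array of predictions from the individual trees.
    Lower entropy -> the trees agree -> higher pseudo-labeling confidence.
    """
    counts, _ = np.histogram(tree_preds, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def select_confident(unlabeled_preds, k=2, n_bins=5):
    """Indices of the k unlabeled samples whose per-tree predictions
    have the lowest entropy (highest tree agreement)."""
    ents = [prediction_entropy(row, n_bins) for row in unlabeled_preds]
    return np.argsort(ents)[:k]

# Per-tree predictions for 3 unlabeled samples (rows) from a 6-tree forest.
preds = np.array([
    [10.1, 10.0, 10.2, 9.9, 10.1, 10.0],   # trees agree -> low entropy
    [5.0, 12.0, 8.0, 15.0, 3.0, 20.0],     # trees disagree -> high entropy
    [7.0, 7.1, 6.9, 7.0, 7.2, 7.1],        # trees agree
])
print(select_confident(preds, k=2))
```

In a co-training loop, each forest would pseudo-label only the samples selected this way for its peer, avoiding the repeated retraining that a validation-error-based confidence measure requires.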

    Citation: Shengfu Lu, Xin Shi, Mi Li, Jinan Jiao, Lei Feng, Gang Wang. Semi-supervised random forest regression model based on co-training and grouping with information entropy for evaluation of depression symptoms severity[J]. Mathematical Biosciences and Engineering, 2021, 18(4): 4586-4602. doi: 10.3934/mbe.2021233

    Related Papers:

    [1] Bo An . Construction and application of Chinese breast cancer knowledge graph based on multi-source heterogeneous data. Mathematical Biosciences and Engineering, 2023, 20(4): 6776-6799. doi: 10.3934/mbe.2023292
    [2] Jiajia Jiao, Xiao Xiao, Zhiyu Li . dm-GAN: Distributed multi-latent code inversion enhanced GAN for fast and accurate breast X-ray image automatic generation. Mathematical Biosciences and Engineering, 2023, 20(11): 19485-19503. doi: 10.3934/mbe.2023863
    [3] Lingli Gan, Xiaoling Yin, Jiating Huang, Bin Jia . Transcranial Doppler analysis based on computer and artificial intelligence for acute cerebrovascular disease. Mathematical Biosciences and Engineering, 2023, 20(2): 1695-1715. doi: 10.3934/mbe.2023077
    [4] Yali Ouyang, Zhuhuang Zhou, Weiwei Wu, Jin Tian, Feng Xu, Shuicai Wu, Po-Hsiang Tsui . A review of ultrasound detection methods for breast microcalcification. Mathematical Biosciences and Engineering, 2019, 16(4): 1761-1785. doi: 10.3934/mbe.2019085
    [5] Jun Gao, Qian Jiang, Bo Zhou, Daozheng Chen . Convolutional neural networks for computer-aided detection or diagnosis in medical image analysis: An overview. Mathematical Biosciences and Engineering, 2019, 16(6): 6536-6561. doi: 10.3934/mbe.2019326
    [6] Muhammad Bilal Shoaib Khan, Atta-ur-Rahman, Muhammad Saqib Nawaz, Rashad Ahmed, Muhammad Adnan Khan, Amir Mosavi . Intelligent breast cancer diagnostic system empowered by deep extreme gradient descent optimization. Mathematical Biosciences and Engineering, 2022, 19(8): 7978-8002. doi: 10.3934/mbe.2022373
    [7] Jian-xue Tian, Jue Zhang . Breast cancer diagnosis using feature extraction and boosted C5.0 decision tree algorithm with penalty factor. Mathematical Biosciences and Engineering, 2022, 19(3): 2193-2205. doi: 10.3934/mbe.2022102
    [8] Xiao Zou, Jintao Zhai, Shengyou Qian, Ang Li, Feng Tian, Xiaofei Cao, Runmin Wang . Improved breast ultrasound tumor classification using dual-input CNN with GAP-guided attention loss. Mathematical Biosciences and Engineering, 2023, 20(8): 15244-15264. doi: 10.3934/mbe.2023682
    [9] Xiaoli Zhang, Ziying Yu . Pathological analysis of hesperetin-derived small cell lung cancer by artificial intelligence technology under fiberoptic bronchoscopy. Mathematical Biosciences and Engineering, 2021, 18(6): 8538-8558. doi: 10.3934/mbe.2021423
    [10] Xi Lu, Xuedong Zhu . Automatic segmentation of breast cancer histological images based on dual-path feature extraction network. Mathematical Biosciences and Engineering, 2022, 19(11): 11137-11153. doi: 10.3934/mbe.2022519



    Breast tumors are a common clinical disease in women. Ultrasound has long been considered the most appropriate method for breast examination and tumor screening [1]. However, ultrasound depends strongly on the examiner's technique, and results differ between examiners, which has been regarded as a shortcoming of the modality [2]. With the development of imaging technology, more and more methods have been used to analyze ultrasound images objectively to help diagnose breast tumors [3], such as elastic ultrasound and computer-aided diagnosis technology.

    The S-Detect artificial intelligence system is a deep-learning system that assists ultrasound imaging diagnosis [4,5,6]. The system extracts morphological features according to the Breast Imaging Reporting and Data System (BI-RADS) recommended by the American College of Radiology. Combined with the pathological results of masses, this information can be used to diagnose breast lumps automatically. Two-dimensional breast ultrasound is widely used in diagnosing breast lesions, but because of its strong operator dependence, the accuracy and repeatability of BI-RADS classification between operators need to be improved [7,8,9]. An artificial intelligence-assisted diagnosis system eliminates the influence of many human factors; it can analyze images objectively and efficiently, improving doctors' working efficiency and diagnostic performance [10,11,12].

    One of the main problems in ultrasound diagnosis is that senior doctors' diagnostic performance is significantly higher than that of junior doctors, and training a junior doctor often takes four years or more. The use of AI could greatly reduce the difficulty of training doctors. The purpose of this study was to evaluate the value of the S-Detect artificial intelligence system in the auxiliary diagnosis of benign and malignant breast masses.

    From November 2019 to June 2020, patients with breast masses undergoing ultrasound examination in the ultrasound imaging department of our hospital were randomly collected.

    Inclusion criteria: a diagnosis with definite pathological support; adult patients; hospitalized patients.

    Exclusion criteria: the mass boundary was not clear; blood-flow signals, arrows, or text appeared on the image; patients who could not cooperate with the examination.

    Finally, 40 patients were enrolled in the study cohort. All patients were female, with a mean age of 50.9 ± 13.9 years. These patients were diagnosed with single or multiple breast lesions, but only one typical image was selected per patient. This study was approved by the ethics committee of the First People's Hospital of Anqing City, and written informed consent was obtained from the patients.

    A Samsung RS80A ultrasonic diagnostic instrument with an L3-12A probe (frequency 3–12 MHz) was used for the breast examinations. The patient was placed in the supine position with her hands over her head to expose the scanning area of the mammary glands and axilla as fully as possible. The scan was performed by two ultrasound doctors (engaged in breast ultrasound > 3 years) for routine two-dimensional (2D) gray-scale and color Doppler ultrasound of the breast. The 2D images were collected, and the location, size, shape, boundary, internal echo, posterior and lateral echo, blood flow, and spectrum were observed. This information was evaluated using the BI-RADS classification.

    Then, the S-Detect mammary gland mode was selected to display the 2D transverse and longitudinal gray-scale sections of the mass. After the center of the lesion was clicked, the system automatically outlined the lesion boundary as the region of interest. If the automatically drawn border did not match the actual edge of the mass, the operator could readjust and redraw it. After the most appropriate boundary was selected, the system automatically listed the various characteristics of the mass (size, depth, shape, boundary, internal echo, and so on) and the diagnostic result, probably benign or probably malignant (Figures 1–3).

    Figure 1.  Malignant breast mass from an 82-year-old patient. A: Two-dimensional gray-scale ultrasound image; B: Elastic image of the nodule; C: Transverse section image in the S-Detect system; D: Longitudinal section image in the S-Detect system. The diagnosis was probably malignant.
    Figure 2.  Malignant breast mass from a 59-year-old patient. A: Two-dimensional gray-scale ultrasound image; B: Elastic image of the nodule; C: Transverse section image in the S-Detect system; D: Longitudinal section image in the S-Detect system. The diagnosis was probably malignant.
    Figure 3.  Benign breast mass from a 27-year-old patient. A: Two-dimensional gray-scale ultrasound image; B: Elastic image of the nodule; C: Transverse section image in the S-Detect system; D: Longitudinal section image in the S-Detect system. The diagnosis was probably benign.

    Finally, the mode was switched to breast elastography, and the patient was asked to hold her breath while no pressure was applied. Satisfactory elastic images were collected. The elastogram colors ranged from green to blue, reflecting tissue hardness: very hard lesions appear dark blue and tend to be malignant, while soft lesions appear light green and tend to be benign.

    The BI-RADS classification of each breast mass was performed separately by a senior physician (engaged in breast ultrasound > 15 years) and a junior group. Each was further divided into two groups according to whether S-Detect AI assistance was used: in one group, sonographers made an initial BI-RADS classification of each mass based on conventional ultrasound images alone; in the other, the physician re-classified each mass with the aid of the S-Detect diagnosis.

    Since each mass was examined more than twice by randomly assigned sonographers before the final surgical biopsy, the multiple BI-RADS classifications of all lesions obtained by these sonographers were defined in this study as the junior data set. The S-Detect diagnoses were likewise combined with the junior results. A BI-RADS classification of Category ≤ 4a was defined as a benign lesion, while Category ≥ 4b was defined as malignant.

    The S-Detect diagnosis of each breast mass in both the transverse and longitudinal sections was recorded: B/B when both sections were likely benign, M/B when the diagnoses of the two sections differed, and M/M when both sections were likely malignant. An S-Detect result of B/B was defined as a benign lesion, while M/B or M/M was defined as malignant.

    According to the elastography guidelines, the Tsukuba score (elasticity score, ES) was determined for each breast mass, and the ES values formed the ES group data. A Tsukuba score of ES ≤ 3 was defined as a benign lesion, while ES ≥ 4 was defined as malignant.
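The three dichotomization rules above (BI-RADS category, S-Detect section results, and the Tsukuba score) can be sketched as simple predicates. This is a minimal illustration; the function names are assumptions, not from the paper.

```python
def birads_malignant(category: str) -> bool:
    """BI-RADS rule used in the study: Category <= 4a -> benign,
    Category >= 4b -> malignant."""
    order = ["2", "3", "4a", "4b", "4c", "5"]
    return order.index(category) >= order.index("4b")

def sdetect_malignant(transverse: str, longitudinal: str) -> bool:
    """S-Detect rule: B/B -> benign; M/B or M/M -> malignant."""
    return "M" in (transverse, longitudinal)

def elasticity_malignant(es: int) -> bool:
    """Tsukuba elasticity score: ES <= 3 -> benign, ES >= 4 -> malignant."""
    return es >= 4

print(birads_malignant("4a"), sdetect_malignant("M", "B"),
      elasticity_malignant(3))  # -> False True False
```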

    SPSS 21.0 software was used for statistical analysis. Qualitative data were presented as rates, and the results of different diagnostic methods for the same lesions were compared with the paired chi-square test.
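The paired chi-square comparison of two diagnostic methods on the same lesions is conventionally McNemar's test, which depends only on the discordant pairs. A minimal sketch with the continuity correction follows; the example counts are hypothetical, not the study's data.

```python
def mcnemar_chi2(b: int, c: int) -> float:
    """Continuity-corrected McNemar statistic for paired diagnostic results.

    b: lesions classified correctly by method 1 but wrongly by method 2.
    c: lesions classified wrongly by method 1 but correctly by method 2.
    The statistic is compared against the chi-square distribution with
    1 degree of freedom (critical value 3.84 at alpha = 0.05).
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical discordant counts: method 2 flips 12 correct calls to wrong
# and fixes 3. The statistic exceeds 3.84, so the methods would differ
# significantly at the 5% level under these (made-up) counts.
print(round(mcnemar_chi2(12, 3), 3))
```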

    Using the pathological results as the gold standard, receiver operating characteristic (ROC) curves were drawn for the groups of diagnostic data to obtain the area under the curve (AUC), sensitivity, specificity, and diagnostic accuracy. The Kappa test was used to analyze the consistency of the diagnostic results between groups. Accuracy was calculated by the following formula:

    accuracy = (a + d) / n × 100%    (1)

    In the formula, "a" represents the number of true positives, "d" the number of true negatives, and "n" the total number of cases.
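Equation (1) computes accuracy from the confusion matrix; together with sensitivity and specificity it summarizes the diagnostic tables below. A minimal sketch follows. The counts used here are reconstructed to be consistent with the reported 95.8% sensitivity and 93.8% specificity for the AI group over 24 malignant and 16 benign lesions; they are an assumption for illustration, not raw figures from the paper.

```python
def diagnostic_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Sensitivity, specificity, and accuracy per Eq (1): (a + d) / n,
    where a = true positives (tp) and d = true negatives (tn)."""
    n = tp + fn + fp + tn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / n,
    }

# Reconstructed counts: 23/24 malignant lesions called malignant,
# 15/16 benign lesions called benign (assumed, not from the paper).
m = diagnostic_metrics(tp=23, fn=1, fp=1, tn=15)
print({k: round(v, 3) for k, v in m.items()})
```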

    Among the 40 patients, benign lesions accounted for 40.0% (16/40) and malignant lesions for 60.0% (24/40). The benign lesions comprised 4 cases of mammary gland disease, 8 cases of fibroadenoma, 2 cases of papilloma, and 2 cases of other benign tumors. The malignant lesions comprised 17 cases of invasive ductal carcinoma, 1 case of mucinous carcinoma, 1 case of papillary carcinoma, 3 cases of intraductal carcinoma, 1 case of invasive lobular carcinoma, and 1 case of intraductal carcinoma in situ.

    With the pathological results as the gold standard, the S-Detect AI system alone showed high sensitivity, specificity, and accuracy (95.8%, 93.8%, and 89.6%, respectively) in distinguishing benign from malignant breast masses. The diagnostic efficiency of two-dimensional gray-scale ultrasound was the lowest, with both sensitivity and specificity below 80%. The elastic image had relatively high specificity (93.8%) but relatively low sensitivity (79.2%). Details are shown in Table 1 and Figure 4.

    Table 1.  Diagnostic effectiveness of different methods.
    Group AUC Sensitivity (%) Specificity (%) Accuracy (%)
    AI 0.948 95.8 93.8 89.6
    Elastic image 0.865 79.2 93.8 73.0
    Gray-scale 0.719 75.0 68.8 43.8

    Figure 4.  The ROC curve of different methods.

    Before the S-Detect AI system was used, the BI-RADS classifications of the 40 patients were Category 3 in 8 cases, Category 4a in 13 cases, Category 4b in 11 cases, Category 4c in 6 cases, and Category 5 in 2 cases. As Table 2 shows, doctors mainly used the S-Detect auxiliary system to upgrade or downgrade Category 4a and 4b lesions. For BI-RADS 3, 4c, and 5 breast lesions, differences in the S-Detect diagnosis did not substantially affect the doctors' judgment.

    Table 2.  The BI-RADS classification with and without S-Detect AI assistance.
    Before assistance | S-Detect | After assistance
    Category 3    8   | B/B   7  | Category 3   8
                      | M/B   1  |
    Category 4a  13   | B/B   6  | Category 3   3
                      | M/B   4  | Category 4a  4
                      | M/M   3  | Category 4b  4
                      |          | Category 4c  2
    Category 4b  11   | B/B   2  | Category 4a  2
                      | M/B   4  | Category 4b  6
                      | M/M   5  | Category 4c  3
    Category 4c   6   | M/B   2  | Category 4c  6
                      | M/M   4  |
    Category 5    2   | M/M   2  | Category 5   2


    With the assistance of the S-Detect AI system, the accuracy of BI-RADS classification improved significantly. Details are shown in Table 3 and Figure 5.

    Table 3.  Diagnostic effectiveness of auxiliary S-Detect AI system.
    Group AUC Sensitivity (%) Specificity (%) Accuracy (%)
    Senior 0.802 79.2 81.3 60.5
    Junior 0.719 75.0 68.8 43.8
    Senior+AI 0.969 100.0 93.8 93.8
    Junior+AI 0.948 95.8 93.8 89.6

    Figure 5.  The ROC curve of auxiliary S-Detect AI system.

    The S-Detect artificial intelligence system can effectively differentiate benign from malignant breast lesions. With the assistance of this technology, the BI-RADS classification performance of senior sonographers showed an upward trend [13]. Combining the S-Detect auxiliary diagnosis with the results of routine ultrasound examination can significantly improve diagnostic accuracy. This technique can therefore enhance the quality of the routine breast ultrasound examinations patients receive and reduce missed diagnoses and misdiagnoses. Both the S-Detect technique and the elasticity scoring technique can improve the sonographer's efficiency in breast cancer diagnosis, although their diagnostic and auxiliary diagnostic abilities differ somewhat [14].

    Artificial intelligence technology based on big data and deep-learning algorithms has been widely integrated into the medical field [15,16,17]. Combining it with medical imaging can reduce the workload of imaging physicians and improve overall efficiency. Computer-aided diagnosis systems have the advantages of objectivity, stability, and high repeatability, and have already been studied and applied for lung masses, breast masses, and cardiovascular disease. The S-Detect auxiliary diagnostic system is intelligent and efficient, using the physico-acoustic characteristics of 2D gray-scale images of breast lesions for rapid differential diagnosis. In this study, the S-Detect technique achieved high diagnostic efficiency, with sensitivity and specificity exceeding 90% and accuracy close to 90%. With the aid of this technology, the diagnostic efficiency of senior doctors showed an upward trend. The S-Detect technology can thus provide effective advice and increase diagnostic confidence for junior sonographers interpreting 2D gray-scale images.

    According to the BI-RADS classification standard recommended by the American College of Radiology, a Category 3 mass has a malignancy probability of 2% or less, and a Category 4a mass has a malignancy probability of 2–10%. In this study, 93.8% (15/16) of the benign lesions were classified as Category 3 or 4a, and the pathological findings were mostly adenopathy or fibroadenoma of the breast. Clinicians perform pathological biopsy or surgical excision of masses classified as Category 4a or above, most of which turn out to be pathologically benign. The S-Detect technology performs well in upgrading and downgrading BI-RADS 4a and 4b masses, improving the accuracy of the sonographer's diagnosis and reducing unnecessary biopsies and surgeries for BI-RADS 4a lesions. Two-dimensional ultrasound examination of the breast is highly operator-dependent: differences in operating technique and image-interpretation ability among ultrasound doctors of different experience levels lead to poor inter-operator repeatability [18,19,20]. The S-Detect system automatically analyzes the 2D gray-scale image of a breast mass, allowing more objective identification regardless of the operator's years of service, experience, or image-reading ability. In this study, the diagnostic accuracy of breast ultrasound performed by randomly assigned sonographers in our hospital was less than 80%; with the help of artificial intelligence, the accuracy of the joint diagnosis rose to about 90% for both senior and junior doctors. This narrowed the difference in diagnostic efficiency between junior and senior doctors, thereby improving the hospital's overall quality of care [10].

    Elasticity scoring technology reflects the hardness characteristics of breast masses [21,22], using their elasticity information to improve the diagnostic efficiency of ultrasound doctors. The S-Detect technique uses the same physico-acoustic information and diagnostic classification as conventional breast ultrasound, complementing and optimizing the two-dimensional information of conventional ultrasound. In this study, the elasticity scoring technique was less sensitive than the S-Detect technique, while its specificity was similar. Still, the combination of elasticity scoring with routine breast ultrasound was slightly better than the S-Detect technique alone, possibly because physical information from different sources is complementary.

    In summary, the S-Detect artificial intelligence system can improve the accuracy of breast cancer diagnosis by ultrasound physicians and the quality of routine breast ultrasound diagnosis. In the future, multi-center studies with larger sample sizes are needed to verify the system's prospects for clinical application. The S-Detect technology also has limitations: its feature analysis does not include important information such as blood-flow signals, calcifications, and mass hardness, and the system makes errors in identifying masses < 1 cm in diameter.

    All the doctors in the ultrasound department of the First People's Hospital of Anqing City are acknowledged for their contribution to data collection.

    All authors declare no conflicts of interest in this paper.



    [1] G. Casalino, G. Castellano, F. Galetta, K. Kaczmarek-Majer, Dynamic incremental semi-supervised fuzzy clustering for bipolar disorder episode prediction, in International Conference on Discovery Science, Springer, Cham, (2020), 79-93.
    [2] J. C. Wakefield, S. Demazeux, Introduction: Depression, one and many, in Sadness or Depression?, Springer, Netherlands, (2016), 1-15.
    [3] M. E. Gerbasi, A. Eldar-Lissai, S. Acaster, M. Fridman, V. Bonthapally, P. Hodgkins, et al., Associations between commonly used patient-reported outcome tools in postpartum depression clinical practice and the Hamilton Rating Scale for Depression, Arch. Women's Mental Health, 23 (2020), 727-735.
    [4] C. L. Allan, C. E. Sexton, N. Filippini, A. Topiwala, A. Mahmood, E. Zsoldos, et al., Sub-threshold depressive symptoms and brain structure: A magnetic resonance imaging study within the Whitehall Ⅱ cohort, J. Affective Disord., 204 (2016), 219-225.
    [5] X. Li, Z. Jing, B. Hu, J. Zhu, N. Zhong, M. Li, et al., A resting-state brain functional network study in MDD based on minimum spanning tree analysis and the hierarchical clustering, Complexity, 2017 (2017), 9514369.
    [6] K. Yoshida, Y. Shimizu, J. Yoshimoto, M. Takamura, G. Okada, Y. Okamoto, et al., Prediction of clinical depression scores and detection of changes in whole-brain using resting-state functional MRI data with partial least squares regression, Plos One, 12 (2017), e0179638.
    [7] S. Sun, X. Li, J. Zhu, Y. Wang, R. La, X. Zhang, et al., Graph theory analysis of functional connectivity in major depression disorder with high-density resting state EEG data, IEEE Trans. Neural Syst. Rehabil. Eng., 27 (2019), 429-439.
    [8] U. R. Acharya, S. L. Oh, Y. Hagiwara, J. Tan, H. Adeli, D. P. Subha, Automated EEG-based screening of depression using deep convolutional neural network, Comput. Methods Prog. Biomed., 161 (2018), 103-113. doi: 10.1016/j.cmpb.2018.04.012
    [9] R. W. Lam, S. H. Kennedy, R. S. McIntyre, A. Khullar, Cognitive dysfunction in major depressive disorder: effects on psychosocial functioning and implications for treatment, Can. J. Psychiatry, 59 (2014), 649-654. doi: 10.1177/070674371405901206
    [10] R. S. McIntyre, D. S. Cha, J. K. Soczynska, H. O. Woldeyohannes, L. A. Gallaugher, P. Kudlow, et al., Cognitive deficits and functional outcomes in major depressive disorder: determinants, substrates, and treatment interventions, Depression Anxiety, 30 (2013), 515-527.
    [11] Y. Kang, X. Jiang, Y. Yin, Y. Shang, X. Zhou, Deep transformation learning for depression diagnosis from facial images, in Chinese Conference on Biometric Recognition, Springer, Cham, (2017), 13-22.
    [12] A. Haque, M. Guo, A. S. Miner, F. Li, Measuring depression symptom severity from spoken language and 3D facial expressions, preprint, arXiv: 1811.08592.
    [13] M. Muzammel, H. Salam, Y. Hoffmann, M. Chetouani, A. Othmani, AudVowelConsNet: A phoneme-level based deep CNN architecture for clinical depression diagnosis, Mach. Learn. Appl., 2 (2020), 100005.
    [14] J. Zhu, J. Li, X. Li, J. Rao, Y. Hao, Z. Ding, et al., Neural basis of the emotional conflict processing in major depression: ERPs and source localization analysis on the N450 and P300 components, Front. Human Neurosci., 12 (2018), 214.
    [15] B. W. Haas, K. Omura, R. T. Constable, T. Canli, Interference produced by emotional conflict associated with anterior cingulate activation, Cognit. Affective Behav. Neurosci., 6 (2006), 152-156. doi: 10.3758/CABN.6.2.152
    [16] T. Armstrong, B. O. Olatunji, Eye tracking of attention in the affective disorders: a meta-analytic review and synthesis, Clin. Psychol. Rev., 32 (2012), 704-723. doi: 10.1016/j.cpr.2012.09.004
    [17] A. Duque, C. Vázquez, Double attention bias for positive and negative emotional faces in clinical depression: Evidence from an eye-tracking study, J Behav. Ther. Exp. Psychiatry, 46 (2015), 107-114. doi: 10.1016/j.jbtep.2014.09.005
    [18] S. P. Karparova, A. Kersting, T. Suslow, Disengagement of attention from facial emotion in unipolar depression, Psychiatry Clin. Neurosci., 59 (2005), 723-729. doi: 10.1111/j.1440-1819.2005.01443.x
    [19] M. P. Caligiuri, J. Ellwanger, Motor and cognitive aspects of motor retardation in depression, J. Affective Disord., 57 (2000), 83-93. doi: 10.1016/S0165-0327(99)00068-3
    [20] A. Etkin, T. Egner, D. M. Peraza, E. R. Kandel, J. Hirsch, Resolving emotional conflict: a role for the rostral anterior cingulate cortex in modulating activity in the amygdala, Neuron, 51 (2006), 871-882. doi: 10.1016/j.neuron.2006.07.029
    [21] K. Mohan, A. Seal, O. Krejcar, A. Yazidi, FER-net: facial expression recognition using deep neural net, Neural Comput. Appl., (2021), 1-12.
    [22] K. Mohan, A. Seal, O. Krejcar, A. Yazidi, Facial expression recognition using local gravitational force descriptor-based deep convolution neural networks, IEEE Trans. Instrum. Meas., 70 (2020), 1-12.
    [23] Z. Zhou, M. Li, Semi-supervised regression with co-training, in IJCAI, (2005), 908-913.
    [24] M. A. Lei, W. Xili, Semi-supervised regression based on support vector machine co-training, Comput. Eng. Appl., 47 (2011), 177-180.
    [25] Y. Q. Li, M. Tian, A semi-supervised regression algorithm based on co-training with SVR-KNN, in Advanced Materials Research, Trans Tech Publications Ltd, (2014), 2914-2918.
    [26] L. Bao, X. Yuan, Z. Ge, Co-training partial least squares model for semi-supervised soft sensor development, Chemom. Intell. Lab. Syst., 147 (2015), 75-85. doi: 10.1016/j.chemolab.2015.08.002
    [27] D. Li, Y. Liu, D. Huang, Development of semi-supervised multiple-output soft-sensors with Co-training and tri-training MPLS and MRVM, Chemom. Intell. Lab. Syst., 199 (2020), 103970.
    [28] M. F. A. Hady, F. Schwenker, G. Palm, Semi-supervised learning for regression with co-training by committee, in International Conference on Artificial Neural Networks, Springer, Berlin, Heidelberg, (2009), 121-130.
    [29] F. Saitoh, Predictive modeling of corporate credit ratings using a semi-supervised random forest regression, 2016 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), IEEE, (2016), 429-433.
    [30] J. Levatić, M. Ceci, D. Kocev, S. Džeroski, Self-training for multi-target regression with tree ensembles, Knowl. Based Syst., 123 (2017), 41-60. doi: 10.1016/j.knosys.2017.02.014
    [31] S. Xue, S. Wang, X. Kong, J. Qiu, Abnormal neural basis of emotional conflict control in treatment-resistant depression: An event-related potential study, Clin. EEG Neurosci., 48 (2017), 103-110. doi: 10.1177/1550059416631658
    [32] N. Tottenham, J. W. Tanaka, A. C. Leon, T. McCarry, M. Nurse, T. A. Hare, et al., The NimStim set of facial expressions: Judgments from untrained research participants, Psychiatry Res., 168 (2009), 242-249.
    [33] M. Lei, J. Yang, S. Wang, L. Zhao, P. Xia, G. Jiang, et al., Semi-supervised modeling and compensation for the thermal error of precision feed axes, Int. J. Adv. Manuf. Technol., 104 (2019), 4629-4640.
  • This article has been cited by:

    1. Peng Xue, Mingyu Si, Dongxu Qin, Bingrui Wei, Samuel Seery, Zichen Ye, Mingyang Chen, Sumeng Wang, Cheng Song, Bo Zhang, Ming Ding, Wenling Zhang, Anying Bai, Huijiao Yan, Le Dang, Yuqian Zhao, Remila Rezhake, Shaokai Zhang, Youlin Qiao, Yimin Qu, Yu Jiang, Unassisted Clinicians Versus Deep Learning–Assisted Clinicians in Image-Based Cancer Diagnostics: Systematic Review With Meta-analysis, 2023, 25, 1438-8871, e43832, 10.2196/43832
    2. Anastasia-Maria Leventi-Peetz, Kai Weber, Probabilistic machine learning for breast cancer classification, 2022, 20, 1551-0018, 624, 10.3934/mbe.2023029
    3. Peng-fei Lyu, Yu Wang, Qing-Xiang Meng, Ping-ming Fan, Ke Ma, Sha Xiao, Xun-chen Cao, Guang-Xun Lin, Si-yuan Dong, Mapping intellectual structures and research hotspots in the application of artificial intelligence in cancer: A bibliometric analysis, 2022, 12, 2234-943X, 10.3389/fonc.2022.955668
    4. Xiaolei Wang, Shuang Meng, Diagnostic accuracy of S-Detect to breast cancer on ultrasonography: A meta-analysis (PRISMA), 2022, 101, 1536-5964, e30359, 10.1097/MD.0000000000030359
    5. Samara Acosta-Jiménez, Javier Camarillo-Cisneros, Abimael Guzmán-Pando, Susana Aideé González-Chávez, Jorge Issac Galván-Tejada, Graciela Ramírez-Alonso, César Francisco Pacheco-Tena, Rosa Elena Ochoa-Albiztegui, 2023, Chapter 8, 978-3-031-18255-6, 83, 10.1007/978-3-031-18256-3_8
    6. Yuqun Wang, Lei Tang, Pingping Chen, Man Chen, The Role of a Deep Learning-Based Computer-Aided Diagnosis System and Elastography in Reducing Unnecessary Breast Lesion Biopsies, 2022, 15268209, 10.1016/j.clbc.2022.12.016
    7. Peizhen Huang, Bin Zheng, Mengyi Li, Lin Xu, Sajjad Rabbani, Abdulilah Mohammad Mayet, Chengchun Chen, Beishu Zhan, He Jun, Ateeq Ur Rehman, The Diagnostic Value of Artificial Intelligence Ultrasound S-Detect Technology for Thyroid Nodules, 2022, 2022, 1687-5273, 1, 10.1155/2022/3656572
    8. Qiyu Liu, Meijing Qu, Lipeng Sun, Hui Wang, Accuracy of ultrasonic artificial intelligence in diagnosing benign and malignant breast diseases, 2021, 100, 0025-7974, e28289, 10.1097/MD.0000000000028289
    9. N.N. Bayandina, E.N. Slavnova, A cytological method in the early diagnosis of cervical cancer: evolution, principles, technologies, prospects, 2023, 12, 2305-218X, 49, 10.17116/onkolog20231202149
    10. Peng Sun, 2023, Chapter 35, 978-981-19-9372-5, 323, 10.1007/978-981-19-9373-2_35
    11. Hee Jeong Kim, Hak Hee Kim, Ki Hwan Kim, Ji Sung Lee, Woo Jung Choi, Eun Young Chae, Hee Jung Shin, Joo Hee Cha, Woo Hyun Shim, Use of a commercial artificial intelligence-based mammography analysis software for improving breast ultrasound interpretations, 2024, 34, 1432-1084, 6320, 10.1007/s00330-024-10718-3
    12. E. L. Teodozova, E. Yu. Khomutova, Artificial intelligence in radial diagnostics of breast cancer, 2023, 3, 2782-3024, 26, 10.61634/2782-3024-2023-12-26-35
    13. Xiao Zou, Jintao Zhai, Shengyou Qian, Ang Li, Feng Tian, Xiaofei Cao, Runmin Wang, Improved breast ultrasound tumor classification using dual-input CNN with GAP-guided attention loss, 2023, 20, 1551-0018, 15244, 10.3934/mbe.2023682
    14. Lu Li, Hongyan Deng, Xinhua Ye, Yong Li, Jie Wang, Comparison of the diagnostic efficacy of mathematical models in distinguishing ultrasound imaging of breast nodules, 2023, 13, 2045-2322, 10.1038/s41598-023-42937-x
    15. Na Li, Wanling Liu, Yunyun Zhan, Yu Bi, Xiabi Wu, Mei Peng, Value of S-Detect combined with multimodal ultrasound in differentiating malignant from benign breast masses, 2024, 55, 2090-4762, 10.1186/s43055-023-01183-x
    16. Pengjie Song, Li Zhang, Longmei Bai, Qing Wang, Yanlei Wang, Diagnostic performance of ultrasound with computer-aided diagnostic system in detecting breast cancer, 2023, 9, 24058440, e20712, 10.1016/j.heliyon.2023.e20712
    17. Qing Dan, Ziting Xu, Hannah Burrows, Jennifer Bissram, Jeffrey S. A. Stringer, Yingjia Li, Diagnostic performance of deep learning in ultrasound diagnosis of breast cancer: a systematic review, 2024, 8, 2397-768X, 10.1038/s41698-024-00514-z
    18. Panpan Zhang, Min Zhang, Menglin Lu, Chaoying Jin, Gang Wang, Xianfang Lin, Comparative Analysis of the Diagnostic Value of S-Detect Technology in Different Planes Versus the BI-RADS Classification for Breast Lesions, 2024, 10766332, 10.1016/j.acra.2024.08.005
    19. Zhuohua Lin, Ligang Cui, Yan Xu, Qiang Fu, Youjing Sun, Feasibility and potential of intraoperative ultrasound in arthroscopy of femoroacetabular impingement, 2024, 2054-8397, 10.1093/jhps/hnad050
    20. Manisha Bahl, Jung Min Chang, Lisa A. Mullen, Wendie A. Berg, Artificial Intelligence for Breast Ultrasound: AJR Expert Panel Narrative Review, 2024, 0361-803X, 1, 10.2214/AJR.23.30645
    21. Jie He, Nan Liu, Li Zhao, New progress in imaging diagnosis and immunotherapy of breast cancer, 2025, 16, 1664-3224, 10.3389/fimmu.2025.1560257
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
