
Immunotherapy for diffuse large B-cell lymphoma: current use of immune checkpoint inhibitors therapy

  • Received: 27 February 2023 Revised: 14 July 2023 Accepted: 17 August 2023 Published: 12 September 2023
  • Patients diagnosed with diffuse large B-cell lymphoma (DLBCL) have high cure rates with current treatment options, including immuno-polychemotherapy. However, around 30% of cases do not respond or develop relapsed disease, so new therapeutic options must be sought. In recent years, chimeric antigen receptor (CAR) T-cell therapy has been a strategy for patients with DLBCL in progression or relapse, although only 30–40% of cases achieve durable remissions. The programmed death-1 (PD-1) receptor regulates the T-cell-mediated immune response through binding to its ligands (PD-L1). Some tumor cells show high expression of PD-L1, which down-regulates T-cell activation. The beneficial antitumor activity of PD-1 and PD-L1 blockade has been widely demonstrated in certain solid organ malignancies; however, its utility in the treatment of lymphomas is complex. To date, several clinical trials have demonstrated the usefulness of these agents as an innovative therapeutic alternative in these tumors. In this review article, we evaluate the literature on the role of the PD-1/PD-L1 pathway in DLBCL and describe future strategies involving these new anticancer agents in this lymphoid neoplasm.

    Citation: Luis Miguel Juárez-Salcedo, Luis Manuel González, Samir Dalia. Immunotherapy for diffuse large B-cell lymphoma: current use of immune checkpoint inhibitors therapy[J]. AIMS Medical Science, 2023, 10(3): 259-272. doi: 10.3934/medsci.2023020




    The rapid development of emerging technologies such as computing, mobile computing and big data has ushered in the new media era [1,2]. As natives of the mobile Internet, college students have a strong desire to voice their opinions freely and a strong willingness to discuss public topics [3,4]. Today, they are among the most active user groups in mobile social media [5,6]. However, the increasingly rich variety of information channels has posed an unprecedented challenge to the information monopoly of mainstream media [7,8]. Negative and false information on the Internet is mixed into the massive flow of content, easily causing emotional fluctuations among college students [9,10]. A typical example is the impact of film and television works [11]. The film and television works available online vary widely, and to some extent they influence college students' attitudes toward life, positively or negatively [12]. This may even affect students' mental health and campus harmony [13,14]. Therefore, effective sentiment analysis of campus public opinion in mobile social media [15] is of great importance to ideological management in colleges and universities [16].

    Over the past few years, deep neural networks have been continuously explored for their powerful information-processing ability [17] and have become the mainstream technology for sentiment analysis [18]. The most popular deep sentiment analysis methods are built upon recurrent neural networks (RNNs) [19]. An RNN emphasizes the dependency among the word-building units of a text sequence and builds a global feature representation on this basis [20]. From a linguistic perspective, there is often contextual dependency among the characters of a text, which makes RNN-based methods a natural fit for such scenarios [21]. Typical models include long short-term memory (LSTM) and its bidirectional version [22]. However, existing methods consider only the vertical features of the text and ignore its potential horizontal features. In terms of model structure, they concentrate more on learning depth than on learning width [23,24].

    To fill this gap, this paper extends RNN-based models from a single scale to multi-dimensional scales. Keeping the depth fixed, the number of concurrent units is increased to enlarge the learning field [25]. This is expected to yield a more comprehensive semantic feature expression and to address the imbalance between model depth and model width. Therefore, taking English-text campus public opinion as the object, this paper proposes a novel sentiment analysis method based on multi-scale deep learning for college public opinions (abbreviated MSLSTM-CPO). To match the bidirectional dependency of semantic sequences, Bi-LSTM is selected as the basic networking unit. Vertically, a single English word is used as the minimum processing unit. Horizontally, two or more basic networking units form a parallel computing structure. As a result, the model is widened in the horizontal direction, realizing a trade-off between depth and breadth. The major contributions of this paper are summarized as follows:

    ● This work discusses and demonstrates semantic characteristics in both the vertical and horizontal directions.

    ● This paper develops MSLSTM-CPO, a novel sentiment analysis method based on multi-scale deep learning for college public opinions.

    ● This work conducts experiments to comprehensively assess the performance of the proposed method.

    In recent years, sentiment analysis has received increasing attention as an important research direction in natural language processing and has gradually become a research hotspot. Its development has mainly gone through three stages: lexicon-based methods, machine learning methods and deep learning methods.

    In the initial stage of sentiment analysis, lexicon-based counting of sentiment words was the main method. Although easy to understand, lexicon-based statistical methods do not generalise well. With the development of natural language processing, research on machine learning methods has also yielded many effective results [26,27,28]. Rashmi et al. [29] proposed a soft voting classifier integrating five baseline models (logistic regression, balanced random forest, eXtreme Gradient Boosting, random forest and support vector machine) for classifying mixed Indian languages; the method distinguishes positive, negative, neutral, mixed and non-emotional states with good results. Maipraditj et al. [30] proposed a machine learning-based approach that processes three different datasets, extracts features with the N-gram IDF method, and then uses automated machine learning to classify positive, neutral and negative emotions. However, traditional machine learning methods cannot combine semantic information with textual context, so deep learning has gradually become the research trend [31,32,33,34,35,36]. Huang et al. [37] explored changes in emotion in blended learning using LSTM-based text mining and epistemic network analysis. Jia et al. [38] constructed a sentiment classification model using BERT, CNN and attention mechanisms to mine contextual connections and features in text. Harendranath et al. [39] proposed a recurrent neural network-based method to classify the emotions of political comments.

    In summary, almost all related works address semantic analysis problems from the perspective of vertical-direction semantics. Semantic modeling in both the vertical and horizontal directions still needs deeper discussion. Therefore, the next section presents the proposed technical framework from this perspective.

    Currently, almost all effective natural language models are built by modelling natural utterances as sequences. The data for this experiment are English comment texts, and in most cases the semantic information of each comment keyword is closely related to the preceding text. Therefore, a neural network that can remember context is better suited to the sentiment analysis of these comments. The traditional LSTM model can remember the text and handle the internal information complexity and overload caused by long sequential data. To match the bidirectional dependency of semantic sequences, Bi-LSTM is chosen as the basic networking unit for this experiment. Hence, a novel sentiment analysis method for college public opinions via multi-scale deep learning is developed in this work. As shown in Figure 1, the workflow of the MSLSTM-CPO model consists of two parts: word-level semantic encoding and sentence-level semantic encoding.

    Figure 1.  The major workflow of the proposed MSLSTM-CPO.

    The data for this experiment are English texts, and the classification model cannot be trained directly on raw input text. Therefore, each English comment is first split on space characters and the resulting words are converted into vector representations. If the corpus contains many words, the vector for each word becomes very high-dimensional, so a plain one-hot encoding makes the word vectors very sparse. Word embedding instead represents each word in natural language as a dense vector in a continuous space, so that natural language computation is converted into vector computation. The detailed steps of word embedding are shown in Figure 2.

    Figure 2.  The workflow diagram for the computation of word embedding.

    Step 1: Dictionary lookup. The text is first split on spaces, dividing long sentences into individual words. Each word in the sentence is then converted into a fixed integer ID by querying the dictionary.

    Step 2: One-hot encoding. If the dictionary's word list contains P words, each word can be represented by a P-dimensional vector, so each ID is converted into a fixed-length vector. For a word with ID x, the x-th element of the vector is 1 and the remaining P−1 elements are 0. This process is one-hot encoding and can be expressed as:

    $v_{x=1} = [1, \underbrace{0, 0, \dots, 0}_{P-1}]$ (3.1)

    Step 3: Embedding lookup. In a real experimental scenario, texts have different lengths. To prevent overly long or short inputs from affecting overall training, a parameter max_seq_len is set to truncate and pad the text. After one-hot encoding, the sentence tensor is denoted as V. This tensor V is then multiplied by a dense tensor $W \in \mathbb{R}^{P \times l}$, where P denotes the word-table size and l the vector size of each word. The tensor multiplication maps each word to an embedding representation X, thus achieving the goal of representing words as vectors.
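    To make Steps 1–3 concrete, the following is a minimal numpy sketch; the toy dictionary, the random embedding table and all variable names are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

# Toy dictionary and sizes; illustrative assumptions only.
word_to_id = {"good": 0, "movie": 1, "bad": 2, "plot": 3}  # Step 1: dictionary
P = len(word_to_id)   # word-table size
l = 8                 # embedding size l

def one_hot(x, P):
    """Eq (3.1): a P-dimensional vector whose x-th element is 1."""
    v = np.zeros(P)
    v[x] = 1.0
    return v

rng = np.random.default_rng(0)
W = rng.normal(size=(P, l))                   # dense embedding table W in R^{P x l}

sentence = "good movie".split()               # Step 1: split on spaces
ids = [word_to_id[w] for w in sentence]       # words -> fixed integer IDs
V = np.stack([one_hot(x, P) for x in ids])    # Step 2: one-hot tensor V
X = V @ W                                     # Step 3: embedding representation X
print(X.shape)                                # (2, 8)
```

    In practice, multiplying by a one-hot vector is equivalent to indexing row x of W directly, which is how embedding lookup is usually implemented.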

    Building on word-level semantic encoding, this subsection describes sentence-level semantic encoding. Each input text sentence is truncated or padded according to the global variable max_seq_len, turning it into a fixed-length vector. The processed input is then fed to the proposed model to obtain the sentiment classification result. As shown in Figure 3, the proposed MSLSTM-CPO model has four layers: an embedding layer, an LSTM layer, an average layer and an output layer.

    Figure 3.  Schematic diagram of the MSLSTM-CPO network structure.

    The LSTM model cannot process raw text data directly; it requires the word-level semantic encoding above to produce word embeddings, i.e., vectorised representations of the input words. Assuming an input sequence $X \in \mathbb{R}^{B \times L \times M}$, where B is the batch size, L the sequence length and M the input feature dimension, the LSTM scans the sequence from left to right and, at each moment, updates the internal state $C_t \in \mathbb{R}^{B \times D}$ and the output state $H_t \in \mathbb{R}^{B \times D}$ through its recurrent unit. D denotes the dimensionality of the hidden state vector. The computation of the LSTM consists mainly of three parts: computing the three gates, computing the internal state and computing the output state.

    Step 1: Calculating the three gates. At moment t, the recurrent unit of the LSTM computes the input gate $I_t$, forget gate $F_t$ and output gate $O_t$ from the current input $X_t \in \mathbb{R}^{B \times M}$ and the previous output state $H_{t-1} \in \mathbb{R}^{B \times D}$. This experiment builds the model with the Paddle framework, whose LSTM differs from a conventional hand-implemented one in that each gate carries two bias terms and separate weight matrices multiplying the input and the hidden state. The calculation formulas are as follows:

    $I_t = \sigma(W_{ii} X_t + b_{ii} + U_{hi} H_{t-1} + b_{hi})$ (3.2)
    $F_t = \sigma(W_{if} X_t + b_{if} + U_{hf} H_{t-1} + b_{hf})$ (3.3)
    $O_t = \sigma(W_{io} X_t + b_{io} + U_{ho} H_{t-1} + b_{ho})$ (3.4)

    where $W \in \mathbb{R}^{M \times D}$, $U \in \mathbb{R}^{D \times D}$, $b_i \in \mathbb{R}^{1 \times D}$ and $b_h \in \mathbb{R}^{1 \times D}$ are learnable parameters, and $\sigma$ is a logistic function that keeps the values of the "gates" in the (0, 1) interval. The "gates" here are all matrices over the B samples; each row is the gate vector of one sample.

    Step 2: Calculating internal states. First, the candidate internal state is calculated with the following equation:

    $\tilde{C}_t = \tanh(W_{ic} X_t + b_{ic} + U_{hc} H_{t-1} + b_{hc})$ (3.5)

    where $W_{ic} \in \mathbb{R}^{M \times D}$, $U_{hc} \in \mathbb{R}^{D \times D}$, $b_{ic} \in \mathbb{R}^{1 \times D}$ and $b_{hc} \in \mathbb{R}^{1 \times D}$ are learnable parameters. Next, the internal state at moment t is calculated using the forget and input gates:

    $C_t = F_t \odot C_{t-1} + I_t \odot \tilde{C}_t$ (3.6)

    where $\odot$ denotes the element-by-element product.

    Step 3: Calculating the output state. Using the internal state obtained from Eq (3.6), the current output state of the LSTM is calculated as follows:

    $H_t = O_t \odot \tanh(C_t)$ (3.7)

    The input of the LSTM recurrent cell is the internal state vector $C_{t-1} \in \mathbb{R}^{B \times D}$ and the hidden state vector $H_{t-1} \in \mathbb{R}^{B \times D}$ at moment $t-1$, and the output is the state vector $C_t \in \mathbb{R}^{B \times D}$ and the hidden state vector $H_t \in \mathbb{R}^{B \times D}$ at the current moment $t$. With this recurrent cell, the whole network can establish longer-distance temporal dependencies, overcoming the difficulty RNNs face when the sequence is too long to handle. In addition, by selectively ignoring or reinforcing the current memory and input information, the LSTM helps the model capture the semantic information of long sentences more fully.
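    The three steps can be summarised in a short numpy sketch; the toy dimensions and parameter initialisation below are illustrative assumptions, and the layout mirrors Eqs (3.2)–(3.7), including the paired biases noted above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(X_t, H_prev, C_prev, p):
    """One LSTM moment following Eqs (3.2)-(3.7).
    X_t: (B, M); H_prev, C_prev: (B, D)."""
    I_t = sigmoid(X_t @ p["W_ii"] + p["b_ii"] + H_prev @ p["U_hi"] + p["b_hi"])  # Eq (3.2)
    F_t = sigmoid(X_t @ p["W_if"] + p["b_if"] + H_prev @ p["U_hf"] + p["b_hf"])  # Eq (3.3)
    O_t = sigmoid(X_t @ p["W_io"] + p["b_io"] + H_prev @ p["U_ho"] + p["b_ho"])  # Eq (3.4)
    C_tilde = np.tanh(X_t @ p["W_ic"] + p["b_ic"] + H_prev @ p["U_hc"] + p["b_hc"])  # Eq (3.5)
    C_t = F_t * C_prev + I_t * C_tilde       # Eq (3.6), element-wise product
    H_t = O_t * np.tanh(C_t)                 # Eq (3.7)
    return H_t, C_t

B, M, D = 2, 4, 3                            # toy batch, input and hidden sizes
rng = np.random.default_rng(1)
p = {}
for g in "ifoc":                             # input, forget, output, candidate
    p[f"W_i{g}"] = rng.normal(size=(M, D))   # input weights
    p[f"U_h{g}"] = rng.normal(size=(D, D))   # hidden-state weights
    p[f"b_i{g}"] = np.zeros((1, D))          # first bias of the pair
    p[f"b_h{g}"] = np.zeros((1, D))          # second bias of the pair

X = rng.normal(size=(B, 5, M))               # a length-5 toy sequence
H, C = np.zeros((B, D)), np.zeros((B, D))
for t in range(X.shape[1]):                  # scan left to right
    H, C = lstm_step(X[:, t, :], H, C, p)
print(H.shape)                               # (2, 3)
```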

    The Paddle framework's built-in LSTM model exposes several parameters, including direction and num_layers. direction indicates the direction of the network iteration and can be set to forward or bidirectional, with forward as the default. num_layers indicates the number of stacked layers and defaults to 1. A multi-layer bidirectional LSTM receives a sequence of vectors and updates its recurrent units in both the forward and reverse directions. Therefore, to use a multi-layer bidirectional LSTM directly, it is only necessary to set direction to bidirectional and num_layers to the desired n when defining the LSTM.
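    As a hedged sketch of this configuration (reusing the experiment settings of embedding size 256, hidden size 256 and num_layers 3; the paper does not publish its training code), the layer could be instantiated as follows:

```python
import paddle

# Settings mirror the experiment section: embedding size 256,
# hidden size 256, three stacked layers, bidirectional scanning.
lstm = paddle.nn.LSTM(
    input_size=256,             # dimension of each word embedding
    hidden_size=256,            # D, hidden state dimensionality
    num_layers=3,               # n stacked layers
    direction="bidirectional",  # forward and reverse scans
)

x = paddle.randn([128, 256, 256])   # (batch, max_seq_len, embedding)
outputs, (h, c) = lstm(x)
print(outputs.shape)                # [128, 256, 512]: two directions x 256
```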

    The average layer averages the hidden states at all positions of the bidirectional LSTM layer, and the result is used as the representation of the whole sentence. In the experiments, an AveragePooling operator implements this aggregation of hidden states. First, the sequence-length vector is used to generate a mask matrix, which masks out the vectors at padded placeholder positions in the text sequence. The remaining vectors of the sequence are then summed and averaged.
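    A minimal sketch of such a masked AveragePooling operator, assuming Paddle tensors and our own function and variable names, is shown below:

```python
import paddle

def average_pooling(hidden, seq_len):
    """Mean over valid positions only; padded positions are masked out.
    hidden: (B, L, D'); seq_len: (B,) true lengths before padding."""
    L = hidden.shape[1]
    positions = paddle.arange(L).unsqueeze(0)                        # (1, L)
    mask = (positions < seq_len.unsqueeze(1)).astype(hidden.dtype)   # (B, L)
    summed = (hidden * mask.unsqueeze(-1)).sum(axis=1)               # zero padding, then sum
    return summed / seq_len.unsqueeze(1).astype(hidden.dtype)        # divide by true length

h = paddle.randn([2, 4, 6])
lens = paddle.to_tensor([4, 2])
print(average_pooling(h, lens).shape)   # [2, 6]
```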

    The final output layer uses a Linear transformation to produce the classification result. Sentiment analysis is essentially a classification problem, and in practice the model usually only needs to output the log-odds (logits) of each class. The Linear layer transforms the hidden state vector $H_L \in \mathbb{R}^{B \times D}$ at the last moment linearly and outputs the classification logits. The formula is as follows:

    $Y = H_L W_{out} + b_{out}$ (3.8)

    where $W_{out} \in \mathbb{R}^{D \times N}$ and $b_{out} \in \mathbb{R}^{N}$ are the learnable weight matrix and bias, and N denotes the number of classes.
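    Putting the four layers together, a hedged Paddle sketch of the overall structure might look as follows; the class, helper and argument names are our assumptions, and the linear head here consumes the masked sentence average produced by the average layer:

```python
import paddle
import paddle.nn as nn

def masked_mean(hidden, seq_len):
    # Same masked averaging as the average-layer sketch above.
    L = hidden.shape[1]
    mask = (paddle.arange(L).unsqueeze(0) < seq_len.unsqueeze(1)).astype(hidden.dtype)
    return (hidden * mask.unsqueeze(-1)).sum(axis=1) / seq_len.unsqueeze(1).astype(hidden.dtype)

class MSLSTMCPO(nn.Layer):
    """Sketch of the four layers: embedding -> bidirectional multi-layer
    LSTM -> average -> linear output (Eq (3.8))."""
    def __init__(self, vocab_size, emb_size=256, hidden_size=256,
                 num_layers=3, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)
        self.lstm = nn.LSTM(emb_size, hidden_size,
                            num_layers=num_layers, direction="bidirectional")
        # the two directions are concatenated, hence 2 * hidden_size inputs
        self.out = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, ids, seq_len):
        x = self.embedding(ids)              # (B, L) -> (B, L, emb)
        h, _ = self.lstm(x)                  # (B, L, 2 * hidden)
        sent = masked_mean(h, seq_len)       # sentence vector from the average layer
        return self.out(sent)                # logits Y

model = MSLSTMCPO(vocab_size=5000)
ids = paddle.randint(0, 5000, [128, 256])
lens = paddle.full([128], 256, dtype="int64")
print(model(ids, lens).shape)                # [128, 2]
```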

    To the best of our knowledge, no dataset on college public opinion is publicly available. This work therefore selects IMDB [40], a standard dataset in the area of sentiment analysis, for evaluation. It records short reviews from social media together with their associated sentiment. The data come from the Internet Movie Database and include users' comment texts and rating information for movies. In the current Internet era, university students are an integral part of the film industry's consumer base, and a large share of its loyal audience consists of university students with free time. Movie reviews can therefore be used to gauge students' emotions and capture the direction of public opinion in a timely manner, helping guide students toward sound values.

    The IMDB dataset collects review information for many films, with a total of 50,000 English review texts. A viewer can give a score from 1 to 10 when commenting on a film. During data processing, a rating below 5 is judged as a negative review and labelled 0, while a rating above 6 is judged as a positive review and labelled 1. The purpose of this experiment is to determine whether the emotion a user expresses is positive or negative from the comment text. After the 50,000 records are processed, each sample consists of a user's comment text and a 0/1 label for a movie. There are 25,000 training samples and 25,000 test samples, each containing 12,500 positive and 12,500 negative examples.
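    A small sketch of this label mapping (the function name is hypothetical):

```python
def label_from_rating(rating: int):
    """Map an IMDB score (1-10) to the 0/1 sentiment label described above."""
    if rating < 5:
        return 0      # negative review
    if rating > 6:
        return 1      # positive review
    return None       # ratings of 5-6 are not assigned a label
```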

    In the experiments, the proposed MSLSTM-CPO model is compared with both machine learning and deep learning methods. The machine learning models are SVC, GaussianNB, MultinomialNB and BernoulliNB. The deep learning comparison covers the proposed MSLSTM-CPO and three baselines: a CNN model, an LSTM model and a bidirectional LSTM model. The comment texts vary in length, so the max_seq_len parameter was set to 256 to truncate and pad the text. For the other training parameters, batch size was set to 128, hidden layer size to 256 and embedding size to 256. Considering training time and test accuracy, the number of epochs was set to 3. The learning rate was set to 0.001, 0.002 and 0.005, respectively, and num_layers of the MSLSTM-CPO model was set to 3. Following the composition of the initial dataset, the split ratio between training and test sets was 50%. To fully test the validity of the experiments, four common evaluation metrics for classification models were used: accuracy, precision, recall and F1-score. Their detailed descriptions can be found in references such as [2,41] and are omitted here.

    Sentiment analysis is essentially a binary classification problem, so one SVC model and three typical Naive Bayes methods were chosen as the machine learning comparison models: Gaussian Naive Bayes, Multinomial Naive Bayes and Bernoulli Naive Bayes. Among them, GaussianNB assumes Gaussian-distributed features, MultinomialNB assumes multinomially distributed features, and BernoulliNB assumes Bernoulli-distributed features. Machine learning-based sentiment classification models generally involve only simple steps: data pre-processing, text vectorisation and classifier training.
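    A hedged scikit-learn sketch of this three-step pipeline is shown below; TF-IDF is an assumption for the vectorisation step, since the paper does not name its vectoriser, and GaussianNB is omitted because it requires dense input:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy corpus; the paper trains on the vectorised IMDB reviews instead.
texts = ["a great and moving film", "a dull and terrible plot"]
labels = [1, 0]

for clf in (SVC(), MultinomialNB(), BernoulliNB()):
    model = make_pipeline(TfidfVectorizer(), clf)   # vectorise, then classify
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["a great film"]))
```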

    As shown in Table 1, this experiment computes the four metric values for the five models. The learning rate of the MSLSTM-CPO model is 0.002, with the other parameters kept constant. The results show that the machine learning methods generally achieve a classification accuracy of only about 0.5, while the MSLSTM-CPO model reaches 0.8678, far exceeding the machine learning methods. A visual comparison of MSLSTM-CPO with the four traditional machine learning models is shown in Figure 6, where the horizontal axis indicates the four metric types and the vertical axis the metric values. The MSLSTM-CPO model, indicated by the blue bars, performs approximately 50% better than the other machine learning models on all four evaluation metrics. This again verifies that deep learning models generally outperform traditional machine learning models on sentiment analysis problems.

    Table 1.  Experimental results of machine learning models.
    Model Accuracy F1-score Recall Precision
    SVC 0.5206 0.4889 0.5275 0.5206
    GaussianNB 0.5057 0.4262 0.5075 0.5128
    MultinomialNB 0.4936 0.4935 0.4936 0.4936
    BernoulliNB 0.5 0.3333 0.5 0.25
    MSLSTM-CPO 0.8678 0.8676 0.86755 0.86905


    In order to discuss the effectiveness of the MSLSTM-CPO proposed in this paper, three other typical deep learning methods are chosen as comparison models: CNN model, LSTM model, and bidirectional LSTM model.

    As shown in Figures 4 and 5, the trends of accuracy and loss during the training phase are plotted for the four methods with learning rates of 0.001, 0.002 and 0.005 respectively. For each subplot, the x-axis represents the number of iteration rounds, ranging from 1 to 600. The y-axis in Figure 4 represents the training accuracy, and the y-axis in Figure 5 the training loss. Each subplot is drawn from 30 consecutive sample points and represents the training trend of the same model under different learning rates. The red curve, with the learning rate set to 0.002, shows the best results in the training phase across the four models, with the MSLSTM-CPO model showing the most intuitive trend comparison. As the number of iteration rounds increases, the training accuracy of all four models rises and the training loss falls, eventually stabilising. This demonstrates that the models train properly and reasonably well.

    Figure 4.  Tendency of four methods with respect to training accuracy.
    Figure 5.  Tendency of four methods with respect to training loss.
    Figure 6.  The evaluation results of MSLSTM-CPO with four machine learning models.

    The above trends only show that the models train normally; a comparative analysis of the evaluation metrics is needed to reflect their overall expressive ability. Therefore, in this experiment, the learning rate was varied while the other parameters were kept fixed, and the evaluation metric values were calculated for the four models MSLSTM-CPO, CNN, LSTM and Bi-LSTM. As shown in Table 2, the learning rates were set to 0.001, 0.002 and 0.005, respectively. Sentiment analysis is essentially a binary classification problem, and the commonly used evaluation metrics are accuracy, precision, recall and F1-score. Reading across the table, the evaluation metrics of each model are stable under different parameter settings. Reading down the table, the MSLSTM-CPO model works best at a learning rate of 0.002, with an accuracy of 0.8678, which is about 2%, 3% and 1.7% higher than CNN, LSTM and Bi-LSTM respectively. The evaluation metrics of the MSLSTM-CPO model are the highest and remain stable across learning rate settings. Figure 7 visualises the evaluation results of MSLSTM-CPO against the other three deep learning models in three subfigures from two angles. Figure 7(a) compares the four metric values of MSLSTM-CPO and the other three methods. Figure 7(b), (c) select two typical metrics (accuracy and F1-score) and display the results of the four models under the three learning rates (0.001, 0.002 and 0.005). The figure clearly shows that the proposed MSLSTM-CPO outperforms the baseline methods.

    Table 2.  Experimental results of deep learning models.
    Learning rate Model Accuracy F1-score Recall Precision
    0.001 CNN 0.8565 0.8565 0.8565 0.8566
    LSTM 0.8465 0.84645 0.8465 0.84675
    Bi-LSTM 0.8485 0.8483 0.8487 0.85005
    MSLSTM-CPO 0.8585 0.85815 0.85555 0.85885
    0.002 CNN 0.8516 0.8515 0.85165 0.85305
    LSTM 0.8449 0.84485 0.8452 0.84605
    Bi-LSTM 0.8535 0.85345 0.8537 0.8545
    MSLSTM-CPO 0.8678 0.8676 0.86755 0.86905
    0.005 CNN 0.83225 0.83255 0.83225 0.8322
    LSTM 0.8411 0.8412 0.8412 0.8411
    Bi-LSTM 0.84645 0.84545 0.8471 0.85485
    MSLSTM-CPO 0.85225 0.8522 0.85235 0.8529

    Figure 7.  The evaluation results of MSLSTM-CPO with three deep learning models.

    Currently, sentiment analysis tasks are well developed in both machine learning and deep learning approaches. This experiment compares the proposed MSLSTM-CPO model with traditional machine learning models and deep learning models respectively. Based on the data in Tables 1 and 2, the comprehensive performance of the MSLSTM-CPO model shows clear validity and reliability compared with these models.

    The reason the proposed MSLSTM-CPO model achieves better performance can be explained from the perspectives of both machine learning and deep learning models. First, the focus of the sentiment analysis task is the exploitation of textual content. Traditional machine learning methods focus on extracting features from a large amount of labelled data to produce classification results; this approach neglects the contextual coherence of the text and leads to poor training results. Second, RNN and LSTM models with memory capability are commonly used in sentiment analysis tasks, but only the LSTM can memorise long sequences effectively. Therefore, the proposed MSLSTM-CPO model is based on the LSTM. While a standard LSTM is unidirectional and has only one layer, MSLSTM-CPO is bidirectional and multilayer. As a result, the model is enhanced in terms of the number of layers, achieving a balance between depth and breadth.

    In summary, the proposed MSLSTM-CPO model can effectively implement a new form of college public opinion analysis, and compared with the machine learning and deep learning baselines, it achieves better classification performance.

    This paper proposes the MSLSTM-CPO model, based on multi-scale deep learning, for sentiment analysis of university students' public opinions. First, the method performs a word embedding operation on the text. Second, Bi-LSTM is chosen as the basic networking unit for layer superposition, and the sentiment classification result is obtained through the embedding layer, LSTM layer, average layer and linear output layer. The limitations of traditional methods are addressed by combining depth and breadth in the model. Experiments on a real dataset show that the MSLSTM-CPO model outperforms traditional machine learning models and commonly used deep learning models.

    In future work, users' image and audio information could be mined and fused with textual information to further improve the efficiency and accuracy of sentiment analysis of campus opinion. In addition, this study will explore more machine learning and deep learning methods and analyse their performance characteristics in real-world applications, in order to investigate deep learning-based sentiment analysis tasks more effectively. This will help promote a positive attitude toward life and the development of mental health among university students.

    This work was funded by the Researchers Supporting Project number (RSPD2023R681) King Saud University, Riyadh, Saudi Arabia, and also funded by Humanities and Social Sciences Project of Chongqing Municipal Education Commission (22SKSZ047).

    The authors declare there are no conflicts of interest.


    Acknowledgments



    We would like to thank the colleagues who helped provide clinical information and research support.

    Conflict of interest



    The authors declare no competing interests.

  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
