Research article

Optimal harvesting strategy for stochastic hybrid delay Lotka-Volterra systems with Lévy noise in a polluted environment


  • Received: 21 September 2022 Revised: 13 December 2022 Accepted: 27 December 2022 Published: 31 January 2023
  • This paper concerns the dynamics of two stochastic hybrid delay Lotka-Volterra systems with harvesting and Lévy noise in a polluted environment (namely, a predator-prey system and a competitive system). For each system, sufficient and necessary conditions for persistence in mean and extinction of each species are established. Then, sufficient conditions for global attractivity of the systems are obtained. Finally, sufficient and necessary conditions for the existence of an optimal harvesting strategy are provided, and explicit expressions for the optimal harvesting effort (OHE) and the maximum of the expectation of sustainable yield (MESY) are given. Our results show that the dynamic behaviors and optimal harvesting strategy are closely correlated with both time delays and three types of environmental noises (namely white Gaussian noises, telephone noises and Lévy noises).

    Citation: Sheng Wang, Lijuan Dong, Zeyan Yue. Optimal harvesting strategy for stochastic hybrid delay Lotka-Volterra systems with Lévy noise in a polluted environment[J]. Mathematical Biosciences and Engineering, 2023, 20(4): 6084-6109. doi: 10.3934/mbe.2023263




    With the continuous advancement of medical artificial intelligence, there is an urgent need to build smart clinical decision support systems. As an essential component of such systems, clinical event detection (CED) has attracted sustained attention from academia and industry. The CED task aims to identify and classify all clinically relevant events and situations in Chinese electronic medical records (EMRs), including symptoms, exams, treatments and other occurrences. Chinese EMRs contain a great deal of valuable information, and extracting it quickly and accurately from large volumes of records is important: correct extraction improves the quality of medical text analysis and helps doctors make decisions during treatment. Over the past decades, numerous methods have been proposed for the Chinese CED task, including Hidden Markov Models (HMMs) [1], Support Vector Machines (SVMs) [2] and Conditional Random Fields (CRFs) [3]. Recently, with the development of deep learning, researchers have begun to apply neural networks [4,5,6,7] to the Chinese CED task.

    Although these methods have achieved significant improvements on the Chinese CED task, some issues remain unaddressed. One significant drawback is that there is no publicly available annotated corpus for Chinese CED, so researchers must annotate a corpus manually. Owing to annotation costs, manually annotated corpora are usually small and contain a great deal of noise, whereas improvements in performance and robustness crucially depend on a large amount of annotated training data; a small-scale corpus limits both. To alleviate this problem, some researchers integrated external features into Chinese character representations [8,9]. Nevertheless, these approaches rely on external resources and only work well when those resources are exhaustive. Another weakness is that they ignore the importance of contextual information in semantic understanding. Fortunately, BERT (Bidirectional Encoder Representations from Transformers) [10] is trained on massive data and can generate contextual representations dynamically according to context. The contextual information coming from BERT can help a model understand obscure medical terms correctly, and its introduction can improve the performance and robustness of models trained on a small amount of annotated data. Thus, how to exploit the contextual information coming from BERT for the Chinese CED task is an important problem.

    Another issue is that the distribution of categories in the corpus is unbalanced. The class imbalance problem degrades the overall performance of a model: its decisions become biased toward majority-category samples, so minority-category samples are classified incorrectly. If a model's performance varies too much across categories, we cannot evaluate its overall performance objectively. Therefore, how to improve performance on minority-category samples is another important problem.

    To address the above issues, we propose a transfer learning method that integrates contextual representations from BERT into the Chinese CED task. Moreover, to mitigate the class imbalance problem, we introduce an adversarial loss that raises the proportion of the loss contributed by minority-category samples. Finally, we evaluate our model on our manually annotated corpus. Experimental results show that our proposed model achieves better overall performance than state-of-the-art models; in particular, it outperforms other models on minority-category samples.

    Many methods have been proposed for the Chinese CED task. Existing approaches can be roughly divided into four categories: rule-based approaches, knowledge-based approaches, traditional machine learning approaches and deep learning approaches.

    Rule-based approaches rely on handcrafted rules to identify clinical events [11]. Because of their simplicity, they were widely used in early Chinese CED systems. They work effectively when the rules are exhaustive, but they lack flexibility.

    Knowledge-based approaches do not require annotated training data, as they rely on lexical resources and domain-specific knowledge to identify clinical events [12]. They also have poor flexibility, and they tend to achieve high precision but low recall.

    Traditional machine learning approaches make predictions by training on example inputs and their corresponding outputs. Typical methods are HMMs [1], SVMs [2] and CRFs [17]. However, they rely on handcrafted features and work well only when those features are of high quality.

    In recent years, deep learning approaches have been introduced for the Chinese CED task [4,5,6,7,13]. Tang et al. [6] exploited an attention-based convolutional neural network (CNN) to generate representations of Chinese characters and fed them into a bidirectional long short-term memory (BiLSTM) network to extract features; finally, they used a conditional random field (CRF) as a decoder to assign a predicted label to each character in the sentence. The BiLSTM-CRF model achieved state-of-the-art performance on the Chinese CED task and obtained competitive results compared with traditional statistical models. Zhou et al. [7] treated each clinical narrative as a sequence of short sentences and proposed an end-to-end deep neural network framework for the Chinese CED task; they also proposed a smoothed Viterbi decoder as a sequence labeller that requires no additional parameter training, which can be a good alternative to the CRF when computing resources are limited. Luo et al. [14] integrated an attention mechanism into the BiLSTM-CRF model to use global information to enforce tag consistency across multiple instances of the same token in a document. Since then, the BiLSTM-CRF model has usually been exploited as a baseline. Recently, transfer learning methods have achieved great success in sequence labeling tasks. For example, Cao et al. [15] introduced an adversarial transfer learning framework to jointly train the Chinese named entity recognition (NER) task and the Chinese word segmentation (WS) task, aiming to extract task-shared word boundary information from the WS task. Johnson et al. [16] explored cross-lingual transfer learning for the NER task, focusing on bootstrapping Japanese from English. Different from the above transfer learning methods, we use transfer learning to integrate contextual representations from BERT into the Chinese CED task to enhance Chinese character representations.

    In recent years, language representation models have been shown to be effective for many natural language processing (NLP) tasks [19,20], so we integrate the language representation model BERT into the Chinese CED task to improve performance. BERT is trained on massive unlabeled text and fuses left and right context in all layers to represent contextual information. In this work, we integrate the contextual information coming from BERT into Chinese character representations to enhance semantic understanding.

    The class imbalance problem is prevalent in classification and sequence labeling tasks. Existing approaches to this issue can be roughly divided into two categories: data-based approaches and algorithm-based approaches. Data-based approaches rely on two strategies, over-sampling and under-sampling, which may lead to over-fitting and information loss, respectively. Algorithm-based approaches address the problem by modifying the learning algorithm itself. For instance, Lin et al. [18] addressed the class imbalance problem by modifying the standard cross-entropy loss to down-weight the loss assigned to well-classified examples. Inspired by Lin et al. [18], we introduce a punitive weight to adjust the proportion of the loss contributed by each category. Equalizing these proportions gives the model an equal opportunity to optimize its performance on each category.

    In this paper, we propose an encoder-decoder structure based on transfer learning for the Chinese CED task. The architecture of our proposed model is illustrated in Figure 1. The model consists of two main components: a semantic encoder and a label decoder. In the following sections, we describe each part of our proposed model in detail.

    Figure 1.  The general architecture of our proposed model.

    In this work, we encode input sequences into contextual representations with BERT. Here, we briefly introduce the structure of BERT [10].

    BERT is a multi-layer bidirectional transformer encoder. Each transformer encoder consists of two sub-layers: the first is a multi-head self-attention mechanism, and the second is a simple position-wise fully connected feed-forward network. A residual connection and layer normalization follow each sub-layer. The input of BERT is constructed by summing the character embedding, segment embedding and position embedding.
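    For concreteness, the input construction can be sketched as follows. This is a minimal PyTorch illustration rather than BERT's actual implementation; the default sizes (21128-character vocabulary, 768-dimensional hidden layer) follow the released BERT-Base-Chinese model.

```python
import torch
import torch.nn as nn

class BertInputEmbedding(nn.Module):
    """BERT input: sum of character (token), segment and position embeddings."""
    def __init__(self, vocab_size=21128, max_len=512, n_segments=2, d_model=768):
        super().__init__()
        self.char = nn.Embedding(vocab_size, d_model)
        self.seg = nn.Embedding(n_segments, d_model)
        self.pos = nn.Embedding(max_len, d_model)

    def forward(self, char_ids, seg_ids):
        # char_ids, seg_ids: (batch, seq_len) integer tensors
        positions = torch.arange(char_ids.size(1), device=char_ids.device)
        return self.char(char_ids) + self.seg(seg_ids) + self.pos(positions)
```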

    During the semantic encoding stage, we extract the output of the last layer of BERT as the contextual information of the Chinese characters. Then, we concatenate each character's contextual representation with its initial character embedding to form the decoder's input. Given an input sequence $s=\{c_1,c_2,c_3,\dots,c_n\}$, each character's semantic representation $T_{c_i}$ is computed as follows:

    $$R=\mathrm{BERT}(c_1,c_2,c_3,\dots,c_n) \tag{3.1}$$
    $$T_{c_i}=[R_{c_i};\,E_{c_i}] \tag{3.2}$$

    where $R$ represents the output of the last layer of BERT, and $R_{c_i}$ and $E_{c_i}$ represent the contextual information and the initialized character embedding of character $c_i$, respectively.
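    A minimal PyTorch sketch of this encoder, assuming the HuggingFace `transformers` library, is shown below. Projecting the 768-dimensional BERT output down to 204 dimensions before concatenation is our assumption, chosen so that the concatenated size 204 + 308 = 512 is consistent with Table 5.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class SemanticEncoder(nn.Module):
    def __init__(self, vocab_size=21128, ctx_dim=204, emb_dim=308):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.proj = nn.Linear(self.bert.config.hidden_size, ctx_dim)  # 768 -> 204
        self.char_emb = nn.Embedding(vocab_size, emb_dim)             # E_{c_i}

    def forward(self, input_ids, attention_mask):
        # BERT tokenizes Chinese per character, so ids double as character ids here.
        with torch.no_grad():  # BERT parameters stay fixed during training
            R = self.bert(input_ids=input_ids,
                          attention_mask=attention_mask).last_hidden_state  # Eq. (3.1)
        T = torch.cat([self.proj(R), self.char_emb(input_ids)], dim=-1)     # Eq. (3.2)
        return T  # shape: (batch, seq_len, 512)
```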

    In the decoding stage, we adopt two transformer blocks as our label decoder to predict a label for each character. Each transformer block consists of two sub-layers: a multi-head self-attention mechanism and a simple position-wise fully connected feed-forward network. Here, we briefly introduce the structure of the transformer block.

    The transformer block exploits the multi-head self-attention mechanism to capture the dependencies between any two characters in a sentence and to learn the inner structure of the sentence. The scaled dot-product attention is defined as follows:

    $$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right)V \tag{3.3}$$

    where $Q$, $K$ and $V$ represent the query matrix, key matrix and value matrix, respectively, and $d$ represents the dimension of the keys. The multi-head self-attention can be expressed as follows:

    $$\mathrm{head}_j=\mathrm{Attention}(QW^{Q}_{j},\,KW^{K}_{j},\,VW^{V}_{j}) \tag{3.4}$$
    $$\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}(\mathrm{head}_1,\dots,\mathrm{head}_h)W^{O} \tag{3.5}$$

    where $W^{Q}_{j}$, $W^{K}_{j}$, $W^{V}_{j}$ and $W^{O}$ are trainable projection parameters.
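    A hedged sketch of Eqs. (3.3)-(3.5) in PyTorch follows; the per-head projections are passed as lists of weight matrices for clarity, and PyTorch's built-in `nn.MultiheadAttention` implements the same computation in fused form.

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    # Eq. (3.3): softmax(Q K^T / sqrt(d)) V
    d = K.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d)
    return F.softmax(scores, dim=-1) @ V

def multi_head(Q, K, V, W_q, W_k, W_v, W_o, h=8):
    # Eqs. (3.4)-(3.5): project per head, attend, concatenate, project back.
    heads = [scaled_dot_product_attention(Q @ W_q[j], K @ W_k[j], V @ W_v[j])
             for j in range(h)]
    return torch.cat(heads, dim=-1) @ W_o
```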

    Besides the multi-head self-attention mechanism, the other essential component of the transformer block is the position-wise fully connected feed-forward network, which is applied to each position separately and identically. It can be described as follows:

    $$\mathrm{FFN}(x)=\max(0,\,xW_1+b_1)W_2+b_2 \tag{3.6}$$

    This formula consists of two linear transformations with a ReLU activation in between.
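    The following PyTorch sketch implements Eq. (3.6); taking the model dimension as 512 (the 204 + 308 concatenation above) is our assumption, and the 512-dimensional intermediate layer follows Table 5.

```python
import torch
import torch.nn as nn

class PositionwiseFFN(nn.Module):
    # Eq. (3.6): FFN(x) = max(0, x W1 + b1) W2 + b2, applied identically
    # at every sequence position.
    def __init__(self, d_model=512, d_ff=512):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_ff)
        self.w2 = nn.Linear(d_ff, d_model)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        return self.w2(torch.relu(self.w1(x)))
```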

    We introduce a punitive weight into the standard cross-entropy loss to adjust the contribution of each category to the loss. This strategy changes the proportion of the loss assigned to each category, so that the loss on majority-category samples no longer dominates the loss on minority-category samples. More specifically, we use a penalty factor to balance the difference in data scale between categories and exploit the predicted label probabilities to adjust each category's contribution. We call this novel loss the "adversarial loss". It can be defined as follows:

    $$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}\left(1-y_{i,k}P_{i,k}\right)^{\alpha}\,y_{i,k}\log P_{i,k} \tag{3.7}$$

    where $N$ represents the number of characters, $K$ represents the number of Chinese clinical event categories, $y_{i,k}$ represents the true value of the $i$th character on the $k$th category, $P_{i,k}$ represents the probability that the $i$th character is predicted as the $k$th category, $(1-y_{i,k}P_{i,k})$ represents the penalty term and $\alpha$ represents the penalty factor.
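    A direct implementation of Eq. (3.7) is sketched below, assuming `P` holds softmax probabilities and `y` one-hot labels; the epsilon guard is our addition for numerical safety, and alpha = 2 is the penalty factor reported in Table 5. With alpha = 0 the expression reduces to the standard cross-entropy loss used in the contrasting experiments later.

```python
import torch

def adversarial_loss(P, y, alpha=2.0):
    # Eq. (3.7). P: (N, K) predicted probabilities, y: (N, K) one-hot labels.
    eps = 1e-12                        # numerical safety, our addition
    penalty = (1.0 - y * P) ** alpha   # (1 - y_{i,k} P_{i,k})^alpha
    return -(penalty * y * torch.log(P + eps)).sum(dim=1).mean()
```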

    To evaluate the effectiveness of our proposed model, we conduct a range of experiments on our manually annotated corpus, which we name the Chinese CED corpus. It consists of 2000 Chinese EMRs from a third-level grade-A hospital in Gansu province. For the experiments, the corpus is divided as follows. First, we divide the 2000 annotated clinical narratives into an initial train set and a test set at a ratio of 3:1. Then, we randomly select 400 clinical narratives from the initial train and test sets as the development set. Finally, the remainders of the initial train and test sets form the final train and test sets, respectively (see the sketch after Table 1). Table 1 shows the distribution of the Chinese CED corpus.

    Table 1.  The distribution of Chinese CED corpus.
    Train Development Test Total
    1300 400 300 2000

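    The split can be reproduced with a few lines of Python; `load_narratives` is a hypothetical loader, and drawing exactly 200 development narratives from each initial set is our assumption, chosen to be consistent with the counts in Table 1 (1500 - 200 = 1300 train, 500 - 200 = 300 test).

```python
import random

random.seed(42)                     # assumption: any fixed seed works
narratives = load_narratives()      # hypothetical loader for the 2000 EMRs
random.shuffle(narratives)

train_init, test_init = narratives[:1500], narratives[1500:]  # 3:1 split

random.shuffle(train_init)
random.shuffle(test_init)
dev = train_init[:200] + test_init[:200]          # 400 development narratives
train, test = train_init[200:], test_init[200:]   # 1300 train, 300 test
```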

    The dataset was annotated as follows. We worked out an annotation specification and developed an annotation tool for the Chinese CED corpus. The annotation specification follows the 2012 i2b2 (Informatics for Integrating Biology & the Bedside) clinical temporal relations challenge annotation guidelines, with the following improvements and supplements. We annotate the discharge summaries and progress notes with seven types of clinical events: Problem, Exam, Treatment, Clinical department, Evidence, Occurrence and Aspectual. The "Problem" type covers the patient's complaints, symptoms, diseases and diagnoses. The "Exam" type is used for clinical tests (laboratory and physical) and test results. The "Treatment" type covers medications, surgeries and other procedures. The "Clinical department" type marks the clinical unit. The "Evidence" type states the source of information. The "Occurrence" type is used for all other kinds of clinically relevant events that happened to the patient. The "Aspectual" type captures the state of the current clinical event. Marking only the type of each clinical event is not enough; for the annotation to be useful in text analysis, we describe each clinical event in more detail. Besides the event category, we annotate two further attributes of clinical events: polarity and degree. The polarity attribute marks whether a clinical event is positive or negative. Most clinical events have the "POS" polarity value, meaning the event is not negated; note that an event can be POS even if it did not actually occur (e.g., if it is hypothetical or proposed). If an event is negated by words such as "not" or "deny", its polarity is "NEG". The degree attribute marks the degree of a clinical event and takes three values: "MOST", "LITTLE" and "NA". Table 2 lists examples of Chinese clinical events; the bold words mark the clinical events in each sentence.

    Table 2.  The examples of Chinese clinical events.
    Sentence | Category | Polarity | Degree
    右侧肢体麻木加重 (Numbness in the right limb has increased) | Problem | POS | MOST
    密切监测血常规 (Monitor blood routine closely) | Exam | POS | MOST
    胰岛素强化治疗 (Intensive insulin therapy) | Treatment | POS | MOST
    患儿收住儿科 (The kid was admitted to pediatrics) | Clinical department | POS | NA
    双肺叩诊呈过清音 (The sound of percussion in both lungs is too clear) | Evidence | POS | NA
    患者收住入院 (The patient was admitted to hospital) | Occurrence | POS | NA
    患者停用丹红注射液 (The patient stopped using Danhong injection) | Aspectual | POS | NA

    Before the formal annotation, twenty copies of the same clinical narratives were annotated in pairs. The annotators recorded uncertain text fields during annotation; after finishing their own data, the two people in each pair exchanged and proofread each other's annotation results and recorded inconsistent text fields. After each round of annotation, the whole group discussed the uncertain and inconsistent text fields and revised the annotation specification and annotation results together. We conducted five such rounds of preliminary annotation, using different clinical narratives in each round. By the fifth round, the error rates of all annotators were below 5%, so we judged that the annotation specification and the annotators met the requirements for formal annotation. Because annotators may encounter difficult medical terms, twelve laboratory members and two medical students with more than five years of medical training were involved in developing the annotation specification and in the annotation discussions. Besides difficult medical terms, there are other challenges in corpus construction; for example, the corpus may contain a great deal of noise, which arises at two stages: doctors may make writing errors in the EMRs, and annotators may make annotation mistakes. To reduce this noise, we took two steps. First, the two medical students in our annotation group corrected the writing errors in the EMRs before the formal annotation. Second, all annotation members discussed the annotation results and double-checked them after annotation.

    In addition, we record detailed statistics of the Chinese CED corpus in Table 3. The data scale of "Clinical department" is very small, so we treat it as a minority category. We adopt Precision, Recall and F1-score as the evaluation metrics for overall performance in our experiments, and we also use Recall and F1-score to evaluate performance on "Clinical department".

    Table 3.  The detailed statistics of Chinese CED corpus.
    Category Train Development Test Total
    Problem 47669 4006 14811 66486
    Exam 18939 1949 5983 26871
    Treatment 9622 989 2943 13554
    Clinical department 114 9 44 167
    Evidence 6624 666 2136 9426
    Occurrence 3104 356 980 4440
    Aspectual 2386 291 812 3489
    Other 568897 50763 173099 792759
    Total 657355 59029 200808 917192


    We tune the hyper-parameter configuration according to our proposed model's performance on the development set of the Chinese CED corpus. It is worth noting that we fix all parameters of BERT and update only the parameters of the label decoder during training. In this section, we describe the semantic encoder and the label decoder in detail.
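    This training regime can be sketched as follows; `model.bert` is an assumed attribute name for the encoder's BERT module, and Adam is our assumed optimizer choice, with the 0.001 initial learning rate taken from Table 5.

```python
import torch

# Freeze all BERT parameters; only decoder/embedding parameters stay trainable.
for p in model.bert.parameters():
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)  # initial LR from Table 5
```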

    In this work, we apply the BERT-Base-Chinese pre-trained model to obtain each character's contextual information. This model is trained on massive simplified and traditional Chinese texts. Table 4 shows its hyper-parameter configuration.

    Table 4.  The hyper-parameter configuration of the pre-trained language model.
    Parameter description Value
    The number of transformer encoder blocks 12
    The dimension of the hidden layer 768
    The number of attention heads 12
    The number of parameters 110M


    In addition, we exploit two transformer blocks as our proposed model's decoder, each consisting of two sub-layers: a multi-head self-attention mechanism and a simple position-wise fully connected feed-forward network. Table 5 shows the major settings of the decoder.

    Table 5.  The hyper-parameter configuration of the decoder.
    Parameter description Value
    The number of transformer decoder blocks 2
    Chinese character embedding size 308
    Contextual representation size 204
    The number of attention heads 8
    The dimension of the intermediate layer 512
    Initial learning rate 0.001
    Batch size 50
    Penalty factor 2
    Dropout rate 0.1


    In the experimental section, we compare our proposed method with multiple baseline models, introduced briefly below.

    CRF: In this work, we use CRF++ to implement the CRF model.

    CNN-Softmax: The model utilizes a CNN to extract features and feeds them into a multilayer perceptron (MLP) for decoding.

    CNN-CRF: The model adopts a CNN to extract features and feeds them into a CRF for decoding.

    BiGRU-Softmax: The model exploits a bidirectional gated recurrent unit (BiGRU) to extract features and feeds them into an MLP for decoding.

    BiLSTM-CRF: The model adopts a BiLSTM to extract features and feeds them into a CRF for decoding.

    Attention-based CNN-BiLSTM-CRF: Tang et al. [6] exploited an attention-based CNN to generate representations of Chinese characters and fed them into a BiLSTM to extract features, with a CRF as the model's decoder.

    BERT-Softmax: The model utilizes BERT to obtain semantic representations and feeds them into an MLP for decoding.

    BERT-BiLSTM-Softmax: The model uses BERT to obtain semantic representations and feeds them into a BiLSTM for decoding.

    BERT-Transformer-Softmax: The model adopts BERT to obtain semantic representations and feeds them into a transformer for decoding.

    We compare the overall performance of our proposed model with the baseline models on the test set of the Chinese CED corpus. Table 6 shows the detailed experimental results; its first column lists the baseline models and our proposed model. Our proposed model achieves the highest precision of 83.73%, recall of 86.56% and F1-score of 85.12%. Compared with the BiLSTM-CRF model, it improves the F1-score from 82.97 to 85.12%, an increase of 2.15%; compared with the BERT-Softmax model, it improves the F1-score from 83.34 to 85.12%, an increase of 1.78%. Overall, our proposed model outperforms other state-of-the-art methods significantly and consistently.

    Table 6.  Overall experimental results.
    Model Precision(%) Recall(%) F1-score(%)
    CRF 79.96 85.37 82.58
    CNN-Softmax 75.20 83.40 79.09
    CNN-CRF 72.93 74.38 73.65
    BiGRU-Softmax 80.27 85.62 82.86
    BiLSTM-CRF 81.32 84.68 82.97
    CNN-BiLSTM-CRF 80.98 74.84 77.79
    BERT-Softmax 81.43 85.34 83.34
    BERT-BiLSTM-Softmax 82.62 84.77 83.68
    BERT-Transformer-Softmax 83.54 85.23 84.38
    Ours 83.73 86.56 85.12


    Here, we summarize several reasons for the success of our proposed model. First, we adopt BERT to generate contextual representations and integrate them into Chinese character embeddings, which enhances the representations of Chinese characters. Second, we use transformer blocks as our model's decoder, which attend differently to each position of the semantic representation. Third, we utilize the "adversarial loss" to balance the difference in data scale across categories.

    Besides, we record the performance of our proposed model on each category in Table 7. As shown there, our proposed model achieves impressive results on majority-category samples; for example, the F1-scores of "Problem" and "Exam" are 88.77 and 85.85%, respectively.

    Table 7.  Experimental results of our proposed model on each category.
    Category Precision(%) Recall(%) F1-score(%)
    Problem 87.42 90.16 88.77
    Exam 82.81 89.12 85.85
    Treatment 81.85 88.06 84.84
    Clinical department 74.72 62.44 68.03
    Evidence 76.13 86.40 80.94
    Occurrence 89.13 92.44 90.75
    Aspectual 75.96 86.87 81.05


    In our manually annotated corpus, "Clinical department" is treated as a minority category, mainly because its data scale is much smaller than that of the other clinical event categories; as shown in Table 3, the gap ranges from tens to hundreds of times. In this work, we introduce the "adversarial loss" to adjust the proportion of the loss on each category and thereby improve performance on every category. This strategy compels the model to learn more about "Clinical department" and, to some extent, compensates for the gap in data scale. Our proposed model achieves a recall of 62.44% and an F1-score of 68.03% on "Clinical department".

    To prove the effectiveness of the "adversarial loss", we conduct a contrasting experiment on the Chinese CED corpus. The only difference from our proposed model is the loss function: our model uses the "adversarial loss", while the contrasting experiment uses the standard cross-entropy loss. Table 8 shows the results of the contrasting experiment.

    Table 8.  Experimental results of contrasting experiment on each category.
    Category Precision(%) Recall(%) F1-score(%)
    Problem 84.17 88.94 86.49
    Exam 80.42 90.12 84.99
    Treatment 81.92 88.34 85.01
    Clinical department 80.52 57.65 67.19
    Evidence 75.58 84.19 79.65
    Occurrence 90.21 90.39 90.30
    Aspectual 76.34 85.71 80.75


    As shown in Table 8, the F1-scores of "Problem" and "Exam" are 86.49 and 84.99%, respectively. We find that introducing the "adversarial loss" hardly degrades performance on majority-category samples; where performance does degrade on a particular event type, the degradation is minimal. For example, the F1-score of "Treatment" is 85.01% in the contrasting experiment and 84.84% for our proposed model. Notably, our proposed model outperforms the contrasting experiment on "Clinical department": the F1-score rises from 67.19 to 68.03%, an increase of 0.84%.

    Besides, we record the F1-scores of existing models and our proposed model on "Clinical department". Figure 2 shows the detailed experimental results.

    Figure 2.  Experimental results on Clinical department.

    As shown in Figure 2, our proposed model achieves the highest F1-score of 68.03% on "Clinical department". Compared with the BiLSTM-CRF model, it improves the F1-score from 63.66 to 68.03%, an increase of 4.37%; compared with the BERT-Softmax model, it improves the F1-score from 65.67 to 68.03%, an increase of 2.36%. These results demonstrate that our strategy alleviates the class imbalance problem effectively. The main reason is that the "adversarial loss" adjusts the proportion of the loss on each category. More specifically, models usually learn more from a large number of samples and less from a small number, so they tend to label majority-category samples correctly while mislabeling minority-category samples. The "adversarial loss" reduces such incorrect decisions caused by the unbalanced data scale.

    To validate the effectiveness of contextual information, we conduct the following experiment. The contrasting experiment uses only the initial character embeddings from BERT as the representations of Chinese characters. Table 9 shows the detailed experimental results.

    Table 9.  Experimental results about effectiveness of contextual information.
    Model Precision(%) Recall(%) F1-score(%)
    Ours(-contextual representation) 82.34 84.07 83.20
    Ours 83.73 86.56 85.12


    Our proposed model achieves an F1-score of 85.12%, while the contrasting experiment achieves 83.20%. Our proposed model outperforms the contrasting experiment by 1.92%, which indicates that the contextual information from BERT is effective for the Chinese CED task. The main reason is that our proposed model can use contextual information to understand the meaning of a sentence within its specific semantic context; this helps the model acquire semantic understanding in a way that mirrors human reading.

    To evaluate the effectiveness of the transformer decoder, we conduct the following experiment. The contrasting experiment uses a BiLSTM as the decoder and the "adversarial loss" to compute the loss. Table 10 shows the experimental results.

    Table 10.  Experimental results about effectiveness of transformer decoder.
    Model Precision(%) Recall(%) F1-score(%)
    BERT-BiLSTM-Softmax(adversarial loss) 82.75 86.35 84.51
    Ours 83.73 86.56 85.12


    Our proposed model achieves the best precision of 83.73%, recall of 86.56% and F1-score of 85.12%. Compared with the BERT-BiLSTM-Softmax (adversarial loss) model, it improves the F1-score from 84.51 to 85.12%, an increase of 0.61%. This verifies that the transformer decoder has a better decoding capacity than the BiLSTM. The main reason is that the multi-head self-attention mechanism lets the transformer block attend differently to each position of the sentence, helping the model capture pivotal information and use it when making decisions.

    To confirm the effectiveness of the "adversarial loss" on "Clinical department", we conduct the following experiments. We choose the BiLSTM-CRF model and the BERT-Softmax model as basic models and train each with the standard cross-entropy loss and with the "adversarial loss", respectively. In addition, we replace the "adversarial loss" in our proposed model with the standard cross-entropy loss as an extra experiment. Table 11 shows the detailed experimental results.

    Table 11.  Experimental results about the effectiveness of the adversarial loss.
    Model Precision(%) Recall(%) F1-score(%)
    BiLSTM-CRF 71.80 57.18 63.66
    BiLSTM-CRF (adversarial loss) 73.14 59.28 65.48
    BERT-Softmax 81.16 55.15 65.67
    BERT-Softmax(adversarial loss) 73.28 61.74 67.02
    Ours (standard cross-entropy loss) 80.52 57.65 67.19
    Ours (fixed rate of loss allocation) 70.25 57.34 63.14
    Ours 74.72 62.44 68.03


    The BiLSTM-CRF model with the "adversarial loss" outperforms the standard BiLSTM-CRF model by 1.82% in F1-score, and the BERT-Softmax model with the "adversarial loss" outperforms the standard BERT-Softmax model by 1.35%. Our proposed model outperforms its corresponding contrasting experiment by 0.84% in F1-score. This proves that the "adversarial loss" alleviates the class imbalance problem effectively. Moreover, we explore the allocation strategy of the loss across categories: a contrasting experiment sets a fixed rate of loss allocation for each category, and our proposed model outperforms it by 4.89% in F1-score. The main reason is that the "adversarial loss" can allocate the proportion of the loss for each category flexibly.

    The class imbalance problem is prevalent in Chinese CED tasks, and categories with scarce instances may still be significant, so we should not ignore the model's performance on minority-category samples. In this work, we adopt the "adversarial loss" to address the class imbalance problem. Here, we take a sentence from the test set of the Chinese CED corpus as an example to illustrate the effectiveness of our strategy; Table 12 shows the detailed results. In the example, the baselines assign the "O" label to "神经外科" (neurosurgery department), whereas our proposed model assigns the "Cd" (Clinical department) label, which is correct. The baselines err because they adopt the standard cross-entropy loss, which biases the model toward majority-category samples and prevents it from acquiring crucial information from minority-category samples. Different from the baseline models, our proposed model introduces the punitive weight into the loss to balance the difference in data scale across categories, helping the model learn more crucial information from minority-category data. Moreover, the contextual information coming from BERT helps the model understand the meaning of the sentence within its specific context.

    Table 12.  The example of class imbalance problem.
    Sentence
    Golden O O O O O B-Cd M-Cd M-Cd E-Cd
    Baselines O O O O O O O O O
    Ours O O O O O B-Cd M-Cd M-Cd E-Cd


    In this paper, we propose a novel encoder-decoder structure based on transfer learning for the Chinese CED task, which integrates contextual representations into Chinese character embeddings to assist the model in semantic understanding. Besides, we introduce an "adversarial loss" to address the class imbalance problem. Experimental results on the test set of the Chinese CED corpus demonstrate that our proposed model outperforms state-of-the-art methods significantly and consistently; in particular, it achieves superior performance on minority-category samples. In the future, we will explore more loss allocation strategies and compare the experimental results of each.

    We would like to thank the anonymous reviewers for their valuable comments. The publication of this article was supported by the National Natural Science Foundation of China (No. 61762081, No. 61662067, No. 61662068) and the Key Research and Development Project of Gansu Province (No. 17YF1GA016).

    The authors declare that they have no competing interests.



© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)