Research article

Detection and localization of multi-scale and oriented objects using an enhanced feature refinement algorithm


  • Received: 06 February 2023 Revised: 09 May 2023 Accepted: 31 May 2023 Published: 19 July 2023
  • Object detection is a fundamental aspect of computer vision, with numerous generic object detectors proposed by various researchers. The proposed work presents a novel single-stage rotation detector that can detect oriented and multi-scale objects accurately in diverse scenarios. This detector addresses the challenges faced by current rotation detectors, such as the detection of arbitrary orientations, densely arranged objects, and the issue of loss discontinuity. First, the detector adopts a progressive regression form (a coarse-to-fine-grained approach) that uses both horizontal anchors (for speed and higher recall) and rotating anchors (for oriented objects) in cluttered backgrounds. Second, the proposed detector includes a feature refinement module that helps minimize the problems related to feature angulation and reduces the number of bounding boxes generated. Finally, to address the issue of loss discontinuity, the proposed detector utilizes a newly formulated adjustable loss function that can be extended to both single-stage and two-stage detectors. The proposed detector shows outstanding performance on benchmark datasets and significantly outperforms other state-of-the-art methods in terms of speed and accuracy.

    Citation: Deepika Roselind Johnson, Rhymend Uthariaraj Vaidhyanathan. Detection and localization of multi-scale and oriented objects using an enhanced feature refinement algorithm[J]. Mathematical Biosciences and Engineering, 2023, 20(8): 15219-15243. doi: 10.3934/mbe.2023681




    Transcription is not only the most important but also the most complex step in gene expression. This double characteristic makes gene transcription receive lasting and extensive attention. With the development of measurement technologies (e.g., single-cell and single-molecule technologies), more molecular details of transcription have been experimentally uncovered. Nevertheless, some intermediate processes have not been specified due to the complexity of gene transcription. Thus, traditional Markov models of gene transcription, such as the extensively studied ON-OFF models [1,2,3,4], can neither interpret experimental phenomena nor reveal the stochastic mechanisms of transcription. More biologically reasonable mathematical models need to be developed.

    It is well known that gene transcription involves RNA nuclear retention (RNR) and nuclear RNA export (NRE). However, these two important processes were often ignored in previous studies [1,2,3,4,5,6,7,8]. The main reasons are that 1) NRE was previously considered a transient process compared with the other processes occurring in transcription, although it was reported that the NRE phase lasts about 20 min on average and is gene-specific [9,10]; and 2) for RNR, less than 30% of poly(A+) RNA is nuclear-retained and undetectable in the cytoplasm [11]. Currently, more and more experimental evidence indicates that RNR plays a key role in biological processes: e.g., in S. cerevisiae cells, RNR may play a precautionary role during stress situations [12]; in plants, the RNR process of NLP7 can orchestrate the early response to nitrate [13]; and in the signaling pathway of antiviral innate immunity, the RNA helicase DDX46 acts as a negative regulator that induces nuclear retention of antiviral innate transcripts [14]. These experimental facts suggest that RNR and NRE cannot be neglected when one makes theoretical predictions of gene expression (including expression levels and noise).

    Several works have revealed the respective roles of NRE and RNR in modulating stochastic gene expression [15,16,17,18,19]. One report showed that transcriptional bursts attributed to promoter state switching could be substantially attenuated by the transport of mRNA from the nucleus to the cytoplasm [17]. Another report showed that slow pre-mRNA export from the nucleus could be an effective mechanism to attenuate protein variability arising from transcriptional bursting [15]. In addition, RNR was also identified to buffer transcriptional bursts in tissues and mammalian cells [16,18]. However, it has been experimentally confirmed that NRE and RNR can occur simultaneously in eukaryotes [20]. How these two dynamic processes cooperatively affect gene expression remains elusive and even unexplored.

    As a matter of fact, gene activation, NRE and RNR are multistep processes. In general, transcription begins only when the chromatin template accumulates over time until the promoter becomes active [21,22], where the accumulating process is a multistep biochemical process in which some intermediate steps have not been specified due to limitations of experimental technologies. A representative example is that inactive phases of the promoter involving the prolactin gene in a mammalian cell are differently distributed, showing strong memory [23]. Similarly, both the export of mRNAs generated in the nucleus to the cytoplasm through nuclear pores and the retention of mRNAs among nuclear speckles or paraspeckles are in general also multistep reaction processes [24]. All these multistep processes can create memories between reaction events, leading to non-Markov dynamics. Traditional Markov models are no longer suitable for modeling gene transcription with molecular memory, whereas non-Markov models can well describe the multistep processes involved in gene transcription [7].

    In this paper, we introduce a non-Markov model of stochastic gene transcription. It considers not only RNR and NRE but also the molecular memories created by the multistep NRE process, the RNR process, and the multistep activation process, thus including previous transcription models [1,2,3,4] as special cases. In order to solve this non-Markov model, we introduce effective transition rates, which explicitly decode the effect of molecular memory and by which we can transform the original non-Markov issue into an equivalent yet mathematically tractable Markov one. Based on this useful technique, we derive analytical results, which extend previous results [3,8,24,25] and provide insights into the role of molecular memory in affecting the nuclear and cytoplasmic mRNA means and noise. The overall modeling and analysis provide a paradigm for studying more complex stochastic transcription processes.

    Most previous studies [15,20,26] of gene transcription focused on the dynamics of NRE processes, where mature mRNAs were released to the cytoplasm with the help of the nuclear pore complex (NPC) (Figure 1) [16,27]. The number of NPCs, or the count of the assistant proteins that control the NPC, determines the speed of mRNA export. Measuring the export rate was often replaced by measuring the retention time in the nucleus, which however may vary with environmental changes [16,17,18,19]. Other previous studies of gene transcription [9,10,11,28,29] focused on the dynamics of transcription initiation and elongation, where the elongation time ($T$) was obtained from the length of a gene in bases ($L$) and the average elongation rate ($v$), i.e., $T=L/v$. These studies assumed that all mature mRNAs were first exported to the cytoplasm and then translated into proteins. However, biological experiments indicated that a considerable fraction of mature mRNAs stayed in the nucleus in a probabilistic manner and for a long period (Figure 1) [24].

    Figure 1.  Schematic diagram for a model of stochastic gene transcription. First, chromatin (consisting of nucleosomes) opens in a multistep manner, and then, DNA is transcribed into mRNAs also in a multistep manner. Some of these mRNAs are retained in the nucleus (forming so-called paraspeckles) in a multistep manner, and the others are exported into the cytoplasm through the nuclear pores, also in a multistep manner.

    Here, we consider two cases: one where NRE dominates over RNR and the other where RNR dominates over NRE. For both cases, the gene is assumed to have one "off" state (corresponding to the inactive form of the promoter) and one "on" state (corresponding to the active form), and the promoter is assumed to switch randomly between these two states. Only in the "on" state can the gene generate pre-mRNA. After an alternative splicing (AS) process or an alternative polyadenylation (APA) process, which occurs frequently at the 3' UTR, a portion of mature mRNAs (one type of transcripts) may be transported to the cytoplasm through the NPC, wherein they execute translation tasks. The remaining mature mRNAs (another type of transcripts) may be retained in the nucleus for a long time, possibly assembling in sub-cellular regions (wherein they form nuclear speckles or paraspeckles [30,31,32]) with the assistance of proteins, some of which have not been specified. When the intracellular environment changes, most of the mature mRNAs will be released to the cytoplasm in response to this change. In addition, most genes (especially in eukaryotic cells) are expressed in a bursty manner [1,2,3,4].

    As pointed out in the introduction, gene activation, NRE and RNR are all multistep reaction processes. In order to model these processes, we introduce a non-exponential waiting-time distribution for each intermediate reaction, as done in refs. [7,33]. Since non-exponential waiting times lead to non-Markovian dynamics, the existing Markov theory cannot be used directly.

    Assume that the burst size in gene transcription follows a distribution described by $\mathrm{prob}\{B=i\}=\alpha_i$, where each $\alpha_i$ is a nonnegative constant and $i=0,1,2,\ldots$. Let $M_1$, $M_2$ and $M_3$ represent pre-mRNA, mature mRNA (one type of transcripts) transported to the cytoplasm and mature mRNA (another type of transcripts) retained in the nucleus, respectively, and denote by $m_1$, $m_2$ and $m_3$ their molecular numbers, respectively. Thus, $\mathbf{m}=(m_1,m_2,m_3)^T$ represents the micro-state of the underlying system. Let $W_1(t;\mathbf{m})$, $W_2(t;\mathbf{m})$ and $W_3(t;\mathbf{m})$ be waiting-time distributions for the syntheses of pre-mRNAs, mature mRNA transported to the cytoplasm, and mature mRNA retained in the nucleus, respectively. Let $W_4(t;\mathbf{m})$ and $W_5(t;\mathbf{m})$ be waiting-time distributions for the degradations of $M_2$ and $M_3$, respectively. To that end, the gene-expression model to be studied is described by the following five biochemical reactions labelled by $R_i$ ($1\le i\le 5$):

    $R_1: \mathrm{DNA}\xrightarrow{W_1(t;\mathbf{m})}\mathrm{DNA}+B\times M_1$, $R_2: M_1\xrightarrow{W_2(t;\mathbf{m})}M_2$, $R_3: M_1\xrightarrow{W_3(t;\mathbf{m})}M_3$, $R_4: M_2\xrightarrow{W_4(t;\mathbf{m})}\varnothing$, $R_5: M_3\xrightarrow{W_5(t;\mathbf{m})}\varnothing$. (1)

    Let $\langle B\rangle$ represent the mean burst size. Note that if $\alpha_1=1$ and $\alpha_k=0$ for all $k\neq 1$, then $B\equiv 1$. In this case, the promoter is in the ON state all the time, and Eq (1) describes constitutive gene expression. The other cases correspond to bursty gene expression. This is because $B=0$ implies that the promoter is in the OFF state (meaning that pre-mRNAs are not generated), whereas $B>0$ implies that the promoter is in the ON state (meaning that pre-mRNAs are generated).

    For each reaction, there is a memory function [7,33]. Denote by $M_i(t;\mathbf{m})$ the memory function for reaction $R_i$ ($1\le i\le 5$). These memory functions can be expressed by the waiting-time distributions in Eq (1). In fact, if we let $\tilde{M}_i(s;\mathbf{m})$ be the Laplace transform of the memory function $M_i(t;\mathbf{m})$, then $\tilde{M}_i(s;\mathbf{m})$ can be expressed as $\tilde{M}_i(s;\mathbf{m})=s\tilde{\varphi}_i(s;\mathbf{m})\big/\big[1-\sum_{i=1}^{5}\tilde{\varphi}_i(s;\mathbf{m})\big]$, where $\tilde{\varphi}_i(s;\mathbf{m})$ is the Laplace transform of the function $\varphi_i(t;\mathbf{m})=W_i(t;\mathbf{m})\prod_{k\neq i}\big[1-\int_0^t W_k(t';\mathbf{m})dt'\big]$ ($1\le i\le 5$) [7]. Let $P(\mathbf{m};t)$ be the probability that the system is in state $\mathbf{m}$ at time $t$ and $\tilde{P}(\mathbf{m};s)$ be the Laplace transform of $P(\mathbf{m};t)$. With $\tilde{M}_i(s;\mathbf{m})$, we can show that the chemical master equation in the sense of the Laplace transform takes the form

    $s\tilde{P}(\mathbf{m};s)-P(\mathbf{m};0)=\left(\sum_{i=0}^{m_1}\alpha_i\mathbb{E}_1^{-i}-I\right)\left[\tilde{M}_1(s;\mathbf{m})\tilde{P}(\mathbf{m};s)\right]+\left(\mathbb{E}_1\mathbb{E}_2^{-1}-I\right)\left[\tilde{M}_2(s;\mathbf{m})\tilde{P}(\mathbf{m};s)\right]+\left(\mathbb{E}_1\mathbb{E}_3^{-1}-I\right)\left[\tilde{M}_3(s;\mathbf{m})\tilde{P}(\mathbf{m};s)\right]+\sum_{j=4}^{5}\left(\mathbb{E}_j-I\right)\left[\tilde{M}_j(s;\mathbf{m})\tilde{P}(\mathbf{m};s)\right]$, (2)

    where $\mathbb{E}$ is the step operator, $\mathbb{E}^{-1}$ is the inverse of $\mathbb{E}$, and $I$ is the unit operator.

    Interestingly, we find that the limit $\lim_{s\to 0}\tilde{M}_i(s;\mathbf{m})$ always exists, and if the limit function is denoted by $K_i(\mathbf{m})$, then $K_i(\mathbf{m})$ can be explicitly expressed by the given waiting-time distributions $W_k(t;\mathbf{m})$ ($1\le k\le 5$), that is,

    $K_i(\mathbf{m})=\dfrac{\int_0^{+\infty}W_i(t;\mathbf{m})\left[\prod_{j\neq i}\int_t^{+\infty}W_j(t';\mathbf{m})dt'\right]dt}{\int_0^{+\infty}\left[\prod_{j=1}^{5}\int_t^{+\infty}W_j(t';\mathbf{m})dt'\right]dt},\quad 1\le i\le 5$. (3)

    Note that $s\to 0$ corresponds to $t\to+\infty$ according to the definition of the Laplace transform. However, $t\to+\infty$ corresponds to the steady-state case, which is our interest. We point out that the function $K_i(\mathbf{m})$, which will be called the effective transition rate for reaction $R_i$ ($1\le i\le 5$), explicitly decodes the effect of molecular memory. More importantly, using these effective transition rates, we can construct a Markov reaction network with the same topology as the original non-Markov reaction network:

    $\mathrm{DNA}\xrightarrow{K_1(\mathbf{m})}\mathrm{DNA}+B\times M_1,\quad M_1\xrightarrow{K_2(\mathbf{m})}M_2,\quad M_1\xrightarrow{K_3(\mathbf{m})}M_3,\quad M_2\xrightarrow{K_4(\mathbf{m})}\varnothing,\quad M_3\xrightarrow{K_5(\mathbf{m})}\varnothing$. (4)

    Moreover, the two reaction networks have exactly the same chemical master equation at steady state:

    $\left(\sum_{i=0}^{m_1}\alpha_i\mathbb{E}_1^{-i}-I\right)\left[K_1(\mathbf{m})P(\mathbf{m})\right]+\left(\mathbb{E}_1\mathbb{E}_2^{-1}-I\right)\left[K_2(\mathbf{m})P(\mathbf{m})\right]+\left(\mathbb{E}_1\mathbb{E}_3^{-1}-I\right)\left[K_3(\mathbf{m})P(\mathbf{m})\right]+\sum_{j\in\{4,5\}}\left(\mathbb{E}_j-I\right)\left[K_j(\mathbf{m})P(\mathbf{m})\right]=0$, (5)

    implying that the two stationary behaviors are exactly the same (referring to Figure 2), where $P(\mathbf{m})$ is the stationary probability density function corresponding to the dynamic probability density function $P(\mathbf{m};t)$.

    Figure 2.  Schematic diagram for two reaction networks with the same topology and the same reactive species, where the W-type functions on the left-hand side represent reaction-event waiting-time distributions whereas the K-type functions on the right-hand side are effective transition rates (see the main text for details), D represents DNA, and the other symbols are explained in the text. The results in reference [7] imply that the stationary behaviors of the two reaction networks are exactly the same although reaction events in the two cases take place in different manners.

    In summary, by introducing an effective transition rate $K_i(\mathbf{m})$ for each reaction $R_i$, given by Eq (3), a mathematically difficult non-Markov issue is transformed into a mathematically tractable Markov one. This brings us convenience for theoretical analysis. In the following, we will focus on the analysis of Eq (5).

    Note that Gamma distributions can well model multistep processes [34,35]. This is because the convolution of several exponential distributions is an Erlang distribution (a special case of the Gamma distribution). Therefore, in order to model the effect of molecular memory on the mRNA expression, we assume that the waiting-time distributions for the gene activation, NRE and RNR processes are Gamma distributions: $W_1(t;\mathbf{m})=[\Gamma(L_0)]^{-1}t^{L_0-1}(k_0)^{L_0}e^{-k_0t}$, $W_2(t;\mathbf{m})=[\Gamma(L_c)]^{-1}t^{L_c-1}(m_1k_c)^{L_c}e^{-m_1k_ct}$ and $W_3(t;\mathbf{m})=[\Gamma(L_r)]^{-1}t^{L_r-1}(m_1k_r)^{L_r}e^{-m_1k_rt}$, and that the waiting-time distributions for the other processes are exponential ones, $W_4(t;\mathbf{m})=m_2d_ce^{-m_2d_ct}$ and $W_5(t;\mathbf{m})=m_3d_re^{-m_3d_rt}$. Here $\Gamma(\cdot)$ is the common Gamma function, $k_0$ is the mean transcription rate, $k_c$ and $k_r$ denote the mean synthesis rates for mRNAs exported to the cytoplasm and mRNAs retained in the nucleus, respectively, and $d_c$ and $d_r$ are the mean degradation rates of mRNA in the cytoplasm and mRNA in the nucleus, respectively. Throughout this paper, $L_0$, $L_c$ and $L_r$ are called memory indices since, e.g., $L_0=1$ corresponds to the memoryless case whereas $L_0\neq 1$ corresponds to the memory case.
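    As a cross-check of Eq (3), the following minimal sketch computes the effective transition rates by direct numerical integration, assuming the Gamma/exponential waiting-time distributions above; all parameter values and the micro-state $\mathbf{m}=(3,2,1)$ are illustrative choices of ours, not values from the paper. For $L_c=L_r=1$, the computed $K_1$ should agree with the Case 1 closed form given below.

```python
# Numerical evaluation of Eq (3): K_i = int W_i * prod_{j!=i} S_j dt / int prod_j S_j dt,
# where S_j(t) is the survival function of W_j. Parameter values are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma, expon

def effective_rates(m, k0=1.0, kc=2.0, kr=0.5, dc=1.0, dr=1.0, L0=2, Lc=1, Lr=1):
    m1, m2, m3 = m
    eps = 1e-12  # guard against zero rates when a species count is zero
    dists = [
        gamma(a=L0, scale=1.0 / k0),                  # W1: multistep gene activation
        gamma(a=Lc, scale=1.0 / max(m1 * kc, eps)),   # W2: export to cytoplasm (NRE)
        gamma(a=Lr, scale=1.0 / max(m1 * kr, eps)),   # W3: nuclear retention (RNR)
        expon(scale=1.0 / max(m2 * dc, eps)),         # W4: cytoplasmic mRNA decay
        expon(scale=1.0 / max(m3 * dr, eps)),         # W5: nuclear mRNA decay
    ]
    denom = quad(lambda t: np.prod([d.sf(t) for d in dists]), 0, np.inf)[0]
    return [quad(lambda t: di.pdf(t) *
                 np.prod([d.sf(t) for j, d in enumerate(dists) if j != i]),
                 0, np.inf)[0] / denom
            for i, di in enumerate(dists)]

print(effective_rates(m=(3, 2, 1)))  # K_1..K_5 at micro-state m = (3, 2, 1)
```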

    Let $k_1$ be the total synthesis rate of mRNAs in the cytoplasm, which is composed of two parts: one part is the rate at which pre-mRNA generates a transcript through the AS process or the APA process, and the other is the rate at which the mature mRNA is finally exported to the cytoplasm with the help of the NPC. The rate of generating mature mRNAs, determined by the gene length, is generally fixed. In contrast, the rate of exporting mRNAs to the cytoplasm may change over a large range, depending on cellular environments. This is because some types of mRNAs are exported in a fast manner due to RNA binding proteins or linked splicing factors, whereas other types of mRNAs are exported in a slow manner, and the corresponding genes are mostly intron-containing ones [19]. Thus, we can use $k_1$ to characterize the export rate indirectly. Similarly, if we let $k_2$ be the synthesis rate of mRNAs in the nucleus, then it also includes two parts: one part is the rate of pre-mRNA produced through an AS or APA process, and the other is the rate at which transcripts are transported to some sub-cellular regions (e.g., nuclear speckles or paraspeckles) with the assistance of some proteins. Here, we assume that $k_2$ changes little, so that the involved processes are simplified. Owing to AS or APA processes, the lengths of the two kinds of mature mRNAs can be significantly different. Usually, the rate $k_1$ is faster than the rate $k_2$. Since the retention and export of transcripts are random, we introduce another parameter $p_r$, called the remaining probability throughout this paper, to characterize this randomness. Then, the practical export rate and the practical retention rate should be $k_c=k_1(1-p_r)$ and $k_r=k_2p_r$, respectively, where $p_r\in(0,1)$.

    Based on the experimental data from Halpern's group [26], which measured the whole genome-wide catalogue of nuclear and cytoplasmic mRNA from MIN6 cells, we know that most genes (~70%) have more cytoplasmic transcripts than nuclear transcripts. Thus, we can get an approximate formula for the remaining probability $p_r$: $p_r=N_n/(N_n+N_c)$, where $N_n$ is the number of transcripts in the nucleus and $N_c$ the number of transcripts in the cytoplasm. By considering the gene INS1, for which the value of the ratio $N_c/N_n$ is maximal ($13.2\pm4.6$), we can know that the value of $p_r$ is about 5%. In that paper, the authors also mentioned that about 30% of the genes in MIN6 cells have more transcripts in the nucleus than in the cytoplasm. By considering the gene ChREBP, for which the cytoplasm/nucleus ratio is about 0.05, we can know that the value of $p_r$ is about 95%. Therefore, the range of the remaining probability ($p_r$) in the whole genome is about 5~95%. It is thus reasonable that the 50% value of $p_r$ is set as a threshold. For convenience, we categorize models of eukaryotic gene expression into two classes: one class where the NRE process is dominant and the other class where the RNR process is dominant. For the former, $p_r<0.5$ holds, implying that most mRNAs are exported to the cytoplasm through nuclear pores, whereas for the latter, $p_r>0.5$ holds, implying that most mRNAs are retained in the nucleus.
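    The quoted percentages follow from simple arithmetic on the cytoplasm/nucleus ratios; a tiny sanity check (treating the quoted ratios as illustrative point values):

```python
# p_r = N_n / (N_n + N_c), rewritten in terms of the ratio N_c / N_n.
def remaining_probability(ratio_c_over_n):
    return 1.0 / (1.0 + ratio_c_over_n)

print(remaining_probability(13.2))  # INS1-like gene, mean ratio: ~0.07
print(remaining_probability(17.8))  # upper end of 13.2 +/- 4.6: ~0.05
print(remaining_probability(0.05))  # ChREBP-like gene: ~0.95
```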

    In the following analysis, the memory indices $L_i$ ($i=0,c,r$) and the remaining probability $p_r$ will be taken as key parameters, while the other parameter values are kept fixed. Without loss of generality, we assume that the two degradation rates for mRNAs in the nucleus and cytoplasm are equal and denote by $d$ the common degradation rate, i.e., $d_c=d_r=d$.

    First, if we let $x_i$ represent the concentration of the reactive species $M_i$, i.e., $x_i=\lim_{\Omega\to\infty}m_i/\Omega$ ($i=1,2,3$), where $\Omega$ represents the volume of the system, then the rate equations corresponding to the Markov reaction network constructed above can be expressed as

    $\dfrac{d\mathbf{x}}{dt}=\mathbf{S}\mathbf{K}(\mathbf{x})$, (6)

    where $\mathbf{x}=(x_1,x_2,x_3)^T$ is a column vector, $\mathbf{S}=(S_{ij})_{3\times 5}=\begin{pmatrix}\langle B\rangle & -1 & -1 & 0 & 0\\ 0 & 1 & 0 & -1 & 0\\ 0 & 0 & 1 & 0 & -1\end{pmatrix}$ is the stoichiometric matrix, and $\mathbf{K}(\mathbf{x})=(K_1(\mathbf{x}),K_2(\mathbf{x}),K_3(\mathbf{x}),K_4(\mathbf{x}),K_5(\mathbf{x}))^T$ is a column vector of effective transition rates. The steady states or equilibria of the system described by Eq (6), denoted by $\mathbf{x}^S$, are determined by solving the algebraic equation group $\mathbf{S}\mathbf{K}(\mathbf{x}^S)=\mathbf{0}$.

    Second, denoting by $\langle X\rangle$ the mean of a random variable $X$ and taking the approximation $\langle M_i\rangle\approx x_i^S$ ($i=1,2,3$), we can derive the following matrix equation (see Appendix A):

    $\mathbf{A}_S\boldsymbol{\Sigma}_S+\boldsymbol{\Sigma}_S\mathbf{A}_S^T+\Omega\mathbf{D}_S=\mathbf{0}$, (7)

    where the two square matrices $\mathbf{A}_S=(A_{ij})_{3\times 3}$ and $\mathbf{D}_S=(D_{ij})_{3\times 3}$, evaluated at the steady state, are known, and the covariance matrix $\boldsymbol{\Sigma}_S=(\sigma_{ij})$ with $\sigma_{ij}=\langle(M_i-\langle M_i\rangle)(M_j-\langle M_j\rangle)\rangle$ is unknown. Note that the diagonal elements $\sigma_{22}$ and $\sigma_{33}$ represent the variances of the random variables $M_2$ (corresponding to mRNA in the cytoplasm) and $M_3$ (corresponding to mRNA in the nucleus), which are of interest here. In addition, we can also derive formulae similar to Eq (3) in the case of continuous variables.
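    Equation (7) is a standard continuous Lyapunov equation, so off-the-shelf solvers apply. The sketch below uses SciPy, with placeholder 3×3 matrices standing in for $\mathbf{A}_S$ and $\Omega\mathbf{D}_S$ (they are not values from the model).

```python
# Solving A_S @ Sigma + Sigma @ A_S.T + D_S = 0 for the covariance matrix Sigma.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A_S = np.array([[-2.5, 0.1, 0.1],    # placeholder Jacobian at steady state
                [ 2.0, -1.0, 0.0],
                [ 0.5,  0.0, -1.0]])
D_S = np.array([[ 6.0, -2.0, -0.5],  # placeholder diffusion matrix (Omega absorbed)
                [-2.0,  4.0,  0.0],
                [-0.5,  0.0,  1.0]])

Sigma_S = solve_continuous_lyapunov(A_S, -D_S)
print(np.diag(Sigma_S))  # sigma_11, sigma_22, sigma_33: variances of M1, M2, M3
```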

    In order to show the explicit effect of molecular memory on the mRNA expression in different biological processes, we consider two special cases: 1) the gene-activation process has memory while the other processes are memoryless; 2) the nuclear RNA export process has memory while the other processes are memoryless. In the other cases, there are in general no analytical results.

    Case 1: $L_0\neq 1$, $L_c=1$ and $L_r=1$. In this case, the five effective transition rates become $K_1(\mathbf{x})=\dfrac{(k_cx_1+k_rx_1+dx_2+dx_3)(k_0)^{L_0}}{(k_cx_1+k_rx_1+dx_2+dx_3+k_0)^{L_0}-(k_0)^{L_0}}$, $K_2(\mathbf{x})=k_cx_1$, $K_3(\mathbf{x})=k_rx_1$, $K_4(\mathbf{x})=dx_2$, and $K_5(\mathbf{x})=dx_3$. Note that in the case of continuous variables, the corresponding effective transition rates $K_i(\mathbf{x})$ ($1\le i\le 5$) have the same expressions except for the variable notations.

    We can show that the means of mRNAs in the cytoplasm and in the nucleus are given, respectively, by (see Appendix B)

    $\langle M_2\rangle=\tilde{k}_0\tilde{k}_c,\quad \langle M_3\rangle=\tilde{k}_0\tilde{k}_r$, (8)

    where $\tilde{k}_0=\frac{1}{2}\left[(1+2\langle B\rangle)^{1/L_0}-1\right]\frac{k_0}{k_c+k_r}>0$ with $\langle B\rangle$ being the expectation of the burst size $B$ (a random variable), $\tilde{k}_c=k_c/d$ and $\tilde{k}_r=k_r/d$. Apparently, $\langle M_i\rangle$ ($i=2,3$) is a monotonically decreasing function of the memory index $L_0$, implying that molecular memory always reduces the mRNA expression levels in the nucleus and cytoplasm. In addition, by noting $k_c=k_1(1-p_r)$ and $k_r=k_2p_r$, we can see that if $k_2/k_1$ is fixed, then $\langle M_3\rangle$ (i.e., the mean of mRNAs in the nucleus) is a monotonically increasing function of the remaining probability $p_r$, whereas $\langle M_2\rangle$ (i.e., the mean of mRNAs in the cytoplasm) is a monotonically decreasing function of $p_r$. In addition, $\langle M_3\rangle$ is a monotonically decreasing function of $\rho=k_c/k_r$, whereas $\langle M_2\rangle$ is a monotonically increasing function of $\rho$. These results are in agreement with intuition.
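    A quick numeric illustration of the claimed monotonicity in $L_0$ (parameter values are ours and purely illustrative):

```python
# Mean cytoplasmic mRNA <M2> = k0_tilde * kc_tilde from Eq (8), as L0 grows.
k0, kc, kr, d, B = 1.0, 2.0, 0.5, 1.0, 2.0  # <B> = 2

def mean_M2(L0):
    k0_tilde = 0.5 * ((1 + 2 * B) ** (1.0 / L0) - 1) * k0 / (kc + kr)
    return k0_tilde * kc / d

for L0 in (1, 2, 4, 8):
    print(L0, round(mean_M2(L0), 4))  # decreases monotonically with L0
```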

    Interestingly, we find that $\sigma_{22}$ and $\sigma_{33}$, the variances for mRNAs in the cytoplasm and in the nucleus, respectively, have the following relationship (see Appendix B):

    $\sigma_{22}=\left(\dfrac{\tilde{k}_r}{\tilde{k}_c}\right)^2\sigma_{33}+\tilde{k}_0\dfrac{\tilde{k}_c(\tilde{k}_c+\tilde{k}_r)}{\tilde{k}_r}$, (9)

    indicating that the mRNA variance in the cytoplasm, σ22, is larger than that in the nucleus, σ33.

    From the viewpoint of experiments, the cytoplasmic mRNAs are easy to measure whereas the nuclear mRNAs are difficult to measure. Therefore, we are interested in the cytoplasmic mRNA expression (including the level and noise). By complex calculations, we can further show that the cytoplasmic mRNA variance is given by

    $\sigma_{22}=\dfrac{\tilde{k}_0\tilde{k}_c\tilde{k}_r^3}{\tilde{k}_c^3+\tilde{k}_r^3}\left\{2+\dfrac{\tilde{k}_c}{\tilde{k}_r}+\dfrac{1}{2}\dfrac{2\tilde{b}+\left[\gamma-(1+\tilde{b})^2\right](\tilde{k}_c+\tilde{k}_r)}{1+(1+\tilde{b})(1+2\tilde{b})(\tilde{k}_c+\tilde{k}_r)}\right\}$, (10)

    where $\tilde{b}=\frac{1}{4\langle B\rangle}\left[L_0+2(L_0-1)\langle B\rangle-L_0(1+2\langle B\rangle)^{(L_0-1)/L_0}\right]>0$ and $\gamma=\frac{\langle B^2\rangle+\langle B\rangle}{\langle B\rangle}$, with $\langle B^2\rangle$ being the second-order raw moment of the burst size $B$. Furthermore, if we define the noise intensity as the ratio of the variance over the squared mean, then the noise intensity for the cytoplasmic mRNA, denoted by $\eta_c$, can be analytically expressed as

    $\eta_c=\dfrac{1}{\tilde{k}_0\tilde{k}_c}\dfrac{\tilde{k}_r^3}{\tilde{k}_c^3+\tilde{k}_r^3}\left\{2+\dfrac{\tilde{k}_c}{\tilde{k}_r}+\dfrac{1}{2}\dfrac{2\tilde{b}+\left[\gamma-(1+\tilde{b})^2\right](\tilde{k}_c+\tilde{k}_r)}{1+(1+\tilde{b})(1+2\tilde{b})(\tilde{k}_c+\tilde{k}_r)}\right\}$. (11)

    Note that if $L_0=1$, which corresponds to the Markov case, then $\tilde{k}_0=\frac{k_0\langle B\rangle}{k_c+k_r}$ and $\tilde{b}=0$. Thus, the cytoplasmic mRNA noise in the Markov case, denoted by $\eta_c|_{L_0=1}$, is given by

    $\eta_c|_{L_0=1}=\dfrac{1}{\tilde{k}_0\tilde{k}_c}\dfrac{\tilde{k}_r^3}{\tilde{k}_c^3+\tilde{k}_r^3}\left(\dfrac{\tilde{k}_c+2\tilde{k}_r}{\tilde{k}_r}+\dfrac{\gamma-1}{2}\dfrac{\tilde{k}_c+\tilde{k}_r}{1+\tilde{k}_c+\tilde{k}_r}\right)$. (12)

    Therefore, the ratio of the noise in the non-Markov ($L_0\neq 1$) case over that in the Markov ($L_0=1$) case is

    $\dfrac{\eta_c}{\eta_c|_{L_0=1}}=\left\{\dfrac{\tilde{k}_c+2\tilde{k}_r}{\tilde{k}_r}+\dfrac{1}{2}\dfrac{2\tilde{b}+\left[\gamma-(1+\tilde{b})^2\right](\tilde{k}_c+\tilde{k}_r)}{1+(1+\tilde{b})(1+2\tilde{b})(\tilde{k}_c+\tilde{k}_r)}\right\}\left(\dfrac{\tilde{k}_c+2\tilde{k}_r}{\tilde{k}_r}+\dfrac{\gamma-1}{2}\dfrac{\tilde{k}_c+\tilde{k}_r}{1+\tilde{k}_c+\tilde{k}_r}\right)^{-1}$, (13)

    which may be greater than unity but may also be less than unity, depending on the size of the remaining probability. However, if $L_0$ is large enough (e.g., $L_0>2$), then the ratio in Eq (13) is always larger than unity, implying that molecular memory amplifies the cytoplasmic mRNA noise.
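    To see both behaviors, one can evaluate the ratio in Eq (13) directly. The sketch below assumes a geometric burst-size distribution (for which $\gamma=2\langle B\rangle$) and illustrative reduced rates $\tilde{k}_c$, $\tilde{k}_r$ of our own choosing:

```python
# Ratio of non-Markov to Markov cytoplasmic mRNA noise, Eq (13).
B = 2.0
gamma = 2 * B              # geometric burst sizes: gamma = 2<B>
kc_t, kr_t = 2.0, 0.5      # \tilde{k}_c and \tilde{k}_r (rates divided by d)
s = kc_t + kr_t

def b_tilde(L0):
    return (L0 + 2 * (L0 - 1) * B - L0 * (1 + 2 * B) ** ((L0 - 1) / L0)) / (4 * B)

def braces(bt):  # the curly-braced factor appearing in Eqs (11)-(13)
    return (kc_t + 2 * kr_t) / kr_t + \
           0.5 * (2 * bt + (gamma - (1 + bt) ** 2) * s) / (1 + (1 + bt) * (1 + 2 * bt) * s)

markov = braces(0.0)       # L0 = 1 gives b_tilde = 0
for L0 in (1, 2, 4, 8):
    print(L0, braces(b_tilde(L0)) / markov)  # ratio is exactly 1 at L0 = 1
```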

    Case 2: $L_0=1$, $L_c\neq 1$ and $L_r=1$. In this case, the five effective transition rates reduce to $K_1(\mathbf{x})=k_0$, $K_2(\mathbf{x})=\dfrac{(k_0+k_rx_1+dx_2+dx_3)(k_cx_1)^{L_c}}{(k_0+k_cx_1+k_rx_1+dx_2+dx_3)^{L_c}-(k_cx_1)^{L_c}}$, $K_3(\mathbf{x})=k_rx_1$, $K_4(\mathbf{x})=dx_2$, and $K_5(\mathbf{x})=dx_3$. It seems to us that there are no analytical results as in Case 1. However, if $p_r=0$ (i.e., if we do not consider nuclear retention), then we can show that the steady state is given by $x_1=\frac{k_0}{k_1}\omega$, $x_2=\frac{k_0\langle B\rangle}{d}$, $x_3=0$, where $\omega=\dfrac{(1+\langle B\rangle)\langle B\rangle^{1/L_c}}{(1+2\langle B\rangle)^{1/L_c}-\langle B\rangle^{1/L_c}}$ is a factor depending on both the transcriptional burst and molecular memory. Moreover, the mRNA noise in the cytoplasm is given by (see Appendix C for the derivation)

    $\eta_c=\dfrac{d(1+\langle B\rangle)}{2k_0\langle B\rangle}\dfrac{2d\omega(1+\omega+\langle B\rangle)+\gamma L_ck_1\langle B\rangle(1+2\langle B\rangle)}{d\omega(1+\omega+\langle B\rangle)+L_c\langle B\rangle(1+2\langle B\rangle)\left[d\omega+k_1(1+\langle B\rangle)\right]}$. (15)

    In order to see the contribution of molecular memory to the cytoplasmic mRNA noise, we calculate the ratio of the noise in the non-Markov ($L_c\neq 1$) case over that in the Markov ($L_c=1$) case:

    $\dfrac{\eta_c}{\eta_c|_{L_c=1}}=\dfrac{d+k_1}{2d+\gamma k_1}\dfrac{(1+\langle B\rangle)\left[2d\omega(1+\omega+\langle B\rangle)+\gamma L_ck_1\langle B\rangle(1+2\langle B\rangle)\right]}{d\omega(1+\omega+\langle B\rangle)+L_c\langle B\rangle(1+2\langle B\rangle)\left[d\omega+k_1(1+\langle B\rangle)\right]}$, (16)

    which is in general larger than unity for $L_c>1$ (corresponding to strong memory), indicating that molecular memory in general enlarges the cytoplasmic mRNA noise.
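    Analogously, the Case 2 ratio in Eq (16) can be evaluated numerically; the check below (with our illustrative parameters and $\gamma=2\langle B\rangle$ from a geometric burst-size distribution) returns exactly 1 at $L_c=1$, as it should:

```python
# Ratio of non-Markov to Markov cytoplasmic mRNA noise in Case 2, Eq (16).
B, k0, k1, d = 2.0, 1.0, 4.0, 1.0
gamma = 2 * B  # geometric burst-size distribution

def omega(Lc):
    return (1 + B) * B ** (1 / Lc) / ((1 + 2 * B) ** (1 / Lc) - B ** (1 / Lc))

def ratio(Lc):
    w = omega(Lc)
    num = (1 + B) * (2 * d * w * (1 + w + B) + gamma * Lc * k1 * B * (1 + 2 * B))
    den = d * w * (1 + w + B) + Lc * B * (1 + 2 * B) * (d * w + k1 * (1 + B))
    return (d + k1) / (2 * d + gamma * k1) * num / den

for Lc in (1, 2, 4, 8):
    print(Lc, ratio(Lc))
```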

    Here we numerically investigate the effect of molecular memory ($L_c$) from nuclear RNA export on the cytoplasmic mRNA ($M_2$) with the other model parameter values fixed. Numerical results are demonstrated in Figure 3. From Figure 3(a), we observe that the mean level of the cytoplasmic mRNA is a monotonically decreasing function of $L_c$, independent of the choice of the remaining probability ($p_r$) (even though we only show two values of $p_r$). This is in agreement with our intuition, since an additional reaction step for mRNA synthesis inevitably leads to fewer mRNA molecules. On the other hand, we observe from Figure 3(b) that molecular memory reduces the cytoplasmic mRNA noise ($\eta_c$) for smaller values of $L_c$ but enlarges $\eta_c$ for larger values of $L_c$, implying that there is an optimal $L_c$ such that $\eta_c$ reaches a minimum. We emphasize that the dependences shown in Figure 3(a), (b) are qualitative, since they are independent of the choice of the remaining probability.

    Figure 3.  Influence of molecular memory (Lc) from multistep RNA export on the cytoplasmic mRNA (M2), where solid lines represent theoretical results obtained by linear noise approximation (Appendix A), and empty circles represent numerical results obtained by a Gillespie algorithm [36]. Parameter values are set as <B>=2, k1=4, k2=0.8, k0=1, dc=dr=1. (a) Impact of molecular memory from the RNA export on the mean cytoplasmic mRNA (M2) for two values of remaining probability (pr), where the blue solid line corresponds to pr=0.2001 and the orange solid line to pr=0.3001. (b) Effect of molecular memory from the RNA export on the cytoplasmic mRNA noise (ηc) for two values of pr.

    Importantly, Figure 3 indicates that the results obtained by the linear noise approximation (solid lines) agree well with the results obtained by the Gillespie algorithm [36]. Therefore, the linear noise approximation can be used for fast evaluation of the expression noise, and in the following, we will focus on results obtained by the linear noise approximation. In addition, we point out that most results obtained here and thereafter are qualitative, since they are independent of the choice of parameter values. However, to demonstrate interesting phenomena clearly, we will choose special values for some model parameters.
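    For readers who want to reproduce such comparisons, the following is a minimal Gillespie-type simulation of the effective Markov network (4), assuming the Case 1 closed-form rates and a geometric burst-size distribution; it is a sketch with our own illustrative parameters, not the exact setup behind Figure 3.

```python
# Stochastic simulation of network (4) with the Case 1 effective rates.
import numpy as np

rng = np.random.default_rng(0)
k0, kc, kr, d, L0 = 1.0, 2.0, 0.5, 1.0, 2

def K1(m1, m2, m3):
    R = (kc + kr) * m1 + d * (m2 + m3)
    if R == 0.0:
        return k0 / L0  # limit of the Case 1 formula as R -> 0
    return R * k0 ** L0 / ((R + k0) ** L0 - k0 ** L0)

m1 = m2 = m3 = 0
t, T, s1, s2 = 0.0, 5000.0, 0.0, 0.0
while t < T:
    rates = np.array([K1(m1, m2, m3), kc * m1, kr * m1, d * m2, d * m3])
    dt = rng.exponential(1.0 / rates.sum())
    s1 += m2 * dt; s2 += m2 ** 2 * dt      # time-weighted moments of M2
    t += dt
    r = rng.choice(5, p=rates / rates.sum())
    if r == 0:   m1 += rng.geometric(0.5)  # transcription burst, <B> = 2
    elif r == 1: m1 -= 1; m2 += 1          # export to cytoplasm (NRE)
    elif r == 2: m1 -= 1; m3 += 1          # nuclear retention (RNR)
    elif r == 3: m2 -= 1                   # cytoplasmic mRNA decay
    else:        m3 -= 1                   # nuclear mRNA decay

mean = s1 / t
print(mean, (s2 / t - mean ** 2) / mean ** 2)  # <M2> and noise eta_c
```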

    Here we focus on numerically investigating the joint effects of molecular memory ($L_c$) and remaining probability ($p_r$) on the cytoplasmic mRNA ($M_2$). Figure 4(a) demonstrates the effects of $p_r$ on the $M_2$ noise for three values of $L_c$. We observe that with the increase of the remaining probability, the cytoplasmic mRNA noise ($\eta_c$) first decreases and then increases, implying that there is a critical $p_r$ such that $\eta_c$ reaches a minimum (referring to the empty circles in Figure 4(a)), i.e., the remaining probability can minimize the cytoplasmic mRNA noise. Moreover, the position of this minimum is independent of the value of the memory index $L_c$. In addition, we find that the minimal $\eta_c$ first increases and then decreases with increasing $L_c$ (the inset of Figure 4(a)). In other words, the cytoplasmic mRNA noise can reach an optimal value as the remaining probability decreases and the memory index increases. Figure 4(b) shows the dependences of $\eta_c$ on $L_c$ for three different values of the remaining probability. We find that molecular memory can also make the cytoplasmic mRNA noise reach a minimum (referring to the empty circles in Figure 4(b)), and this minimal noise is a monotonically increasing function of $p_r$.

    Figure 4.  Influence of the remaining probability and molecular memory on mature mRNA transported to the cytoplasm (M2). Solid lines represent theoretical results obtained by our linear noise approximation (Appendix A). Empty circles represent the minimum of the noise of the cytoplasmic mRNA. (a) Parameter values are set as <B>=40, k1=5, k2=0.8, k0=2.5, dc=dr=1. (b) Parameter values are set as <B>=2, k1=2, k2=0.8, k0=1, dc=dr=1.

    Here we focus on numerically analyzing the joint effects of the memory index $L_0$ and the remaining probability $p_r$ on the cytoplasmic mRNA ($M_2$). Figure 5(a) demonstrates the effects of $p_r$ on the $M_2$ noise for two representative values of $L_0$ (note: $L_0=1$ corresponds to the memoryless case whereas $L_0=2$ corresponds to the memory case). We observe that with the increase of the remaining probability, the cytoplasmic mRNA noise ($\eta_c$) first decreases and then increases, implying that there is a critical $p_r$ such that $\eta_c$ reaches a minimum (referring to the empty circles in Figure 5(a)), i.e., the remaining probability can minimize the cytoplasmic mRNA noise. Moreover, this minimum (referring to the empty circles) is a monotonically increasing function of the memory index $L_0$.

    Figure 5.  Influence of the remaining probability for nuclear RNA retention and molecular memory from multistep gene activation on the cytoplasmic mRNA (M2), where lines represent the results obtained by the linear noise approximation (Appendix A). Empty circles represent the minimum of the noise of the cytoplasmic mRNA. (a) The dependence of the cytoplasmic mRNA noise ηc on the remaining probability pr for two values of the memory index L0, where the inset is an enlarged diagram showing the dependence of the minimal ηc on L0. Parameter values are set as <B>=20, k1=10, k2=1, k0=2.5, dc=dr=1. (b) The dependence of the cytoplasmic mRNA noise ηc on the memory index L0 for three values of the remaining probability pr. Parameter values are set as <B>=2, k1=4, k2=1, k0=2.5, dc=dr=1.

    Figure 5(b) demonstrates that the cytoplasmic mRNA noise ($\eta_c$) is always a monotonically increasing function of the memory index $L_0$, independent of the remaining probability. In addition, we observe that $\eta_c$ is a monotonically increasing function of the remaining probability (this can be seen by comparing the three lines).

    Here we consider the case where RNR is a multistep process, i.e., $L_r\neq 1$. Numerical results are demonstrated in Figure 6. We observe from Figure 6(a) that, except for the case of $L_r=1$ (which corresponds to the Markov process and for which the cytoplasmic mRNA noise ($\eta_c$) is a monotonically increasing function of the remaining probability ($p_r$)), the dependences of $\eta_c$ on $p_r$ are not monotonic in the cases of $L_r\neq 1$ (corresponding to non-Markov processes); rather, there is a threshold of $p_r$ such that $\eta_c$ reaches a minimum (referring to the empty circles), similarly to the case of Figure 5(a). Moreover, this minimal noise is a monotonically decreasing function of the memory index $L_r$ (referring to the inset of Figure 6(a)), i.e., the monotonicity is opposite to that in the case of Figure 5(a).

    Figure 6.  Influence of the remaining probability and molecular memory on mature mRNA transported to the cytoplasm (M2). Solid lines represent theoretical results obtained by the linear noise approximation (Appendix A). Empty circles represent the minimum of the noise of the cytoplasmic mRNA. (a) Parameter values are set as <B>=2, k1=2, k2=0.8, k0=2.5, dc=dr=1. (b) Parameter values are set as <B>=2, k1=2, k2=0.8, k0=1, dc=dr=1.

    Figure 6(b) shows how the cytoplasmic mRNA noise (ηc) depends on memory index Lr for two different values of remaining probability. Interestingly, we observe that there is an optimal value of Lr such that the cytoplasmic mRNA noise reaches the minimum. Moreover, the minimal ηc is a monotonically decreasing function of remaining probability (pr), referring to the inset in the bottom right-hand corner.

    Gene transcription in eukaryotes involves many molecular processes, some of which are well known while others are little known or even unknown [37,38]. In this paper, we have introduced a non-Markov model of stochastic transcription, which simultaneously considers RNA nuclear retention and nuclear RNA export processes and in which we have used non-exponential waiting-time distributions (e.g., Gamma distributions) to model unknown or unspecified molecular processes involved in, e.g., the synthesis of pre-mRNA, the export of mRNAs generated in the nucleus to the cytoplasm, and the retention of mRNA in the nucleus. Since non-exponential waiting times can lead to non-Markov kinetics, we have introduced effective transition rates for the reactions underlying transcription to transform a mathematically difficult issue into a mathematically tractable one. As a result, we have derived analytical expressions for the mRNA means and noise in the nucleus and cytoplasm, which reveal the importance of molecular memory in controlling or fine-tuning the expression of the two kinds of mRNA. Our modeling and analysis provide a heuristic framework for studying more complex gene transcription processes.

    Our model considers the main events occurring in gene transcription, such as bursty expression (the burst size follows a general distribution), alternative splicing (by which two kinds of transcripts are generated), RNR (a part of the RNA molecules is kept in the nucleus) and NRE (another part of the RNA molecules is exported to the cytoplasm). Some popular experimental technologies, such as single-cell sequencing data [39], single-molecule fluorescence in-situ hybridization (FISH) [40] and electron micrographs (EM) of fixed cells [41], have indicated that RNR and NRE are two complex biochemical processes, each involving regulation by a large number of proteins or complexes [42]. In particular, the mRNAs exported to the cytoplasm involve the structure of the nuclear pore complex (NPC) [43]. A number of challenging questions still remain unsolved, e.g., how do RNR and NRE cooperatively regulate the expression of nuclear and cytoplasmic mRNAs? Why are these two dynamical processes necessary for the whole gene-expression process when cells survive in complex environments? And what advantages do they have in contrast to a single NRE process?

    Despite its simplicity, our model can not only reproduce results for pre-mRNA (nascent mRNA) means at steady state from previous studies but also give results in agreement with experimental data on the mRNA Fano factors (defined as the ratio of the variance over the mean) of some genes. However, we point out that some results on Fano factors obtained using our model are not always in agreement with the experimental data; e.g., for five genes, RBP3, TAF5, TAF6, TAF12 and KAP104, the results obtained by our model seem not to agree with the experimental data, whereas the results obtained by a previous theoretical model [44] seem better (data not shown). In addition, for the PRB8 gene, the results on the Fano factor obtained by both our model and the previous model agree poorly with the experimental data (data not shown). This indicates that constructing a theoretical model for the whole transcription process still needs more work.

    In spite of the differences, our results are wholly in agreement with some experimental data or observations. First, the qualitative result that RNR always reduces the nuclear pre-mRNA noise and always amplifies the cytoplasmic mRNA noise is in agreement with some experimental observations [28,42,45] and also with intuition, since retention naturally increases the mean number of nuclear pre-mRNAs but decreases the mean number of cytoplasmic mRNAs. Second, we compare our theoretical predictions with experimental results [28,45]. Specifically, we use previously published experimental data for two yeast genes, RBP2 and MDN1 [28,45], to calculate the cytoplasmic mRNA Fano factors. Parameter $k_1$ is set as $k_1\approx 0.29\pm 0.013/\mathrm{min}$, which is based on experimental data [28], and the degradation rates of the cytoplasmic mRNAs for RBP2 and MDN1 are set according to $d_c=\ln 2/t_{1/2}$, where $t_{1/2}$ is an experimental mRNA half-life. Then, we find that the results on the Fano factors of the genes RBP2 and MDN1 agree well with the experimental data [45].
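    For concreteness, the half-life conversion used here is $d_c=\ln 2/t_{1/2}$; e.g., with a hypothetical half-life of 20 min:

```python
# Converting an mRNA half-life into a first-order degradation rate.
import math

t_half = 20.0         # min; a hypothetical value, not the measured one
d_c = math.log(2.0) / t_half
print(round(d_c, 4))  # ~0.0347 per minute
```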

    At the whole-genome scale, about 70% of mRNAs in the nucleus are transported to the cytoplasm whereas about 30% of mRNAs are retained in the nucleus [26]. This fact implies that the changing range of the remaining probability is moderate or small. In addition, the nuclear export rate is in general gene-specific. If this rate is not too large, then following the increase of the remaining probability, an increase in the cytoplasmic mRNA noise is inevitable. This result indirectly interprets the reason why the noise at the protein level is quite large, as shown in previous studies of gene expression [46].

    Finally, for some genes, the relative changing ranges of the remaining probability and the nuclear export rate may be large at the transcription level. In this case, adjusting either the nuclear export rate or the remaining probability is in theory sufficient to fine-tune the cytoplasmic mRNA noise if the mean burst size is fixed, but differences would exist between theoretical and experimental results, since NRE and RNR occur simultaneously in gene expression and are functionally cooperative. In addition, since biological regulation may differ from the theoretical assumptions made here, the nuclear or cytoplasmic mRNA noise predicted in theory may be overestimated.

    This work was partially supported by the National Natural Science Foundation of China (11931019), and the Key-Area Research and Development Program of Guangzhou (202007030004).

    All authors declare no conflicts of interest in this paper.

    First, the chemical master equation for the constructed Markov reaction network reads

    $\dfrac{\partial P(\mathbf{m};t)}{\partial t}=\left(\sum_{i=0}^{m_1}\alpha_i\mathbb{E}_1^{-i}-I\right)\left[K_1(\mathbf{m})P(\mathbf{m};t)\right]+\left(\mathbb{E}_1\mathbb{E}_2^{-1}-I\right)\left[K_2(\mathbf{m})P(\mathbf{m};t)\right]+\left(\mathbb{E}_1\mathbb{E}_3^{-1}-I\right)\left[K_3(\mathbf{m})P(\mathbf{m};t)\right]+\sum_{j\in\{4,5\}}\left(\mathbb{E}_j-I\right)\left[K_j(\mathbf{m})P(\mathbf{m};t)\right]$. (A1)

    Second, the steady state or equilibrium of the system described by Eq (6) in the main text, denoted by $\mathbf{x}^S=(x_1^S,x_2^S,x_3^S)^T$, can be obtained by solving the algebraic equation group

    $\begin{cases}\langle B\rangle K_1(\mathbf{x}^S)-K_2(\mathbf{x}^S)-K_3(\mathbf{x}^S)=0\\ K_2(\mathbf{x}^S)-K_4(\mathbf{x}^S)=0\\ K_3(\mathbf{x}^S)-K_5(\mathbf{x}^S)=0\end{cases}$ (A2)

    Then, we perform the $\Omega$-expansion [47] to derive a Lyapunov matrix equation for the covariance matrix between $M_i$ and $M_j$ with $i,j=1,2,3$, i.e., for the matrix $\boldsymbol{\Sigma}=\left(\langle(M_i-\langle M_i\rangle)(M_j-\langle M_j\rangle)\rangle\right)\equiv(\sigma_{ij})$. Note that

    $K_i(\mathbf{m})=K_i(\mathbf{x}+\Omega^{-1/2}\mathbf{z})=K_i(\mathbf{x})+\Omega^{-1/2}\sum_{j=1,2,3}z_j\dfrac{\partial K_i(\mathbf{x})}{\partial x_j}+o(\Omega^{-1}),\quad i=1,2,3,$ (A3a)
    $\sum_{i=0}^{m_1}\alpha_i\mathbb{E}_1^{-i}-I=-\Omega^{-1/2}\left(\sum_{i=0}^{m_1}i\alpha_i\right)\dfrac{\partial}{\partial z_1}+\dfrac{1}{2}\Omega^{-1}\left(\sum_{i=0}^{m_1}i^2\alpha_i\right)\dfrac{\partial^2}{\partial z_1^2}+o(\Omega^{-3/2}),$ (A3b)
    $\mathbb{E}_1\mathbb{E}_2^{-1}-I=\Omega^{-1/2}\left(\dfrac{\partial}{\partial z_1}-\dfrac{\partial}{\partial z_2}\right)+\dfrac{1}{2}\Omega^{-1}\left(\dfrac{\partial^2}{\partial z_1^2}+\dfrac{\partial^2}{\partial z_2^2}-2\dfrac{\partial^2}{\partial z_1\partial z_2}\right)+o(\Omega^{-3/2}),$ (A3c)
    $\mathbb{E}_1\mathbb{E}_3^{-1}-I=\Omega^{-1/2}\left(\dfrac{\partial}{\partial z_1}-\dfrac{\partial}{\partial z_3}\right)+\dfrac{1}{2}\Omega^{-1}\left(\dfrac{\partial^2}{\partial z_1^2}+\dfrac{\partial^2}{\partial z_3^2}-2\dfrac{\partial^2}{\partial z_1\partial z_3}\right)+o(\Omega^{-3/2}),$ (A3d)
    $\mathbb{E}_j-I=\Omega^{-1/2}\dfrac{\partial}{\partial z_j}+\dfrac{1}{2}\Omega^{-1}\dfrac{\partial^2}{\partial z_j^2}+o(\Omega^{-3/2}),\quad j=2,3.$ (A3e)

    Hereafter $o(y)$ represents an infinitesimal quantity of higher order than $y$ as $y\to 0$. We denote by $\Pi(\mathbf{z};t)$ the probability density function for the new random variable $\mathbf{z}$. Then, the relationship between $P(\mathbf{m};t)$ and $\Pi(\mathbf{z};t)$ is

    $\dfrac{\partial P(\mathbf{m};t)}{\partial t}=\dfrac{\partial\Pi(\mathbf{z};t)}{\partial t}-\Omega^{1/2}\sum_{i=1,2,3}\dfrac{dx_i}{dt}\dfrac{\partial\Pi(\mathbf{z};t)}{\partial z_i}$. (A4)

    By substituting Eqs (A3) and (A4) into Eq (A1) and comparing the coefficients of $\Omega^{1/2}$, we have

    $\sum_{i=1,2,3}\dfrac{dx_i}{dt}\dfrac{\partial\Pi(\mathbf{z};t)}{\partial z_i}=\langle B\rangle K_1(\mathbf{x})\dfrac{\partial\Pi(\mathbf{z};t)}{\partial z_1}+K_2(\mathbf{x})\left(\dfrac{\partial\Pi(\mathbf{z};t)}{\partial z_2}-\dfrac{\partial\Pi(\mathbf{z};t)}{\partial z_1}\right)+K_3(\mathbf{x})\left(\dfrac{\partial\Pi(\mathbf{z};t)}{\partial z_3}-\dfrac{\partial\Pi(\mathbf{z};t)}{\partial z_1}\right)-\sum_{j\in\{4,5\}}K_j(\mathbf{x})\dfrac{\partial\Pi(\mathbf{z};t)}{\partial z_{j-2}}$, (A5)

    which naturally holds due to Eq (6) in the main text, where $\langle B\rangle=\sum_{i=0}^{m_1}i\alpha_i$ is the mean burst size. Comparing the coefficients of $\Omega^0$, we have

    $\dfrac{\partial\Pi(\mathbf{z};t)}{\partial t}=-\langle B\rangle\sum_{j=1,2,3}\dfrac{\partial K_1(\mathbf{x})}{\partial x_j}\dfrac{\partial[z_j\Pi(\mathbf{z};t)]}{\partial z_1}-\sum_{j=1,2,3}\dfrac{\partial K_2(\mathbf{x})}{\partial x_j}\left(\dfrac{\partial[z_j\Pi(\mathbf{z};t)]}{\partial z_2}-\dfrac{\partial[z_j\Pi(\mathbf{z};t)]}{\partial z_1}\right)-\sum_{j=1,2,3}\dfrac{\partial K_3(\mathbf{x})}{\partial x_j}\left(\dfrac{\partial[z_j\Pi(\mathbf{z};t)]}{\partial z_3}-\dfrac{\partial[z_j\Pi(\mathbf{z};t)]}{\partial z_1}\right)+\sum_{j\in\{4,5\}}\sum_{k=1,2,3}\dfrac{\partial K_j(\mathbf{x})}{\partial x_k}\dfrac{\partial[z_k\Pi(\mathbf{z};t)]}{\partial z_{j-2}}+\dfrac{1}{2}\langle B^2\rangle K_1(\mathbf{x})\dfrac{\partial^2\Pi(\mathbf{z};t)}{\partial z_1^2}+\dfrac{1}{2}K_2(\mathbf{x})\left(\dfrac{\partial^2\Pi(\mathbf{z};t)}{\partial z_1^2}+\dfrac{\partial^2\Pi(\mathbf{z};t)}{\partial z_2^2}-2\dfrac{\partial^2\Pi(\mathbf{z};t)}{\partial z_1\partial z_2}\right)+\dfrac{1}{2}K_3(\mathbf{x})\left(\dfrac{\partial^2\Pi(\mathbf{z};t)}{\partial z_1^2}+\dfrac{\partial^2\Pi(\mathbf{z};t)}{\partial z_3^2}-2\dfrac{\partial^2\Pi(\mathbf{z};t)}{\partial z_1\partial z_3}\right)+\dfrac{1}{2}\sum_{j\in\{4,5\}}K_j(\mathbf{x})\dfrac{\partial^2\Pi(\mathbf{z};t)}{\partial z_{j-2}^2}$, (A6)

    where $\langle B^2\rangle=\sum_{i=0}^{m_1}i^2\alpha_i$ is the second moment of the burst size. Since $K_i(\mathbf{x})$ is independent of $\mathbf{z}$, Eq (A6) can be rewritten as

    $\dfrac{\partial\Pi(\mathbf{z};t)}{\partial t}=-\sum_{i,j\in\{1,2,3\}}A_{ij}\dfrac{\partial[z_j\Pi(\mathbf{z};t)]}{\partial z_i}+\dfrac{1}{2}\sum_{i,j\in\{1,2,3\}}D_{ij}\dfrac{\partial^2\Pi(\mathbf{z};t)}{\partial z_i\partial z_j}$, (A7)

    where the elements of matrix A=(Aij) take the form

    $\mathbf{A}=(A_{ij})=\begin{pmatrix}\langle B\rangle\dfrac{\partial K_1}{\partial x_1}-\dfrac{\partial K_2}{\partial x_1}-\dfrac{\partial K_3}{\partial x_1} & \langle B\rangle\dfrac{\partial K_1}{\partial x_2}-\dfrac{\partial K_2}{\partial x_2}-\dfrac{\partial K_3}{\partial x_2} & \langle B\rangle\dfrac{\partial K_1}{\partial x_3}-\dfrac{\partial K_2}{\partial x_3}-\dfrac{\partial K_3}{\partial x_3}\\[1ex] \dfrac{\partial K_2}{\partial x_1}-\dfrac{\partial K_4}{\partial x_1} & \dfrac{\partial K_2}{\partial x_2}-\dfrac{\partial K_4}{\partial x_2} & \dfrac{\partial K_2}{\partial x_3}-\dfrac{\partial K_4}{\partial x_3}\\[1ex] \dfrac{\partial K_3}{\partial x_1}-\dfrac{\partial K_5}{\partial x_1} & \dfrac{\partial K_3}{\partial x_2}-\dfrac{\partial K_5}{\partial x_2} & \dfrac{\partial K_3}{\partial x_3}-\dfrac{\partial K_5}{\partial x_3}\end{pmatrix}$ (A8a)

    and matrix D=(Dij) takes the form

    $\mathbf{D}=\begin{pmatrix}\langle B^2\rangle K_1+K_2+K_3 & -K_2 & -K_3\\ -K_2 & K_2+K_4 & 0\\ -K_3 & 0 & K_3+K_5\end{pmatrix}$. (A8b)

    When Eq (A7) is considered at the steady state, we denote by $\mathbf{A}_S$ and $\mathbf{D}_S$ the corresponding matrices.

    Third, the steady-state Fokker Planck equation allows a solution of the following form

    $\Pi(\mathbf{z})=\dfrac{1}{\sqrt{(2\pi)^3\det(\boldsymbol{\Sigma}_S)}}\exp\left(-\dfrac{1}{2}\mathbf{z}^T\boldsymbol{\Sigma}_S^{-1}\mathbf{z}\right)$. (A9)

    Here, the covariance matrix $\boldsymbol{\Sigma}_S=(\sigma_{ij})$, with $\sigma_{ij}=\langle(M_i-\langle M_i\rangle)(M_j-\langle M_j\rangle)\rangle$ evaluated at the steady state, is determined by solving the following Lyapunov matrix equation:

    $\mathbf{A}_S\boldsymbol{\Sigma}_S+\boldsymbol{\Sigma}_S\mathbf{A}_S^T+\mathbf{D}_S=\mathbf{0}$. (A10)

    Note that the diagonal elements of the matrix $\boldsymbol{\Sigma}_S$ are just the variances of the state variables, and the vector of the mean molecular numbers of the reactive species is given approximately by $\langle M_i\rangle\approx x_i^S$. Eq (A10) is an extension of the linear noise approximation in the Markov case [48].

    In this case, we can show that the effective transition rates are given by $K_1(\mathbf{x})=\dfrac{(x_1k_c+x_1k_r+x_2d+x_3d)(k_0)^{L_0}}{(x_1k_c+x_1k_r+x_2d+x_3d+k_0)^{L_0}-(k_0)^{L_0}}$, $K_2(\mathbf{x})=x_1k_c$, $K_3(\mathbf{x})=x_1k_r$, $K_4(\mathbf{x})=x_2d$, and $K_5(\mathbf{x})=x_3d$, where $\mathbf{x}=(x_1,x_2,x_3)^T$. Thus, according to Eq (A2), we know that the steady state is given by

    $x_1^S=\dfrac{ak_0}{k_c+k_r},\quad x_2^S=\dfrac{k_c}{d}x_1^S,\quad x_3^S=\dfrac{k_r}{d}x_1^S$, (B1)

    where $a=\frac{1}{2}\left[(1+2\langle B\rangle)^{1/L_0}-1\right]$. Note that

    $\dfrac{\partial K_1(\mathbf{x})}{\partial x_1}\bigg|_{\mathbf{x}=\mathbf{x}^S}=\dfrac{(k_c+k_r)(k_0)^{L_0}}{[k_0(1+2a)]^{L_0}-(k_0)^{L_0}}-\dfrac{2ak_0L_0(k_c+k_r)(k_0)^{L_0}[k_0(1+2a)]^{L_0-1}}{\left\{[k_0(1+2a)]^{L_0}-(k_0)^{L_0}\right\}^2}$.

    Therefore,

    $\dfrac{\partial K_1(\mathbf{x})}{\partial x_1}\bigg|_{\mathbf{x}=\mathbf{x}^S}=(k_c+k_r)\dfrac{2\langle B\rangle-L_0\left[(1+2\langle B\rangle)-(1+2\langle B\rangle)^{(L_0-1)/L_0}\right]}{4\langle B\rangle^2}$. (B2a)

    Completely similarly, we have

    $\dfrac{\partial K_1(\mathbf{x})}{\partial x_2}\bigg|_{\mathbf{x}=\mathbf{x}^S}=\dfrac{\partial K_1(\mathbf{x})}{\partial x_3}\bigg|_{\mathbf{x}=\mathbf{x}^S}=d\,\dfrac{2\langle B\rangle-L_0\left[(1+2\langle B\rangle)-(1+2\langle B\rangle)^{(L_0-1)/L_0}\right]}{4\langle B\rangle^2}$. (B2b)

    Thus, matrix AS reduces to

    $\mathbf{A}_S=(A_{ij})=\begin{pmatrix}(b\langle B\rangle-1)(k_c+k_r) & bd\langle B\rangle & bd\langle B\rangle\\ k_c & -d & 0\\ k_r & 0 & -d\end{pmatrix}$, (B3)

    where $b=-\frac{1}{4\langle B\rangle^2}\left[L_0+2(L_0-1)\langle B\rangle-L_0(1+2\langle B\rangle)^{(L_0-1)/L_0}\right]$. Meanwhile, the matrix $\mathbf{D}_S$ in Eq (A10) becomes

    {{\bf{{ D}}}_{\text{S}}} = {\tilde k_0}\left( {\begin{array}{*{20}{c}} {\gamma \left( {{k_c} + {k_r}} \right)}&{ - {k_c}}&{ - {k_r}} \\ { - {k_c}}&{2{k_c}}&0 \\ { - {k_r}}&0&{2{k_r}} \end{array}} \right) , (B4)

    where {\tilde k_0} = \frac{{a{k_0}}}{{{k_c} + {k_r}}} and \gamma = \frac{{\left\langle {{B^2}} \right\rangle + \left\langle B \right\rangle }}{{\left\langle B \right\rangle }} . We can directly derive the following relationships from Eq (A10):

    \left( {b\left\langle B \right\rangle - 1} \right)\left( {{k_c} + {k_r}} \right){\sigma _{11}} + bd\left\langle B \right\rangle \left( {{\sigma _{12}} + {\sigma _{13}}} \right) = - \frac{{{{\tilde k}_0}}}{2}\gamma \left( {{k_c} + {k_r}} \right) (B5a)
    \left( {b\left\langle B \right\rangle - 1} \right)\left( {{k_c} + {k_r}} \right){\sigma _{12}} + bd\left\langle B \right\rangle {\sigma _{22}} + bd\left\langle B \right\rangle {\sigma _{23}} + {k_c}{\sigma _{11}} - d{\sigma _{12}} = {\tilde k_0}{k_c} (B5b)
    \left( {b\left\langle B \right\rangle - 1} \right)\left( {{k_c} + {k_r}} \right){\sigma _{13}} + bd\left\langle B \right\rangle {\sigma _{23}} + bd\left\langle B \right\rangle {\sigma _{33}} + {k_r}{\sigma _{11}} - d{\sigma _{13}} = {\tilde k_0}{k_r} (B5c)

    and obtain the following relationships

    {\sigma _{12}} = \frac{d}{{{k_c}}}{\sigma _{22}} - {\tilde k_0} , {\sigma _{13}} = \frac{d}{{{k_r}}}{\sigma _{33}} - {\tilde k_0} and {\sigma _{23}} = \frac{1}{2}\left({\frac{{{k_r}}}{{{k_c}}}{\sigma _{22}} + \frac{{{k_c}}}{{{k_r}}}{\sigma _{33}}} \right) - \frac{{{k_c} + {k_r}}}{{2d}}{\tilde k_0}.

    Substituting these relationships into Eqs (B5a)–(B5c) yields

    \left( {b\left\langle B \right\rangle - 1} \right)\left( {{k_c} + {k_r}} \right){\sigma _{11}} + bd\left\langle B \right\rangle \left( {\frac{d}{{{k_c}}}{\sigma _{22}} + \frac{d}{{{k_r}}}{\sigma _{33}}} \right) = 2bd\left\langle B \right\rangle {\tilde k_0} - \frac{{{{\tilde k}_0}}}{2}\gamma \left( {{k_c} + {k_r}} \right) , (B6a)
    \frac{{{k_c}}}{d}{\sigma _{11}} + \left( {2b\left\langle B \right\rangle - 1 + \frac{{3b\left\langle B \right\rangle - 2}}{2}\frac{{{k_r}}}{{{k_c}}} - \frac{d}{{{k_c}}}} \right){\sigma _{22}} + \frac{{b\left\langle B \right\rangle }}{2}\frac{{{k_c}}}{{{k_r}}}{\sigma _{33}} = \frac{{{k_c}}}{d}{\tilde k_0} + \left[ {\frac{{3b\left\langle B \right\rangle - 2}}{2}\frac{{{k_c} + {k_r}}}{d} - 1} \right]{\tilde k_0} , (B6b)
    \frac{{{k_r}}}{d}{\sigma _{11}} + \frac{{b\left\langle B \right\rangle }}{2}\frac{{{k_r}}}{{{k_c}}}{\sigma _{22}} + \left( {2b\left\langle B \right\rangle - 1 + \frac{{3b\left\langle B \right\rangle - 2}}{2}\frac{{{k_c}}}{{{k_r}}} - \frac{d}{{{k_r}}}} \right){\sigma _{33}} = \frac{{{k_r}}}{d}{\tilde k_0} + \left[ {\frac{{3b\left\langle B \right\rangle - 2}}{2}\frac{{{k_c} + {k_r}}}{d} - 1} \right]{\tilde k_0} . (B6c)

    The combination of Eq (B6a), (B6b) gives

    {\sigma _{22}} = {\left( {\frac{{{k_r}}}{{{k_c}}}} \right)^2}{\sigma _{33}} + \frac{{{k_c}}}{{{k_r}}}\frac{{{k_c} + {k_r}}}{d}{\tilde k_0} , or {\sigma _{33}} = {\left( {\frac{{{k_c}}}{{{k_r}}}} \right)^2}\left( {{\sigma _{22}} - \frac{{{k_c} + {k_r}}}{d}\frac{{{k_c}}}{{{k_r}}}{{\tilde k}_0}} \right). (B7a)

    The sum of Eq (B6b), (B6c) gives

    \frac{{{k_c} + {k_r}}}{d}{\sigma _{11}} + \left[ {\left( {2b\left\langle B \right\rangle - 1} \right)\frac{{{k_c} + {k_r}}}{d} - 1} \right]\left( {\frac{d}{{{k_c}}}{\sigma _{22}} + \frac{d}{{{k_r}}}{\sigma _{33}}} \right) = \left[ {\frac{{3b\left\langle B \right\rangle - 1}}{2}\frac{{{k_c} + {k_r}}}{d} - 1} \right]{\tilde k_0} . (B7b)

    The combination of Eq (B7b) and (B6a) yields

    \frac{d}{{{k_c}}}{\sigma _{22}} + \frac{d}{{{k_r}}}{\sigma _{33}} = 1 - \frac{{b\left\langle B \right\rangle + \frac{1}{2}\left[ {{{\left( {b\left\langle B \right\rangle - 1} \right)}^2} - \gamma } \right]\frac{{{k_c} + {k_r}}}{d}}}{{1 + \left( {b\left\langle B \right\rangle - 1} \right)\left( {2b\left\langle B \right\rangle - 1} \right)\frac{{{k_c} + {k_r}}}{d}}} (B7c)

    By substituting this equation into Eq (B7a), we finally obtain

    {\sigma _{22}} = \frac{{{{\tilde k}_0}{{\tilde k}_c}\tilde k_r^3}}{{\tilde k_c^3 + \tilde k_r^3}}\left\{ {2 + \frac{{{{\tilde k}_c}}}{{{{\tilde k}_r}}} + \frac{1}{2}\frac{{2\tilde b + \left[ {\gamma - {{\left( {1 + \tilde b} \right)}^2}} \right]\left( {{{\tilde k}_c} + {{\tilde k}_r}} \right)}}{{1 + \left( {1 + \tilde b} \right)\left( {1 + 2\tilde b} \right)\left( {{{\tilde k}_c} + {{\tilde k}_r}} \right)}}} \right\} , (B8a)

    and further

    {\sigma _{33}} = \frac{{{{\tilde k}_0}{{\tilde k}_r}\tilde k_c^3}}{{\tilde k_c^3 + \tilde k_r^3}}\left\{ {1 - \frac{{\tilde k_c^3}}{{\tilde k_r^3}} - \frac{{\tilde k_c^4}}{{\tilde k_r^4}} + \frac{1}{2}\frac{{2\tilde b + \left[ {\gamma - {{\left( {1 + \tilde b} \right)}^2}} \right]\left( {{{\tilde k}_c} + {{\tilde k}_r}} \right)}}{{1 + \left( {1 + \tilde b} \right)\left( {1 + 2\tilde b} \right)\left( {{{\tilde k}_c} + {{\tilde k}_r}} \right)}}} \right\} . (B8b)

    where \tilde b = - b\left\langle B \right\rangle = \frac{1}{{4\left\langle B \right\rangle }}\left[{{L_0} + 2\left({{L_0} - 1} \right)\left\langle B \right\rangle - {L_0}{{\left({1 + 2\left\langle B \right\rangle } \right)}^{{{\left({{L_0} - 1} \right)} \mathord{\left/ {\vphantom {{\left({{L_0} - 1} \right)} {{L_0}}}} \right. } {{L_0}}}}}} \right] > 0 with b < 0 , {\tilde k_c} = \frac{{{k_c}}}{d} , {\tilde k_r} = \frac{{{k_r}}}{d} , \gamma = \frac{{\left\langle {{B^2}} \right\rangle + \left\langle B \right\rangle }}{{\left\langle B \right\rangle }} .

    Thus, the cytoplasmic mRNA noise is given by

    {\eta _c} = \frac{{{\sigma _{22}}}}{{{{\left\langle {{M_2}} \right\rangle }^2}}} = \frac{1}{{{{\tilde k}_0}{{\tilde k}_c}}}\frac{{\tilde k_r^3}}{{\tilde k_c^3 + \tilde k_r^3}}\left\{ {1 + \frac{{{{\tilde k}_c} + {{\tilde k}_r}}}{{{{\tilde k}_r}}} + \frac{1}{2}\frac{{2\tilde b + \left[ {\gamma - {{\left( {1 + \tilde b} \right)}^2}} \right]\left( {{{\tilde k}_c} + {{\tilde k}_r}} \right)}}{{1 + \left( {1 + \tilde b} \right)\left( {1 + 2\tilde b} \right)\left( {{{\tilde k}_c} + {{\tilde k}_r}} \right)}}} \right\} , (B9a)

    and the mRNA noise in the nucleus by

    {\eta _r} = \frac{{{\sigma _{33}}}}{{{{\left\langle {{M_3}} \right\rangle }^2}}} = \frac{1}{{{{\tilde k}_0}{{\tilde k}_r}}}\frac{{\tilde k_c^3}}{{\tilde k_c^3 + \tilde k_r^3}}\left\{ {1 - \frac{{\tilde k_c^3}}{{\tilde k_r^3}} - \frac{{\tilde k_c^4}}{{\tilde k_r^4}} + \frac{1}{2}\frac{{2\tilde b + \left[ {\gamma - {{\left( {1 + \tilde b} \right)}^2}} \right]\left( {{{\tilde k}_c} + {{\tilde k}_r}} \right)}}{{1 + \left( {1 + \tilde b} \right)\left( {1 + 2\tilde b} \right)\left( {{{\tilde k}_c} + {{\tilde k}_r}} \right)}}} \right\} . (B9b)

    In this case, we can show that the five effective transition rates take the forms: {K_1}\left(\boldsymbol{x} \right) = {k_0} , {K_2}\left(\boldsymbol{x} \right) = \frac{{\left({{k_0} + {x_1}{k_r} + d{x_2} + d{x_3}} \right){{\left({{x_1}{k_c}} \right)}^{{L_c}}}}}{{{{\left({{k_0} + {x_1}{k_c} + {x_1}{k_r} + d{x_2} + d{x_3}} \right)}^{{L_c}}} - {{\left({{x_1}{k_c}} \right)}^{{L_c}}}}} , {K_3}\left(\boldsymbol{x} \right) = {x_1}{k_r} , {K_4}\left(\boldsymbol{x} \right) = d{x_2} , and {K_5}\left(\boldsymbol{x} \right) = d{x_3} . In order to derive analytical results, we assume that the remaining probability is so small that {p_r} \approx 0 , implying {k_r} = 0 , {k_c} = {k_1} and {K_3}\left(\boldsymbol{x} \right) = 0 . By solving the steady-state deterministic equation

    \left\{ \begin{array}{l} \left\langle B \right\rangle {K_1} - {K_2} - {K_3} = 0 \hfill \\ {K_2} - {K_4} = 0 \hfill \\ {K_3} - {K_5} = 0, \hfill \\ \end{array} \right. (C1)

    we obtain the analytical expression of steady state ( {\boldsymbol{x}^S} ) given by

    x_1^S = \frac{{{k_0}\left( {1 + \left\langle B \right\rangle } \right){{\left\langle B \right\rangle }^{{1 \mathord{\left/ {\vphantom {1 {{L_c}}}} \right. } {{L_c}}}}}}}{{{k_1}\left[ {{{\left( {1 + 2\left\langle B \right\rangle } \right)}^{{1 \mathord{\left/ {\vphantom {1 {{L_c}}}} \right. } {{L_c}}}}} - {{\left\langle B \right\rangle }^{{1 \mathord{\left/ {\vphantom {1 {{L_c}}}} \right. } {{L_c}}}}}} \right]}} , x_2^S = \frac{{{k_0}\left\langle B \right\rangle }}{d} and x_3^S = 0. (C2)

    Note that the elements of the Jacobian matrix in the linear noise approximation reduce to

    {a_{11}} = - \frac{{\partial {K_2}\left({{\boldsymbol{x}^S}} \right)}}{{\partial {x_1}}} , {a_{12}} = - \frac{{\partial {K_2}\left({{\boldsymbol{x}^S}} \right)}}{{\partial {x_2}}} , {a_{13}} = - \frac{{\partial {K_2}\left({{\boldsymbol{x}^S}} \right)}}{{\partial {x_3}}} , {a_{21}} = \frac{{\partial {K_2}\left({{\boldsymbol{x}^S}} \right)}}{{\partial {x_1}}} , {a_{22}} = \frac{{\partial {K_2}\left({{\boldsymbol{x}^S}} \right)}}{{\partial {x_2}}} - d , {a_{23}} = \frac{{\partial {K_2}\left({{\boldsymbol{x}^S}} \right)}}{{\partial {x_3}}} , {a_{31}} = 0 , {a_{32}} = 0 , and {a_{33}} = - d . Differentiating the function {K_2}\left(\boldsymbol{x} \right) with respect to {x_1} yields

    \frac{{\partial {K_2}\left( \boldsymbol{x} \right)}}{{\partial {x_1}}} = \frac{{{L_c}\left( {{k_0} + {x_2}d} \right){{\left( {{k_1}} \right)}^{{L_c}}}{{\left( {{x_1}} \right)}^{{L_c} - 1}}}}{{{{\left( {{k_0} + {x_1}{k_1} + {x_2}d} \right)}^{{L_c}}} - {{\left( {{x_1}{k_1}} \right)}^{{L_c}}}}} - \frac{{{L_c}\left( {{k_0} + {x_2}d} \right){{\left( {{x_1}{k_1}} \right)}^{{L_c}}}\left[ {{k_1}{{\left( {{k_0} + {x_1}{k_1} + {x_2}d} \right)}^{{L_c} - 1}} - {{\left( {{k_1}} \right)}^{{L_c}}}{{\left( {{x_1}} \right)}^{{L_c} - 1}}} \right]}}{{{{\left[ {{{\left( {{k_0} + {x_1}{k_1} + {x_2}d} \right)}^{{L_c}}} - {{\left( {{x_1}{k_1}} \right)}^{{L_c}}}} \right]}^2}}} .

    Therefore,

    {\left. {\frac{{\partial {K_2}\left( \boldsymbol{x} \right)}}{{\partial {x_1}}}} \right|_{\boldsymbol{x} = {\boldsymbol{x}^S}}} = - {a_{11}} = \frac{{{L_c}{k_0}\left\langle B \right\rangle }}{{{x_1}}}\left[ {\frac{{1 + 2\left\langle B \right\rangle }}{{1 + \left\langle B \right\rangle }} - \frac{{\left\langle B \right\rangle }}{{1 + \left\langle B \right\rangle }}{{\left( {\frac{{1 + 2\left\langle B \right\rangle }}{{\left\langle B \right\rangle }}} \right)}^{{{\left( {{L_c} - 1} \right)} \mathord{\left/ {\vphantom {{\left( {{L_c} - 1} \right)} {{L_c}}}} \right. } {{L_c}}}}}} \right] . (C3)

    Completely similarly, we have

    \frac{{\partial {K_2}\left( \boldsymbol{x} \right)}}{{\partial {x_2}}} = \frac{{\partial {K_2}\left( \boldsymbol{x} \right)}}{{\partial {x_3}}} = \frac{{d\left\langle B \right\rangle }}{{1 + \left\langle B \right\rangle }} - \frac{{d{L_c}{{\left\langle B \right\rangle }^2}}}{{1 + \left\langle B \right\rangle }}\frac{{{k_0}}}{{{k_1}{x_1}}}{\left( {\frac{{1 + 2\left\langle B \right\rangle }}{{\left\langle B \right\rangle }}} \right)^{{{\left( {{L_c} - 1} \right)} \mathord{\left/ {\vphantom {{\left( {{L_c} - 1} \right)} {{L_c}}}} \right. } {{L_c}}}}} . (C4)

    Furthermore, the Jacobian matrix becomes

    {{\bf{A}}_s} = \left( {\begin{array}{*{20}{c}} {{a_{11}}}&{{a_{12}}}&{{a_{12}}} \\ { - {a_{11}}}&{ - {a_{12}} - d}&{ - {a_{12}}} \\ 0&0&{ - d} \end{array}} \right) , (C5)

    where a_{11} = -\frac{\partial K_2\left( {\boldsymbol{x}^S} \right)}{\partial x_1} and a_{12} = -\frac{\partial K_2\left( {\boldsymbol{x}^S} \right)}{\partial x_2} are given by Eqs (C3) and (C4).

    Meanwhile, the matrix {\bf{D}}_s in the linear noise approximation is given by

    {\bf{D}}_s = \left( \begin{array}{ccc} k_0\left\langle {B^2} \right\rangle + k_0\left\langle B \right\rangle & -K_2 & 0 \\ -K_2 & 2K_2 & 0 \\ 0 & 0 & 0 \end{array} \right) . (C6)

    It follows from the Lyapunov matrix equation {\bf{A}}_{\rm S}{\bf{\Sigma}}_{\rm S} + {\bf{\Sigma}}_{\rm S}{\bf{A}}_{\rm S}^{\rm T} + {\bf{D}}_{\rm S} = {\bf{0}} that

    \sigma_{11} = -\frac{k_0\left\langle {B^2} \right\rangle + k_0\left\langle B \right\rangle}{2 a_{11}} - \frac{a_{12}}{a_{11}^2} K_2 + \frac{a_{12}\left( a_{12} + d \right)}{a_{11}^2}\sigma_{22} , (C7a)
    \sigma_{22} = k_0\frac{-a_{11}\left( \left\langle {B^2} \right\rangle + \left\langle B \right\rangle \right) + 2 d\left\langle B \right\rangle}{2 d\left( a_{12} + d - a_{11} \right)} . (C7b)

    Substituting the expressions of a_{11} and a_{12} into Eq (C7b) yields

    \sigma_{22} = \frac{k_0\left\langle B \right\rangle}{2d} \cdot \frac{\dfrac{L_c k_1}{\omega}\dfrac{1 + 2\left\langle B \right\rangle}{1 + \omega + \left\langle B \right\rangle}\left( \left\langle {B^2} \right\rangle + \left\langle B \right\rangle \right) + 2d}{\dfrac{d}{1 + \left\langle B \right\rangle} + \dfrac{L_c\left\langle B \right\rangle\left( 1 + 2\left\langle B \right\rangle \right)}{1 + \omega + \left\langle B \right\rangle}\left( \dfrac{d}{1 + \left\langle B \right\rangle} + \dfrac{k_1}{\omega} \right)} , (C8)

    where \omega = \frac{\left( 1 + \left\langle B \right\rangle \right){\left\langle B \right\rangle}^{1/L_c}}{{\left( 1 + 2\left\langle B \right\rangle \right)}^{1/L_c} - {\left\langle B \right\rangle}^{1/L_c}} (equivalently, \omega = k_1 x_1^S / k_0 by Eq (C2), which is how \omega enters after substituting a_{11} and a_{12} ).
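
    The full pipeline can likewise be verified end-to-end: build {\bf{A}}_s from Eqs (C3)–(C5) and {\bf{D}}_s from Eq (C6), solve the Lyapunov equation numerically, and compare the resulting \sigma_{22} with the closed form (C8). In the sketch below the parameters are again illustrative assumptions; \left\langle {B^2} \right\rangle = 2{\left\langle B \right\rangle}^2 + \left\langle B \right\rangle is assumed (as for geometrically distributed burst sizes), and K_2 at the steady state is taken to be k_0\left\langle B \right\rangle , which follows from the balance K_2 = K_4 in Eq (C1) if K_4 = d x_2 , as the Jacobian entry a_{22} suggests:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative parameters (assumptions for demonstration only)
k0, k1, d, Lc, B = 1.0, 0.5, 0.1, 4, 2.0
B2 = 2 * B**2 + B  # assumed <B^2>, e.g., geometrically distributed bursts

# Steady state x1^S from Eq (C2); note that omega = k1*x1^S/k0 recovers
# the definition of omega given below Eq (C8)
x1 = k0 * (1 + B) * B**(1 / Lc) / (k1 * ((1 + 2 * B)**(1 / Lc) - B**(1 / Lc)))
omega = k1 * x1 / k0

# a11 = -dK2/dx1 and a12 = -dK2/dx2 at the steady state, Eqs (C3) and (C4)
p = ((1 + 2 * B) / B)**((Lc - 1) / Lc)
a11 = -Lc * k0 * B / x1 * ((1 + 2 * B) / (1 + B) - B / (1 + B) * p)
a12 = -(d * B / (1 + B) - d * Lc * B**2 / (1 + B) * k0 / (k1 * x1) * p)

K2s = k0 * B  # K_2 = K_4 = d*x2^S = k0*<B> at the steady state
A = np.array([[a11, a12, a12],
              [-a11, -a12 - d, -a12],
              [0.0, 0.0, -d]])  # Eq (C5)
D = np.array([[k0 * B2 + k0 * B, -K2s, 0.0],
              [-K2s, 2 * K2s, 0.0],
              [0.0, 0.0, 0.0]])  # Eq (C6)

# Solve A*Sigma + Sigma*A^T + D = 0, i.e., A*X + X*A^T = -D
Sigma = solve_continuous_lyapunov(A, -D)

# Closed form for sigma_22, Eq (C8)
num = Lc * k1 / omega * (1 + 2 * B) / (1 + omega + B) * (B2 + B) + 2 * d
den = d / (1 + B) + Lc * B * (1 + 2 * B) / (1 + omega + B) * (d / (1 + B) + k1 / omega)
sigma22 = k0 * B / (2 * d) * num / den

print(Sigma[1, 1], sigma22)  # the two values should coincide
```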


