Research article Special Issues

A comprehensive transfer news headline generation method based on semantic prototype transduction


  • Received: 09 August 2022 Revised: 11 October 2022 Accepted: 14 October 2022 Published: 26 October 2022
  • Most current deep learning-based news headline generation models only target domain-specific news data. When a new news domain appears, it is usually costly to obtain a large amount of data with reference headlines in the new domain for model training, so text generation models trained by traditional supervised approaches often generalize poorly to the new domain. Inspired by the idea of transfer learning, this paper designs a cross-domain transfer text generation method based on domain data distribution alignment, intermediate domain redistribution, and zero-shot-learning semantic prototype transduction, focusing on the setting in which the target domain has no reference headlines. Through the semantic correlation between source and target domain data, the model can be guided by the most relevant source domain data when generating headlines for target domain news text, even without any reference headlines in the target domain, which improves the usability of the text generation model in real scenarios. The experimental results show that the proposed transfer text generation method achieves a good domain transfer effect and outperforms existing transfer text generation methods on various text generation evaluation metrics, demonstrating the effectiveness of the proposed method.

    Citation: Ting-Huai Ma, Xin Yu, Huan Rong. A comprehensive transfer news headline generation method based on semantic prototype transduction[J]. Mathematical Biosciences and Engineering, 2023, 20(1): 1195-1228. doi: 10.3934/mbe.2023055




    Let A denote the family of analytic functions in U = {z : z ∈ ℂ and |z| < 1} that are normalized by the condition f(0) = f'(0) - 1 = 0 and given by the following Taylor–Maclaurin series:

    $$f(z)=z+\sum_{n=2}^{\infty}a_nz^n. \qquad (1.1)$$

    Further, by S we shall denote the class of all functions in A which are univalent in U.

    With a view to recalling the principle of subordination between analytic functions, let the functions f and g be analytic in U. Then we say that the function f is subordinate to g if there exists a Schwarz function ω(z), analytic in U with

    $$\omega(0)=0,\quad |\omega(z)|<1 \qquad (z\in\mathbb{U}),$$

    such that

    $$f(z)=g(\omega(z)).$$

    We denote this subordination by

    $$f\prec g \quad\text{or}\quad f(z)\prec g(z).$$

    In particular, if the function g is univalent in U, the above subordination is equivalent to

    $$f(0)=g(0),\qquad f(\mathbb{U})\subset g(\mathbb{U}).$$

    The Koebe One-Quarter Theorem [11] asserts that the image of U under every univalent function f ∈ A contains a disc of radius 1/4. Thus every univalent function f has an inverse f^{-1} satisfying f^{-1}(f(z)) = z and f(f^{-1}(w)) = w (|w| < r_0(f), r_0(f) ≥ 1/4), where

    $$f^{-1}(w)=w-a_2w^2+\left(2a_2^2-a_3\right)w^3-\left(5a_2^3-5a_2a_3+a_4\right)w^4+\cdots. \qquad (1.2)$$
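    As a quick sanity check of the expansion (1.2), the following sympy sketch (our own illustration, not part of the paper) recovers the coefficients of f^{-1} by requiring f(f^{-1}(w)) = w up to the order w^4:

        import sympy as sp

        w = sp.symbols('w')
        a2, a3, a4, b2, b3, b4 = sp.symbols('a2 a3 a4 b2 b3 b4')

        f = lambda t: t + a2*t**2 + a3*t**3 + a4*t**4      # truncated f
        g = w + b2*w**2 + b3*w**3 + b4*w**4                 # candidate inverse f^{-1}

        # Require f(g(w)) = w up to order w^4 and solve for b2, b3, b4.
        expr = sp.expand(f(g) - w)
        sol = sp.solve([expr.coeff(w, k) for k in (2, 3, 4)], (b2, b3, b4), dict=True)[0]
        print(sp.simplify(sol[b2]))    # -a2
        print(sp.simplify(sol[b3]))    # 2*a2**2 - a3
        print(sp.simplify(sol[b4]))    # -(5*a2**3 - 5*a2*a3 + a4)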

    A function f ∈ A is said to be bi-univalent in U if both f and f^{-1} are univalent in U. Equivalently, a function f ∈ S is bi-univalent in U if there exists a function g ∈ S such that g(z) is a univalent extension of f^{-1} to U. Let Λ denote the class of bi-univalent functions in U. The functions z/(1-z), -log(1-z) and (1/2)log((1+z)/(1-z)) belong to the class Λ (see details in [20]); however, the familiar Koebe function is not bi-univalent.

    Lewin [17] investigated the class Λ of bi-univalent functions and obtained the bound |a_2| < 1.51. Motivated by the work of Lewin [17], Brannan and Clunie [9] conjectured that |a_2| ≤ √2. The coefficient estimate problem for |a_n| (n ∈ ℕ, n ≥ 3) is still open [20]. Brannan and Taha [10] also worked on certain subclasses of the bi-univalent function class Λ and obtained estimates for their initial coefficients. Various classes of bi-univalent functions have been introduced and studied in recent times; the study of bi-univalent functions gained momentum mainly due to the work of Srivastava et al. [20]. Motivated by this, many researchers ([1], [4,5,6,7,8], [13,14,15], [20], [21] and [27,28,29], also the references cited therein) have recently investigated several interesting subclasses of the class Λ and found non-sharp estimates for the first two Taylor–Maclaurin coefficients. Recently, many researchers have been exploring bi-univalent functions associated with special polynomials: Fibonacci polynomials, Lucas polynomials, Chebyshev polynomials, Pell polynomials, Lucas–Lehmer polynomials, orthogonal polynomials and other special polynomials and their generalizations are of great importance in a variety of branches such as physics, engineering, architecture, nature, art, number theory, combinatorics and numerical analysis. These polynomials have been studied in several papers from a theoretical point of view (see, for example, [23,24,25,26,27,28,29,30] and the references therein).

    We recall the following results relevant for our study as stated in [3].

    Let p(x) and q(x) be polynomials with real coefficients. The (p,q) Lucas polynomials Lp,q,n(x) are defined by the recurrence relation

    $$L_{p,q,n}(x)=p(x)\,L_{p,q,n-1}(x)+q(x)\,L_{p,q,n-2}(x)\qquad(n\geq 2),$$

    from which the first few Lucas polynomials can be found as

    $$L_{p,q,0}(x)=2,\quad L_{p,q,1}(x)=p(x),\quad L_{p,q,2}(x)=p^2(x)+2q(x),\quad L_{p,q,3}(x)=p^3(x)+3p(x)q(x),\;\ldots \qquad (1.3)$$

    For special choices of p(x) and q(x), the (p,q)-Lucas polynomials reduce to known families: L_{x,1,n}(x) ≡ L_n(x), the Lucas polynomials; L_{2x,1,n}(x) ≡ D_n(x), the Pell–Lucas polynomials; L_{1,2x,n}(x) ≡ j_n(x), the Jacobsthal–Lucas polynomials; L_{3x,-2,n}(x) ≡ F_n(x), the Fermat–Lucas polynomials; and L_{2x,-1,n}(x) ≡ T_n(x), the Chebyshev polynomials of the first kind.

    Lemma 1.1. [16] Let G_{L(x)}(z) be the generating function of the (p,q)-Lucas polynomial sequence L_{p,q,n}(x). Then

    $$G_{L(x)}(z)=\sum_{n=0}^{\infty}L_{p,q,n}(x)z^n=\frac{2-p(x)z}{1-p(x)z-q(x)z^2}$$

    and

    $$\tilde{G}_{L(x)}(z)=G_{L(x)}(z)-1=1+\sum_{n=1}^{\infty}L_{p,q,n}(x)z^n=\frac{1+q(x)z^2}{1-p(x)z-q(x)z^2}.$$
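    Both the recurrence and Lemma 1.1 are easy to check computationally. The sketch below is our own illustration (the helper name lucas_pq and the specialization p(x) = x, q(x) = 1 are our assumptions, not from the paper): it builds the first few (p,q)-Lucas polynomials from the recurrence and compares them with the Taylor coefficients of the generating function.

        import sympy as sp

        x, z = sp.symbols('x z')
        p, q = x, sp.Integer(1)      # Lucas-polynomial case L_{x,1,n}; any p(x), q(x) works

        def lucas_pq(n):
            # (p,q)-Lucas polynomials from the recurrence, with L_0 = 2 and L_1 = p(x).
            L = [sp.Integer(2), p]
            for _ in range(2, n + 1):
                L.append(sp.expand(p*L[-1] + q*L[-2]))
            return L[:n + 1]

        N = 6
        L = lucas_pq(N)
        print(L[:4])     # 2, p, p^2 + 2q, p^3 + 3pq, matching (1.3)

        # Generating function of Lemma 1.1: (2 - p z)/(1 - p z - q z^2) = sum_n L_n z^n.
        G = (2 - p*z)/(1 - p*z - q*z**2)
        ser = sp.series(G, z, 0, N + 1).removeO()
        assert all(sp.simplify(ser.coeff(z, n) - L[n]) == 0 for n in range(N + 1))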

    Definition 1.2. [22] For ϑ ≥ 0, δ ∈ ℝ and ϑ + iδ ≠ 0, a function f ∈ A is said to be in the class B(ϑ,δ) of Bazilevič functions if and only if

    $$\operatorname{Re}\left[\left(\frac{zf'(z)}{f(z)}\right)\left(\frac{f(z)}{z}\right)^{\vartheta+i\delta}\right]>0.$$

    Several authors have studied different subfamilies of the well-known Bazilevič functions of type ϑ from various viewpoints (see [3] and [19]). For Bazilevič functions of order ϑ + iδ, there is not much work in the literature associated with Lucas polynomials. Initiating an exploration of the properties of Lucas polynomials associated with Bazilevič functions of order ϑ + iδ is the main goal of this paper. To do so, we take into account the following definitions. Motivated by the very recent work of Altinkaya and Yalcin [3] (see also [18]), we define a new class B(ϑ,δ) of bi-Bazilevič functions of Λ based on the (p,q)-Lucas polynomials as follows:

    Definition 1.3. For ϑ ≥ 0, δ ∈ ℝ and ϑ + iδ ≠ 0, a function f ∈ Λ is said to be in the class B(ϑ,δ) of bi-Bazilevič functions of type ϑ + iδ if and only if

    $$\left(\frac{zf'(z)}{f(z)}\right)\left(\frac{f(z)}{z}\right)^{\vartheta+i\delta}\prec\tilde{G}_{L(x)}(z)\qquad(z\in\mathbb{U})\qquad(1.4)$$

    and

    $$\left(\frac{wg'(w)}{g(w)}\right)\left(\frac{g(w)}{w}\right)^{\vartheta+i\delta}\prec\tilde{G}_{L(x)}(w)\qquad(w\in\mathbb{U}),\qquad(1.5)$$

    where G̃_{L(x)} is the function given in Lemma 1.1 and g(w) = f^{-1}(w).

    Remark 1.4. We note that for δ = 0 the class reduces to R(ϑ,0) = R(ϑ), the class defined by Altinkaya and Yalcin [2].

    The class B(0,0) = S_Λ is defined as follows:

    Definition 1.5. A function f ∈ Λ is said to be in the class S_Λ if the following subordinations hold:

    $$\frac{zf'(z)}{f(z)}\prec\tilde{G}_{L(x)}(z)\qquad(z\in\mathbb{U})$$

    and

    $$\frac{wg'(w)}{g(w)}\prec\tilde{G}_{L(x)}(w)\qquad(w\in\mathbb{U}),$$

    where g(w) = f^{-1}(w).

    We begin this section by finding the estimates of the coefficients |a2| and |a3| for functions in the class B(ϑ,δ).

    Theorem 2.1. Let the function f(z) given by (1.1) be in the class B(ϑ,δ). Then

    $$|a_2|\leq\frac{p(x)\sqrt{2p(x)}}{\sqrt{\left|\left[(\vartheta+i\delta)^2+3(\vartheta+i\delta)+2-2(\vartheta+i\delta+1)^2\right]p^2(x)-4q(x)(\vartheta+i\delta+1)^2\right|}}$$

    and

    $$|a_3|\leq\frac{p^2(x)}{(\vartheta+1)^2+\delta^2}+\frac{p(x)}{\sqrt{(\vartheta+2)^2+\delta^2}}.$$

    Proof. Let f ∈ B(ϑ,δ). Then there exist two analytic functions u, v : U → U with u(0) = 0 = v(0), |u(z)| < 1 and |v(w)| < 1 such that, from (1.4) and (1.5), we can write

    $$\left(\frac{zf'(z)}{f(z)}\right)\left(\frac{f(z)}{z}\right)^{\vartheta+i\delta}=\tilde{G}_{L(x)}(u(z))\qquad(z\in\mathbb{U})\qquad(2.1)$$

    and

    $$\left(\frac{wg'(w)}{g(w)}\right)\left(\frac{g(w)}{w}\right)^{\vartheta+i\delta}=\tilde{G}_{L(x)}(v(w))\qquad(w\in\mathbb{U}).\qquad(2.2)$$

    It is fairly well known that if

    $$|u(z)|=|u_1z+u_2z^2+\cdots|<1\qquad(z\in\mathbb{U})$$

    and

    $$|v(w)|=|v_1w+v_2w^2+\cdots|<1\qquad(w\in\mathbb{U}),$$

    then

    $$|u_k|\leq 1\quad\text{and}\quad|v_k|\leq 1\qquad(k\in\mathbb{N}).$$

    It follows that

    $$\tilde{G}_{L(x)}(u(z))=1+L_{p,q,1}(x)u(z)+L_{p,q,2}(x)u^2(z)+\cdots=1+L_{p,q,1}(x)u_1z+\left[L_{p,q,1}(x)u_2+L_{p,q,2}(x)u_1^2\right]z^2+\cdots\qquad(2.3)$$

    and

    $$\tilde{G}_{L(x)}(v(w))=1+L_{p,q,1}(x)v(w)+L_{p,q,2}(x)v^2(w)+\cdots=1+L_{p,q,1}(x)v_1w+\left[L_{p,q,1}(x)v_2+L_{p,q,2}(x)v_1^2\right]w^2+\cdots.\qquad(2.4)$$

    From the equalities (2.1) and (2.2), we obtain that

    $$\left(\frac{zf'(z)}{f(z)}\right)\left(\frac{f(z)}{z}\right)^{\vartheta+i\delta}=1+L_{p,q,1}(x)u_1z+\left[L_{p,q,1}(x)u_2+L_{p,q,2}(x)u_1^2\right]z^2+\cdots\qquad(2.5)$$

    and

    $$\left(\frac{wg'(w)}{g(w)}\right)\left(\frac{g(w)}{w}\right)^{\vartheta+i\delta}=1+L_{p,q,1}(x)v_1w+\left[L_{p,q,1}(x)v_2+L_{p,q,2}(x)v_1^2\right]w^2+\cdots.\qquad(2.6)$$

    It follows from (2.5) and (2.6) that

    $$(\vartheta+i\delta+1)a_2=L_{p,q,1}(x)u_1,\qquad(2.7)$$
    $$\frac{(\vartheta+i\delta-1)(\vartheta+i\delta+2)}{2}\,a_2^2+(\vartheta+i\delta+2)\,a_3=L_{p,q,1}(x)u_2+L_{p,q,2}(x)u_1^2,\qquad(2.8)$$

    and

    $$-(\vartheta+i\delta+1)a_2=L_{p,q,1}(x)v_1,\qquad(2.9)$$
    $$\frac{(\vartheta+i\delta+2)(\vartheta+i\delta+3)}{2}\,a_2^2-(\vartheta+i\delta+2)\,a_3=L_{p,q,1}(x)v_2+L_{p,q,2}(x)v_1^2.\qquad(2.10)$$

    From (2.7) and (2.9)

    $$u_1=-v_1\qquad(2.11)$$

    and

    $$2(\vartheta+i\delta+1)^2a_2^2=L_{p,q,1}^2(x)\left(u_1^2+v_1^2\right).\qquad(2.12)$$

    By adding (2.8) and (2.10), we get

    $$\left[(\vartheta+i\delta)^2+3(\vartheta+i\delta)+2\right]a_2^2=L_{p,q,1}(x)(u_2+v_2)+L_{p,q,2}(x)\left(u_1^2+v_1^2\right).\qquad(2.13)$$

    Using (2.12) in (2.13), we have

    $$\left[(\vartheta+i\delta)^2+3(\vartheta+i\delta)+2-\frac{2L_{p,q,2}(x)(\vartheta+i\delta+1)^2}{L_{p,q,1}^2(x)}\right]a_2^2=L_{p,q,1}(x)(u_2+v_2),$$
    $$a_2^2=\frac{L_{p,q,1}^3(x)(u_2+v_2)}{\left[(\vartheta+i\delta)^2+3(\vartheta+i\delta)+2\right]L_{p,q,1}^2(x)-2L_{p,q,2}(x)(\vartheta+i\delta+1)^2}.\qquad(2.14)$$

    Thus, from (1.3) and (2.14) we get

    $$|a_2|\leq\frac{p(x)\sqrt{2p(x)}}{\sqrt{\left|\left[(\vartheta+i\delta)^2+3(\vartheta+i\delta)+2-2(\vartheta+i\delta+1)^2\right]p^2(x)-4q(x)(\vartheta+i\delta+1)^2\right|}}.$$

    Next, in order to find the bound on |a3|, by subtracting (2.10) from (2.8), we obtain

    $$2(\vartheta+i\delta+2)a_3-2(\vartheta+i\delta+2)a_2^2=L_{p,q,1}(x)(u_2-v_2)+L_{p,q,2}(x)\left(u_1^2-v_1^2\right),$$
    $$2(\vartheta+i\delta+2)a_3=L_{p,q,1}(x)(u_2-v_2)+2(\vartheta+i\delta+2)a_2^2,$$
    $$a_3=\frac{L_{p,q,1}(x)(u_2-v_2)}{2(\vartheta+i\delta+2)}+a_2^2.\qquad(2.15)$$

    Then, in view of (2.11) and (2.12), we have from (2.15)

    $$a_3=\frac{L_{p,q,1}^2(x)}{2(\vartheta+i\delta+1)^2}\left(u_1^2+v_1^2\right)+\frac{L_{p,q,1}(x)}{2(\vartheta+i\delta+2)}(u_2-v_2),$$
    so that
    $$|a_3|\leq\frac{p^2(x)}{|\vartheta+i\delta+1|^2}+\frac{p(x)}{|\vartheta+i\delta+2|}=\frac{p^2(x)}{(\vartheta+1)^2+\delta^2}+\frac{p(x)}{\sqrt{(\vartheta+2)^2+\delta^2}}.$$

    This completes the proof.
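    As an independent check of the coefficient relations (2.7) and (2.8), the following sympy sketch (ours, not from the paper) expands the Bazilevič functional for a truncated f and compares the z and z^2 coefficients; here lam stands for ϑ + iδ, kept symbolic.

        import sympy as sp

        z, a2, a3, lam = sp.symbols('z a2 a3 lam')
        f  = z + a2*z**2 + a3*z**3              # truncated Taylor series of f
        fz = 1 + a2*z + a3*z**2                 # f(z)/z

        # (z f'/f)(f/z)^lam = f'(z) (f/z)^(lam-1); the exp/log form keeps the series robust.
        F = sp.diff(f, z) * sp.exp((lam - 1)*sp.log(fz))
        ser = sp.expand(sp.series(F, z, 0, 3).removeO())

        print(sp.simplify(ser.coeff(z, 1) - (lam + 1)*a2))                                   # 0, cf. (2.7)
        print(sp.simplify(ser.coeff(z, 2) - ((lam - 1)*(lam + 2)/2*a2**2 + (lam + 2)*a3)))   # 0, cf. (2.8)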

    Taking δ=0, in Theorem 2.1, we get the following corollary.

    Corollary 2.2. Let the function f(z) given by (1.1) be in the class B(ϑ). Then

    $$|a_2|\leq\frac{p(x)\sqrt{2p(x)}}{\sqrt{\left|\left[\vartheta^2+3\vartheta+2-2(\vartheta+1)^2\right]p^2(x)-4q(x)(\vartheta+1)^2\right|}}$$

    and

    $$|a_3|\leq\frac{p^2(x)}{(\vartheta+1)^2}+\frac{p(x)}{\vartheta+2}.$$

    Also, taking ϑ=0 and δ=0, in Theorem 2.1, we get the results given in [18].
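    For concrete parameter choices, the bounds of Theorem 2.1 and Corollary 2.2 can be evaluated numerically. The helper below is a hypothetical illustration (the function name and the Lucas-polynomial specialization p(x) = x, q(x) = 1 evaluated at x = 1 are our assumptions):

        import math

        def coefficient_bounds(p, q, theta, delta):
            # Evaluate the |a_2| and |a_3| bounds of Theorem 2.1 for numerical values
            # p = p(x) > 0, q = q(x) and parameters theta, delta.
            lam = complex(theta, delta)
            denom = (lam**2 + 3*lam + 2 - 2*(lam + 1)**2)*p**2 - 4*q*(lam + 1)**2
            a2 = p*math.sqrt(2*p) / math.sqrt(abs(denom))
            a3 = p**2/((theta + 1)**2 + delta**2) + p/math.sqrt((theta + 2)**2 + delta**2)
            return a2, a3

        # Corollary 2.2 setting (delta = 0), Lucas-polynomial case at x = 1.
        print(coefficient_bounds(p=1.0, q=1.0, theta=1.0, delta=0.0))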

    The Fekete–Szegö inequality is one of the famous problems related to the coefficients of univalent analytic functions. It was first established in [12]; the classical Fekete–Szegö inequality for the coefficients of f ∈ S is

    $$|a_3-\mu a_2^2|\leq 1+2\exp\left(\frac{-2\mu}{1-\mu}\right)\qquad\text{for }\ \mu\in[0,1).$$

    As μ → 1⁻, we have the elementary inequality |a_3 - a_2^2| ≤ 1. Moreover, the coefficient functional

    $$\varsigma_\mu(f)=a_3-\mu a_2^2$$

    on the normalized analytic functions f in the unit disk U plays an important role in function theory. The problem of maximizing the absolute value of the functional ςμ(f) is called the Fekete-Szegö problem.
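    As a small numerical illustration of the classical bound (ours, not from [12]): at μ = 0 it gives 3, and it decreases towards 1 as μ → 1⁻.

        import math

        def classical_fs_bound(mu):
            # Classical Fekete-Szegö bound 1 + 2*exp(-2*mu/(1 - mu)) for f in S, 0 <= mu < 1.
            return 1 + 2*math.exp(-2*mu/(1 - mu))

        for mu in (0.0, 0.5, 0.9):
            print(mu, classical_fs_bound(mu))    # 3.0, about 1.27, about 1.0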

    In this section, we find the sharp bounds of the Fekete–Szegö functional ς_μ(f) for functions f ∈ B(ϑ,δ) given by (1.1).

    Theorem 3.1. Let f given by (1.1) be in the class B(ϑ,δ) and μR. Then

    $$|a_3-\mu a_2^2|\leq\begin{cases}\dfrac{p(x)}{\sqrt{(\vartheta+2)^2+\delta^2}}, & 0\leq|h(\mu)|\leq\dfrac{1}{2\sqrt{(\vartheta+2)^2+\delta^2}},\\[2ex] 2\,p(x)\,|h(\mu)|, & |h(\mu)|\geq\dfrac{1}{2\sqrt{(\vartheta+2)^2+\delta^2}},\end{cases}$$

    where

    $$h(\mu)=\frac{L_{p,q,1}^2(x)\,(1-\mu)}{\left[(\vartheta+i\delta)^2+3(\vartheta+i\delta)+2\right]L_{p,q,1}^2(x)-2L_{p,q,2}(x)(\vartheta+i\delta+1)^2}.$$

    Proof. From (2.14) and (2.15), we conclude that

    $$a_3-\mu a_2^2=\frac{(1-\mu)\,L_{p,q,1}^3(x)(u_2+v_2)}{\left[(\vartheta+i\delta)^2+3(\vartheta+i\delta)+2\right]L_{p,q,1}^2(x)-2L_{p,q,2}(x)(\vartheta+i\delta+1)^2}+\frac{L_{p,q,1}(x)}{2(\vartheta+i\delta+2)}(u_2-v_2)$$
    $$=L_{p,q,1}(x)\left[\left(h(\mu)+\frac{1}{2(\vartheta+i\delta+2)}\right)u_2+\left(h(\mu)-\frac{1}{2(\vartheta+i\delta+2)}\right)v_2\right],$$

    where

    $$h(\mu)=\frac{L_{p,q,1}^2(x)\,(1-\mu)}{\left[(\vartheta+i\delta)^2+3(\vartheta+i\delta)+2\right]L_{p,q,1}^2(x)-2L_{p,q,2}(x)(\vartheta+i\delta+1)^2}.$$

    Then, in view of (1.3), we obtain

    $$|a_3-\mu a_2^2|\leq\begin{cases}\dfrac{p(x)}{\sqrt{(\vartheta+2)^2+\delta^2}}, & 0\leq|h(\mu)|\leq\dfrac{1}{2\sqrt{(\vartheta+2)^2+\delta^2}},\\[2ex] 2\,p(x)\,|h(\mu)|, & |h(\mu)|\geq\dfrac{1}{2\sqrt{(\vartheta+2)^2+\delta^2}}.\end{cases}$$

    We end this section with some corollaries.

    Taking μ=1 in Theorem 3.1, we get the following corollary.

    Corollary 3.2. If fB(ϑ,δ), then

    $$|a_3-a_2^2|\leq\frac{p(x)}{\sqrt{(\vartheta+2)^2+\delta^2}}.$$

    Taking δ=0 in Theorem 3.1, we get the following corollary.

    Corollary 3.3. Let f given by (1.1) be in the class B(ϑ,0). Then

    $$|a_3-\mu a_2^2|\leq\begin{cases}\dfrac{p(x)}{\vartheta+2}, & 0\leq|h(\mu)|\leq\dfrac{1}{2(\vartheta+2)},\\[2ex] 2\,p(x)\,|h(\mu)|, & |h(\mu)|\geq\dfrac{1}{2(\vartheta+2)}.\end{cases}$$

    Also, taking ϑ=0, δ=0 and μ=1 in Theorem 3.1, we get the following corollary.

    Corollary 3.4. Let f given by (1.1) be in the class B(0,0). Then

    $$|a_3-a_2^2|\leq\frac{p(x)}{2}.$$
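    The piecewise bound of Theorem 3.1 can likewise be evaluated for concrete data. The sketch below is our own illustration (the helper name and the parameter choices, including the Chebyshev-type specialization p(x) = 2x, q(x) = -1 at x = 0.9, are assumptions, not from the paper); note that for ϑ = δ = 0 and μ = 1 it reproduces the bound p(x)/2 of Corollary 3.4.

        import math

        def fekete_szego_bound(p, q, theta, delta, mu):
            # Evaluate the bound of Theorem 3.1 on |a_3 - mu*a_2^2| for numerical
            # values p = p(x), q = q(x) and parameters theta, delta, mu.
            lam = complex(theta, delta)
            L1, L2 = p, p**2 + 2*q                 # L_{p,q,1} and L_{p,q,2} from (1.3)
            h = L1**2*(1 - mu) / ((lam**2 + 3*lam + 2)*L1**2 - 2*L2*(lam + 1)**2)
            threshold = 1/(2*math.sqrt((theta + 2)**2 + delta**2))
            if abs(h) <= threshold:
                return p/math.sqrt((theta + 2)**2 + delta**2)
            return 2*p*abs(h)

        print(fekete_szego_bound(p=1.0, q=1.0, theta=0.0, delta=0.0, mu=1.0))    # 0.5 = p(x)/2
        for mu in (0.0, 1.0, 2.0):
            print(mu, fekete_szego_bound(p=1.8, q=-1.0, theta=1.0, delta=0.0, mu=mu))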

    All authors declare no conflicts of interest in this paper.



    [1] X. Ao, X. Wang, L. Luo, PENS: A dataset and generic framework for personalized news headline generation, in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 1 (2021), 82–92. https://doi.org/10.18653/v1/2021.acl-long.7
    [2] F. Z. Zhuang, P. Luo, Q. He, Z. Z. Shi, Survey on transfer learning, J. Software, 26 (2015), 26–39. https://doi.org/10.13328/j.cnki.jos.004631 doi: 10.13328/j.cnki.jos.004631
    [3] H. Choi, J. Kim, S. Joe, Analyzing Zero-shot cross-lingual transfer in supervised NLP tasks, in 2020 25th International Conference on Pattern Recognition (ICPR), (2021), 9608–9613. https://doi.org/10.1109/icpr48806.2021.9412570
    [4] W. Wang, V. W. Zheng, H. Yu, A survey of Zero-shot learning: Settings, methods, and applications, ACM Trans. Intell. Syst. Technol., 10 (2019), 1–37. https://doi.org/10.1145/3293318 doi: 10.1145/3293318
    [5] N. Y. Wang, Y. X. Ye, L. Liu, L. Z. Feng, T. Bao, T. Peng, Advances in deep learning-based language modeling research, J. Software, 32 (2021), 1082–1115. https://doi.org/10.13328/j.cnki.jos.006169 doi: 10.13328/j.cnki.jos.006169
    [6] S. Bae, T. Kim, J. Kim, Summary level training of sentence rewriting for abstractive summarization, in Proceedings of the 2nd Workshop on New Frontiers in Summarization, (2019), 10–20. https://doi.org/10.18653/v1/d19-5402
    [7] K. Krishna, B. V. Srinivasan, Generating topic-oriented summaries using neural attention, in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1 (2018), 1697–1705. https://doi.org/10.18653/v1/n18-1153
    [8] T. Ma, H. Wang, Y. Zhao, Topic-based automatic summarization algorithm for Chinese short text, Math. Biosci. Eng., 17 (2020), 3582–3600. https://doi.org/10.3934/mbe.2020202 doi: 10.3934/mbe.2020202
    [9] S. Narayan, J. Maynez, J. Adamek, Stepwise extractive summarization and planning with structured transformers, preprint, arXiv: 1810.04805.
    [10] A. See, P. J. Liu, C. D. Manning, Get to the point: Summarization with pointer-generator networks, in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 1 (2017), 1073–1083. https://doi.org/10.18653/v1/p17-1099
    [11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, et al., Attention is all you need, in Advances in Neural Information Processing Systems, (2017), 1–30.
    [12] P. F. Du, X. Y. Li, Y. L. Gao, Survey on multimodal visual language representation learning, J. Software, 32 (2021), 327–348. https://doi.org/10.13328/j.cnki.jos.006125 doi: 10.13328/j.cnki.jos.006125
    [13] S. Golovanov, R. Kurbanov, S. Nikolenko, Large-scale transfer learning for natural language generation, in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, (2019), 6053–6058. https://doi.org/10.18653/v1/p19-1608
    [14] J. J. Huang, P. W. Li, M. Peng, Q. Q. Xie, C. Xu, Research on deep learning-based topic models, Chin. J. Comput., 43 (2020), 827–855.
    [15] N. Dethlefs, Domain transfer for deep natural language generation from abstract meaning representations, IEEE Comput. Intell. Mag., 12 (2017), 18–28. https://doi.org/10.1109/mci.2017.2708558 doi: 10.1109/mci.2017.2708558
    [16] X. Qiu, T. Sun, Y. Xu, Pre-trained models for natural language processing: A survey, Sci. Chin. Technol. Sci., 63 (2020), 1872–1897. https://doi.org/10.1109/iceib53692.2021.9686420 doi: 10.1109/iceib53692.2021.9686420
    [17] C. Raffel, N. Shazeer, A. Roberts, Exploring the limits of transfer learning with a unified text-to-text transformer, J. Mach. Learn. Res., 21 (2020), 1–67.
    [18] M. Lewis, Y. Liu, N. Goyal, BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension, in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, (2020), 7871–7880. https://doi.org/10.18653/v1/2020.acl-main.703
    [19] J. Zhang, Y. Zhao, M. Saleh, PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization, in International Conference on Machine Learning, (2020), 11328–11339.
    [20] Z. C. Zhang, M. Y. Zhang, T. Zhou, Pre-trained language model augmented adversarial training network for Chinese clinical event detection, Math. Biosci. Eng, 17 (2020), 2825–2841. https://doi.org/10.3934/mbe.2020157 doi: 10.3934/mbe.2020157
    [21] S. Chen, L. Han, X. Liu, Subspace distribution adaptation frameworks for domain adaptation, IEEE Trans. Neural Networks Learn. Syst., 31 (2020), 5204–5218. https://doi.org/10.1109/tnnls.2020.2964790 doi: 10.1109/tnnls.2020.2964790
    [22] H. Li, S. J. Pan, S. Wang, Heterogeneous domain adaptation via nonlinear matrix factorization, IEEE Trans. Neural Networks Learn. Syst., 31 (2020), 984–996. https://doi.org/10.1109/tnnls.2019.2913723 doi: 10.1109/tnnls.2019.2913723
    [23] W. Zellinger, B. A. Moser, T. Grubinger, Robust unsupervised domain adaptation for neural networks via moment alignment, Inf. Sci., 483 (2019), 174–191. https://doi.org/10.1016/j.ins.2019.01.025 doi: 10.1016/j.ins.2019.01.025
    [24] X. Glorot, A. Bordes, Y. Bengio, Domain adaptation for large-scale sentiment classification: A deep learning approach, in International Conference on Machine Learning, (2011), 513–520.
    [25] J. Blitzer, M. Dredze, F. Pereira, Biographies, bollywood, boom-boxes, blenders: Domain adaptation for sentiment classification, in Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, 7 (2007), 440–447.
    [26] F. Wu, Y. Huang, Sentiment domain adaptation with multiple sources, in Proceedings of the 54th Annual Meeting of the Association of Computational Linguistics, (2016), 301–310, https://doi.org/10.18653/v1/p16-1029
    [27] J. Blitzer, R. McDonald, F. Pereira, Domain adaptation with structural correspondence learning, in Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, (2006), 120–128. https://doi.org/10.3115/1610075.1610094
    [28] J. Pan, X. Hu, P. Li, H. Li, W. He, Y. Zhang, Y. Lin, Domain adaptation via multi-layer transfer learning, Neurocomputing, 190 (2016), 10–24. https://doi.org/10.1016/j.neucom.2015.12.097 doi: 10.1016/j.neucom.2015.12.097
    [29] P. Wei, R. Sagarna, Y. Ke, Y. S. Ong, C. K. Goh, Source-target similarity modelings for multi-source transfer gaussian process regression, in Proceedings of the 34th International Conference on Machine Learning, (2017), 3722–3731.
    [30] N. Houlsby, A. Giurgiu, S. Jastrzebski, Parameter-efficient transfer learning for NLP, in PMLR, (2019), 2790–2799.
    [31] H. Zhang, L. Liu, Y. Long, Deep transductive network for generalized zero shot learning, Pattern Recogn., 105 (2020), 107370. https://doi.org/10.1016/j.patcog.2020.107370 doi: 10.1016/j.patcog.2020.107370
    [32] T. Zhao, M. Eskenazi, Zero-shot dialog generation with cross-domain latent actions, in Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, (2018), 1–10. https://doi.org/10.18653/v1/w18-5001
    [33] Z. Liu, J. Shin, Y. Xu, Zero-shot cross-lingual dialogue systems with transferable latent variables, preprint, arXiv: 1911.04081.
    [34] Ayana, S. Shen, Y. Chen, Zero-shot cross-lingual neural headline generation, IEEE/ACM Trans. Audio Speech Lang. Process., 26 (2018), 2319–2327. https://doi.org/10.1109/taslp.2018.2842432 doi: 10.1109/taslp.2018.2842432
    [35] X. Duan, M. Yin, M. Zhang, Zero-shot cross-lingual abstractive sentence summarization through teaching generation and attention, in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, (2019), 3162–3172. https://doi.org/10.18653/v1/p19-1305
    [36] J. Devlin, M. W. Chang, K. Lee, BERT: Pre-training of deep bidirectional transformers for language understanding, preprint, arXiv: 1810.04805.
    [37] X. T. Song, H. L. Sun, A review of neural network-based automatic source code abstraction techniques, J. Software, 33 (2022), 55–77. https://doi.org/10.13328/j.cnki.jos.006337 doi: 10.13328/j.cnki.jos.006337
    [38] P. J. Rousseeuw, Silhouettes: A graphical aid to the interpretation and validation of cluster analysis, J. Comput. Appl. Math., 20 (1987), 53–65. https://doi.org/10.1016/0377-0427(87)90125-7 doi: 10.1016/0377-0427(87)90125-7
    [39] Z. Huang, P. Xu, D. Liang, TRANS-BLSTM: Transformer with bidirectional LSTM for language understanding, preprint, arXiv: 2003.07000.
    [40] Y. Liu, M. Lapata, Text summarization with pretrained encoders, preprint, arXiv: 1908.08345.
    [41] K. Yaser, R. Naren, K. R. Chandan, Deep transfer reinforcement learning for text summarization, in Proceedings of the 2019 SIAM International Conference on Data Mining, (2019), 675–683. https://doi.org/10.1137/1.9781611975673.76
    [42] K. Qian, Z. Yu, Domain adaptive dialog generation via meta learning, in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, (2019), 2639–2649. https://doi.org/10.18653/v1/p19-1253
    [43] Y. S. Chen, H. H. Shuai, Meta-transfer learning for low-resource abstractive summarization, preprint, arXiv: 2102.09397.
  • This article has been cited by:

    1. Ala Amourah, Basem Aref Frasin, Thabet Abdeljawad, Sivasubramanian Srikandan, Fekete-Szegö Inequality for Analytic and Biunivalent Functions Subordinate to Gegenbauer Polynomials, 2021, 2021, 2314-8888, 1, 10.1155/2021/5574673
    2. Mohamed Illafe, Ala Amourah, Maisarah Haji Mohd, Coefficient Estimates and Fekete–Szegö Functional Inequalities for a Certain Subclass of Analytic and Bi-Univalent Functions, 2022, 11, 2075-1680, 147, 10.3390/axioms11040147
    3. Nazmiye Yilmaz, İbrahim Aktaş, On some new subclasses of bi-univalent functions defined by generalized Bivariate Fibonacci polynomial, 2022, 33, 1012-9405, 10.1007/s13370-022-00993-y
    4. Daniel Breaz, Halit Orhan, Luminiţa-Ioana Cotîrlă, Hava Arıkan, A New Subclass of Bi-Univalent Functions Defined by a Certain Integral Operator, 2023, 12, 2075-1680, 172, 10.3390/axioms12020172
    5. Luminiţa-Ioana Cotîrlǎ, Abbas Kareem Wanas, Applications of Laguerre Polynomials for Bazilevič and θ-Pseudo-Starlike Bi-Univalent Functions Associated with Sakaguchi-Type Functions, 2023, 15, 2073-8994, 406, 10.3390/sym15020406
    6. Isra Al-Shbeil, Abbas Kareem Wanas, Afis Saliu, Adriana Cătaş, Applications of Beta Negative Binomial Distribution and Laguerre Polynomials on Ozaki Bi-Close-to-Convex Functions, 2022, 11, 2075-1680, 451, 10.3390/axioms11090451
    7. Tariq Al-Hawary, Ala Amourah, Basem Aref Frasin, Fekete–Szegö inequality for bi-univalent functions by means of Horadam polynomials, 2021, 27, 1405-213X, 10.1007/s40590-021-00385-5
    8. Abbas Kareem Wanas, Luminiţa-Ioana Cotîrlă, Applications of (M,N)-Lucas Polynomials on a Certain Family of Bi-Univalent Functions, 2022, 10, 2227-7390, 595, 10.3390/math10040595
    9. Abbas Kareem Wanas, Haeder Younis Althoby, Fekete-Szegö Problem for Certain New Family of Bi-Univalent Functions, 2022, 2581-8147, 263, 10.34198/ejms.8222.263272
    10. Arzu Akgül, F. Müge Sakar, A new characterization of (P, Q)-Lucas polynomial coefficients of the bi-univalent function class associated with q-analogue of Noor integral operator, 2022, 33, 1012-9405, 10.1007/s13370-022-01016-6
    11. Tariq Al-Hawary, Coefficient bounds and Fekete–Szegö problem for qualitative subclass of bi-univalent functions, 2022, 33, 1012-9405, 10.1007/s13370-021-00934-1
    12. Ala Amourah, Basem Aref Frasin, Tamer M. Seoudy, An Application of Miller–Ross-Type Poisson Distribution on Certain Subclasses of Bi-Univalent Functions Subordinate to Gegenbauer Polynomials, 2022, 10, 2227-7390, 2462, 10.3390/math10142462
    13. Abbas Kareem Wanas, Alina Alb Lupaş, Applications of Laguerre Polynomials on a New Family of Bi-Prestarlike Functions, 2022, 14, 2073-8994, 645, 10.3390/sym14040645
    14. Ibtisam Aldawish, Basem Frasin, Ala Amourah, Bell Distribution Series Defined on Subclasses of Bi-Univalent Functions That Are Subordinate to Horadam Polynomials, 2023, 12, 2075-1680, 362, 10.3390/axioms12040362
    15. Ala Amourah, Omar Alnajar, Maslina Darus, Ala Shdouh, Osama Ogilat, Estimates for the Coefficients of Subclasses Defined by the Bell Distribution of Bi-Univalent Functions Subordinate to Gegenbauer Polynomials, 2023, 11, 2227-7390, 1799, 10.3390/math11081799
    16. Omar Alnajar, Maslina Darus, 2024, 3150, 0094-243X, 020005, 10.1063/5.0228336
    17. Muajebah Hidan, Abbas Kareem Wanas, Faiz Chaseb Khudher, Gangadharan Murugusundaramoorthy, Mohamed Abdalla, Coefficient bounds for certain families of bi-Bazilevič and bi-Ozaki-close-to-convex functions, 2024, 9, 2473-6988, 8134, 10.3934/math.2024395
    18. Ala Amourah, Ibtisam Aldawish, Basem Aref Frasin, Tariq Al-Hawary, Applications of Shell-like Curves Connected with Fibonacci Numbers, 2023, 12, 2075-1680, 639, 10.3390/axioms12070639
    19. Tariq Al-Hawary, Ala Amourah, Abdullah Alsoboh, Osama Ogilat, Irianto Harny, Maslina Darus, Applications of qUltraspherical polynomials to bi-univalent functions defined by qSaigo's fractional integral operators, 2024, 9, 2473-6988, 17063, 10.3934/math.2024828
    20. İbrahim Aktaş, Derya Hamarat, Generalized bivariate Fibonacci polynomial and two new subclasses of bi-univalent functions, 2023, 16, 1793-5571, 10.1142/S1793557123501474
    21. Abbas Kareem Wanas, Fethiye Müge Sakar, Alina Alb Lupaş, Applications Laguerre Polynomials for Families of Bi-Univalent Functions Defined with (p,q)-Wanas Operator, 2023, 12, 2075-1680, 430, 10.3390/axioms12050430
    22. Ala Amourah, Zabidin Salleh, B. A. Frasin, Muhammad Ghaffar Khan, Bakhtiar Ahmad, Subclasses of bi-univalent functions subordinate to gegenbauer polynomials, 2023, 34, 1012-9405, 10.1007/s13370-023-01082-4
    23. Tariq Al-Hawary, Basem Aref Frasin, Abbas Kareem Wanas, Georgia Irina Oros, On Rabotnov fractional exponential function for bi-univalent subclasses, 2023, 16, 1793-5571, 10.1142/S1793557123502170
    24. Tariq Al-Hawary, Ala Amourah, Hasan Almutairi, Basem Frasin, Coefficient Inequalities and Fekete–Szegö-Type Problems for Family of Bi-Univalent Functions, 2023, 15, 2073-8994, 1747, 10.3390/sym15091747
    25. Omar Alnajar, Osama Ogilat, Ala Amourah, Maslina Darus, Maryam Salem Alatawi, The Miller-Ross Poisson distribution and its applications to certain classes of bi-univalent functions related to Horadam polynomials, 2024, 10, 24058440, e28302, 10.1016/j.heliyon.2024.e28302
    26. Tariq Al-Hawary, Basem Frasin, Daniel Breaz, Luminita-Ioana Cotîrlă, Inclusive Subclasses of Bi-Univalent Functions Defined by Error Functions Subordinate to Horadam Polynomials, 2025, 17, 2073-8994, 211, 10.3390/sym17020211
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
