Research article

Knowledge graph embedding by fusing multimodal content via cross-modal learning


  • Knowledge graph embedding aims to learn representation vectors for entities and relations. Most existing approaches learn representations from the structural information in triples, neglecting the content related to entities and relations. Although some approaches exploit related multimodal content to improve knowledge graph embedding, such as the text descriptions and images associated with entities, they do not effectively address the heterogeneity and cross-modal correlation constraints of the different types of content and network structure. In this paper, we propose a multimodal content fusion model (MMCF) for knowledge graph embedding. To effectively fuse heterogeneous data for knowledge graph embedding, such as text descriptions, related images and structural information, a cross-modal correlation learning component is proposed. It first learns the intra-modal and inter-modal correlations to fuse the multimodal content of each entity, and these are then fused with the structure features by a gating network. Meanwhile, to enhance the relation features, the features of the associated head entity and tail entity are fused to learn the relation embedding. To evaluate the proposed model, we compare it with baselines on three datasets, i.e., FB-IMG, WN18RR and FB15k-237. Link prediction experiments demonstrate that our model significantly outperforms the state-of-the-art on most metrics, implying the superiority of the proposed method.

    Citation: Shi Liu, Kaiyang Li, Yaoying Wang, Tianyou Zhu, Jiwei Li, Zhenyu Chen. Knowledge graph embedding by fusing multimodal content via cross-modal learning[J]. Mathematical Biosciences and Engineering, 2023, 20(8): 14180-14200. doi: 10.3934/mbe.2023634




    The equation:

    \[
    \begin{cases}
    \partial_t u + \partial_x f(u) - \beta^2 \partial_x^2 u + \delta \partial_x^3 u + \kappa u + \gamma^2 |u| u = 0, & \quad 0<t<T,\ x\in\mathbb{R},\\
    u(0,x) = u_0(x), & \quad x\in\mathbb{R},
    \end{cases} \tag{1.1}
    \]

    was originally derived in [14,17] with $f(u)=au^2$, in the study of microbubbles coated by viscoelastic shells. These structures are crucial in ultrasound diagnosis using contrast agents, and the dynamics of individual coated bubbles are explored taking into account nonlinear competition and dissipation factors such as dispersion, thermal effects, and drag force.

    The coefficients $\beta^2$, $\delta$, $\kappa$, and $\gamma^2$ are related to the dissipation, the dispersion, the thermal conduction dissipation, and the drag force, respectively.

    If $\kappa=\gamma=0$, we obtain the Kudryashov-Sinelshchikov [18] Korteweg-de Vries-Burgers [3,20] equation

    \[
    \partial_t u + a \partial_x u^2 - \beta^2 \partial_x^2 u + \delta \partial_x^3 u = 0, \tag{1.2}
    \]

    that models pressure waves in liquids with gas bubbles, taking into account heat transfer and viscosity. The mathematical results on Eq (1.2) are the following:

    ● analysis of exact solutions in [13],

    ● existence of the traveling waves in [2],

    ● well-posedness and asymptotic behavior in [7,11].

    If $\beta=0$, we derive the Korteweg-de Vries equation:

    \[
    \partial_t u + a \partial_x u^2 + \delta \partial_x^3 u = 0, \tag{1.3}
    \]

    which describes surface waves of small amplitude and long wavelength in shallow water. Here, $u(t,x)$ represents the wave height above a flat bottom, $x$ corresponds to the distance in the propagation direction, and $t$ denotes the elapsed time. In [4,6,10,12,15,16], the complete integrability of Eq (1.3) and the existence of solitary wave solutions are proved.
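    As a quick illustration of Eq (1.3), one can check the classical one-soliton traveling wave numerically. Writing Eq (1.3) as $\partial_t u + 2au\partial_x u + \delta\partial_x^3 u = 0$, the profile $u(t,x) = \frac{3c}{2a}\operatorname{sech}^2\!\big(\tfrac12\sqrt{c/\delta}\,(x-ct)\big)$ is an exact solution. The following sketch (a numerical sanity check, not part of the paper's argument; the parameter values $a$, $\delta$, $c$ are arbitrary) evaluates the PDE residual by finite differences:

```python
import numpy as np

# Sanity check: u = (3c/(2a)) * sech^2( (1/2)*sqrt(c/delta)*(x - c*t) )
# solves Eq (1.3), i.e. u_t + 2*a*u*u_x + delta*u_xxx = 0.  Arbitrary test values:
a, delta, c = 1.0, 1.0, 4.0

def soliton(t, x):
    k = 0.5 * np.sqrt(c / delta)          # wave number of the sech^2 profile
    return (3.0 * c / (2.0 * a)) / np.cosh(k * (x - c * t)) ** 2

h, dt = 1e-3, 1e-5                        # space/time steps for the differences
x = np.arange(-10.0, 10.0, h)
u0, u1 = soliton(0.0, x), soliton(dt, x)

u_t = (u1 - u0) / dt                      # forward difference in time
u_x = np.gradient(u0, h)                  # central differences in space
u_xxx = np.gradient(np.gradient(u_x, h), h)

residual = u_t + 2 * a * u0 * u_x + delta * u_xxx
print(np.max(np.abs(residual)))           # small compared with max|u_xxx| = O(10)
```

    The residual is at the level of the discretization error, consistent with the soliton being an exact solution.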

    Throughout the manuscript, we will assume

    ● on the coefficients

    \[
    \beta, \delta, \kappa, \gamma \in \mathbb{R}, \qquad \beta, \delta, \gamma \neq 0; \tag{1.4}
    \]

    ● on the flux f, one of the following conditions:

    \[
    f(u) = a u^2 + b u^3, \tag{1.5}
    \]
    \[
    f \in C^1(\mathbb{R}), \qquad |f'(u)| \leq C_0 (1 + |u|), \quad u \in \mathbb{R}, \tag{1.6}
    \]

    for some positive constant C0;

    ● on the initial value

    \[
    u_0 \in H^1(\mathbb{R}). \tag{1.7}
    \]

    The main result of this paper is the following theorem.

    Theorem 1.1. Assume Eqs (1.5)–(1.7). For fixed T>0, there exists a unique distributional solution u of Eq (1.1), such that

    \[
    u \in L^\infty(0,T;H^1(\mathbb{R})) \cap L^4(0,T;W^{1,4}(\mathbb{R})) \cap L^6(0,T;W^{1,6}(\mathbb{R})), \qquad \partial_x^2 u \in L^2((0,T)\times\mathbb{R}). \tag{1.8}
    \]

    Moreover, if u1 and u2 are solutions to Eq (1.1) corresponding to the initial conditions u1,0 and u2,0, respectively, it holds that:

    \[
    \|u_1(t,\cdot) - u_2(t,\cdot)\|_{L^2(\mathbb{R})} \leq e^{C(T) t}\, \|u_{1,0} - u_{2,0}\|_{L^2(\mathbb{R})}, \tag{1.9}
    \]

    for some suitable $C(T)>0$ and every $0 \leq t \leq T$.

    Observe that Theorem 1.1 gives the well-posedness of (1.1), without conditions on the constants. Moreover, the proof of Theorem 1.1 is based on the Aubin-Lions Lemma [5,21]. The analysis of Eq (1.1) is more delicate than the one of Eq (1.2) due to the presence of the nonlinear sources and the very general assumptions on the coefficients.

    The structure of the paper is outlined as follows. Section 2 is dedicated to establishing several a priori estimates for a vanishing viscosity approximation of Eq (1.1). These estimates are crucial for proving our main result, which is presented in Section 3.

    To establish existence, we utilize a vanishing viscosity approximation of Eq (1.1), as discussed in [19]. Let $0<\varepsilon<1$ be a small parameter, and denote by $u_\varepsilon \in C^\infty([0,T)\times\mathbb{R})$ the unique classical solution to the following problem [1,9]:

    \[
    \begin{cases}
    \partial_t u_\varepsilon + \partial_x f(u_\varepsilon) - \beta^2 \partial_x^2 u_\varepsilon + \delta \partial_x^3 u_\varepsilon + \kappa u_\varepsilon + \gamma^2 |u_\varepsilon| u_\varepsilon = -\varepsilon \partial_x^4 u_\varepsilon, & \quad 0<t<T,\ x\in\mathbb{R},\\
    u_\varepsilon(0,x) = u_{\varepsilon,0}(x), & \quad x\in\mathbb{R},
    \end{cases} \tag{2.1}
    \]

    where $u_{\varepsilon,0}$ is a $C^\infty$ approximation of $u_0$ such that

    \[
    \|u_{\varepsilon,0}\|_{H^1(\mathbb{R})} \leq \|u_0\|_{H^1(\mathbb{R})}. \tag{2.2}
    \]

    Let us prove some a priori estimates on $u_\varepsilon$, denoting by $C_0$ the constants which depend only on the initial data, and by $C(T)$ the constants which depend also on $T$.
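    Before turning to the estimates, it may help to see the role of the regularizing term at the linear level. Dropping the nonlinear terms in Eq (2.1), each Fourier mode is multiplied by $e^{(-\beta^2\xi^2 + i\delta\xi^3 - \kappa - \varepsilon\xi^4)t}$, so the $L^2$ norm is controlled by $e^{|\kappa|t}$ uniformly in $\varepsilon$ — the same mechanism that drives the $\varepsilon$-independent bounds below. A small numerical sketch (illustrative only; all parameter values are arbitrary):

```python
import numpy as np

# Linear part of Eq (2.1) (f = 0, gamma = 0) solved exactly by Fourier multipliers:
#   u_hat(t, xi) = exp((-beta^2 xi^2 + i delta xi^3 - kappa - eps xi^4) t) u_hat(0, xi).
beta, delta, kappa, eps, T = 1.0, 1.0, 0.5, 1e-3, 1.0
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)

u0 = np.exp(-x**2)                                   # smooth initial datum
symbol = -beta**2 * xi**2 + 1j * delta * xi**3 - kappa - eps * xi**4
uT = np.real(np.fft.ifft(np.exp(symbol * T) * np.fft.fft(u0)))

l2 = lambda v: np.sqrt(np.sum(np.abs(v) ** 2) * dx)  # discrete L^2 norm
print(l2(uT), np.exp(abs(kappa) * T) * l2(u0))       # norm vs the exponential bound
```

    The $-\varepsilon\xi^4$ term only adds dissipation, so the bound holds uniformly as $\varepsilon\to 0$.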

    We begin by proving the following lemma:

    Lemma 2.1. Let T>0 be fixed. There exists a constant C(T)>0, which does not depend on ε, such that

    \[
    \|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\gamma^2 e^{2|\kappa| t}\int_0^t\!\!\int_{\mathbb{R}} e^{-2|\kappa| s}\, |u_\varepsilon| u_\varepsilon^2 \,dx\,ds + 2\beta^2 e^{2|\kappa| t}\int_0^t e^{-2|\kappa| s}\, \|\partial_x u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})} \,ds + 2\varepsilon\, e^{2|\kappa| t}\int_0^t e^{-2|\kappa| s}\, \|\partial_x^2 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})} \,ds \leq C(T), \tag{2.3}
    \]

    for every $0 \leq t \leq T$.

    Proof. Let $0 \leq t \leq T$. Multiplying Eq (2.1) by $2u_\varepsilon$ and integrating over $\mathbb{R}$ yields

    \[
    \begin{aligned}
    \frac{d}{dt}\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} &= 2\int_{\mathbb{R}} u_\varepsilon \partial_t u_\varepsilon \,dx\\
    &= \underbrace{-2\int_{\mathbb{R}} u_\varepsilon f'(u_\varepsilon) \partial_x u_\varepsilon \,dx}_{=0} + 2\beta^2\int_{\mathbb{R}} u_\varepsilon \partial_x^2 u_\varepsilon \,dx - 2\delta\int_{\mathbb{R}} u_\varepsilon \partial_x^3 u_\varepsilon \,dx\\
    &\quad - 2\kappa\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - 2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon^2 \,dx - 2\varepsilon\int_{\mathbb{R}} u_\varepsilon \partial_x^4 u_\varepsilon \,dx\\
    &= -2\beta^2\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\delta\int_{\mathbb{R}} \partial_x u_\varepsilon\, \partial_x^2 u_\varepsilon \,dx - 2\kappa\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\quad - 2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon^2 \,dx + 2\varepsilon\int_{\mathbb{R}} \partial_x u_\varepsilon\, \partial_x^3 u_\varepsilon \,dx\\
    &= -2\beta^2\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - 2\kappa\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - 2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon^2 \,dx - 2\varepsilon\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \end{aligned}
    \]

    Thus, it follows that

    \[
    \frac{d}{dt}\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\beta^2\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon^2 \,dx + 2\varepsilon\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} = -2\kappa\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} \leq 2|\kappa|\,\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \]

    Therefore, applying the Gronwall Lemma and using Eq (2.2), we obtain

    \[
    \|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\beta^2 e^{2|\kappa| t}\int_0^t e^{-2|\kappa| s}\|\partial_x u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})} \,ds + 2\gamma^2 e^{2|\kappa| t}\int_0^t\!\!\int_{\mathbb{R}} e^{-2|\kappa| s}|u_\varepsilon| u_\varepsilon^2 \,dx\,ds + 2\varepsilon\, e^{2|\kappa| t}\int_0^t e^{-2|\kappa| s}\|\partial_x^2 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})} \,ds \leq C_0\, e^{2|\kappa| t} \leq C(T),
    \]

    which gives Eq (2.3).
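    The Gronwall step used above can be mirrored on a toy ODE: if $E'(t) \leq 2|\kappa| E(t)$, then $E(t) \leq E(0)e^{2|\kappa| t}$, regardless of the nonnegative dissipation terms that were discarded. A hedged numerical sketch (the values of $\kappa$ and the "dissipation" are arbitrary):

```python
import numpy as np

# Toy Gronwall argument: E' = 2|kappa|*E - g with g >= 0 (g plays the role of
# the discarded dissipation terms), hence E(t) <= E(0)*exp(2|kappa|*t).
kappa, dt, T = 0.7, 1e-4, 2.0
steps = int(T / dt)
E = 1.0
for _ in range(steps):                 # explicit Euler integration
    g = 0.3 * E                        # some nonnegative "dissipation"
    E += dt * (2 * abs(kappa) * E - g)
bound = 1.0 * np.exp(2 * abs(kappa) * T)
print(E, bound)                        # E stays below the Gronwall bound
```

    Dropping the dissipation only enlarges the right-hand side, which is why the bound is uniform in $\varepsilon$.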

    Lemma 2.2. Fix T>0 and assume (1.5). There exists a constant C(T)>0, independent of ε, such that

    \[
    \|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} \leq C(T), \tag{2.4}
    \]
    \[
    \|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \beta^2\int_0^t \|\partial_x^2 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})} \,ds + 2\varepsilon\int_0^t \|\partial_x^3 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})} \,ds \leq C(T), \tag{2.5}
    \]
    \[
    \int_0^t \|\partial_x u_\varepsilon(s,\cdot)\|^4_{L^4(\mathbb{R})} \,ds \leq C(T), \tag{2.6}
    \]

    holds for every $0 \leq t \leq T$.

    Proof. Let $0 \leq t \leq T$, and let $A$ and $B$ be two real constants, which will be specified later. Thanks to Eq (1.5), multiplying Eq (2.1) by

    \[
    -2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3,
    \]

    we have that

    \[
    \begin{aligned}
    &\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right)\partial_t u_\varepsilon + 2a\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right) u_\varepsilon \partial_x u_\varepsilon + 3b\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right) u_\varepsilon^2 \partial_x u_\varepsilon\\
    &\qquad - \beta^2\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right)\partial_x^2 u_\varepsilon + \delta\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right)\partial_x^3 u_\varepsilon + \kappa\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right) u_\varepsilon\\
    &\qquad + \gamma^2\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right)|u_\varepsilon| u_\varepsilon = -\varepsilon\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right)\partial_x^4 u_\varepsilon. \tag{2.7}
    \end{aligned}
    \]

    Observe that

    \[
    \begin{aligned}
    \int_{\mathbb{R}}\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right)\partial_t u_\varepsilon \,dx &= \frac{d}{dt}\left(\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \frac{A}{3}\int_{\mathbb{R}} u_\varepsilon^3 \,dx + \frac{B}{4}\int_{\mathbb{R}} u_\varepsilon^4 \,dx\right),\\
    2a\int_{\mathbb{R}}\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right) u_\varepsilon \partial_x u_\varepsilon \,dx &= -4a\int_{\mathbb{R}} u_\varepsilon \partial_x u_\varepsilon \partial_x^2 u_\varepsilon \,dx,\\
    3b\int_{\mathbb{R}}\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right) u_\varepsilon^2 \partial_x u_\varepsilon \,dx &= -6b\int_{\mathbb{R}} u_\varepsilon^2 \partial_x u_\varepsilon \partial_x^2 u_\varepsilon \,dx,\\
    -\beta^2\int_{\mathbb{R}}\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right)\partial_x^2 u_\varepsilon \,dx &= 2\beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2A\beta^2\int_{\mathbb{R}} u_\varepsilon(\partial_x u_\varepsilon)^2 \,dx + 3B\beta^2\int_{\mathbb{R}} u_\varepsilon^2(\partial_x u_\varepsilon)^2 \,dx,\\
    \delta\int_{\mathbb{R}}\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right)\partial_x^3 u_\varepsilon \,dx &= -2A\delta\int_{\mathbb{R}} u_\varepsilon \partial_x u_\varepsilon \partial_x^2 u_\varepsilon \,dx - 3B\delta\int_{\mathbb{R}} u_\varepsilon^2 \partial_x u_\varepsilon \partial_x^2 u_\varepsilon \,dx,\\
    \kappa\int_{\mathbb{R}}\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right) u_\varepsilon \,dx &= 2\kappa\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + A\kappa\int_{\mathbb{R}} u_\varepsilon^3 \,dx + B\kappa\int_{\mathbb{R}} u_\varepsilon^4 \,dx,\\
    \gamma^2\int_{\mathbb{R}}\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right)|u_\varepsilon| u_\varepsilon \,dx &= -2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon \partial_x^2 u_\varepsilon \,dx + A\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon^3 \,dx + B\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon^4 \,dx,\\
    -\varepsilon\int_{\mathbb{R}}\left(-2\partial_x^2 u_\varepsilon + A u_\varepsilon^2 + B u_\varepsilon^3\right)\partial_x^4 u_\varepsilon \,dx &= -2\varepsilon\|\partial_x^3 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2A\varepsilon\int_{\mathbb{R}} u_\varepsilon \partial_x u_\varepsilon \partial_x^3 u_\varepsilon \,dx + 3B\varepsilon\int_{\mathbb{R}} u_\varepsilon^2 \partial_x u_\varepsilon \partial_x^3 u_\varepsilon \,dx\\
    &= -2\varepsilon\|\partial_x^3 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - A\varepsilon\int_{\mathbb{R}} (\partial_x u_\varepsilon)^3 \,dx - 6B\varepsilon\int_{\mathbb{R}} u_\varepsilon(\partial_x u_\varepsilon)^2 \partial_x^2 u_\varepsilon \,dx - 3B\varepsilon\|u_\varepsilon(t,\cdot)\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &= -2\varepsilon\|\partial_x^3 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - A\varepsilon\int_{\mathbb{R}} (\partial_x u_\varepsilon)^3 \,dx + 2B\varepsilon\int_{\mathbb{R}} (\partial_x u_\varepsilon)^4 \,dx - 3B\varepsilon\|u_\varepsilon(t,\cdot)\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \end{aligned}
    \]

    Therefore, an integration on R gives

    \[
    \begin{aligned}
    &\frac{d}{dt}\left(\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \frac{A}{3}\int_{\mathbb{R}} u_\varepsilon^3 \,dx + \frac{B}{4}\int_{\mathbb{R}} u_\varepsilon^4 \,dx\right) + 2\beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\varepsilon\|\partial_x^3 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\quad = -(4a + A\delta)\int_{\mathbb{R}} u_\varepsilon \partial_x u_\varepsilon \partial_x^2 u_\varepsilon \,dx - 3(2b + B\delta)\int_{\mathbb{R}} u_\varepsilon^2 \partial_x u_\varepsilon \partial_x^2 u_\varepsilon \,dx - 2A\beta^2\int_{\mathbb{R}} u_\varepsilon(\partial_x u_\varepsilon)^2 \,dx - 3B\beta^2\int_{\mathbb{R}} u_\varepsilon^2(\partial_x u_\varepsilon)^2 \,dx\\
    &\qquad - 2\kappa\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - \frac{A\kappa}{3}\int_{\mathbb{R}} u_\varepsilon^3 \,dx - \frac{B\kappa}{4}\int_{\mathbb{R}} u_\varepsilon^4 \,dx + 2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon \partial_x^2 u_\varepsilon \,dx - A\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon^3 \,dx - B\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon^4 \,dx\\
    &\qquad - A\varepsilon\int_{\mathbb{R}} (\partial_x u_\varepsilon)^3 \,dx + 2B\varepsilon\int_{\mathbb{R}} (\partial_x u_\varepsilon)^4 \,dx - 3B\varepsilon\|u_\varepsilon(t,\cdot)\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \end{aligned}
    \]

    Taking

    \[
    (A,B) = \left(-\frac{4a}{\delta},\, -\frac{2b}{\delta}\right),
    \]

    we get

    \[
    \begin{aligned}
    &\frac{d}{dt}\left(\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - \frac{4a}{3\delta}\int_{\mathbb{R}} u_\varepsilon^3 \,dx - \frac{b}{2\delta}\int_{\mathbb{R}} u_\varepsilon^4 \,dx\right) + 2\beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\varepsilon\|\partial_x^3 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\quad = \frac{8a\beta^2}{\delta}\int_{\mathbb{R}} u_\varepsilon(\partial_x u_\varepsilon)^2 \,dx + \frac{6b\beta^2}{\delta}\int_{\mathbb{R}} u_\varepsilon^2(\partial_x u_\varepsilon)^2 \,dx - 2\kappa\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \frac{4a\kappa}{3\delta}\int_{\mathbb{R}} u_\varepsilon^3 \,dx + \frac{b\kappa}{2\delta}\int_{\mathbb{R}} u_\varepsilon^4 \,dx\\
    &\qquad + 2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon \partial_x^2 u_\varepsilon \,dx + \frac{4a\gamma^2}{\delta}\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon^3 \,dx + \frac{2b\gamma^2}{\delta}\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon^4 \,dx\\
    &\qquad + \frac{4a\varepsilon}{\delta}\int_{\mathbb{R}} (\partial_x u_\varepsilon)^3 \,dx - \frac{4b\varepsilon}{\delta}\int_{\mathbb{R}} (\partial_x u_\varepsilon)^4 \,dx + \frac{6b\varepsilon}{\delta}\|u_\varepsilon(t,\cdot)\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}. \tag{2.8}
    \end{aligned}
    \]

    Since 0<ε<1, due to the Young inequality and (2.3),

    \[
    \begin{aligned}
    \frac{8a\beta^2}{\delta}\int_{\mathbb{R}} |u_\varepsilon|(\partial_x u_\varepsilon)^2 \,dx &\leq 4\int_{\mathbb{R}} u_\varepsilon^2(\partial_x u_\varepsilon)^2 \,dx + \frac{4a^2\beta^4}{\delta^2}\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\leq 4\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \frac{4a^2\beta^4}{\delta^2}\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\leq C_0\left(1+\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\right)\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})},\\
    \left|\frac{6b\beta^2}{\delta}\right|\int_{\mathbb{R}} u_\varepsilon^2(\partial_x u_\varepsilon)^2 \,dx &\leq \left|\frac{6b\beta^2}{\delta}\right|\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})},\\
    \left|\frac{4a\kappa}{3\delta}\right|\int_{\mathbb{R}} |u_\varepsilon|^3 \,dx &\leq \left|\frac{4a\kappa}{3\delta}\right|\|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})}\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} \leq C(T)\|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})},\\
    \left|\frac{b\kappa}{2\delta}\right|\int_{\mathbb{R}} u_\varepsilon^4 \,dx &\leq \left|\frac{b\kappa}{2\delta}\right|\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} \leq C(T)\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})},\\
    2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon \partial_x^2 u_\varepsilon \,dx &\leq 2\int_{\mathbb{R}} \left|\frac{\gamma^2 |u_\varepsilon| u_\varepsilon}{\beta}\right| \left|\beta\partial_x^2 u_\varepsilon\right| dx \leq \frac{\gamma^4}{\beta^2}\int_{\mathbb{R}} u_\varepsilon^4 \,dx + \beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\leq \frac{\gamma^4}{\beta^2}\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\leq C(T)\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})},\\
    \left|\frac{4a\gamma^2}{\delta}\right|\int_{\mathbb{R}} |u_\varepsilon|\,|u_\varepsilon|^3 \,dx &= \left|\frac{4a\gamma^2}{\delta}\right|\int_{\mathbb{R}} u_\varepsilon^4 \,dx \leq \left|\frac{4a\gamma^2}{\delta}\right|\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} \leq C(T)\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})},\\
    \left|\frac{2b\gamma^2}{\delta}\right|\int_{\mathbb{R}} |u_\varepsilon|\, u_\varepsilon^4 \,dx &\leq \left|\frac{2b\gamma^2}{\delta}\right|\|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})}\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} \leq C(T)\|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})},\\
    \left|\frac{4a\varepsilon}{\delta}\right|\int_{\mathbb{R}} |\partial_x u_\varepsilon|^3 \,dx &\leq \left|\frac{4a\varepsilon}{\delta}\right|\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \left|\frac{4a\varepsilon}{\delta}\right|\int_{\mathbb{R}} (\partial_x u_\varepsilon)^4 \,dx \leq \left|\frac{4a}{\delta}\right|\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \left|\frac{4a\varepsilon}{\delta}\right|\int_{\mathbb{R}} (\partial_x u_\varepsilon)^4 \,dx.
    \end{aligned}
    \]

    It follows from Eq (2.8) that

    \[
    \begin{aligned}
    &\frac{d}{dt}\left(\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - \frac{4a}{3\delta}\int_{\mathbb{R}} u_\varepsilon^3 \,dx - \frac{b}{2\delta}\int_{\mathbb{R}} u_\varepsilon^4 \,dx\right) + \beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\varepsilon\|\partial_x^3 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\quad \leq C_0\left(1+\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\right)\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + C(T)\left(\|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})}\right)\\
    &\qquad + C_0\varepsilon\int_{\mathbb{R}} (\partial_x u_\varepsilon)^4 \,dx + C_0\varepsilon\|u_\varepsilon(t,\cdot)\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + C_0\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}. \tag{2.9}
    \end{aligned}
    \]

    [8, Lemma 2.3] says that

    \[
    \int_{\mathbb{R}} (\partial_x u_\varepsilon)^4 \,dx \leq 9\int_{\mathbb{R}} u_\varepsilon^2 (\partial_x^2 u_\varepsilon)^2 \,dx \leq 9\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}. \tag{2.10}
    \]
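    The first inequality in (2.10) can be sanity-checked numerically on a concrete profile (a Gaussian here; this is an illustration only, not a proof of the lemma):

```python
import numpy as np

# Check \int (u_x)^4 dx <= 9 \int u^2 (u_xx)^2 dx for u(x) = exp(-x^2).
x = np.linspace(-10.0, 10.0, 200001)
h = x[1] - x[0]
u = np.exp(-x**2)
ux = np.gradient(u, h)       # central-difference first derivative
uxx = np.gradient(ux, h)     # second derivative

lhs = np.sum(ux**4) * h
rhs = 9.0 * np.sum(u**2 * uxx**2) * h
print(lhs, rhs)              # lhs stays well below rhs
```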

    Moreover, we have that

    \[
    \|u_\varepsilon(t,\cdot)\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} = \int_{\mathbb{R}} u_\varepsilon^2 (\partial_x^2 u_\varepsilon)^2 \,dx \leq \|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}. \tag{2.11}
    \]

    Consequently, by Eqs (2.9)–(2.11), we have that

    \[
    \begin{aligned}
    &\frac{d}{dt}\left(\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - \frac{4a}{3\delta}\int_{\mathbb{R}} u_\varepsilon^3 \,dx - \frac{b}{2\delta}\int_{\mathbb{R}} u_\varepsilon^4 \,dx\right) + \beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\varepsilon\|\partial_x^3 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\quad \leq C_0\left(1+\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\right)\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + C(T)\left(\|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})}\right)\\
    &\qquad + C_0\varepsilon\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + C_0\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \end{aligned}
    \]

    An integration on (0,t) and Eqs (2.2) and (2.3) give

    \[
    \begin{aligned}
    &\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - \frac{4a}{3\delta}\int_{\mathbb{R}} u_\varepsilon^3 \,dx - \frac{b}{2\delta}\int_{\mathbb{R}} u_\varepsilon^4 \,dx + \beta^2\int_0^t \|\partial_x^2 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})}\,ds + 2\varepsilon\int_0^t \|\partial_x^3 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})}\,ds\\
    &\quad \leq C_0\left(1+\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\right)\int_0^t \|\partial_x u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})}\,ds + C(T)\left(\|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})}\right)t\\
    &\qquad + C_0\varepsilon\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\int_0^t \|\partial_x^2 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})}\,ds + C_0\int_0^t \|\partial_x u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})}\,ds\\
    &\quad \leq C(T)\left(1 + \|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})}\right).
    \end{aligned}
    \]

    Therefore, by Eq (2.3),

    \[
    \begin{aligned}
    &\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \beta^2\int_0^t \|\partial_x^2 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})}\,ds + 2\varepsilon\int_0^t \|\partial_x^3 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})}\,ds\\
    &\quad \leq C(T)\left(1 + \|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})}\right) + \frac{4a}{3\delta}\int_{\mathbb{R}} u_\varepsilon^3 \,dx + \frac{b}{2\delta}\int_{\mathbb{R}} u_\varepsilon^4 \,dx\\
    &\quad \leq C(T)\left(1 + \|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})}\right) + \left|\frac{4a}{3\delta}\right|\int_{\mathbb{R}} |u_\varepsilon|^3 \,dx + \left|\frac{b}{2\delta}\right|\int_{\mathbb{R}} u_\varepsilon^4 \,dx\\
    &\quad \leq C(T)\left(1 + \|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})}\right)\\
    &\qquad + \left|\frac{4a}{3\delta}\right|\|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})}\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \left|\frac{b}{2\delta}\right|\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\quad \leq C(T)\left(1 + \|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})}\right). \tag{2.12}
    \end{aligned}
    \]

    We prove Eq (2.4). Thanks to the Hölder inequality,

    \[
    u_\varepsilon^2(t,x) = 2\int_{-\infty}^{x} u_\varepsilon \partial_x u_\varepsilon \,dy \leq 2\int_{\mathbb{R}} |u_\varepsilon||\partial_x u_\varepsilon| \,dx \leq 2\|u_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})}\|\partial_x u_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})}.
    \]

    Hence, we have that

    \[
    \|u_\varepsilon(t,\cdot)\|^4_{L^\infty(\mathbb{R})} \leq 4\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}. \tag{2.13}
    \]
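    This is the standard $L^\infty$–$L^2$ interpolation inequality; a quick numerical check on a Gaussian profile (illustrative only):

```python
import numpy as np

# Check ||u||_{L^inf}^4 <= 4 ||u||_{L^2}^2 ||u_x||_{L^2}^2 for u(x) = exp(-x^2).
x = np.linspace(-10.0, 10.0, 200001)
h = x[1] - x[0]
u = np.exp(-x**2)
ux = np.gradient(u, h)

lhs = np.max(np.abs(u)) ** 4                        # = 1 for this profile
rhs = 4.0 * (np.sum(u**2) * h) * (np.sum(ux**2) * h)
print(lhs, rhs)                                     # for the Gaussian, rhs = 2*pi
```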

    Thanks to Eqs (2.3) and (2.12), we have that

    \[
    \|u_\varepsilon\|^4_{L^\infty((0,T)\times\mathbb{R})} \leq C(T)\left(1 + \|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})}\right). \tag{2.14}
    \]

    Due to the Young inequality,

    \[
    C(T)\|u_\varepsilon\|^3_{L^\infty((0,T)\times\mathbb{R})} \leq \frac{1}{2}\|u_\varepsilon\|^4_{L^\infty((0,T)\times\mathbb{R})} + C(T)\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}, \qquad C(T)\|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})} \leq C(T)\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + C(T).
    \]

    By Eq (2.14), we have that

    \[
    \frac{1}{2}\|u_\varepsilon\|^4_{L^\infty((0,T)\times\mathbb{R})} - C(T)\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} - C(T) \leq 0,
    \]

    which gives Eq (2.4).
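    The last step deserves a word: an inequality of the form $\frac12 X^4 - C X^2 - C \leq 0$ with $X = \|u_\varepsilon\|_{L^\infty((0,T)\times\mathbb{R})}$ forces, by the quadratic formula in $X^2$, the explicit bound $X^2 \leq C + \sqrt{C^2 + 2C}$. A small numerical confirmation (with an arbitrary value of $C$):

```python
import numpy as np

# If (1/2)X^4 - C*X^2 - C <= 0 then X^2 <= C + sqrt(C^2 + 2C).
C = 3.0
bound = C + np.sqrt(C**2 + 2 * C)

X = np.linspace(0.0, 10.0, 100001)
admissible = X[0.5 * X**4 - C * X**2 - C <= 0]
print(admissible.max() ** 2, bound)  # largest admissible X^2 vs the explicit bound
```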

    Equation (2.5) follows from Eqs (2.4) and (2.12).

    Finally, we prove Eq (2.6). We begin by observing that, from Eqs (2.4) and (2.10), we have

    \[
    \|\partial_x u_\varepsilon(t,\cdot)\|^4_{L^4(\mathbb{R})} \leq C(T)\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \]

    An integration on (0,t) and Eq (2.5) give Eq (2.6).

    Lemma 2.3. Fix T>0 and assume (1.6). There exists a constant C(T)>0, independent of ε, such that Eq (2.4) holds. Moreover, we have Eqs (2.5) and (2.6).

    Proof. Let $0 \leq t \leq T$. Multiplying Eq (2.1) by $-2\partial_x^2 u_\varepsilon$ and integrating over $\mathbb{R}$ gives

    \[
    \begin{aligned}
    \frac{d}{dt}\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} &= -2\int_{\mathbb{R}} \partial_x^2 u_\varepsilon \partial_t u_\varepsilon \,dx\\
    &= 2\int_{\mathbb{R}} f'(u_\varepsilon)\partial_x u_\varepsilon \partial_x^2 u_\varepsilon \,dx - 2\beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\delta\int_{\mathbb{R}} \partial_x^2 u_\varepsilon \partial_x^3 u_\varepsilon \,dx\\
    &\quad + 2\kappa\int_{\mathbb{R}} u_\varepsilon \partial_x^2 u_\varepsilon \,dx + 2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon \partial_x^2 u_\varepsilon \,dx + 2\varepsilon\int_{\mathbb{R}} \partial_x^2 u_\varepsilon \partial_x^4 u_\varepsilon \,dx\\
    &= 2\int_{\mathbb{R}} f'(u_\varepsilon)\partial_x u_\varepsilon \partial_x^2 u_\varepsilon \,dx - 2\beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} - 2\kappa\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\quad + 2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon \partial_x^2 u_\varepsilon \,dx - 2\varepsilon\|\partial_x^3 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \end{aligned}
    \]

    Therefore, we have that

    \[
    \frac{d}{dt}\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\varepsilon\|\partial_x^3 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} = 2\int_{\mathbb{R}} f'(u_\varepsilon)\partial_x u_\varepsilon \partial_x^2 u_\varepsilon \,dx - 2\kappa\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon \partial_x^2 u_\varepsilon \,dx. \tag{2.15}
    \]

    Due to Eqs (1.6) and (2.3) and the Young inequality,

    \[
    \begin{aligned}
    2\int_{\mathbb{R}} |f'(u_\varepsilon)||\partial_x u_\varepsilon||\partial_x^2 u_\varepsilon| \,dx &\leq C_0\int_{\mathbb{R}} |\partial_x u_\varepsilon||\partial_x^2 u_\varepsilon| \,dx + C_0\int_{\mathbb{R}} |u_\varepsilon \partial_x u_\varepsilon||\partial_x^2 u_\varepsilon| \,dx\\
    &= 2\int_{\mathbb{R}} \left|\frac{C_0\sqrt{3}\,\partial_x u_\varepsilon}{2\beta}\right|\left|\frac{\beta\partial_x^2 u_\varepsilon}{\sqrt{3}}\right| dx + 2\int_{\mathbb{R}} \left|\frac{C_0\sqrt{3}\,u_\varepsilon\partial_x u_\varepsilon}{2\beta}\right|\left|\frac{\beta\partial_x^2 u_\varepsilon}{\sqrt{3}}\right| dx\\
    &\leq C_0\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + C_0\int_{\mathbb{R}} u_\varepsilon^2(\partial_x u_\varepsilon)^2 \,dx + \frac{2\beta^2}{3}\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\leq C_0\left(1+\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\right)\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \frac{2\beta^2}{3}\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})},\\
    2\gamma^2\int_{\mathbb{R}} |u_\varepsilon| u_\varepsilon \partial_x^2 u_\varepsilon \,dx &\leq 2\gamma^2\int_{\mathbb{R}} u_\varepsilon^2 |\partial_x^2 u_\varepsilon| \,dx = 2\int_{\mathbb{R}} \left|\frac{\sqrt{3}\,\gamma^2 u_\varepsilon^2}{\beta}\right|\left|\frac{\beta\partial_x^2 u_\varepsilon}{\sqrt{3}}\right| dx\\
    &\leq \frac{3\gamma^4}{\beta^2}\int_{\mathbb{R}} u_\varepsilon^4 \,dx + \frac{\beta^2}{3}\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\leq \frac{3\gamma^4}{\beta^2}\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \frac{\beta^2}{3}\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\leq C(T)\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} + \frac{\beta^2}{3}\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \end{aligned}
    \]

    It follows from Eq (2.15) that

    \[
    \frac{d}{dt}\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \beta^2\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\varepsilon\|\partial_x^3 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} \leq C_0\left(1+\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\right)\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + C(T)\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}.
    \]

    Integrating on (0,t), by Eq (2.3), we have that

    \[
    \begin{aligned}
    &\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \beta^2\int_0^t \|\partial_x^2 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})}\,ds + 2\varepsilon\int_0^t \|\partial_x^3 u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})}\,ds\\
    &\quad \leq C_0 + C_0\left(1+\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\right)\int_0^t \|\partial_x u_\varepsilon(s,\cdot)\|^2_{L^2(\mathbb{R})}\,ds + C(T)\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\,t\\
    &\quad \leq C(T)\left(1+\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\right). \tag{2.16}
    \end{aligned}
    \]

    Thanks to Eqs (2.3), (2.13), and (2.16), we have that

    \[
    \|u_\varepsilon\|^4_{L^\infty((0,T)\times\mathbb{R})} \leq C(T)\left(1+\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\right).
    \]

    Therefore,

    \[
    \|u_\varepsilon\|^4_{L^\infty((0,T)\times\mathbb{R})} - C(T)\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})} - C(T) \leq 0,
    \]

    which gives (2.4).

    Equation (2.5) follows from (2.4) and (2.16), while, arguing as in Lemma 2.2, we have Eq (2.6).

    Lemma 2.4. Fix T>0. There exists a constant C(T)>0, independent of ε, such that

    \[
    \int_0^t \|\partial_x u_\varepsilon(s,\cdot)\|^6_{L^6(\mathbb{R})} \,ds \leq C(T), \tag{2.17}
    \]

    for every $0 \leq t \leq T$.

    Proof. Let $0 \leq t \leq T$. We begin by observing that

    \[
    \int_{\mathbb{R}} (\partial_x u_\varepsilon)^6 \,dx \leq \|\partial_x u_\varepsilon(t,\cdot)\|^4_{L^\infty(\mathbb{R})}\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}. \tag{2.18}
    \]

    Thanks to the Hölder inequality,

    \[
    (\partial_x u_\varepsilon(t,x))^2 = 2\int_{-\infty}^{x} \partial_x u_\varepsilon \partial_x^2 u_\varepsilon \,dy \leq 2\int_{\mathbb{R}} |\partial_x u_\varepsilon||\partial_x^2 u_\varepsilon| \,dx \leq 2\|\partial_x u_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})}\|\partial_x^2 u_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})}.
    \]

    Hence,

    \[
    \|\partial_x u_\varepsilon(t,\cdot)\|^4_{L^\infty(\mathbb{R})} \leq 4\|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \]

    It follows from Eq (2.18) that

    \[
    \int_{\mathbb{R}} (\partial_x u_\varepsilon)^6 \,dx \leq 4\|\partial_x u_\varepsilon(t,\cdot)\|^4_{L^2(\mathbb{R})}\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \]

    Therefore, by Eq (2.5),

    \[
    \int_{\mathbb{R}} (\partial_x u_\varepsilon)^6 \,dx \leq C(T)\|\partial_x^2 u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \]

    An integration on (0,t) and Eq (2.5) gives (2.17).

    This section is devoted to the proof of Theorem 1.1.

    We begin by proving the following result.

    Lemma 3.1. Fix T>0. Then,

    \[
    \text{the family } \{u_\varepsilon\}_{\varepsilon>0} \text{ is compact in } L^2_{loc}((0,T)\times\mathbb{R}). \tag{3.1}
    \]

    Consequently, there exist a subsequence $\{u_{\varepsilon_k}\}_{k\in\mathbb{N}}$ and $u \in L^2_{loc}((0,T)\times\mathbb{R})$ such that

    \[
    u_{\varepsilon_k} \to u \ \text{ in } L^2_{loc}((0,T)\times\mathbb{R}) \text{ and a.e. in } (0,T)\times\mathbb{R}. \tag{3.2}
    \]

    Moreover, u is a solution of Eq (1.1), satisfying Eq (1.8).

    Proof. We begin by proving Eq (3.1), relying on the Aubin-Lions Lemma (see [5,21]). We recall that

    \[
    H^1_{loc}(\mathbb{R}) \hookrightarrow\hookrightarrow L^2_{loc}(\mathbb{R}) \hookrightarrow H^{-1}_{loc}(\mathbb{R}),
    \]

    where the first inclusion is compact and the second one is continuous. Owing to the Aubin-Lions Lemma [21], to prove Eq (3.1) it suffices to show that

    \[
    \{u_\varepsilon\}_{\varepsilon>0} \text{ is uniformly bounded in } L^2(0,T;H^1_{loc}(\mathbb{R})), \tag{3.3}
    \]
    \[
    \{\partial_t u_\varepsilon\}_{\varepsilon>0} \text{ is uniformly bounded in } L^2(0,T;H^{-1}_{loc}(\mathbb{R})). \tag{3.4}
    \]

    We prove Eq (3.3). Thanks to Lemmas 2.1–2.3,

    \[
    \|u_\varepsilon(t,\cdot)\|^2_{H^1(\mathbb{R})} = \|u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} + \|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} \leq C(T).
    \]

    Therefore,

    \[
    \{u_\varepsilon\}_{\varepsilon>0} \text{ is uniformly bounded in } L^\infty(0,T;H^1(\mathbb{R})),
    \]

    which gives Eq (3.3).

    We prove Eq (3.4). Observe that, by Eq (2.1),

    \[
    \partial_t u_\varepsilon = \partial_x\big(G(u_\varepsilon)\big) - f'(u_\varepsilon)\partial_x u_\varepsilon - \kappa u_\varepsilon - \gamma^2 |u_\varepsilon| u_\varepsilon,
    \]

    where

    \[
    G(u_\varepsilon) = \beta^2 \partial_x u_\varepsilon - \delta \partial_x^2 u_\varepsilon - \varepsilon \partial_x^3 u_\varepsilon. \tag{3.5}
    \]

    Since 0<ε<1, thanks to Eq (2.5), we have that

    \[
    \|\beta^2 \partial_x u_\varepsilon\|^2_{L^2((0,T)\times\mathbb{R})},\ \|\delta \partial_x^2 u_\varepsilon\|^2_{L^2((0,T)\times\mathbb{R})} \leq C(T), \qquad \|\varepsilon \partial_x^3 u_\varepsilon\|^2_{L^2((0,T)\times\mathbb{R})} \leq C(T). \tag{3.6}
    \]

    Therefore, by Eqs (3.5) and (3.6), we have that

    \[
    \{\partial_x\big(G(u_\varepsilon)\big)\}_{\varepsilon>0} \text{ is bounded in } L^2(0,T;H^{-1}(\mathbb{R})). \tag{3.7}
    \]

    We claim that

    \[
    \int_0^T\!\!\int_{\mathbb{R}} \big(f'(u_\varepsilon)\big)^2 (\partial_x u_\varepsilon)^2 \,dx\,dt \leq C(T). \tag{3.8}
    \]

    Thanks to Eqs (2.4) and (2.5),

    \[
    \int_0^T\!\!\int_{\mathbb{R}} \big(f'(u_\varepsilon)\big)^2 (\partial_x u_\varepsilon)^2 \,dx\,dt \leq \|f'\|^2_{L^\infty(-C(T),C(T))}\int_0^T \|\partial_x u_\varepsilon(t,\cdot)\|^2_{L^2(\mathbb{R})} \,dt \leq C(T).
    \]

    Moreover, thanks to Eq (2.3),

    \[
    |\kappa|\int_0^T\!\!\int_{\mathbb{R}} u_\varepsilon^2 \,dx\,dt \leq C(T). \tag{3.9}
    \]

    We have that

    \[
    \gamma^2\int_0^T\!\!\int_{\mathbb{R}} \big(|u_\varepsilon| u_\varepsilon\big)^2 \,dx\,ds \leq C(T). \tag{3.10}
    \]

    In fact, thanks to Eqs (2.3) and (2.4),

    \[
    \gamma^2\int_0^T\!\!\int_{\mathbb{R}} \big(|u_\varepsilon| u_\varepsilon\big)^2 \,dx\,ds \leq \gamma^2\|u_\varepsilon\|^2_{L^\infty((0,T)\times\mathbb{R})}\int_0^T\!\!\int_{\mathbb{R}} u_\varepsilon^2 \,dx\,ds \leq C(T)\int_0^T\!\!\int_{\mathbb{R}} u_\varepsilon^2 \,dx\,ds \leq C(T).
    \]

    Therefore, Eq (3.4) follows from Eqs (3.7)–(3.10).

    Thanks to the Aubin-Lions Lemma, Eqs (3.1) and (3.2) hold.

    Consequently, arguing as in [5, Theorem 1.1], $u$ is a solution of Eq (1.1) and, thanks to Lemmas 2.1–2.4 and Eq (2.4), Eq (1.8) holds.

    Proof of Theorem 1.1. Lemma 3.1 gives the existence of a solution of Eq (1.1).

    We prove Eq (1.9). Let u1 and u2 be two solutions of Eq (1.1), which verify Eq (1.8), that is,

    \[
    \begin{cases}
    \partial_t u_i + \partial_x f(u_i) - \beta^2 \partial_x^2 u_i + \delta \partial_x^3 u_i + \kappa u_i + \gamma^2 |u_i| u_i = 0, & \quad 0<t<T,\ x\in\mathbb{R},\\
    u_i(0,x) = u_{i,0}(x), & \quad x\in\mathbb{R},
    \end{cases} \qquad i = 1,2.
    \]

    Then, the function

    \[
    \omega(t,x) = u_1(t,x) - u_2(t,x), \tag{3.11}
    \]

    is the solution of the following Cauchy problem:

    \[
    \begin{cases}
    \partial_t \omega + \partial_x\big(f(u_1)-f(u_2)\big) - \beta^2 \partial_x^2 \omega + \delta \partial_x^3 \omega + \kappa \omega + \gamma^2\big(|u_1|u_1 - |u_2|u_2\big) = 0, & \quad 0<t<T,\ x\in\mathbb{R},\\
    \omega(0,x) = u_{1,0}(x) - u_{2,0}(x), & \quad x\in\mathbb{R}.
    \end{cases} \tag{3.12}
    \]

    Fixed $T>0$, since $u_1(t,\cdot), u_2(t,\cdot) \in H^1(\mathbb{R})$ for every $0 \leq t \leq T$, we have that

    \[
    \|u_1\|_{L^\infty((0,T)\times\mathbb{R})},\ \|u_2\|_{L^\infty((0,T)\times\mathbb{R})} \leq C(T). \tag{3.13}
    \]

    We define

    \[
    g = \frac{f(u_1)-f(u_2)}{\omega} \tag{3.14}
    \]

    and observe that, by Eq (3.13), we have that

    \[
    |g| \leq \|f'\|_{L^\infty(-C(T),C(T))} \leq C(T). \tag{3.15}
    \]

    Moreover, by Eq (3.11) we have that

    \[
    \big||u_1| - |u_2|\big| \leq |u_1 - u_2| = |\omega|. \tag{3.16}
    \]

    Observe that thanks to Eq (3.11),

    \[
    |u_1|u_1 - |u_2|u_2 = |u_1|u_1 - |u_1|u_2 + |u_1|u_2 - |u_2|u_2 = |u_1|\omega + u_2\big(|u_1| - |u_2|\big). \tag{3.17}
    \]

    Thanks to Eqs (3.14) and (3.17), Equation (3.12) is equivalent to the following one:

    \[
    \partial_t \omega + \partial_x(g\omega) - \beta^2 \partial_x^2 \omega + \delta \partial_x^3 \omega + \kappa \omega + \gamma^2 |u_1|\omega + \gamma^2 u_2\big(|u_1|-|u_2|\big) = 0. \tag{3.18}
    \]

    Multiplying Eq (3.18) by $2\omega$, an integration on $\mathbb{R}$ gives

    \[
    \begin{aligned}
    \frac{d}{dt}\|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})} &= 2\int_{\mathbb{R}} \omega \partial_t \omega \,dx\\
    &= -2\int_{\mathbb{R}} \omega \partial_x(g\omega) \,dx + 2\beta^2\int_{\mathbb{R}} \omega \partial_x^2 \omega \,dx - 2\delta\int_{\mathbb{R}} \omega \partial_x^3 \omega \,dx - 2\kappa\|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\quad - 2\gamma^2\int_{\mathbb{R}} |u_1|\omega^2 \,dx - 2\gamma^2\int_{\mathbb{R}} u_2\big(|u_1|-|u_2|\big)\omega \,dx\\
    &= 2\int_{\mathbb{R}} g\omega \partial_x \omega \,dx - 2\beta^2\|\partial_x \omega(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\delta\int_{\mathbb{R}} \partial_x \omega \partial_x^2 \omega \,dx - 2\kappa\|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})}\\
    &\quad - 2\gamma^2\int_{\mathbb{R}} |u_1|\omega^2 \,dx - 2\gamma^2\int_{\mathbb{R}} u_2\big(|u_1|-|u_2|\big)\omega \,dx\\
    &= 2\int_{\mathbb{R}} g\omega \partial_x \omega \,dx - 2\beta^2\|\partial_x \omega(t,\cdot)\|^2_{L^2(\mathbb{R})} - 2\kappa\|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})} - 2\gamma^2\int_{\mathbb{R}} |u_1|\omega^2 \,dx - 2\gamma^2\int_{\mathbb{R}} u_2\big(|u_1|-|u_2|\big)\omega \,dx.
    \end{aligned}
    \]

    Therefore, we have that

    \[
    \frac{d}{dt}\|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\beta^2\|\partial_x \omega(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\gamma^2\int_{\mathbb{R}} |u_1|\omega^2 \,dx = 2\int_{\mathbb{R}} g\omega \partial_x \omega \,dx - 2\kappa\|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})} - 2\gamma^2\int_{\mathbb{R}} u_2\big(|u_1|-|u_2|\big)\omega \,dx. \tag{3.19}
    \]

    Due to Eqs (3.13), (3.15) and (3.16) and the Young inequality,

    \[
    \begin{aligned}
    2\int_{\mathbb{R}} |g||\omega||\partial_x \omega| \,dx &\leq 2C(T)\int_{\mathbb{R}} |\omega||\partial_x \omega| \,dx = 2\int_{\mathbb{R}} \left|\frac{C(T)\,\omega}{\beta}\right||\beta\partial_x \omega| \,dx \leq C(T)\|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})} + \beta^2\|\partial_x \omega(t,\cdot)\|^2_{L^2(\mathbb{R})},\\
    2\gamma^2\int_{\mathbb{R}} |u_2|\big||u_1|-|u_2|\big||\omega| \,dx &\leq 2\gamma^2\|u_2\|_{L^\infty((0,T)\times\mathbb{R})}\int_{\mathbb{R}} \big||u_1|-|u_2|\big||\omega| \,dx \leq C(T)\|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \end{aligned}
    \]

    It follows from Eq (3.19) that

    \[
    \frac{d}{dt}\|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})} + \beta^2\|\partial_x \omega(t,\cdot)\|^2_{L^2(\mathbb{R})} + 2\gamma^2\int_{\mathbb{R}} |u_1|\omega^2 \,dx \leq C(T)\|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})}.
    \]

    The Gronwall Lemma and Eq (3.12) give

    \[
    \|\omega(t,\cdot)\|^2_{L^2(\mathbb{R})} + \beta^2 e^{C(T)t}\int_0^t e^{-C(T)s}\|\partial_x \omega(s,\cdot)\|^2_{L^2(\mathbb{R})} \,ds + 2\gamma^2 e^{C(T)t}\int_0^t\!\!\int_{\mathbb{R}} e^{-C(T)s}|u_1|\omega^2 \,dx\,ds \leq e^{C(T)t}\|\omega_0\|^2_{L^2(\mathbb{R})}. \tag{3.20}
    \]

    Equation (1.9) follows from Eqs (3.11) and (3.20).

    Giuseppe Maria Coclite and Lorenzo Di Ruvo equally contributed to the methodologies, typesetting, and the development of the paper.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    Giuseppe Maria Coclite is an editorial board member for Networks and Heterogeneous Media and was not involved in the editorial review or the decision to publish this article.

    GMC is a member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). GMC has been partially supported by the Project funded under the National Recovery and Resilience Plan (NRRP), Mission 4 Component 2 Investment 1.4 - Call for tender No. 3138 of 16/12/2021 of the Italian Ministry of University and Research funded by the European Union - NextGenerationEU, Award Number: CN000023, Concession Decree No. 1033 of 17/06/2022 adopted by the Italian Ministry of University and Research, CUP: D93C22000410001, Centro Nazionale per la Mobilità Sostenibile; by the Italian Ministry of Education, University and Research under the Programme Department of Excellence Legge 232/2016 (Grant No. CUP - D93C23000100001); and by the Research Project of National Relevance "Evolution problems involving interacting scales" granted by the Italian Ministry of Education, University and Research (MIUR Prin 2022, project code 2022M9BKBC, Grant No. CUP D53D23005880006). GMC expresses his gratitude to the HIAS - Hamburg Institute for Advanced Study for their warm hospitality.

    The authors declare there is no conflict of interest.



    [1] K. Bollacker, C. Evans, P. Paritosh, T. Sturge, J. Taylor, Freebase: a collaboratively created graph database for structuring human knowledge, in 2008 ACM SIGMOD International Conference on Management of Data (SIGKDD), (2008), 1247–1250. https://doi.org/10.1145/1376616.1376746
    [2] F. M. Suchanek, G. Kasneci, G. Weikum, Yago: a core of semantic knowledge, in 2007 16th International Conference on World Wide Web (WWW), (2007), 697–706. https://doi.org/10.1145/1242572.1242667
    [3] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, et al., Dbpedia–a large-scale, multilingual knowledge base extracted from Wikipedia, Semantic Web, 6 (2015), 167–195. https://doi.org/10.3233/SW-140134 doi: 10.3233/SW-140134
    [4] M. Wang, X. He, Z. Zhang, L. Liu, L. Qing, Y. Liu, Dual-process system based on mixed semantic fusion for Chinese medical knowledge-based question answering, Math. Biosci. Eng., 20 (2023), 4912–4939. https://doi.org/10.3934/mbe.2023228 doi: 10.3934/mbe.2023228
    [5] Z. Zheng, X. Si, F. Li, E. Y. Chang, X. Zhu, Entity disambiguation with freebase, in 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology, (2012), 82–89. https://doi.org/10.1109/WI-IAT.2012.26
    [6] S. Moon, P. Shah, A. Kumar, R. Subba, Opendialkg: Explainable conversational reasoning with attention-based walks over knowledge graphs, in 2019 the 57th Annual Meeting of the Association for Computational Linguistics (ACL), (2019), 845–854. https://doi.org/10.18653/v1/P19-1081
    [7] X. Lu, L. Wang, Z. Jiang, S. Liu, J. Lin, MRE: A translational knowledge graph completion model based on multiple relation embedding, Math. Biosci. Eng., 20 (2023), 5881–5900. https://doi.org/10.3934/mbe.2023253 doi: 10.3934/mbe.2023253
    [8] Q. Wang, Z. Mao, B. Wang, L. Guo, Knowledge graph embedding: A survey of approaches and applications, IEEE Trans. Knowl. Data Eng., 29 (2017), 2724–2743. https://doi.org/10.1109/TKDE.2017.2754499 doi: 10.1109/TKDE.2017.2754499
    [9] J. Xu, X. Qiu, K. Chen, X. Huang, Knowledge graph representation with jointly structural and textual encoding, in 2017 the 26th International Joint Conference on Artificial Intelligence (IJCAI), (2017), 1318–1324. https://doi.org/10.48550/arXiv.1611.08661
    [10] I. Balažević, C. Allen, T. Hospedales, Multi-relational Poincaré graph embeddings, Adv. Neural Inf. Proces. Syst., 32 (2019), 1168–1179. https://doi.org/10.48550/arXiv.1905.09791 doi: 10.48550/arXiv.1905.09791
    [11] S. Vashishth, S. Sanyal, V. Nitin, N. Agrawal, P. Talukdar, Interacte: Improving convolution-based knowledge graph embeddings by increasing feature interactions, in 2020 the 34th AAAI Conference on Artificial Intelligence (AAAI), (2020), 3009–3016. https://doi.org/10.1609/aaai.v34i03.5694
    [12] H. Mousselly-Sergieh, T. Botschen, I. Gurevych, S. Roth, A multimodal translation-based approach for knowledge graph representation learning, in 2018 the Seventh Joint Conference on Lexical and Computational Semantics, (2018), 225–234. https://doi.org/10.18653/v1/S18-2027
    [13] N. Veira, B. Keng, K. Padmanabhan, A. G. Veneris, Unsupervised embedding enhancements of knowledge graphs using textual associations, in 2019 the 28th International Joint Conference on Artificial Intelligence (IJCAI), (2019), 5218–5225. https://doi.org/10.24963/ijcai.2019/725
    [14] L. Yao, C. Mao, Y. Luo, Kg-bert: Bert for knowledge graph completion, preprint, arXiv: 1909.03193.
    [15] J. Devlin, M. W. Chang, K. Lee, K. Toutanova, Bert: Pre-training of deep bidirectional transformers for language understanding, in 2019 the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), (2019), 4171–4186. https://doi.org/10.48550/arXiv.1810.04805
    [16] M. Schlichtkrull, T. N. Kipf, P. Bloem, R. Van Den Berg, I. Titov, M. Welling, Modeling relational data with graph convolutional networks, in 2018 European Semantic Web Conference, (2018), 593–607. https://doi.org/10.48550/arXiv.1703.06103
    [17] S. Vashishth, S. Sanyal, V. Nitin, P. Talukdar, Composition-based multi-relational graph convolutional networks, in 2020 the International Conference on Learning Representations (ICLR), (2020), 121–134. https://doi.org/10.48550/arXiv.1911.03082
    [18] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, O. Yakhnenko, Translating embeddings for modeling multi-relational data, Adv. Neural Inf. Process. Syst., 22 (2013), 2787–2795. https://doi.org/10.5555/2999792.2999923 doi: 10.5555/2999792.2999923
    [19] Y. Lin, Z. Liu, M. Sun, Y. Liu, X. Zhu, Learning entity and relation embeddings for knowledge graph completion, in 2015 AAAI Conference on Artificial Intelligence (AAAI), (2015), 2181–2187. https://doi.org/10.1609/aaai.v29i1.9491
    [20] I. Balazevic, C. Allen, T. Hospedales, Tucker: Tensor factorization for knowledge graph completion. In 2019 the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), (2019), 178–189. https://doi.org/10.18653/v1/D19-1522
    [21] M. Nickel, L. Rosasco, T. Poggio, Holographic embeddings of knowledge graphs, in 2016 the 30th AAAI Conference on Artificial Intelligence (AAAI), (2016), 1955–1961. https://doi.org/10.1609/aaai.v30i1.10314
    [22] W. Zhang, B. Paudel, W. Zhang, A. Bernstein, H. Chen, Interaction embeddings for prediction and explanation in knowledge graphs, in 2019 the 12th ACM International Conference on Web Search and Data Mining (WSDM), (2019), 96–104. https://doi.org/10.1145/3289600.3291014
    [23] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, in Proceedings of the IEEE, (1998), 2278–2324. https://doi.org/10.1109/5.726791
    [24] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, S. Y. Philip, A comprehensive survey on graph neural networks, IEEE Trans. Neural Networks Learn. Syst., 6 (2021), 97–109. https://doi.org/10.1109/TNNLS.2020.2978386 doi: 10.1109/TNNLS.2020.2978386
    [25] Z. Xie, G. Zhou, J. Liu, X. Huang, Reinceptione: Relation-aware inception network with joint local-global structural information for knowledge graph embedding, in 2020 the 58th Annual Meeting of the Association for Computational Linguistics (ACL), (2020), 5929–5939. https://doi.org/10.18653/v1/2020.acl-main.526
    [26] D. Q. Nguyen, T. D. Nguyen, D. Q. Nguyen, D. Phung, A novel embedding model for knowledge base completion based on convolutional neural network, in 2018 the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), (2018), 327–333. https://doi.org/10.18653/v1/N18-2053
    [27] I. Balažević, C. Allen, T. M. Hospedales, Hypernetwork knowledge graph embeddings, in 2019 the 28th International Conference on Artificial Neural Networks, (2019), 553–565. https://doi.org/10.1007/978-3-030-30493-5_52
    [28] S. Vashishth, S. Sanyal, V. Nitin, P. Talukdar, Composition-based multi-relational graph convolutional networks, in 2020 the International Conference on Learning Representations (ICLR), (2020), 321–334. https://doi.org/10.48550/arXiv.1911.03082
    [29] W. Y. Wang, W. W. Cohen, Learning first-order logic embeddings via matrix factorization, in 2016 the 25th International Joint Conference on Artificial Intelligence (IJCAI), (2016), 2132–2138. https://doi.org/10.5555/3060832.3060919
    [30] B. Jagvaral, W. K. Lee, J. S. Roh, M. S. Kim, Y. T. Park, Path-based reasoning approach for knowledge graph completion using cnn-bilstm with attention mechanism, Expert Syst. Appl., 142 (2020), 112960. https://doi.org/10.1016/j.eswa.2019.112960 doi: 10.1016/j.eswa.2019.112960
    [31] R. Socher, D. Chen, C. D. Manning, A. Ng, Reasoning with neural tensor networks for knowledge base completion, Adv. Neural Inf. Process. Syst., 2013 (2013), 926–934. https://doi.org/10.5555/2999611.2999715 doi: 10.5555/2999611.2999715
    [32] X. Gao, Y. Wang, W. Hou, Z. Liu, X. Ma, Multi-view Clustering for integration of gene expression and methylation data with tensor decomposition and self-representation learning, IEEE/ACM Trans. Comput. Biol. Bioinf., 2022 (2022). https://doi.org/10.1109/TCBB.2022.3229678 doi: 10.1109/TCBB.2022.3229678
    [33] D. Li, S. Zhang, X. Ma, Dynamic module detection in temporal attributed networks of cancers, IEEE/ACM Trans. Comput. Biol. Bioinf., 4 (2022), 2219–2230. https://doi.org/10.1109/TCBB.2021.3069441 doi: 10.1109/TCBB.2021.3069441
    [34] X. Ma, W. Zhao, W. Wu, Layer-specific modules detection in cancer multi-layer networks, IEEE/ACM Trans. Comput. Biol. Bioinf., 2022 (2022). https://doi.org/10.1109/TCBB.2022.3176859 doi: 10.1109/TCBB.2022.3176859
    [35] X. Gao, X. Ma, W. Zhang, J. Huang, H. Li, Y. Li, et al., multi-view clustering with self-representation and structural constraint, IEEE Trans. Big Data, 4 (2022), 882–893. https://doi.org/10.1109/TBDATA.2021.3128906 doi: 10.1109/TBDATA.2021.3128906
    [36] R. Xie, Z. Liu, H. Luan, M. Sun, Image-embodied knowledge representation learning, in 2017 the 26th International Joint Conference on Artificial Intelligence (IJCAI), (2017), 3140–3146. https://doi.org/10.24963/ijcai.2017/438
    [37] P. Pezeshkpour, L. Chen, S. Singh, Embedding multimodal relational data for knowledge base completion, in 2018 the Conference on Empirical Methods in Natural Language Processing (EMNLP), (2018), 3208–3218. https://doi.org/10.18653/v1/D18-1359
    [38] J. Yuan, N. Gao, J. Xiang, Transgate: knowledge graph embedding with shared gate structure, in 2019 the AAAI Conference on Artificial Intelligence (AAAI), (2019), 3100–3107. https://doi.org/10.1609/AAAI.V33I01.33013100
    [39] S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Trans. Pattern Anal. Mach. Intell., 39 (2017), 1137–1149. https://doi.org/10.1109/TPAMI.2016.2577031 doi: 10.1109/TPAMI.2016.2577031
    [40] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, et al., Attention is all you need, Adv. Neural Inf. Process. Syst., (2017), 5998–6008. https://doi.org/10.48550/arXiv.1706.03762 doi: 10.48550/arXiv.1706.03762
    [41] Y. Kim, Convolutional neural networks for sentence classification, preprint, arXiv: 1408.5882.
    [42] Z. Yu, J. Yu, J. Fan, D. Tao, Multi-modal factorized bilinear pooling with co-attention learning for visual question answering, in 2017 the IEEE International Conference on Computer Vision (ICCV), (2017), 1821–1830. https://doi.org/10.1109/ICCV.2017.202
    [43] T. Dettmers, M. Pasquale, S. Pontus, S. Riedel, Convolutional 2d knowledge graph embeddings, in 2018 the 32th AAAI Conference on Artificial Intelligence (AAAI), (2018), 1811–1818. https://doi.org/10.1609/aaai.v32i1.11573
    [44] K. Toutanova, D. Chen, Observed versus latent features for knowledge base and text inference, in 2015 the 3rd workshop on continuous vector space models and their compositionality, (2015), 57–66. https://doi.org/10.18653/v1/W15-4007
    [45] D. Kingma, J. Ba, Adam: A method for stochastic optimization, Comput. Sci., 34 (2014), 56–67. https://doi.org/10.48550/arXiv.1412.6980 doi: 10.48550/arXiv.1412.6980
    [46] B. Yang, S. W. Yih, X. He, J. Gao, L. Deng, Embedding entities and relations for learning and inference in knowledge bases, in 2015 International Conference on Learning Representations (ICLR), (2015), 345–358. https://doi.org/10.48550/arXiv.1412.6575
    [47] S. Wang, X. Wei, C. N. Santos, Z. Wang, R. Nallapati, A. Arnold, et al., Mixed-curvature multi-relational graph neural network for knowledge graph completion, in 2021 the International World Wide Web Conference (WWW), (2021), 1761–1771. https://doi.org/10.1145/3442381.3450118
    [48] T. Trouillon, J. Welbl, S. Riedel, É. Gaussier, G. Bouchard, Complex embeddings for simple link prediction, in 2016 the 33rd International Conference on Machine Learning (ICML), (2016), 2071–2080. https://doi.org/10.48550/arXiv.1606.06357
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
