Research article

A novel adaptive safe semi-supervised learning framework for pattern extraction and classification

  • Junjie Li & Jiachen Sun are co-second authors.
  • Received: 12 September 2024 Revised: 18 October 2024 Accepted: 28 October 2024 Published: 05 November 2024
  • MSC : 68T10, 91C20

  • Manifold regularization semi-supervised learning is a powerful graph-based semi-supervised learning method. However, the performance of manifold-regularization-based semi-supervised learning methods depends to some extent on the quality of the manifold graph and of the unlabeled samples; intuitively, the quality of the graph directly affects the final classification performance of the model. In response to these problems, this paper first proposes an adaptive safe semi-supervised learning framework. The framework assigns the weights of the self-similarity graph during the model learning process. To adapt to the learning task, accelerate learning, and avoid the curse of dimensionality, the framework also optimizes the features of each sample point through an automatic weighting mechanism, extracting effective features and eliminating redundant information. In addition, the framework defines an adaptive risk measurement mechanism for the uncertainty and potential risks of unlabeled samples, determining the degree of risk of each unlabeled sample. Finally, a new adaptive safe semi-supervised extreme learning machine is proposed. Comprehensive experimental results across various class-imbalance scenarios demonstrate that the proposed method outperforms other methods in terms of classification accuracy and other critical performance metrics.

    Citation: Jun Ma, Junjie Li, Jiachen Sun. A novel adaptive safe semi-supervised learning framework for pattern extraction and classification[J]. AIMS Mathematics, 2024, 9(11): 31444-31469. doi: 10.3934/math.20241514




    The objective of this work is to deal with the rotation-Camassa-Holm (RCH) model

    $$ m_t + Vm_x + 2V_x m + kV_x - \frac{\alpha_0}{\alpha}V_{xxx} + \frac{h_1}{\beta^2}V^2V_x + \frac{h_2}{\beta^3}V^3V_x = 0, \tag{1.1} $$

    where $m = V - V_{xx}$,

    $$ k = \sqrt{1+\digamma^2} - \digamma, \quad \beta = \frac{k^2}{1+k^2}, \quad \alpha_0 = \frac{k(k^4+6k^2-1)}{6(k^2+1)^2}, \quad \alpha = \frac{3k^4+8k^2-1}{6(k^2+1)^2}, \quad h_1 = -\frac{3k(k^2-1)(k^2-2)}{2(1+k^2)^3}, \quad h_2 = \frac{(k^2-2)(k^2-1)^2(8k^2-1)}{2(1+k^2)^5}, \tag{1.2} $$

    in which the constant ϝ is a parameter depicting the Coriolis effect due to the Earth's rotation. Gui et al. [9] derived the nonlinear RCH equation (1.1) (see also [3,10]), which describes the motion of fluid driven by the Coriolis effect.

    Recently, many works have focused on Eq (1.1). Zhang [23] investigated the well-posedness of Eq (1.1) on the torus in the sense of Hadamard, assuming the initial value lies in the space $H^s$ with Sobolev index $s > \frac{3}{2}$, and gave a Cauchy-Kowalevski type proposition for Eq (1.1) under certain conditions. It is shown in Gui et al. [9] that Eq (1.1) has dynamical features similar to those of the Camassa-Holm and irrotational Euler equations. The travelling wave solutions were found and classified in [10]. The well-posedness, geometric analysis, and a more general classification of travelling wave solutions for Eq (1.1) are carried out in Silva and Freire [17]. Tu et al. [20] investigated the well-posedness of the global conservative solutions to Eq (1.1).

    If ϝ = 0 (implying $h_1 = h_2 = 0$), namely, the Coriolis effect disappears, Eq (1.1) becomes the standard Camassa-Holm (CH) model [2], which has been investigated by many scholars [1,6,7,8,16]. For some dynamical characteristics of the CH equation, we refer the reader to [11,12,13,14,15,22].
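    As a quick consistency check on this reduction, the following sympy sketch (assuming the sign convention for $h_1$ written in (1.2)) confirms that ϝ = 0 forces $k = 1$ and $h_1 = h_2 = 0$; the symbol `w` stands in for ϝ:

```python
import sympy as sp

# w stands in for the Coriolis parameter ϝ; nonnegative by assumption.
w = sp.symbols('w', nonnegative=True)
k = sp.sqrt(1 + w**2) - w
h1 = -3*k*(k**2 - 1)*(k**2 - 2) / (2*(1 + k**2)**3)
h2 = (k**2 - 2)*(k**2 - 1)**2*(8*k**2 - 1) / (2*(1 + k**2)**5)

# At w = 0 we get k = 1, so the factor k^2 - 1 kills the cubic and quartic
# nonlinearities and Eq (1.1) collapses to the classical CH equation.
print(sp.simplify(k.subs(w, 0)))   # 1
print(sp.simplify(h1.subs(w, 0)))  # 0
print(sp.simplify(h2.subs(w, 0)))  # 0
```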

    Motivated by the works [4,21], in which the $H^1(\mathbb{R})$ global weak solution to the CH model is studied without requiring the initial value to obey a sign condition, we investigate the rotation-Camassa-Holm equation (1.1) and utilize the viscous approximation technique to establish the existence of a global weak solution in $H^1(\mathbb{R})$. As the term $V_{xxx}$ appears in Eq (1.1), it causes difficulties in establishing estimates of solutions for the viscous approximation of Eq (1.1) (in fact, using a change of coordinates, Silva and Freire [18] eliminate the term $V_{xxx}$ and discuss other dynamical features of Eq (1.1)). The key contribution of this work is that we overcome these difficulties: we establish a higher-order integrability estimate and prove that $\frac{\partial V(t,x)}{\partial x}$ possesses an upper bound. These two estimates play key roles in proving the existence of the $H^1(\mathbb{R})$ global weak solution of Eq (1.1) without the sign condition.

    This work is organized as follows. The definition of $H^1(\mathbb{R})$ global weak solutions and several lemmas are given in Section 2. The main conclusion and its proof are presented in Section 3.

    We rewrite the initial value problem for the RCH equation (1.1) as

    $$ \begin{cases} V_t - V_{txx} + 3VV_x + kV_x + \dfrac{h_1}{\beta^2}V^2V_x + \dfrac{h_2}{\beta^3}V^3V_x = 2V_xV_{xx} + \Big(V + \dfrac{\alpha_0}{\alpha}\Big)V_{xxx}, \\ V(0,x) = V_0(x), \quad x\in\mathbb{R}. \end{cases} \tag{2.1} $$

    Applying the operator $\Lambda^{-2} = (1-\partial_x^2)^{-1}$ to Eq (1.1), we have

    $$ \begin{cases} V_t + \Big(V + \dfrac{\alpha_0}{\alpha}\Big)V_x + A_x = 0, \\ V(0,x) = V_0(x), \end{cases} \tag{2.2} $$

    where

    $$ A_x = \Lambda^{-2}\Big[\Big(k - \frac{\alpha_0}{\alpha}\Big)V + V^2 + \frac{h_1}{3\beta^2}V^3 + \frac{h_2}{4\beta^3}V^4 + \frac12 V_x^2\Big]_x. $$
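    For the reader's convenience, here is why (2.2) is equivalent to (2.1): apply $(1-\partial_x^2)$ to (2.2), use $(1-\partial_x^2)\Lambda^{-2} = I$, and invoke the elementary identity

    $$ \partial_x^2\Big[\Big(V+\frac{\alpha_0}{\alpha}\Big)V_x\Big] = 3V_xV_{xx} + \Big(V+\frac{\alpha_0}{\alpha}\Big)V_{xxx}; $$

    collecting terms then reproduces both sides of (2.1).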

    It can be found in [9,10,17,18] that

    $$ \int_{\mathbb{R}} \big(V^2 + V_x^2\big)\,dx = \int_{\mathbb{R}} \big(V_0^2 + V_{0x}^2\big)\,dx. $$

    We recall the definition of a global weak solution (see [4,21]).

    Definition 2.1. A function $V(t,x): [0,\infty)\times\mathbb{R} \to \mathbb{R}$ is called a global weak solution to system (2.1) or (2.2) if

    (1) $V \in C([0,\infty)\times\mathbb{R}) \cap L^\infty([0,\infty); H^1(\mathbb{R}))$;

    (2) $\|V(t,\cdot)\|_{H^1(\mathbb{R})} \le \|V_0\|_{H^1(\mathbb{R})}$;

    (3) $V = V(t,x)$ satisfies (2.2) in the sense of distributions.

    Define $\phi(x) = e^{\frac{1}{x^2-1}}$ if $|x|<1$ and $\phi(x)=0$ if $|x|\ge 1$. Set $\phi_\varepsilon(x) = \varepsilon^{-\frac14}\phi(\varepsilon^{-\frac14}x)$ with $0<\varepsilon<\frac14$. Let $V_{\varepsilon,0} = \phi_\varepsilon \ast V_0$, where $\ast$ denotes convolution; then $V_{\varepsilon,0}\in C^\infty(\mathbb{R})$ for $V_0(x)\in H^s(\mathbb{R})$, $s>0$. To discuss global weak solutions of Eq (1.1), we handle the following viscous approximation problem:

    $$ \begin{cases} V_{\varepsilon t} + \Big(V_\varepsilon + \dfrac{\alpha_0}{\alpha}\Big)V_{\varepsilon x} + A_{\varepsilon x} = \varepsilon\dfrac{\partial^2 V_\varepsilon}{\partial x^2}, \\ V_\varepsilon(0,x) = V_{\varepsilon,0}(x), \end{cases} \tag{2.3} $$

    in which

    $$ A_\varepsilon(t,x) = \Lambda^{-2}\Big[\Big(k - \frac{\alpha_0}{\alpha}\Big)V_\varepsilon + V_\varepsilon^2 + \frac{h_1}{3\beta^2}V_\varepsilon^3 + \frac{h_2}{4\beta^3}V_\varepsilon^4 + \frac12 (V_{\varepsilon x})^2\Big]. $$
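    A minimal numerical sketch of the smoothing step $V_{\varepsilon,0} = \phi_\varepsilon \ast V_0$ may help fix ideas; the grid, the datum $V_0$, the value of $\varepsilon$, and the unit-mass normalization of the mollifier are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def phi(x):
    # The bump of Section 2: exp(1/(x^2 - 1)) on |x| < 1, zero elsewhere.
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(1.0 / (x[inside]**2 - 1.0))
    return out

eps = 0.1                                    # any 0 < eps < 1/4
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

phi_eps = eps**(-0.25) * phi(eps**(-0.25) * x)
phi_eps /= phi_eps.sum() * dx                # normalize to unit mass (assumed convention)

V0 = np.maximum(0.0, 1.0 - np.abs(x))        # a peaked, merely H^1 initial datum
V0_eps = np.convolve(V0, phi_eps, mode='same') * dx  # smooth approximation of V0
```

    As $\varepsilon \to 0$ the kernel concentrates and $V_{\varepsilon,0} \to V_0$ in $H^1(\mathbb{R})$, which is the second statement of (2.6) below.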

    Utilizing (2.3) and denoting $p_\varepsilon(t,x) = \frac{\partial V_\varepsilon(t,x)}{\partial x}$ yield

    $$ \begin{aligned} p_{\varepsilon t} + \Big(V_\varepsilon + \frac{\alpha_0}{\alpha}\Big)p_{\varepsilon x} - \varepsilon\frac{\partial^2 p_\varepsilon}{\partial x^2} + \frac12 p_\varepsilon^2 &= \Big(k-\frac{\alpha_0}{\alpha}\Big)V_\varepsilon + V_\varepsilon^2 + \frac{h_1}{3\beta^2}V_\varepsilon^3 + \frac{h_2}{4\beta^3}V_\varepsilon^4 \\ &\quad - \Lambda^{-2}\Big(V_\varepsilon^2 + \Big(k-\frac{\alpha_0}{\alpha}\Big)V_\varepsilon + \frac{h_1}{3\beta^2}V_\varepsilon^3 + \frac{h_2}{4\beta^3}V_\varepsilon^4 + \frac12(V_{\varepsilon x})^2\Big) =: B_\varepsilon(t,x). \end{aligned} \tag{2.4} $$
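    Equation (2.4) is obtained by differentiating (2.3) with respect to $x$: writing $G_\varepsilon$ for the bracketed expression inside $\Lambda^{-2}$ in the definition of $A_\varepsilon$, the operator identity $\partial_x^2\Lambda^{-2} = \Lambda^{-2} - I$ gives

    $$ \frac{\partial A_{\varepsilon x}}{\partial x} = \partial_x^2\Lambda^{-2}G_\varepsilon = \Lambda^{-2}G_\varepsilon - G_\varepsilon, \qquad \frac{\partial}{\partial x}\Big[\Big(V_\varepsilon+\frac{\alpha_0}{\alpha}\Big)p_\varepsilon\Big] = \Big(V_\varepsilon+\frac{\alpha_0}{\alpha}\Big)p_{\varepsilon x} + p_\varepsilon^2, $$

    and the term $\frac12 p_\varepsilon^2$ contained in $G_\varepsilon$ combines with $p_\varepsilon^2$ to leave the $\frac12 p_\varepsilon^2$ on the left-hand side of (2.4).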

    For simplicity of notation, let $c$ denote a generic positive constant (independent of $\varepsilon$).

    Lemma 2.1. Let $V_0\in H^1(\mathbb{R})$. For each number $\sigma \ge 2$, system (2.3) has a unique solution $V_\varepsilon \in C([0,\infty); H^\sigma(\mathbb{R}))$ and

    $$ \int_{\mathbb{R}}\big(V_\varepsilon^2 + (V_{\varepsilon x})^2\big)\,dx + 2\varepsilon\int_0^t\!\!\int_{\mathbb{R}}\Big[(V_{\varepsilon x})^2 + \Big(\frac{\partial^2 V_\varepsilon}{\partial x^2}\Big)^2\Big](s,x)\,dx\,ds = \|V_{\varepsilon,0}\|_{H^1(\mathbb{R})}^2, \tag{2.5} $$

    which has the equivalent expression

    $$ \|V_\varepsilon(t,\cdot)\|_{H^1(\mathbb{R})}^2 + 2\varepsilon\int_0^t\Big\|\frac{\partial V_\varepsilon}{\partial x}(s,\cdot)\Big\|_{H^1(\mathbb{R})}^2\,ds = \|V_{\varepsilon,0}\|_{H^1(\mathbb{R})}^2. $$

    Proof. For each $\sigma \ge 2$, the mollified datum satisfies $V_{\varepsilon,0}\in H^\sigma(\mathbb{R})$. Employing the conclusion in [5], we derive that system (2.3) has a unique solution $V_\varepsilon(t,x)\in C([0,\infty); H^\sigma(\mathbb{R}))$. Using (2.3) yields

    $$ \frac{d}{dt}\int_{\mathbb{R}}\big(V_\varepsilon^2 + V_{\varepsilon x}^2\big)\,dx = 2\int_{\mathbb{R}} V_\varepsilon\big(V_{\varepsilon t} - V_{\varepsilon txx}\big)\,dx = 2\varepsilon\int_{\mathbb{R}}\big(V_\varepsilon V_{\varepsilon xx} - V_\varepsilon V_{\varepsilon xxxx}\big)\,dx = -2\varepsilon\int_{\mathbb{R}}\big((V_{\varepsilon x})^2 + (V_{\varepsilon xx})^2\big)\,dx. $$

    Integrating both sides of the above identity with respect to $t$, we obtain (2.5).

    In fact, as $\varepsilon \to 0$, we have

    $$ \|V_\varepsilon\|_{L^\infty(\mathbb{R})} \le \|V_\varepsilon\|_{H^1(\mathbb{R})} \le \|V_{\varepsilon,0}\|_{H^1(\mathbb{R})} \le \|V_0\|_{H^1(\mathbb{R})}, \quad \text{and} \quad V_{\varepsilon,0} \to V_0 \ \text{in } H^1(\mathbb{R}). \tag{2.6} $$
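    The first inequality in (2.6) is the standard one-dimensional Sobolev bound; for completeness,

    $$ V_\varepsilon^2(t,x) = 2\int_{-\infty}^{x} V_\varepsilon\,\frac{\partial V_\varepsilon}{\partial y}\,dy \le \int_{\mathbb{R}}\Big(V_\varepsilon^2 + \Big(\frac{\partial V_\varepsilon}{\partial y}\Big)^2\Big)\,dy = \|V_\varepsilon(t,\cdot)\|_{H^1(\mathbb{R})}^2, $$

    since $2ab \le a^2 + b^2$.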

    Lemma 2.2. If $V_0(x)\in H^1(\mathbb{R})$, then for $A_\varepsilon(t,x)$ and $B_\varepsilon(t,x)$ it holds that

    $$ \|A_\varepsilon(t,\cdot)\|_{L^\infty(\mathbb{R})} \le c, \qquad \Big\|\frac{\partial A_\varepsilon}{\partial x}(t,\cdot)\Big\|_{L^\infty(\mathbb{R})} \le c, \tag{2.7} $$
    $$ \|A_\varepsilon(t,\cdot)\|_{L^1(\mathbb{R})} \le c, \qquad \Big\|\frac{\partial A_\varepsilon}{\partial x}(t,\cdot)\Big\|_{L^1(\mathbb{R})} \le c, \tag{2.8} $$
    $$ \|A_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})} \le c, \qquad \Big\|\frac{\partial A_\varepsilon}{\partial x}(t,\cdot)\Big\|_{L^2(\mathbb{R})} \le c, \tag{2.9} $$

    and

    $$ \|B_\varepsilon(t,\cdot)\|_{L^\infty(\mathbb{R})} \le c, \qquad \|B_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})} \le c, \tag{2.10} $$

    where $c = c(\|V_0\|_{H^1(\mathbb{R})})$.

    Proof. For any function $U(x)$ and the operator $\Lambda^{-2}$, it holds that

    $$ \Lambda^{-2}U(x) = \frac12\int_{\mathbb{R}} e^{-|x-y|}\,U(y)\,dy \quad \text{for } U(x)\in L^r(\mathbb{R}),\ 1\le r\le\infty, \tag{2.11} $$

    and

    $$ \big|\Lambda^{-2}U_x(x)\big| = \Big|\frac12\int_{\mathbb{R}} e^{-|x-y|}\,\frac{\partial U(y)}{\partial y}\,dy\Big| = \Big|-\frac12 e^{-x}\int_{-\infty}^{x} e^{y}U(y)\,dy + \frac12 e^{x}\int_{x}^{\infty} e^{-y}U(y)\,dy\Big| \le \frac12\int_{\mathbb{R}} e^{-|x-y|}\,|U(y)|\,dy. \tag{2.12} $$
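    The kernel representation (2.11) can also be checked numerically. The following finite-difference sketch (grid and test function are illustrative choices) verifies that $W = \frac12 e^{-|\cdot|} \ast U$ satisfies $(1-\partial_x^2)W = U$ up to discretization error:

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]
U = np.exp(-x**2)                             # smooth, rapidly decaying test function

kernel = 0.5 * np.exp(-np.abs(x))
W = np.convolve(U, kernel, mode='same') * dx  # W = Lambda^{-2} U via (2.11)

Wxx = np.gradient(np.gradient(W, dx), dx)     # finite-difference second derivative
err = np.abs(W - Wxx - U)[200:-200]           # discard a boundary layer
print(err.max())                              # small: (1 - d^2/dx^2) W recovers U
```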

    Utilizing (2.6), (2.11), (2.12), the expression of function Aε(t,x) and the Tonelli theorem, we have

    $$ \Big\|\Lambda^{-2}\Big(\Big(k-\frac{\alpha_0}{\alpha}\Big)V_\varepsilon + V_\varepsilon^2 + \frac{h_1}{3\beta^2}V_\varepsilon^3 + \frac{h_2}{4\beta^3}V_\varepsilon^4 + \frac12 V_{\varepsilon x}^2\Big)\Big\|_{L^\infty(\mathbb{R})} \le c $$

    and

    $$ \Big\|\Lambda^{-2}\Big(\Big(k-\frac{\alpha_0}{\alpha}\Big)V_\varepsilon + V_\varepsilon^2 + \frac{h_1}{3\beta^2}V_\varepsilon^3 + \frac{h_2}{4\beta^3}V_\varepsilon^4 + \frac12 V_{\varepsilon x}^2\Big)_x\Big\|_{L^\infty(\mathbb{R})} \le c, $$

    from which (2.7) and (2.8) follow. Utilizing (2.7) and (2.8) yields

    $$ \|A_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})}^2 \le \|A_\varepsilon(t,\cdot)\|_{L^\infty(\mathbb{R})}\,\|A_\varepsilon(t,\cdot)\|_{L^1(\mathbb{R})} \le c $$

    and

    $$ \Big\|\frac{\partial A_\varepsilon}{\partial x}(t,\cdot)\Big\|_{L^2(\mathbb{R})}^2 \le \Big\|\frac{\partial A_\varepsilon}{\partial x}(t,\cdot)\Big\|_{L^\infty(\mathbb{R})}\,\Big\|\frac{\partial A_\varepsilon}{\partial x}(t,\cdot)\Big\|_{L^1(\mathbb{R})} \le c, $$

    which completes the proof of (2.9). Furthermore, using (2.4) and (2.6), we have

    $$ \|B_\varepsilon\|_{L^\infty(\mathbb{R})} \le c, \qquad \|B_\varepsilon\|_{L^2(\mathbb{R})} \le c, $$

    which finishes the proof of (2.10).

    Lemma 2.3. Let $0<\alpha_1<1$, $T>0$, and let $a<b$ be constants. Then

    $$ \int_0^T\!\!\int_a^b \Big|\frac{\partial V_\varepsilon(t,x)}{\partial x}\Big|^{2+\alpha_1}\,dx\,dt \le c_1, \tag{2.13} $$

    where the constant $c_1$ depends on $a$, $b$, $\alpha_1$, $T$, $k$ and $\|V_0\|_{H^1(\mathbb{R})}$.

    Proof. We utilize the methods in Xin and Zhang [21] to prove this lemma. Let the function $g(x)\in C^\infty(\mathbb{R})$ satisfy

    $$ 0 \le g(x) \le 1, \qquad g(x) = \begin{cases} 0, & x\in(-\infty, a-1]\cup[b+1,\infty), \\ 1, & x\in[a,b]. \end{cases} $$

    Define $f(\eta) := \eta(|\eta|+1)^{\alpha_1}$, $\eta\in\mathbb{R}$. Note that $f\in C^1(\mathbb{R})$ and that $f$ is twice differentiable except at $\eta = 0$. Its first and second derivatives are as follows:

    $$ f'(\eta) = \big((\alpha_1+1)|\eta|+1\big)(|\eta|+1)^{\alpha_1-1}, \qquad f''(\eta) = \alpha_1\,\mathrm{sign}(\eta)\,(|\eta|+1)^{\alpha_1-2}\big((\alpha_1+1)|\eta|+2\big) = \alpha_1(\alpha_1+1)\,\mathrm{sign}(\eta)(|\eta|+1)^{\alpha_1-1} + (1-\alpha_1)\alpha_1\,\mathrm{sign}(\eta)(|\eta|+1)^{\alpha_1-2}, $$

    from which we have

    $$ |f(\eta)| \le |\eta|^{\alpha_1+1} + |\eta|, \qquad |f'(\eta)| \le (\alpha_1+1)|\eta| + 1, \qquad |f''(\eta)| \le 2\alpha_1, \tag{2.14} $$

    and

    $$ \eta f(\eta) - \frac12\eta^2 f'(\eta) = \frac{1-\alpha_1}{2}\eta^2(|\eta|+1)^{\alpha_1} + \frac{\alpha_1}{2}\eta^2(|\eta|+1)^{\alpha_1-1} \ge \frac{1-\alpha_1}{2}\eta^2(|\eta|+1)^{\alpha_1}. \tag{2.15} $$
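    A numeric spot-check of the identity and the lower bound in (2.15) (the value of $\alpha_1$ and the grid are arbitrary choices):

```python
import numpy as np

a1 = 0.5
eta = np.linspace(-10.0, 10.0, 2001)
f  = eta * (np.abs(eta) + 1.0)**a1
fp = ((a1 + 1.0)*np.abs(eta) + 1.0) * (np.abs(eta) + 1.0)**(a1 - 1.0)

lhs = eta*f - 0.5*eta**2*fp
rhs = 0.5*(1 - a1)*eta**2*(np.abs(eta) + 1)**a1 \
    + 0.5*a1*eta**2*(np.abs(eta) + 1)**(a1 - 1)

print(np.max(np.abs(lhs - rhs)))  # ~0: the equality in (2.15) holds
print(np.all(lhs + 1e-12 >= 0.5*(1 - a1)*eta**2*(np.abs(eta) + 1)**a1))  # True
```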

    Note that

    $$ \int_0^T\!\!\int_{\mathbb{R}} g(x) f'(p_\varepsilon)\,p_{\varepsilon t}\,dx\,dt = \int_{\mathbb{R}} g(x)\,dx\int_0^T df(p_\varepsilon) = \int_{\mathbb{R}}\big[f(p_\varepsilon(T,x)) - f(p_\varepsilon(0,x))\big]g(x)\,dx, \tag{2.16} $$
    $$ \int_0^T\!\!\int_{\mathbb{R}} g(x) f'(p_\varepsilon)\Big(V_\varepsilon+\frac{\alpha_0}{\alpha}\Big)p_{\varepsilon x}\,dx\,dt = \int_0^T\!dt\int_{\mathbb{R}} g(x)\Big(V_\varepsilon+\frac{\alpha_0}{\alpha}\Big)\,df(p_\varepsilon) = -\int_0^T\!\!\int_{\mathbb{R}} f(p_\varepsilon)\Big[g'(x)\Big(V_\varepsilon+\frac{\alpha_0}{\alpha}\Big) + g(x)p_\varepsilon\Big]\,dx\,dt. \tag{2.17} $$

    Multiplying (2.4) by $g(x)f'(p_\varepsilon)$, using (2.16) and (2.17), and integrating by parts over $[0,T]\times\mathbb{R}$, we obtain

    $$ \begin{aligned} \int_0^T\!\!\int_{\mathbb{R}} g(x)\,p_\varepsilon f(p_\varepsilon)\,dx\,dt - \frac12\int_0^T\!\!\int_{\mathbb{R}} p_\varepsilon^2\,g(x)f'(p_\varepsilon)\,dx\,dt &= \int_{\mathbb{R}}\big[f(p_\varepsilon(T,x)) - f(p_\varepsilon(0,x))\big]g(x)\,dx - \int_0^T\!\!\int_{\mathbb{R}}\Big(V_\varepsilon+\frac{\alpha_0}{\alpha}\Big)g'(x)f(p_\varepsilon)\,dx\,dt \\ &\quad + \varepsilon\int_0^T\!\!\int_{\mathbb{R}} g'(x)f'(p_\varepsilon)\,p_{\varepsilon x}\,dx\,dt + \varepsilon\int_0^T\!\!\int_{\mathbb{R}} g(x)f''(p_\varepsilon)\,(p_{\varepsilon x})^2\,dx\,dt \\ &\quad - \int_0^T\!\!\int_{\mathbb{R}} B_\varepsilon f'(p_\varepsilon)\,g(x)\,dx\,dt. \end{aligned} \tag{2.18} $$

    Applying (2.15) yields

    $$ \int_0^T\!\!\int_{\mathbb{R}} g(x)\,p_\varepsilon f(p_\varepsilon)\,dx\,dt - \frac12\int_0^T\!\!\int_{\mathbb{R}} p_\varepsilon^2\,g(x)f'(p_\varepsilon)\,dx\,dt = \int_0^T\!\!\int_{\mathbb{R}} g(x)\Big(p_\varepsilon f(p_\varepsilon) - \frac12 p_\varepsilon^2 f'(p_\varepsilon)\Big)\,dx\,dt \ge \frac{1-\alpha_1}{2}\int_0^T\!\!\int_{\mathbb{R}} g(x)\,p_\varepsilon^2(|p_\varepsilon|+1)^{\alpha_1}\,dx\,dt. \tag{2.19} $$

    For $t\ge 0$, using $0<\alpha_1<1$, (2.14), and the Hölder inequality gives rise to

    $$ \Big|\int_{\mathbb{R}} g(x) f(p_\varepsilon)\,dx\Big| \le \int_{\mathbb{R}} g(x)\big(|p_\varepsilon|^{\alpha_1+1} + |p_\varepsilon|\big)\,dx \le \|g\|_{L^{2/(1-\alpha_1)}(\mathbb{R})}\,\|p_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})}^{\alpha_1+1} + \|g\|_{L^2(\mathbb{R})}\,\|p_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})} \le (b+2-a)^{(1-\alpha_1)/2}\|V_0\|_{H^1(\mathbb{R})}^{\alpha_1+1} + (b+2-a)^{1/2}\|V_0\|_{H^1(\mathbb{R})}, \tag{2.20} $$

    and

    $$ \begin{aligned} \Big|\int_0^T\!\!\int_{\mathbb{R}} V_\varepsilon\,g'(x)f(p_\varepsilon)\,dx\,dt\Big| &\le \int_0^T\!\!\int_{\mathbb{R}} |V_\varepsilon g'(x)|\big(|p_\varepsilon|^{\alpha_1+1} + |p_\varepsilon|\big)\,dx\,dt \le \int_0^T \|V_\varepsilon(t,\cdot)\|_{L^\infty(\mathbb{R})}\int_{\mathbb{R}} |g'(x)|\big(|p_\varepsilon|^{\alpha_1+1} + |p_\varepsilon|\big)\,dx\,dt \\ &\le c\int_0^T\Big(\|g'\|_{L^{2/(1-\alpha_1)}(\mathbb{R})}\,\|p_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})}^{\alpha_1+1} + \|g'\|_{L^2(\mathbb{R})}\,\|p_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})}\Big)\,dt \\ &\le c\int_0^T\Big(\|g'\|_{L^{2/(1-\alpha_1)}(\mathbb{R})}\,\|V_0\|_{H^1(\mathbb{R})}^{\alpha_1+1} + \|g'\|_{L^2(\mathbb{R})}\,\|V_0\|_{H^1(\mathbb{R})}\Big)\,dt. \end{aligned} \tag{2.21} $$

    Moreover, we have

    $$ \varepsilon\int_0^T\!\!\int_{\mathbb{R}} p_{\varepsilon x}\,g'(x)f'(p_\varepsilon)\,dx\,dt = -\varepsilon\int_0^T\!\!\int_{\mathbb{R}} f(p_\varepsilon)\,g''(x)\,dx\,dt. \tag{2.22} $$

    Utilizing the Hölder inequality and (2.14) leads to

    $$ \begin{aligned} \Big|\varepsilon\int_0^T\!\!\int_{\mathbb{R}} g'(x)\,p_{\varepsilon x}f'(p_\varepsilon)\,dx\,dt\Big| &\le \varepsilon\int_0^T\!\!\int_{\mathbb{R}} \big|f(p_\varepsilon)\,g''(x)\big|\,dx\,dt \le \varepsilon\int_0^T\!\!\int_{\mathbb{R}} \big(|p_\varepsilon|^{\alpha_1+1} + |p_\varepsilon|\big)\,|g''(x)|\,dx\,dt \\ &\le \varepsilon\int_0^T\Big(\|g''\|_{L^{2/(1-\alpha_1)}(\mathbb{R})}\,\|p_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})}^{\alpha_1+1} + \|g''\|_{L^2(\mathbb{R})}\,\|p_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})}\Big)\,dt \\ &\le \varepsilon T\Big(\|g''\|_{L^{2/(1-\alpha_1)}(\mathbb{R})}\,\|V_0\|_{H^1(\mathbb{R})}^{\alpha_1+1} + \|g''\|_{L^2(\mathbb{R})}\,\|V_0\|_{H^1(\mathbb{R})}\Big). \end{aligned} \tag{2.23} $$

    Using the last part of (2.14), we have

    $$ \varepsilon\Big|\iint_{\Pi_T} (p_{\varepsilon x})^2\,g(x)f''(p_\varepsilon)\,dx\,dt\Big| \le 2\alpha_1\varepsilon\iint_{\Pi_T} (p_{\varepsilon x})^2\,dx\,dt \le \alpha_1\|V_0\|_{H^1(\mathbb{R})}^2, \tag{2.24} $$

    where $\Pi_T = [0,T]\times\mathbb{R}$.

    As shown in Lemma 2.2, there exists a constant $c_0>0$ such that

    $$ \|B_\varepsilon\|_{L^\infty(\mathbb{R})} \le c_0. \tag{2.25} $$

    Utilizing the second part of (2.14) yields

    $$ \Big|\int_0^T\!\!\int_{\mathbb{R}} B_\varepsilon\,g(x)f'(p_\varepsilon)\,dx\,dt\Big| \le c_0\int_0^T\!\!\int_{\mathbb{R}} g(x)\big[(\alpha_1+1)|p_\varepsilon| + 1\big]\,dx\,dt \le c_0\int_0^T\Big((\alpha_1+1)\,\|g\|_{L^2(\mathbb{R})}\,\|p_\varepsilon(t,\cdot)\|_{L^2(\mathbb{R})} + \|g\|_{L^1(\mathbb{R})}\Big)\,dt \le c\,c_0 T. \tag{2.26} $$

    Applying (2.18)–(2.26) yields

    $$ \frac{1-\alpha_1}{2}\int_0^T\!\!\int_{\mathbb{R}} |p_\varepsilon|^2\,g(x)\,(1+|p_\varepsilon|)^{\alpha_1}\,dx\,dt \le c, $$

    where $c>0$ depends only on $T>0$, $a$, $b$, $\alpha_1$ and $\|V_0\|_{H^1(\mathbb{R})}$. Furthermore, we have

    $$ \int_0^T\!\!\int_a^b |V_{\varepsilon x}(t,x)|^{2+\alpha_1}\,dx\,dt \le \int_0^T\!\!\int_{\mathbb{R}} |p_\varepsilon|\,g(x)\,(|p_\varepsilon|+1)^{\alpha_1+1}\,dx\,dt \le \frac{2c}{1-\alpha_1}. $$

    The proof of (2.13) is completed.

    Lemma 2.4. For $(t,x)\in(0,\infty)\times\mathbb{R}$, let $V_\varepsilon = V_\varepsilon(t,x)$ satisfy problem (2.3). Then

    $$ \frac{\partial V_\varepsilon(t,x)}{\partial x} \le \frac{2}{t} + c, \tag{2.27} $$

    in which the positive constant $c = c(\|V_0\|_{H^1(\mathbb{R})})$.

    Proof. Using Lemma 2.2 gives rise to

    $$ p_{\varepsilon t} + \Big(V_\varepsilon + \frac{\alpha_0}{\alpha}\Big)p_{\varepsilon x} - \varepsilon\frac{\partial^2 p_\varepsilon}{\partial x^2} + \frac12 p_\varepsilon^2 = B_\varepsilon(t,x) \le c. \tag{2.28} $$

    Assume that H=H(t) satisfies the problem

    $$ \frac{dH}{dt} + \frac12 H^2 = c, \quad t>0, \qquad H(0) = \|V_{\varepsilon,0x}\|_{L^\infty(\mathbb{R})}. $$

    Due to (2.28), we know that $H = H(t)$ is a supersolution* of the parabolic equation (2.4) associated with the initial value $V_{\varepsilon,0x}$. Utilizing the comparison principle for parabolic equations yields

    *Here one compares $H(t)$ with $\sup_{x\in\mathbb{R}} p_\varepsilon(t,x)$. If there exists a point $(t,x_0)$ such that $\sup_{x\in\mathbb{R}} p_\varepsilon(t,x) = p_\varepsilon(t,x_0)$, then $\frac{\partial p_\varepsilon(t,x_0)}{\partial x} = 0$ and $\frac{\partial^2 p_\varepsilon(t,x_0)}{\partial x^2} \le 0$.

    $$ p_\varepsilon(t,x) \le H(t). $$

    We choose the function $F(t) := \frac{2}{t} + \sqrt{2c}$, $t>0$. Since $\frac{dF}{dt}(t) + \frac12 F^2(t) - c = \frac{2\sqrt{2c}}{t} > 0$, we conclude

    $$ H(t) \le F(t), $$

    which finishes the proof of (2.27).
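    For completeness, the arithmetic behind the choice $F(t) = \frac{2}{t} + \sqrt{2c}$:

    $$ \frac{dF}{dt} + \frac12 F^2(t) - c = -\frac{2}{t^2} + \frac12\Big(\frac{4}{t^2} + \frac{4\sqrt{2c}}{t} + 2c\Big) - c = \frac{2\sqrt{2c}}{t} > 0, $$

    so $F$ is a supersolution of the ODE satisfied by $H$, and since $F(0^+) = +\infty > H(0)$, the comparison $H(t)\le F(t)$ follows.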

    Lemma 2.5. There exist a subsequence $\{\varepsilon_j\}_{j\in\mathbb{N}}$, $\varepsilon_j\to 0$, and $V(t,x)\in L^\infty([0,\infty); H^1(\mathbb{R})) \cap H^1([0,T]\times\mathbb{R})$, for every $T\ge 0$, such that

    $$ V_{\varepsilon_j} \rightharpoonup V \ \text{in } H^1([0,T]\times\mathbb{R}), \qquad V_{\varepsilon_j} \to V \ \text{in } L^\infty_{\mathrm{loc}}([0,\infty)\times\mathbb{R}). $$

    The proof of Lemma 2.5 can be found in Coclite et al. [4].

    Lemma 2.6. Assume $V_0\in H^1(\mathbb{R})$. Then $\{B_\varepsilon(t,x)\}_\varepsilon$ is uniformly bounded in $W^{1,1}_{\mathrm{loc}}([0,\infty)\times\mathbb{R})$. Moreover, there exists a sequence $\{\varepsilon_j\}_{j\in\mathbb{N}}$, $\varepsilon_j\to 0$, such that

    $$ B_{\varepsilon_j} \to B \quad \text{strongly in } L^r_{\mathrm{loc}}([0,T)\times\mathbb{R}), $$

    where $B \in L^\infty([0,T); W^{1,\infty}(\mathbb{R}))$ and $1<r<\infty$.

    The standard proof of Lemma 2.6 can be found in [4]; we omit it here.

    For conciseness, we use overbars to denote weak limits, which are taken in the space $L^r((0,\infty)\times\mathbb{R})$ with $1<r<3$.

    Lemma 2.7. There exist a sequence $\{\varepsilon_j\}_{j\in\mathbb{N}}$ tending to zero and two functions $p \in L^r_{\mathrm{loc}}([0,\infty)\times\mathbb{R})$, $\overline{p^2} \in L^{r_1}_{\mathrm{loc}}([0,\infty)\times\mathbb{R})$ such that

    $$ p_{\varepsilon_j} \rightharpoonup p \ \text{in } L^r_{\mathrm{loc}}([0,\infty)\times\mathbb{R}), \qquad p_{\varepsilon_j} \overset{\ast}{\rightharpoonup} p \ \text{in } L^\infty_{\mathrm{loc}}([0,\infty); L^2(\mathbb{R})), \tag{2.29} $$
    $$ p_{\varepsilon_j}^2 \rightharpoonup \overline{p^2} \ \text{in } L^{r_1}_{\mathrm{loc}}([0,\infty)\times\mathbb{R}) \tag{2.30} $$

    for each $1<r<3$ and $1<r_1<\frac32$. In addition, it holds that

    $$ p^2(t,x) \le \overline{p^2}(t,x), \tag{2.31} $$
    $$ V_x = p \ \text{ in the sense of distributions}. \tag{2.32} $$

    Proof. Lemmas 2.1 and 2.2 validate (2.29) and (2.30). The weak convergence in (2.30) ensures (2.31). Using Lemma 2.5 and (2.29), we derive that (2.32) holds.

    For conciseness in the following discussion, we denote $\{p_{\varepsilon_j}\}_{j\in\mathbb{N}}$, $\{V_{\varepsilon_j}\}_{j\in\mathbb{N}}$ and $\{B_{\varepsilon_j}\}_{j\in\mathbb{N}}$ by $\{p_\varepsilon\}_{\varepsilon>0}$, $\{V_\varepsilon\}_{\varepsilon>0}$ and $\{B_\varepsilon\}_{\varepsilon>0}$. Assume that $F\in C^1(\mathbb{R})$ is an arbitrary convex function with $F'$ bounded and Lipschitz continuous on $\mathbb{R}$. Using (2.29) derives that

    $$ F(p_\varepsilon) \rightharpoonup \overline{F(p)} \ \text{in } L^r_{\mathrm{loc}}([0,\infty)\times\mathbb{R}), \qquad F(p_\varepsilon) \overset{\ast}{\rightharpoonup} \overline{F(p)} \ \text{in } L^\infty_{\mathrm{loc}}([0,\infty); L^2(\mathbb{R})). $$

    Multiplying (2.4) by $F'(p_\varepsilon)$ yields

    $$ \frac{\partial}{\partial t}F(p_\varepsilon) + \frac{\partial}{\partial x}\Big(\Big(V_\varepsilon+\frac{\alpha_0}{\alpha}\Big)F(p_\varepsilon)\Big) - \varepsilon\frac{\partial^2}{\partial x^2}F(p_\varepsilon) + \varepsilon F''(p_\varepsilon)\Big(\frac{\partial p_\varepsilon}{\partial x}\Big)^2 = p_\varepsilon F(p_\varepsilon) - \frac12 F'(p_\varepsilon)p_\varepsilon^2 + B_\varepsilon F'(p_\varepsilon). \tag{2.33} $$

    Lemma 2.8. Suppose that $F\in C^1(\mathbb{R})$ is a convex function with $F'$ bounded and Lipschitz continuous on $\mathbb{R}$. Then, in the sense of distributions,

    $$ \frac{\partial \overline{F(p)}}{\partial t} + \frac{\partial}{\partial x}\Big(\Big(V+\frac{\alpha_0}{\alpha}\Big)\overline{F(p)}\Big) \le \overline{pF(p)} - \frac12\,\overline{F'(p)p^2} + B\,\overline{F'(p)}, \tag{2.34} $$

    where $\overline{pF(p)}$ and $\overline{F'(p)p^2}$ denote the weak limits of $p_\varepsilon F(p_\varepsilon)$ and $F'(p_\varepsilon)p_\varepsilon^2$ in $L^{r_1}_{\mathrm{loc}}([0,\infty)\times\mathbb{R})$, $1<r_1<\frac32$, respectively.

    Proof. Applying Lemmas 2.5 and 2.7, letting $\varepsilon\to 0$ in (2.33) and noticing the convexity of the function $F$, we finish the proof of (2.34).

    Lemma 2.9. [4] Almost everywhere in $[0,\infty)\times\mathbb{R}$, it holds that

    $$ p = p_+ + p_- = \overline{p_+} + \overline{p_-}, \qquad p^2 = (p_+)^2 + (p_-)^2, \qquad \overline{p^2} = \overline{(p_+)^2} + \overline{(p_-)^2}, $$

    where $\eta_+ := \eta\,\chi_{[0,+\infty)}(\eta)$, $\eta_- := \eta\,\chi_{(-\infty,0]}(\eta)$, $\eta\in\mathbb{R}$.

    Using Lemmas 2.4 and 2.7 leads to

    $$ p_\varepsilon,\ p \le \frac{2}{t} + c, \qquad 0<t<T. $$

    Lemma 2.10. For $t\ge 0$, $x\in\mathbb{R}$, in the sense of distributions, it holds that

    $$ \frac{\partial p}{\partial t} + \frac{\partial}{\partial x}\Big(\Big(V+\frac{\alpha_0}{\alpha}\Big)p\Big) = \frac12\,\overline{p^2} + B(t,x). \tag{2.35} $$

    Proof. Making use of (2.4) and Lemmas 2.5–2.7, we derive that (2.35) holds by letting $\varepsilon\to 0$.

    Lemma 2.11. Provided that $F\in C^1(\mathbb{R})$ is a convex function with $F'\in L^\infty(\mathbb{R})$, then for every $T>0$, in the sense of distributions,

    $$ \frac{\partial F(p)}{\partial t} + \frac{\partial}{\partial x}\Big(\Big(V+\frac{\alpha_0}{\alpha}\Big)F(p)\Big) = pF(p) + \Big(\frac12\,\overline{p^2} - p^2\Big)F'(p) + B\,F'(p). $$

    Proof. Suppose that $\{w_\delta\}_\delta$ is a family of mollifiers defined on $(-\infty,\infty)$. Let $p_\delta(t,x) := (p(t,\cdot)\ast w_\delta)(x)$, where $\ast$ denotes convolution with respect to the variable $x$. Using (2.35) yields

    $$ \begin{aligned} \frac{\partial F(p_\delta)}{\partial t} = F'(p_\delta)\frac{\partial p_\delta}{\partial t} &= F'(p_\delta)\Big(-\frac{\partial}{\partial x}\Big(\Big(V+\frac{\alpha_0}{\alpha}\Big)p\Big)\ast w_\delta + \frac12\,\overline{p^2}\ast w_\delta + B\ast w_\delta\Big) \\ &= F'(p_\delta)\Big[-\Big(\Big(V+\frac{\alpha_0}{\alpha}\Big)p_x\Big)\ast w_\delta - p^2\ast w_\delta\Big] + F'(p_\delta)\Big(\frac12\,\overline{p^2}\ast w_\delta + B\ast w_\delta\Big). \end{aligned} \tag{2.36} $$

    Utilizing the assumptions on $F$ and $F'$ and letting $\delta\to 0$ in (2.36), we complete the proof.

    Following the ideas in [21], we now upgrade the weak convergence of $p_\varepsilon$ in (2.30) to strong convergence; this strong convergence leads to the existence of a global weak solution for system (2.1).

    Lemma 2.12. [4] Assume $V_0\in H^1(\mathbb{R})$. Then

    $$ \lim_{t\to 0}\int_{\mathbb{R}} p^2(t,x)\,dx = \lim_{t\to 0}\int_{\mathbb{R}} \overline{p^2}(t,x)\,dx = \int_{\mathbb{R}} (V_{0x})^2\,dx. $$

    Lemma 2.13. [4] If $V_0\in H^1(\mathbb{R})$ and $L>0$, then

    $$ \lim_{t\to 0}\int_{\mathbb{R}} \Big(\overline{F_L^{\pm}(p)}(t,x) - F_L^{\pm}(p)(t,x)\Big)\,dx = 0, $$

    where

    $$ F_L(\rho) := \begin{cases} \dfrac12\rho^2, & \text{if } |\rho|\le L, \\ L|\rho| - \dfrac12 L^2, & \text{if } |\rho|>L, \end{cases} \tag{2.37} $$

    $F_L^+(\rho) = F_L(\rho)\chi_{[0,\infty)}(\rho)$ and $F_L^-(\rho) = F_L(\rho)\chi_{(-\infty,0]}(\rho)$, $\rho\in(-\infty,\infty)$.

    Lemma 2.14. [4] Let $L>0$. For $F_L(\rho)$ defined in (2.37), we have

    $$ \begin{cases} F_L(\rho) = \dfrac12\rho^2 - \dfrac12(L-|\rho|)^2\,\chi_{(-\infty,-L)\cup(L,\infty)}(\rho), \\ F_L'(\rho) = \rho + (L-|\rho|)\,\mathrm{sign}(\rho)\,\chi_{(-\infty,-L)\cup(L,\infty)}(\rho), \\ F_L^+(\rho) = \dfrac12(\rho_+)^2 - \dfrac12(L-\rho)^2\,\chi_{(L,\infty)}(\rho), \\ (F_L^+)'(\rho) = \rho_+ + (L-\rho)\,\chi_{(L,\infty)}(\rho), \\ F_L^-(\rho) = \dfrac12(\rho_-)^2 - \dfrac12(L+\rho)^2\,\chi_{(-\infty,-L)}(\rho), \\ (F_L^-)'(\rho) = \rho_- - (L+\rho)\,\chi_{(-\infty,-L)}(\rho). \end{cases} $$
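    A numeric spot-check that the first identity of Lemma 2.14 reproduces the piecewise definition (2.37) (the value of $L$ and the grid are arbitrary choices):

```python
import numpy as np

L = 3.0
rho = np.linspace(-10.0, 10.0, 4001)

FL_def = np.where(np.abs(rho) <= L, 0.5*rho**2, L*np.abs(rho) - 0.5*L**2)  # (2.37)
FL_id  = 0.5*rho**2 - 0.5*(L - np.abs(rho))**2 * (np.abs(rho) > L)         # Lemma 2.14

print(np.max(np.abs(FL_def - FL_id)))  # ~0: the two expressions coincide
```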

    Lemma 2.15. Assume $V_0\in H^1(\mathbb{R})$. Then for almost all $t>0$,

    $$ \frac12\int_{\mathbb{R}}\Big(\overline{(p_+)^2} - p_+^2\Big)(t,x)\,dx \le \int_0^t\!\!\int_{\mathbb{R}} B(s,x)\big[\overline{p_+}(s,x) - p_+(s,x)\big]\,dx\,ds. $$

    Lemma 2.16. Assume $V_0\in H^1(\mathbb{R})$. Then for almost all $t>0$,

    $$ \begin{aligned} \int_{\mathbb{R}}\Big(\overline{F_L^-(p)} - F_L^-(p)\Big)(t,x)\,dx &\le \frac{L^2}{2}\int_0^t\!\!\int_{\mathbb{R}} \overline{(L+p)\chi_{(-\infty,-L)}(p)}\,dx\,ds - \frac{L^2}{2}\int_0^t\!\!\int_{\mathbb{R}} (L+p)\chi_{(-\infty,-L)}(p)\,dx\,ds \\ &\quad + L\int_0^t\!\!\int_{\mathbb{R}}\big[\overline{F_L^-(p)} - F_L^-(p)\big]\,dx\,ds + \frac{L}{2}\int_0^t\!\!\int_{\mathbb{R}}\Big(\overline{p_+^2} - p_+^2\Big)\,dx\,ds \\ &\quad + \int_0^t\!\!\int_{\mathbb{R}} B(s,x)\Big(\overline{(F_L^-)'(p)} - (F_L^-)'(p)\Big)\,dx\,ds. \end{aligned} $$

    Using Lemmas 2.8 and 2.11–2.14, the proofs of Lemmas 2.15 and 2.16 are analogous to those of Lemmas 4.4 and 4.5 in Tang et al. [19]. Here we omit their proofs.

    Lemma 2.17. Assume $V_0\in H^1(\mathbb{R})$. Almost everywhere in $[0,\infty)\times(-\infty,\infty)$, it holds that

    $$ \overline{p^2} = p^2. \tag{2.38} $$

    Proof. Using Lemmas 2.15 and 2.16 yields

    $$ \begin{aligned} \int_{\mathbb{R}}\Big(\frac12\big[\overline{(p_+)^2} - (p_+)^2\big] + \big[\overline{F_L^-(p)} - F_L^-(p)\big]\Big)(t,x)\,dx &\le \frac{L^2}{2}\int_0^t\!\!\int_{\mathbb{R}} \overline{(L+p)\chi_{(-\infty,-L)}(p)}\,dx\,ds - \frac{L^2}{2}\int_0^t\!\!\int_{\mathbb{R}} (L+p)\chi_{(-\infty,-L)}(p)\,dx\,ds \\ &\quad + L\int_0^t\!\!\int_{\mathbb{R}}\big[\overline{F_L^-(p)} - F_L^-(p)\big]\,dx\,ds + \frac{L}{2}\int_0^t\!\!\int_{\mathbb{R}}\Big(\overline{p_+^2} - p_+^2\Big)\,dx\,ds \\ &\quad + \int_0^t\!\!\int_{\mathbb{R}} B(s,x)\Big(\big[\overline{p_+} - p_+\big] + \big[\overline{(F_L^-)'(p)} - (F_L^-)'(p)\big]\Big)\,dx\,ds. \end{aligned} \tag{2.39} $$

    Applying Lemma 2.6, we derive that there exists a constant $N>0$ such that

    $$ \|B(t,x)\|_{L^\infty([0,T)\times\mathbb{R})} \le N. \tag{2.40} $$

    Using Lemmas 2.9 and 2.14 yields

    $$ \begin{cases} p_+ + (F_L^-)'(p) = p - (L+p)\chi_{(-\infty,-L)}(p), \\ \overline{p_+} + \overline{(F_L^-)'(p)} = p - \overline{(L+p)\chi_{(-\infty,-L)}(p)}. \end{cases} \tag{2.41} $$

    Since the map $\rho \mapsto \rho_+ + (F_L^-)'(\rho)$ is convex, it holds that

    $$ 0 \le \big[\overline{p_+} - p_+\big] + \big[\overline{(F_L^-)'(p)} - (F_L^-)'(p)\big] = (L+p)\chi_{(-\infty,-L)}(p) - \overline{(L+p)\chi_{(-\infty,-L)}(p)}. \tag{2.42} $$

    Using (2.40) gives rise to

    $$ B(s,x)\Big(\big[\overline{p_+} - p_+\big] + \big[\overline{(F_L^-)'(p)} - (F_L^-)'(p)\big]\Big) \le -N\Big(\overline{(L+p)\chi_{(-\infty,-L)}(p)} - (L+p)\chi_{(-\infty,-L)}(p)\Big). \tag{2.43} $$

    Since $\rho \mapsto (L+\rho)\chi_{(-\infty,-L)}(\rho)$ is concave, letting $L$ be sufficiently large, we have

    $$ \frac{L^2}{2}\,\overline{(L+p)\chi_{(-\infty,-L)}(p)} - \frac{L^2}{2}(L+p)\chi_{(-\infty,-L)}(p) + B(s,x)\Big(\big[\overline{p_+} - p_+\big] + \big[\overline{(F_L^-)'(p)} - (F_L^-)'(p)\big]\Big) \le \Big(\frac{L^2}{2} - N\Big)\Big(\overline{(L+p)\chi_{(-\infty,-L)}(p)} - (L+p)\chi_{(-\infty,-L)}(p)\Big) \le 0. \tag{2.44} $$

    Using (2.39)–(2.44) yields

    $$ 0 \le \int_{\mathbb{R}}\Big(\frac12\big[\overline{(p_+)^2} - (p_+)^2\big] + \big[\overline{F_L^-(p)} - F_L^-(p)\big]\Big)(t,x)\,dx \le L\int_0^t\!\!\int_{\mathbb{R}}\Big(\frac12\big[\overline{(p_+)^2} - p_+^2\big] + \big[\overline{F_L^-(p)} - F_L^-(p)\big]\Big)\,dx\,ds, $$

    which together with the Gronwall inequality yields

    $$ 0 \le \int_{\mathbb{R}}\Big(\frac12\big[\overline{(p_+)^2} - (p_+)^2\big] + \big[\overline{F_L^-(p)} - F_L^-(p)\big]\Big)(t,x)\,dx \le 0. \tag{2.45} $$

    Using the Fatou lemma, Lemma 2.9, and (2.45), and sending $L\to\infty$, it holds that

    $$ 0 \le \int_{\mathbb{R}}\big(\overline{p^2} - p^2\big)(t,x)\,dx \le 0, \qquad t>0, $$

    which finishes the proof of (2.38).

    Theorem 3.1. Assume that $V_0(x)\in H^1(\mathbb{R})$. Then system (2.1) has at least one global weak solution $V(t,x)$. Furthermore, this weak solution possesses the following features:

    (a) For $(t,x)\in[0,\infty)\times\mathbb{R}$, there exists a positive constant $c = c(\|V_0\|_{H^1(\mathbb{R})})$ such that

    $$ \frac{\partial V(t,x)}{\partial x} \le \frac{2}{t} + c. \tag{3.1} $$

    (b) If $a,b\in\mathbb{R}$, $a<b$, then for any $0<\alpha_1<1$ and $T>0$, it holds that

    $$ \int_0^T\!\!\int_a^b \Big|\frac{\partial V(t,x)}{\partial x}\Big|^{2+\alpha_1}\,dx\,dt \le c_0, \tag{3.2} $$

    where the positive constant $c_0$ depends on $\alpha_1$, $k$, $T$, $a$, $b$ and $\|V_0\|_{H^1(\mathbb{R})}$.

    Proof. Utilizing (2.3), (2.5) and Lemma 2.5, we derive (1) and (2) in Definition 2.1. From Lemma 2.17, we have

    $$ p_\varepsilon^2 \rightharpoonup p^2 \ \text{in } L^1_{\mathrm{loc}}([0,\infty)\times\mathbb{R}). $$

    Employing Lemmas 2.5 and 2.6 shows that $V$ is a global weak solution to system (2.2). Making use of Lemmas 2.3 and 2.4 gives rise to inequalities (3.2) and (3.1), respectively. The proof is finished.

    In this work, we studied the rotation-Camassa-Holm (RCH) model (1.1), a nonlinear equation describing the motion of equatorial water waves under the Coriolis effect due to the Earth's rotation. The presence of the term $V_{xxx}$ in the RCH equation leads to difficulties in establishing estimates of solutions for the viscous approximation. To overcome these difficulties, we established a higher-order integrability estimate and showed that $\frac{\partial V(t,x)}{\partial x}$ possesses an upper bound. Using these two estimates and the viscous approximation technique, we proved the existence of $H^1(\mathbb{R})$ global weak solutions to the RCH equation without the sign condition.

    The authors are very grateful to the reviewers for their valuable and meaningful comments on the paper.

    The authors declare no conflicts of interest.



    [1] M. Belkin, P. Niyogi, V. Sindhwani, Manifold regularization: A geometric framework for learning from labeled and unlabeled examples, J. Mach. Learn. Res., 7 (2006), 2399–2434.
    [2] O. Chapelle, B. Schlkopf, A. Zien, Semi-supervised learning, Handbook on Neural Information Processing, Springer Berlin Heidelberg, 2013.
    [3] Y. Wang, Y. Meng, Y. Li, S. Chen, Z. Fu, H. Xue, Semi-supervised manifold regularization with adaptive graph construction, Pattern Recog. Lett., 98 (2017), 90–95. https://doi.org/10.1016/j.patrec.2017.09.004 doi: 10.1016/j.patrec.2017.09.004
    [4] Z. Kang, H. Pan, S. C. Hoi, Z. Xu, Robust graph learning from noisy data, IEEE T. Cybernetics, 50 (2019), 1833–1843. https://doi.org/10.1109/TCYB.2018.2887094 doi: 10.1109/TCYB.2018.2887094
    [5] Z. Kang, C. Peng, Q. Cheng, X. Liu, X. Peng, Z. Xu, et al., Structured graph learning for clustering and semi-supervised classification, Pattern Recog., 110 (2021), 107627. https://doi.org/10.1016/j.patcog.2020.107627 doi: 10.1016/j.patcog.2020.107627
    [6] Z. Kang, X. Lu, Y. Lu, C. Peng, W. Chen, Z. Xu, Structure learning with similarity preserving, Neural Networks, 129 (2020), 138–148. https://doi.org/10.1016/j.neunet.2020.05.030 doi: 10.1016/j.neunet.2020.05.030
    [7] T. Yang, C. E. Priebe, The effect of model misspecification on semi-supervised classification, IEEE T. Pattern Anal., 33 (2011), 2093–2103. https://doi.org/10.1109/TPAMI.2011.45 doi: 10.1109/TPAMI.2011.45
    [8] Y. F. Li, Z. H. Zhou, Towards making unlabeled data never hurt, IEEE T. Pattern Anal., 37 (2015), 175–188. https://doi.org/10.1109/TPAMI.2014.2299812 doi: 10.1109/TPAMI.2014.2299812
    [9] Y. F. Li, Z. H. Zhou, Improving semi-supervised support vector machines through unlabeled instances selection, In: Proceedings of the AAAI Conference on Artificial Intelligence, 25 (2011), 386–391. https://doi.org/10.1609/aaai.v25i1.7920
    [10] Y. Wang, S. Chen, Z. H. Zhou, New semi-supervised classification method based on modified cluster assumption, IEEE T. Neur. Net. Lear., 23 (2012), 689–702. https://doi.org/10.1109/TNNLS.2012.2186825 doi: 10.1109/TNNLS.2012.2186825
    [11] Y. Wang, S. Chen, Safety-aware semi-supervised classification, IEEE T. Neur. Net. Lear., 24 (2013), 1763–1772. https://doi.org/10.1109/TNNLS.2013.2263512 doi: 10.1109/TNNLS.2013.2263512
    [12] M. Kawakita, J. Takeuchi, Safe semi-supervised learning based on weighted likelihood, Neural Networks, 53 (2014), 146–164. https://doi.org/10.1016/j.neunet.2014.01.016 doi: 10.1016/j.neunet.2014.01.016
    [13] H. T. Gan, Z. Z. Luo, M. Meng, Y. Ma, Q. She, A risk degree-based safe semi-supervised learning algorithm, Int. J. Mach. Learn. Cyb., 7 (2015), 1–10. https://doi.org/10.1007/s13042-015-0416-8 doi: 10.1007/s13042-015-0416-8
    [14] H. T. Gan, Z. Luo, Y. Sun, X. Xi, N. Sang, R. Huang, Towards designing risk-based safe Laplacian regularized least squares, Expert Syst. Appl., 45 (2016), 1–7. https://doi.org/10.1016/j.eswa.2015.09.017 doi: 10.1016/j.eswa.2015.09.017
    [15] H. T. Gan, Z. Li, Y. Fan, Z. Luo, Dual learning-based safe semi-supervised learning, IEEE Access, 6 (2017), 2615–2621. https://doi.org/10.1109/ACCESS.2017.2784406 doi: 10.1109/ACCESS.2017.2784406
    [16] H. T. Gan, Z. Li, W. Wu, Z. Luo, R. Huang, Safety-aware graph-based semi-supervised learning, Expert Syst. Appl., 107 (2018), 243–254. https://doi.org/10.1016/j.eswa.2018.04.031 doi: 10.1016/j.eswa.2018.04.031
    [17] N. Sang, H. T. Gan, Y. Fan, W. Wu, Z. Yang, Adaptive safety degree-based safe semi-supervised learning, Int. J. Mach. Learn. Cyb., 10 (2018), 1101–1108. https://doi.org/10.1007/s13042-018-0788-7 doi: 10.1007/s13042-018-0788-7
    [18] Y. Wang, Y. Meng, Z. Fu, H. Xue, Towards safe semi-supervised classification: Adjusted cluster assumption via clustering, Neural Process. Lett., 46 (2017), 1031–1042. https://doi.org/10.1007/s11063-017-9607-5 doi: 10.1007/s11063-017-9607-5
    [19] H. T. Gan, G. Li, S. Xia, T. Wang, A hybrid safe semi-supervised learning method, Expert Syst. Appl., 149 (2020), 1–9. https://doi.org/10.1016/j.eswa.2020.113295 doi: 10.1016/j.eswa.2020.113295
    [20] Y. T. Li, J. T. Kwok, Z. H. Zhou, Towards safe semi-supervised learning for multivariate performance measures, In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI'16), AAAI Press, 30 (2016), 1816–1822. https://doi.org/10.1609/aaai.v30i1.10282
    [21] G. Huang, S. Song, J. N. D. Gupta, C. Wu, Semi-supervised and unsupervised extreme learning machines, IEEE T. Cybernetics, 44 (2017), 2405–2417. https://doi.org/10.1109/TCYB.2014.2307349 doi: 10.1109/TCYB.2014.2307349
    [22] Q. She, B. Hu, H. Gan, Y. Fan, T. Nguyen, T. Potter, et al., Safe semi-supervised extreme learning machine for EEG signal classification, IEEE Access, 6 (2018), 49399–49407. https://doi.org/10.1109/ACCESS.2018.2868713 doi: 10.1109/ACCESS.2018.2868713
    [23] H. Xu, X. Wang, J. Huang, F. Zhang, F. Chu, Semi-supervised multi-sensor information fusion tailored graph embedded low-rank tensor learning machine under extremely low labeled rate, Inform. Fusion, 105 (2024), 102222. https://doi.org/10.1016/j.inffus.2023.102222 doi: 10.1016/j.inffus.2023.102222
    [24] J. Huang, F. Zhang, B. Safaei, Z. Qin, F. Chu, The flexible tensor singular value decomposition and its applications in multisensor signal fusion processing, Mech. Syst. Signal Pr., 220 (2024), 111662. https://doi.org/10.1016/j.ymssp.2024.111662 doi: 10.1016/j.ymssp.2024.111662
    [25] G. B. Huang, Q. Y. Zhu, C. K. Siew, Extreme learning machine: Theory and applications, Neurocomputing, 70 (2006), 489–501. https://doi.org/10.1016/j.neucom.2005.12.126 doi: 10.1016/j.neucom.2005.12.126
    [26] G. B. Huang, X. J. Ding, H. M. Zhou, Optimization method based extreme learning machine for classification, Neurocomputing, 74 (2010), 155–163. https://doi.org/10.1016/j.neucom.2010.02.019 doi: 10.1016/j.neucom.2010.02.019
    [27] Z. Liu, Z. Lai, W. Ou, K. Zhang, R. Zheng, Structured optimal graph based sparse feature extraction for semi-supervised learning, Signal Process., 170 (2020), 107456. https://doi.org/10.1016/j.sigpro.2020.107456 doi: 10.1016/j.sigpro.2020.107456
    [28] M. Luo, F. Nie, X. Chang, Y. Yang, A. G. Hauptmann, Q. Zheng, Adaptive unsupervised feature selection with structure regularization, IEEE T. Neur. Net. Lear., 29 (2018), 944–956. https://doi.org/10.1109/TNNLS.2017.2650978 doi: 10.1109/TNNLS.2017.2650978
    [29] J. S. Wu, M. X. Song, W. Min, J. H. Lai, W. S. Zheng, Joint adaptive manifold and embedding learning for unsupervised feature selection, Pattern Recog., 112 (2020), 107742. https://doi.org/10.1016/j.patcog.2020.107742 doi: 10.1016/j.patcog.2020.107742
    [30] F. Nie, W. Zhu, X. Li, Structured graph optimization for unsupervised feature selection, IEEE T. Knowl. Data En., 33 (2019), 1210–1222. https://doi.org/10.1109/TKDE.2019.2937924 doi: 10.1109/TKDE.2019.2937924
    [31] F. Nie, S. J. Shi, X. Li, Semi-supervised learning with auto-weighting feature and adaptive graph, IEEE T. Knowl. Data En., 32 (2019), 1167–1178. https://doi.org/10.1109/TKDE.2019.2901853 doi: 10.1109/TKDE.2019.2901853
    [32] Q. Li, L. Jing, J. Yu, Adaptive graph constrained NMF for semi-supervised learning, In: Iapr International Workshop on Partially Supervised Learning, Springer, Berlin, Heidelberg, 2013, 36–48. https://doi.org/10.1007/978-3-642-40705-5_4
    [33] Y. Yuan, X. Li, Q. Wang, F. Nie, A semi-supervised learning algorithm via adaptive Laplacian graph, Neurocomputing, 426 (2020), 162–173. https://doi.org/10.1016/j.neucom.2020.09.069 doi: 10.1016/j.neucom.2020.09.069
    [34] Z. Liu, K. Shi, K. Zhang, W. Ou, L. Wang, Discriminative sparse embedding based on adaptive graph for dimension reduction, Eng. Appl. Artif. Intel., 94 (2020), 103758. https://doi.org/10.1016/j.engappai.2020.103758 doi: 10.1016/j.engappai.2020.103758
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)