Throughout the paper, unless otherwise stated, let C and Q be nonempty closed convex subsets of real Hilbert spaces H1 and H2, respectively, and let PC and PQ be the orthogonal projections onto C and Q, respectively. Let B:H1→2H1 and D:H2→2H2 be two maximal monotone mappings and A:H1→H2 be a bounded linear operator with adjoint A∗. Consider first the problem: Find
x∗∈C such that Ax∗∈Q, | (1.1) |
which is called the split feasibility problem (SFP). It was first introduced by Censor and Elfving [1] in finite-dimensional Hilbert spaces to model inverse problems arising in medical image reconstruction. Since then, the SFP has received much attention for its applications in signal processing, image reconstruction, approximation theory, control theory, biomedical engineering, communications, and geophysics. For details, the readers can refer to [1,2,3,4,5] and the references therein. Early projection-type algorithms for the SFP require, at each iteration, computing the inverse of a matrix (or its largest eigenvalue), which is time-consuming and hard to carry out in practice. To overcome the disadvantage of computing a matrix inverse, in 2002, Byrne [6] presented the following CQ algorithm:
xn+1=PC(xn−γnAT(I−PQ)Axn), |
where A is a matrix, AT is the transpose of A, and γn∈(0,2/L) with L the largest eigenvalue of the matrix ATA.
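To make the iteration concrete, the following is a minimal Python/NumPy sketch of the CQ algorithm in the finite-dimensional case. The sets C and Q are taken as boxes so that the projections have closed forms, and the matrix A, starting point, and tolerance are illustrative assumptions, not data from any cited work.

```python
import numpy as np

def project_box(x, lo, hi):
    # Orthogonal projection onto the box {x : lo <= x <= hi}.
    return np.clip(x, lo, hi)

def cq_algorithm(A, x0, proj_C, proj_Q, max_iter=500, tol=1e-8):
    # Byrne's CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n),
    # with a fixed step gamma in (0, 2/L), L the largest eigenvalue of A^T A.
    L = np.linalg.norm(A, 2) ** 2          # largest eigenvalue of A^T A
    gamma = 1.0 / L                        # any value in (0, 2/L) is admissible
    x = x0.astype(float)
    for _ in range(max_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))     # A^T (I - P_Q) A x
        x_new = proj_C(x - gamma * grad)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative data: C = [-1, 1]^3, Q = [0, 2]^2, A a random 2x3 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
x_star = cq_algorithm(
    A, rng.standard_normal(3),
    proj_C=lambda x: project_box(x, -1.0, 1.0),
    proj_Q=lambda y: project_box(y, 0.0, 2.0),
)
print(x_star, A @ x_star)
```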
In 2011, Moudafi [7] first introduced the following problem: Find x∗∈H1 such that
0∈B(x∗) and 0∈D(Ax∗), | (1.2) |
which is called the split variational inclusion problem (for short, denoted by SVIP). It is clear that the SVIP includes the SFP as a special case. We denote the solution set of the SVIP by SVIP(B,D):={x∗∈C|0∈B(x∗),0∈D(Ax∗)}. The SVIP is at the core of modeling many inverse problems arising from phase retrieval and other real-world problems, for instance, in sensor networks, in computerized tomography, and in data compression [8,9]. In recent years, there has been tremendous interest in solving the SVIP, and many researchers have constructed a large number of methods to solve this problem [10,11,12,13,14,15,16].
In 2014, Yang and Zhao [17] introduced the following problem: Find x∗∈H1 such that
x∗∈∩∞i=1B−1i(0) and Ax∗∈∩∞i=1D−1i(0), | (1.3) |
which is called the generalized split variational inclusion problem (for short, denoted by GSVIP1), where for each i∈N, Bi:H1→2H1 and Di:H2→2H2 are two families of maximal monotone mappings. To solve the GSVIP1, they introduced the following algorithm:
xn+1=anxn+bnf(xn)+∞∑i=1cn,iJBiβn,i(I−γn,iA∗(I−JDiβn,i)A)xn,n≥0, |
where for each i∈N, the sequences {an},{bn},{cn,i}⊂(0,1) with an+bn+∞∑i=1cn,i=1, {βn,i}⊂(0,∞), {γn,i}⊂(0,2/(‖A‖2+1)), and f is a k-contraction mapping of H1; the strong convergence of the above algorithm was proved under mild assumptions.
Ogbuisi et al. [18] introduced a new inertial algorithm to solve the following problem: Find x∗∈H1 such that
x∗∈∩si=1B−1i(0) and Ax∗∈∩tj=1D−1j(0), | (1.4) |
which is also called the generalized split variational inclusion problem (for convenience, denoted by GSVIP2), where for s,t∈N, Bi:H1→2H1(i=1,⋯,s) and Dj:H2→2H2(j=1,⋯,t) are two finite families of maximal monotone mappings. We denote the solution set of the GSVIP2 by GSVIP2(Bi,Dj):={x∗∈C|x∗∈∩si=1B−1i(0) and Ax∗∈∩tj=1D−1j(0)}. The following algorithm is introduced in [18] to solve the GSVIP2: Choose any initial values u0,u1∈H1 and λ>0. Assume un−1,un have been known. Compute
xn=un+θn(un−un−1), zn=JBinλxn, yn=A∗(I−JDjnλ)Axn, n≥1, where in∈{i|max1≤i≤s‖xn−JBiλxn‖}, jn∈{j|max1≤j≤t‖Axn−JDjλAxn‖}. If ‖xn+yn−zn‖=0, then stop (xn is the desired solution); otherwise, continue to compute un+1=(1−αn)un+αn[xn−τn(xn+yn−zn)], |
where αn∈(0,1),θn∈[0,1],τn=γn(‖xn−zn‖2+‖yn‖2)/(2‖xn+yn−zn‖2),γn>0, and they showed that the sequences generated by the above algorithm converge weakly to a solution of their problem.
In addition, the equilibrium problem (for short, EP) was first proposed by Nikaido and Isoda [19] in 1955, which is described as: Find u∗∈C such that
f(u∗,v)≥0,∀v∈C, | (1.5) |
where C is a nonempty closed convex subset of a real Hilbert space H and f:C×C→R is a bifunction. We denote the solution set of the EP by EP(f):={u∗∈C|f(u∗,v)≥0,∀v∈C}. After the publication of the paper by Blum and Oettli [20] in 1994, the EP attracted wide attention, and many scholars published a large number of articles on this problem. The EP includes some important problems, such as optimization problems, saddle point problems, variational inequalities, and Nash equilibria, as special cases.
For solving the monotone EP, Korpelevich [21] first extended the extragradient method (double projection) of the saddle point problem to the monotone EP, and many algorithms [22,23,24,25,26] have been developed for solving the EP. Santos and Scheimberg [27] proposed an inexact projection subgradient method to solve the EP involving paramonotone bifunctions in finite dimensional space. It is noted that this algorithm needs only one projection per iteration, and its weak convergence was proved under mild assumptions.
In 2016, Yen et al. [28] studied the SFP involving the paramonotone equilibrium problem and a convex optimization problem, which is formulated as: Find u∗∈C such that
f(u∗,v)≥0,∀v∈C and g(Au∗)≤g(y),∀y∈H2, | (1.6) |
where g is a proper lower semicontinuous convex function on H2. They introduced the following algorithm:
{yn=PC(xn−αnηn),zn=PC(yn−μnA∗(I−proxλg)Ayn),xn+1=anxn+(1−an)zn, |
for each xn∈C, ηn∈∂εn2f(xn,xn) and αn=βn/γn, where γn=max{δn,‖ηn‖} and
μn={0,if ∇h(yn)=0;ρnh(yn)/‖∇h(yn)‖2,if ∇h(yn)≠0, |
where the function h and the selection of the sequences {αn},{δn},{βn},{εn} and {ρn} are described in Algorithm 3.1 of [28]. Moreover, they proved the strong convergence of the algorithm under mild assumptions.
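The second projection step of this scheme only requires the proximity operator proxλg. As an illustration (not taken from [28]), the following minimal sketch evaluates proxλg for the assumed choice g=‖⋅‖1, for which the operator reduces to componentwise soft-thresholding, and checks the defining minimization property numerically.

```python
import numpy as np

def prox_l1(y, lam):
    # prox_{lam*||.||_1}(y) = argmin_x ||x||_1 + (1/(2*lam)) * ||x - y||^2,
    # solved componentwise by soft-thresholding.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Crude numerical check of the prox property around a random point.
rng = np.random.default_rng(1)
y, lam = rng.standard_normal(4), 0.3
p = prox_l1(y, lam)
obj = lambda x: np.sum(np.abs(x)) + np.sum((x - y) ** 2) / (2 * lam)
trials = p + 0.1 * rng.standard_normal((1000, 4))
assert all(obj(p) <= obj(t) + 1e-12 for t in trials)
print(p)
```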
The problems of finding common solutions of the set of fixed points of nonlinear mappings and the set of solutions of optimization problems with its related problems have been considered by some authors (for instance, see [29,30,31,32,33] and the references therein). The motivation for studying such a common solution problem lies in its potential application to mathematical models whose constraints can be expressed as fixed point problems and optimization problems. This arises in practical problems, such as signal processing, network resource allocation, and image recovery (see, for instance, [34,35] and the references therein).
Tan, Qin and Yao [36] proposed four self-adaptive inertial algorithms with strong convergence to solve the split variational inclusion problem in real Hilbert spaces. Izuchukwu et al. [37] first proposed and studied several strongly convergent versions of the forward-reflected-backward splitting method of Malitsky and Tam for finding a zero of the sum of two monotone operators in a real Hilbert space, which require only one forward evaluation of the single-valued operator and one backward evaluation of the set-valued operator at each iteration. They also developed inertial versions of their methods with strong convergence when the set-valued operator is maximal monotone and the single-valued operator is Lipschitz continuous and monotone; moreover, they discussed some examples from image restoration and optimal control regarding the implementation of their methods in comparison with known related methods in the literature. Zhang and Wang [38] suggested a new inertial iterative algorithm for split null point and common fixed point problems. In [39], the authors focused on an inertial-viscosity approximation method for solving a split generalized equilibrium problem and a common fixed point problem in real Hilbert spaces; their algorithm was designed such that its strong convergence, established under mild conditions, does not require knowledge of the norm of the bounded linear operator underlying the split equilibrium problem. In [40], the authors studied split variational inclusion and fixed point problems using Bregman weak relatively nonexpansive mappings in p-uniformly convex smooth Banach spaces; they introduced an inertial shrinking projection self-adaptive iterative scheme for the problem and proved a strong convergence theorem.
Su et al. [41] constructed a multi-step inertial asynchronous sequential algorithm for common fixed point problems. Zheng et al. [42] considered a new fixed-time stability of a neural network to solve split convex feasibility problems.
Motivated and inspired by the above research work, we aim to find a common element of the solution set of the paramonotone equilibrium problem and that of the GSVIP2: Find u∗∈C such that
f(u∗,u)≥0,∀u∈C and 0∈∩si=1Bi(u∗),0∈∩tj=1Dj(Au∗), | (1.7) |
where s,t,Bi,Dj,f:C×C→R are as mentioned above. We denote the set of solutions of Problem (1.7) by
Γ:={x∗∈C|f(x∗,x)≥0,0∈∩si=1Bi(x∗),0∈∩tj=1Dj(Ax∗),∀x∈C}=GSVIP2(Bi,Dj)∩EP(f). |
It is easy to see that if Bi=0,Dj=0, then Problem (1.7) simplifies to the EP (1.5); if f=0, then Problem (1.7) simplifies to the GSVIP2 (1.4); if s=1,t=1, then Problem (1.7) changes into the following problem: Find x∗∈C such that
f(x∗,y)≥0,∀y∈C and 0∈B(x∗),0∈D(Ax∗), | (1.8) |
If, for s,t∈N, i∈{1,2,⋅⋅⋅,s}, j∈{1,2,⋅⋅⋅,t}, we take Bi=NCi and Dj=NQj in Problem (1.7), where NCi and NQj are the normal cones of nonempty, closed and convex subsets Ci⊆H1 and Qj⊆H2, respectively, then we obtain the following combination of the multiple-sets split feasibility problem and the paramonotone equilibrium problem: Find u∗∈C such that
f(u∗,u)≥0,∀u∈C and u∗∈∩si=1Ci,Au∗∈∩tj=1Qj. | (1.9) |
Thus, it can be seen that Problem (1.7) considered in this paper is quite general and contains many known and new mathematical models of common element problems, such as Problems (1.1)–(1.6), (1.8) and (1.9), as special cases. We are committed to establishing the strong convergence of a self-adaptive viscosity-type inertial algorithm for the common solutions of Problem (1.7). The advantages of the suggested iterative algorithm are that (1) the design of the algorithm is self-adaptive and the inertial term can speed up its convergence, (2) the strong convergence analysis does not require a prior estimate of the norm of the bounded operator, and (3) the strong convergence of the iterative algorithm is established under the weak assumption of paramonotonicity of the related mappings. Our results improve and generalize many known results in the literature [18,27].
In this section, we give some basic concepts, properties, and notations that will be used in the sequel. Let C be a nonempty closed and convex subset of a real Hilbert space H endowed with the inner product ⟨⋅,⋅⟩ and the induced norm ‖⋅‖. For each u,v∈H and a∈R, we have the following facts:
ⅰ)‖u+v‖2=‖u‖2+2⟨u,v⟩+‖v‖2;
ⅱ)‖u+v‖2≤‖u‖2+2⟨v,u+v⟩;
ⅲ)‖au+(1−a)v‖2=a‖u‖2+(1−a)‖v‖2−a(1−a)‖u−v‖2.
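These identities are used repeatedly in the convergence analysis below. As a small illustration (not part of the original argument), the following sketch checks fact ⅲ) numerically for random vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.standard_normal(5), rng.standard_normal(5)
a = 0.37
lhs = np.linalg.norm(a * u + (1 - a) * v) ** 2
rhs = (a * np.linalg.norm(u) ** 2 + (1 - a) * np.linalg.norm(v) ** 2
       - a * (1 - a) * np.linalg.norm(u - v) ** 2)
assert abs(lhs - rhs) < 1e-10   # identity iii) holds up to rounding error
```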
Definition 2.1 [18] (1) F:H→H is nonexpansive, if ‖Fu−Fv‖≤‖u−v‖,∀u,v∈H.
(2) F:H→H is firmly nonexpansive, if ⟨Fu−Fv,u−v⟩≥‖Fu−Fv‖2,∀u,v∈H.
Definition 2.2 [18] A mapping F:C→C is said to be demiclosed if, for any sequence {un}⊂C that converges weakly to u and such that {Fun} converges strongly to v, we have F(u)=v.
Lemma 2.1 [43] Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let F:C→C be a nonexpansive mapping. Then, I−F is demiclosed at 0.
Lemma 2.2 [44] Let B be a maximal monotone mapping on a Hilbert space H. For any r>0, define the resolvent JBr=(I+rB)−1. Then the following hold:
(1) JBr is a single-valued and firmly nonexpansive mapping.
(2) D(JBr)=H, and Fix(JBr)=B−1(0), where Fix(JBr) stands for the fixed point set of JBr.
Lemma 2.3 [18] Let B:H→2H be a maximal monotone mapping, then the associated resolvent JBr for some r>0 has the following characterization:
⟨u−JBr(u),u−v⟩≥‖u−JBr(u)‖2,∀u∈H,v∈Fix(JBr). |
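For a linear operator B(u)=Mu with M positive semidefinite (a maximal monotone operator), the resolvent JBr(u)=(I+rM)−1u can be evaluated by solving a linear system. The following minimal sketch, with an assumed matrix M, does this and checks the inequalities of Lemmas 2.2 and 2.3 numerically.

```python
import numpy as np

def resolvent_linear(M, r, u):
    # J_r^B(u) = (I + r M)^{-1} u for the maximal monotone operator B(u) = M u.
    n = M.shape[0]
    return np.linalg.solve(np.eye(n) + r * M, u)

# Assumed data: M symmetric positive semidefinite, so B is maximal monotone.
rng = np.random.default_rng(3)
G = rng.standard_normal((4, 4))
M = G.T @ G
r = 0.8
u, v = rng.standard_normal(4), rng.standard_normal(4)
Ju, Jv = resolvent_linear(M, r, u), resolvent_linear(M, r, v)

# Firm nonexpansiveness: <Ju - Jv, u - v> >= ||Ju - Jv||^2  (Lemma 2.2).
assert (Ju - Jv) @ (u - v) >= np.linalg.norm(Ju - Jv) ** 2 - 1e-10
# Lemma 2.3 with v in Fix(J_r^B) = B^{-1}(0) = {0}, since M is nonsingular here.
assert (u - Ju) @ (u - 0) >= np.linalg.norm(u - Ju) ** 2 - 1e-10
```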
Lemma 2.4 [45] Let {υn} and {δn} be nonnegative sequences of real numbers satisfying υn+1≤υn+δn with ∞∑n=1δn<+∞. Then, the sequence {υn} is convergent.
Lemma 2.5 [46] Let H be a real Hilbert space, {an} be a sequence of real numbers such that 0<a<an<b<1 for all n≥1 and {bn},{dn} be the sequences in H such that limsupn→∞‖bn‖≤c,limsupn→∞‖dn‖≤c, and for some c>0,limsupn→∞‖anbn+(1−an)dn‖=c. Then limn→∞‖bn−dn‖=0.
In this section, in order to prove the convergence of the algorithm, the following conditions are assumed:
(A1) Let H1 and H2 be two real Hilbert spaces, C be a nonempty closed convex subset of H1,A:H1→H2 be a linear and bounded operator.
(A2) For s,t∈N,i∈{1,2,⋅⋅⋅,s},j∈{1,2,⋅⋅⋅,t},Bi:H1→2H1,Dj:H2→2H2 are two families of maximal monotone mappings.
(A3) The bifunction f:C×C→R satisfies the following conditions:
(B1) For each u∈C,f(u,u)=0, and f(u,⋅) is lower semicontinuous and convex on C, f(⋅,u) is upper semicontinuous and convex on C.
(B2) ∂λ2f(u,u) is nonempty for any λ>0 and u∈C, and it is bounded on any bounded subset of C, where ∂λ2f(u,u) denotes the λ-subdifferential of the convex function f(u,⋅) at u (see the sketch after these assumptions), that is
∂λ2f(u,u):={η∈H:⟨η,v−u⟩+f(u,u)≤f(u,v)+λ,∀v∈C}. |
(B3) f is pseudo-monotone on C with respect to every solution of the EP, that is, f(u,u∗)≤0 for all u∈C and u∗∈EP(f). Moreover, f satisfies the following condition, which is called the paramonotonicity property: u∗∈EP(f),v∈C,f(u∗,v)=f(v,u∗)=0⇒v∈EP(f).
(A4) Γ≠∅.
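To illustrate assumption (B2), consider the bifunction f(u,v)=u1^5(v1−u1)+u2^3(v2−u2) used in Example 3.1 below: f(u,⋅) is affine in v, so its gradient (u1^5,u2^3) belongs to ∂2f(u,u) and hence to ∂λ2f(u,u) for every λ>0. The following minimal sketch (an illustration only) checks the defining inequality numerically.

```python
import numpy as np

def f(u, v):
    # Bifunction of Example 3.1: f(u, v) = u1^5 (v1 - u1) + u2^3 (v2 - u2).
    return u[0] ** 5 * (v[0] - u[0]) + u[1] ** 3 * (v[1] - u[1])

def subgrad(u):
    # f(u, .) is affine, so its (lambda-)subdifferential at u contains its gradient.
    return np.array([u[0] ** 5, u[1] ** 3])

# Check the inequality  <eta, v - u> + f(u, u) <= f(u, v) + lambda
# (here with lambda = 0) on random points of C = [-10, 10]^2.
rng = np.random.default_rng(4)
u = rng.uniform(-10, 10, 2)
eta = subgrad(u)
for _ in range(100):
    v = rng.uniform(-10, 10, 2)
    assert eta @ (v - u) + f(u, u) <= f(u, v) + 1e-9
```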
Now, we introduce a self-adaptive viscosity-type inertial algorithm to solve Problem (1.7), which is described as follows.
Algorithm 3.1. Initialization. Pick u0,u1∈H1, r>0, and θ∈[0,1). For any n∈N, the sequences {ρn},{an},{βn},{λn},{δn},{εn}⊂[0,∞) satisfy the following conditions:
ρn>ρ>0,0<a<an<b<1,βn>0,λn>0, |
∞∑n=1εn<+∞,limn→∞an=1/2,∞∑n=1βn/ρn=+∞,∞∑n=1β2n<+∞, |
∞∑n=1βnλn/ρn<+∞,0<liminfn→∞δn≤limsupn→∞δn<4. |
Step 1. Assume un−1,un have been known. Choose αn such that 0<αn≤ˉαn, where
ˉαn={min{θ,εn/‖un−un−1‖},if un≠un−1;θ,otherwise. |
Compute
xn=un+αn(un−un−1). | (3.1) |
Step 2. Compute
yn=JBinrxn, | (3.2) |
zn=A∗(I−JDjnr)Axn,∀n≥1, | (3.3) |
where
in∈{i|max1≤i≤s‖xn−JBirxn‖},jn∈{j|max1≤j≤t‖Axn−JDjrAxn‖}, | (3.4) |
if
‖xn+zn−yn‖=0, | (3.5) |
then stop; otherwise, continue and compute vn as follows
vn=PC(xn−ξn(xn+zn−yn)), | (3.6) |
where
ξn=δn(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)/(2‖xn−yn+zn‖2),δn>0. |
Step 3. Take ηn∈∂λn2f(vn,vn) and define τn=βn/γn with γn=max{ρn,‖ηn‖}. Compute
wn=PC(vn−τnηn). | (3.7) |
Step 4. Compute
un+1=anun+(1−an)wn. | (3.8) |
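To make the order of the steps explicit, the following is a minimal Python/NumPy sketch of Algorithm 3.1 for s=t=1, using the data of Example 3.1 below and one admissible (but otherwise arbitrary) choice of the parameter sequences. It is not the authors' implementation: the starting points, iteration count, and parameter choices are illustrative assumptions, the resolvents of the linear operators B and D are evaluated by solving linear systems, and C is a box so that PC is a componentwise clip.

```python
import numpy as np

# Assumed data (Example 3.1): B(u) = M_B u, D(w) = M_D w, A linear, C = [-10, 10]^2,
# and the bifunction f(u, v) = u1^5 (v1 - u1) + u2^3 (v2 - u2).
M_B = np.diag([5.0, 2.0])
M_D = np.diag([3.0, 6.0])
A = np.array([[1.0, 2.0], [3.0, 4.0]])
lo, hi = -10.0, 10.0

proj_C = lambda x: np.clip(x, lo, hi)
res_B = lambda x, r: np.linalg.solve(np.eye(2) + r * M_B, x)   # J_r^B
res_D = lambda y, r: np.linalg.solve(np.eye(2) + r * M_D, y)   # J_r^D
subgrad_f = lambda v: np.array([v[0] ** 5, v[1] ** 3])          # eta_n

# Parameter sequences chosen to satisfy the conditions of Algorithm 3.1
# (an assumption; other admissible choices are possible).
theta, r = 0.5, 1.0
rho = lambda n: 1.0
a_seq = lambda n: 0.5 + 0.1 / (n + 1)     # a_n -> 1/2, a_n in (1/2, 3/5)
beta = lambda n: 1.0 / (n + 1)            # sum beta_n/rho_n = inf, sum beta_n^2 < inf
lam = lambda n: 1.0 / (n + 1) ** 2        # sum beta_n*lam_n/rho_n < inf
eps = lambda n: 1.0 / (n + 1) ** 2        # sum eps_n < inf
delta = lambda n: 2.0                     # 0 < delta_n < 4

u_prev = np.array([4.0, -3.0])
u = np.array([2.0, 5.0])
for n in range(1, 2001):
    # Step 1: inertial extrapolation with alpha_n <= min{theta, eps_n/||u_n - u_{n-1}||}.
    diff = np.linalg.norm(u - u_prev)
    alpha = theta if diff == 0 else min(theta, eps(n) / diff)
    x = u + alpha * (u - u_prev)
    # Step 2: resolvent step and self-adaptive relaxation xi_n.
    y = res_B(x, r)
    d = A @ x - res_D(A @ x, r)           # (I - J_r^D) A x
    z = A.T @ d
    denom = np.linalg.norm(x - y + z) ** 2
    if denom == 0:
        break                             # x already solves the GSVIP2 part
    xi = delta(n) * (np.linalg.norm(x - y) ** 2 + np.linalg.norm(d) ** 2) / (2 * denom)
    v = proj_C(x - xi * (x + z - y))
    # Step 3: subgradient step for the equilibrium part, tau_n = beta_n / gamma_n.
    eta = subgrad_f(v)
    gamma = max(rho(n), np.linalg.norm(eta))
    w = proj_C(v - (beta(n) / gamma) * eta)
    # Step 4: averaging.
    u_prev, u = u, a_seq(n) * u + (1 - a_seq(n)) * w
print("approximate solution:", u)         # expected near (0, 0) for this data
```

The sketch is meant only to make the order of the steps concrete; stopping criteria and parameter choices can be tuned for a given application.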
Several algorithms can be deduced from our Algorithm 3.1 for solving Problem (1.7) as follows: If αn=0 in Algorithm 3.1, we have the following self-adaptive viscosity-type method:
Algorithm 3.2. Choose any initial value u1∈H1.
Step 1. Compute
yn=JBinrun, |
zn=A∗(I−JDjnr)Aun,∀n≥1, |
where
in∈{i|max1≤i≤s‖un−JBirun‖},jn∈{j|max1≤j≤t‖Aun−JDjrAun‖}, |
if ‖un+zn−yn‖=0, then stop; otherwise, continue to compute vn as follows
vn=PC(un−ξn(un+zn−yn)). |
Step 2. Take ηn∈∂λn2f(vn,vn) and define τn=βn/γn with γn=max{ρn,‖ηn‖}. Compute
un+1=PC(vn−τnηn), |
where ξn=δn(‖un−yn‖2+‖(I−JDjnr)Aun‖2)/(2‖un−yn+zn‖2),δn>0, and θ,r,{ρn},{an},{βn},{λn},{εn} are updated by Algorithm 3.1.
If s=1,t=1 then Problem (1.7) reduces to Problem (1.8), and we consider the following Algorithm 3.3 corresponding to Algorithm 3.1 for computing the solution of Problem (1.8):
Algorithm 3.3. Choose any initial value u0,u1∈H1.
Step 1. Assume un−1,un have been known. Compute
xn=un+αn(un−un−1). |
Step 2. Compute
yn=JBrxn, |
zn=A∗(I−JDr)Axn,∀n≥1, |
if ‖xn+zn−yn‖=0, then stop; otherwise, continue to compute vn as follows
vn=PC(xn−ξn(xn+zn−yn)). |
Step 3. Take ηn∈∂λn2f(vn,vn) and define τn=βn/γn with γn=max{ρn,‖ηn‖}. Compute
wn=PC(vn−τnηn). |
Step 4. Compute
un+1=anun+(1−an)wn, |
where θ,r,{ρn},{an},{βn},{λn},{εn},{αn},{ξn} are updated by Algorithm 3.1.
If Bi=NCi(i=1,2,⋯,s),Dj=NQj(j=1,2,⋯,t), then Problem (1.7) reduces to Problem (1.9), and Algorithm 3.1 reduces to the following method:
Algorithm 3.4. Choose any initial value u0,u1∈H1.
Step 1. Compute
xn=un+αn(un−un−1). |
Step 2. Compute
yn=PCinxn, |
zn=A∗(I−PQjn)Axn, |
where
in∈{i|max1≤i≤s‖xn−PCixn‖},jn∈{j|max1≤j≤t‖Axn−PQjAxn‖}, |
if ‖xn+zn−yn‖=0, then stop; otherwise, continue to compute vn as follows
vn=PC(xn−ξn(xn+zn−yn)). |
Step 3. Take ηn∈∂λn2f(vn,vn) and define τn=βn/γn with γn=max{ρn,‖ηn‖}. Compute
wn=PC(vn−τnηn). |
Step 4. Compute
un+1=anun+(1−an)wn, |
where ξn=δn(‖xn−yn‖2+‖(I−PQjn)Axn‖2)/(2‖xn−yn+zn‖2),δn>0, and θ,{ρn},{an},{βn},{λn},{εn},{αn} are updated by Algorithm 3.1.
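In Algorithm 3.4 the resolvents reduce to metric projections, since the resolvent of a normal cone NC is exactly PC for every r>0. The following minimal sketch (with assumed box and half-space constraint sets and an assumed matrix A) shows the only two projection building blocks such an implementation needs.

```python
import numpy as np

def proj_box(x, lo, hi):
    # P_{C_i} for a box C_i = {x : lo <= x <= hi}; equals the resolvent of N_{C_i}.
    return np.clip(x, lo, hi)

def proj_halfspace(y, a, b):
    # P_{Q_j} for a half-space Q_j = {y : <a, y> <= b}.
    viol = a @ y - b
    return y if viol <= 0 else y - (viol / (a @ a)) * a

# Illustrative use of the projections inside one iteration of Algorithm 3.4.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([5.0, -7.0])
y = proj_box(x, -1.0, 1.0)                                            # plays the role of y_n
z = A.T @ (A @ x - proj_halfspace(A @ x, np.array([1.0, 1.0]), 2.0))  # plays the role of z_n
print(y, z)
```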
If f=0, then Problem (1.7) reduces to Problem (1.4), and we consider the following inertial Algorithm 3.5 corresponding to Algorithm 3.1 for computing the solution of GSVIP2 (1.4):
Algorithm 3.5. Choose any initial value u0,u1∈H1.
Step 1. Assume un−1,un have been known. Compute
xn=un+αn(un−un−1). |
Step 2. Compute
yn=JBinrxn, |
zn=A∗(I−JDjnr)Axn,∀n≥1, |
where
in∈{i|max1≤i≤s‖xn−JBirxn‖},jn∈{j|max1≤j≤t‖Axn−JDjrAxn‖}, |
if ‖xn+zn−yn‖=0, then stop; otherwise, continue to compute vn as follows
vn=xn−ξn(xn+zn−yn). |
Step 3. Compute
un+1=anun+(1−an)vn, |
where θ,r,{an},{αn},{ξn} are updated by Algorithm 3.1.
If Bi=0,Dj=0, then Problem (1.7) reduces to Problem (1.5), and hence we consider the following inertial Algorithm 3.6 corresponding to Algorithm 3.1 for computing the solution of EP (1.5):
Algorithm 3.6. Initialization. Choose any initial value u0,u1∈H1.
Step 1. Compute
xn=un+αn(un−un−1). |
Step 2. Take ηn∈∂λn2f(xn,xn) and define τn=βn/γn with γn=max{ρn,‖ηn‖}. Compute
wn=PC(xn−τnηn). |
Step 3. Compute
un+1=anun+(1−an)wn, |
where {αn},{ρn},{an},{βn},{λn} are updated by Algorithm 3.1.
In order to obtain our major results, we also need the following lemmas.
Lemma 3.1. [27] For any n≥1, the following inequalities hold:
(1)τn‖ηn‖≤βn. (2)‖wn−vn‖≤βn.
Lemma 3.2. [18] The equality (3.5) holds if and only if xn is a solution of GSVIP2(Bi,Dj).
Theorem 3.1. Suppose Assumptions (A1–A4) hold. Then the sequence {un} generated by Algorithm 3.1 strongly converges to a solution of Problem (1.7).
Proof. We divide the proof into the following several steps.
Step 1. The sequence {‖un−u∗‖2} is convergent for every u∗∈Γ; consequently, the sequence {un} is bounded.
Indeed, for u∗∈Γ, we have
⟨xn+zn−yn,xn−u∗⟩≥‖xn−yn‖2+‖(I−JDjnr)Axn‖2. | (3.9) |
From (3.9), we get
‖vn−u∗‖2≤‖xn−ξn(xn+zn−yn)−u∗‖2=‖xn−u∗‖2−2ξn⟨xn−u∗,xn+zn−yn⟩+ξ2n‖xn+zn−yn‖2≤‖xn−u∗‖2−2ξn(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)+ξ2n‖xn+zn−yn‖2≤‖xn−u∗‖2−δn(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)^2/‖xn−yn+zn‖2+(δ2n/4)(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)^2/‖xn−yn+zn‖2=‖xn−u∗‖2−δn(1−δn/4)(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)^2/‖xn−yn+zn‖2. | (3.10) |
From 0<liminfn→∞δn≤limsupn→∞δn<4, we have (1−δn/4)>0 and
‖vn−u∗‖2≤‖xn−u∗‖2. | (3.11) |
From the definition of xn, we have
‖xn−u∗‖2=‖un+αn(un−un−1)−u∗‖2=‖un−u∗‖2+2αn⟨un−u∗,un−un−1⟩+α2n‖un−un−1‖2=‖un−u∗‖2+αn(‖un−u∗‖2+‖un−un−1‖2−‖un−1−u∗‖2)+α2n‖un−un−1‖2=‖un−u∗‖2+αn(‖un−u∗‖2−‖un−1−u∗‖2)+αn(1+αn)‖un−un−1‖2≤‖un−u∗‖2+αn(‖un−u∗‖2−‖un−1−u∗‖2)+2αn‖un−un−1‖2≤‖un−u∗‖2+αn(‖un−u∗‖+‖un−1−u∗‖)‖un−un−1‖+2αn‖un−un−1‖2=‖un−u∗‖2+αn(‖un−u∗‖+‖un−1−u∗‖+2‖un−un−1‖)‖un−un−1‖=‖un−u∗‖2+αnc1‖un−un−1‖, | (3.12) |
where c1=‖un−u∗‖+‖un−1−u∗‖+2‖un−un−1‖. By (3.11) and (3.12), we have
‖vn−u∗‖2≤‖un−u∗‖2+αnc1‖un−un−1‖. | (3.13) |
Noting that
‖wn−u∗‖2=‖wn−vn+vn−u∗‖2≤‖vn−u∗‖2+2⟨vn−wn,u∗−wn⟩. | (3.14) |
By the definition of wn and the property of the projection, we have ⟨wn−vn+τnηn,u∗−wn⟩≥0, so ⟨τnηn,u∗−wn⟩≥⟨vn−wn,u∗−wn⟩. Combining this with (3.14), we obtain
‖wn−u∗‖2≤‖vn−u∗‖2+2⟨τnηn,u∗−wn⟩=‖vn−u∗‖2+2⟨τnηn,u∗−vn⟩+2⟨τnηn,vn−wn⟩. | (3.15) |
It follows from ηn∈∂λn2f(vn,vn) that f(vn,u∗)−f(vn,vn)≥⟨ηn,u∗−vn⟩−λn. Then
f(vn,u∗)+λn≥⟨ηn,u∗−vn⟩. | (3.16) |
Moreover, by Lemma 3.1, we get
⟨τnηn,vn−wn⟩≤τn‖ηn‖‖vn−wn‖≤β2n. | (3.17) |
By (3.15)–(3.17), we have
‖wn−u∗‖2≤‖vn−u∗‖2+2τnf(vn,u∗)+2τnλn+2β2n. | (3.18) |
Combining (3.11), we get
‖wn−u∗‖2≤‖xn−u∗‖2+2τnf(vn,u∗)+2τnλn+2β2n. | (3.19) |
Since u∗∈Γ, we have u∗∈EP(f), and since f is pseudomonotone on C with respect to every solution of EP(f), we get f(vn,u∗)≤0. By the definition of un+1, we obtain
‖un+1−u∗‖2=‖anun+(1−an)wn−u∗‖2≤an‖un−u∗‖2+(1−an)‖wn−u∗‖2. | (3.20) |
By the definition of yn and (3.12), we get
‖yn−u∗‖2=‖JBinrxn−u∗‖2≤‖xn−u∗‖2≤‖un−u∗‖2+αnc1‖un−un−1‖. |
From (3.12), (3.19) and (3.20), we have
‖un+1−u∗‖2≤an‖un−u∗‖2+(1−an)(‖xn−u∗‖2+2τnf(vn,u∗)+2τnλn+2β2n)≤an‖un−u∗‖2+(1−an)(‖un−u∗‖2+αnc1‖un−un−1‖+2τnf(vn,u∗)+2τnλn+2β2n)=‖un−u∗‖2+(1−an)[αnc1‖un−un−1‖+2τnf(vn,u∗)+2τnλn+2β2n] | (3.21) |
≤‖un−u∗‖2+(1−an)αnc1‖un−un−1‖+2(1−an)τnλn+2(1−an)β2n |
=‖un−u∗‖2+(1−an)αnc1‖un−un−1‖+Λn, | (3.22) |
where Λn=2(1−an)(τnλn+β2n). Since τn=βn/γn and γn=max{ρn,‖ηn‖}, we have
∞∑n=1τnλn=∞∑n=1(βn/γn)λn≤∞∑n=1(βn/ρn)λn<+∞. |
Noting ∞∑n=1β2n<+∞ and 0<a<an<b<1, we have ∞∑n=1Λn<2(1−a)∞∑n=1(τnλn+β2n)<+∞. By the choice of αn in Step 1, we have αn‖un−un−1‖≤ˉαn‖un−un−1‖≤εn, and since ∞∑n=1εn<+∞,
∞∑n=1αn‖un−un−1‖<+∞. |
From Lemma 2.4 and (3.22), we can see that {‖un−u∗‖2} is convergent for all u∗∈Γ. Hence {un} is bounded, consequently, so are the sequences {xn},{yn},{vn} and {wn}.
Step 2. For any u∗∈Γ, limsupn→∞f(vn,u∗)=0. Indeed, from (3.21), we have
−2(1−an)τnf(vn,u∗)≤‖un−u∗‖2−‖un+1−u∗‖2+(1−an)αnc1‖un−un−1‖+Λn. | (3.23) |
Consequently, ∞∑n=1−2(1−an)τnf(vn,u∗)<+∞. It follows from Assumption (B2) and the boundedness of {vn} that {‖ηn‖} is bounded. Thus, there is a constant L>ρ such that ‖ηn‖≤L for each n≥1; then γn/ρn=max{1,‖ηn‖/ρn}≤L/ρ, so τn=βn/γn≥(ρ/L)(βn/ρn). Since u∗∈Γ, it follows from the pseudomonotonicity of f that −f(vn,u∗)≥0, and combining this with 0<a<an<b<1, we have ∞∑n=1(βn/ρn)[−f(vn,u∗)]<+∞. Since ∞∑n=1βn/ρn=+∞, then limsupn→∞f(vn,u∗)=0.
Step 3. For any u∗∈Γ, let {vnk} be a subsequence of {vn} such that limsupn→∞f(vn,u∗)=limk→∞f(vnk,u∗), and let v∗ be a weak cluster point of {vnk}; then v∗∈EP(f). Indeed, if vnk⇀v∗(k→∞), since f(⋅,u∗) is upper semicontinuous, by Step 2 we have f(v∗,u∗)≥limsupk→∞f(vnk,u∗)=0. Since u∗∈Γ and f is pseudomonotone, we have f(v∗,u∗)≤0, and so f(v∗,u∗)=0. Again, by the pseudomonotonicity of f, f(u∗,v∗)≤0, and since f(u∗,v∗)≥0 as well, we obtain f(u∗,v∗)=0. Then, f(v∗,u∗)=f(u∗,v∗)=0. Thus, by the paramonotonicity property of f, we have v∗∈EP(f).
Step 4. Every weak cluster point ˉu of the sequence {un} belongs to GSVIP2(Bi,Dj). Let {unk} be a subsequence of {un} such that unk⇀ˉu. It is easy to see that ∞∑n=1‖xn−un‖=∞∑n=1αn‖un−un−1‖<+∞, which implies that
limn→∞‖xn−un‖=0. | (3.24) |
Therefore xnk⇀ˉu, where {xnk} is the corresponding subsequence of {xn}. It follows from (3.10), (3.12), (3.18) and (3.20) that
‖un+1−u∗‖2=‖anun+(1−an)wn−u∗‖2≤an‖un−u∗‖2+(1−an)‖wn−u∗‖2≤an‖un−u∗‖2+(1−an)(‖vn−u∗‖2+2τnf(vn,u∗)+2τnλn+2β2n)≤an‖un−u∗‖2+(1−an)‖vn−u∗‖2+Λn≤an‖un−u∗‖2+(1−an)(‖xn−u∗‖2−δn(1−δn/4)(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)^2/‖xn−yn+zn‖2)+Λn≤an‖un−u∗‖2+(1−an)(‖un−u∗‖2+αnc1‖un−un−1‖−δn(1−δn/4)(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)^2/‖xn−yn+zn‖2)+Λn |
=‖un−u∗‖2+(1−an)αnc1‖un−un−1‖−(1−an)δn(1−δn/4)(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)^2/‖xn−yn+zn‖2+Λn. | (3.25) |
This implies that
(1−an)δn(1−δn/4)(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)^2/‖xn−yn+zn‖2≤‖un−u∗‖2−‖un+1−u∗‖2+(1−an)αnc1‖un−un−1‖+Λn. | (3.26) |
Observe that
(1−b)∞∑n=1δn(1−δn/4)(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)^2/‖xn−yn+zn‖2≤‖u1−u∗‖2+(1−a)c1∞∑n=1αn‖un−un−1‖+∞∑n=1Λn<+∞. |
Thus
limn→∞δn(1−δn/4)(‖xn−yn‖2+‖(I−JDjnr)Axn‖2)^2/‖xn−yn+zn‖2=0. |
Since {xn+zn−yn} is bounded, it follows that limn→∞‖xn−yn‖=0 and limn→∞‖(I−JDjnr)Axn‖=0. Thus limn→∞‖(I−JBinr)xn‖=0 and limn→∞‖(I−JDjnr)Axn‖=0. Note that JBinr and JDjnr are nonexpansive, so (I−JBinr) and (I−JDjnr) are demiclosed at 0. Thus, it follows from xnk⇀ˉu that (I−JBinr)ˉu=0, and (I−JDjnr)Aˉu=0 due to the linearity of A. That is, ˉu∈∩si=1B−1i(0) and Aˉu∈∩tj=1D−1j(0). So ˉu∈GSVIP2(Bi,Dj). Noting that by Step 1, we can assume that limn→∞‖un−ˉu‖=c<+∞. From (3.11) and Lemma 3.1 (2), we have
‖wn−ˉu‖≤‖wn−vn‖+‖vn−ˉu‖≤βn+‖xn−ˉu‖=‖un+αn(un−un−1)−ˉu‖+βn≤‖un−ˉu‖+|αn|‖un−un−1‖+βn. |
This means that
limsupn→∞‖wn−ˉu‖≤limsupn→∞(‖un−ˉu‖+αn‖un−un−1‖+βn)=c. |
Since limn→∞‖an(un−ˉu)+(1−an)(wn−ˉu)‖=limn→∞‖un+1−ˉu‖=c, by Lemma 2.5 we have
limn→∞‖wn−un‖=0. | (3.27) |
From Lemma 3.1 (2) and ∞∑n=1β2n<+∞, it follows that limn→∞‖vn−wn‖=0, and then limn→∞‖vn−un‖=0. Noting the fact that ˉu is a weak cluster point of the sequence {un}, it is easy to see that ˉu is also a weak cluster point of the sequence {vn}; thus ˉu∈EP(f), and then ˉu∈Γ.
Step 5. Finally, we show that the sequence {un} converges strongly to ˉu∈Γ. Indeed, combining (3.27) and the fact that ˉu is a weak cluster point of the sequence {un}, we can see that ˉu is also a weak cluster point of the sequence {wn}. Suppose wnk⇀ˉu; we get
‖unk+1−PΓ(unk+1)‖2≤‖unk+1−PΓ(unk)‖2=‖ankunk+(1−ank)wnk−PΓ(unk)‖2≤ank‖unk−PΓ(unk)‖2+(1−ank)‖wnk−PΓ(unk)‖2. | (3.28) |
Observe that
‖wnk−PΓ(unk)‖2=‖wnk−unk+unk−PΓ(unk)‖2=‖wnk−unk‖2−‖unk−PΓ(unk)‖2−2⟨wnk−PΓ(unk),PΓ(unk)−unk⟩. | (3.29) |
By (3.28) and (3.29), we have
‖unk+1−PΓ(unk+1)‖2≤ank‖unk−PΓ(unk)‖2+(1−ank)(‖wnk−unk‖2−‖unk−PΓ(unk)‖2−2⟨wnk−PΓ(unk),PΓ(unk)−unk⟩)=(2ank−1)‖unk−PΓ(unk)‖2+(1−ank)‖wnk−unk‖2−2(1−ank)⟨wnk−PΓ(unk),PΓ(unk)−unk⟩=(2ank−1)‖unk−PΓ(unk)‖2+(1−ank)‖wnk−unk‖2−2(1−ank)⟨wnk−ˉu,PΓ(unk)−unk⟩−2(1−ank)⟨ˉu−PΓ(unk),PΓ(unk)−unk⟩. | (3.30) |
Since ˉu∈Γ, we have ⟨ˉu−PΓ(unk),PΓ(unk)−unk⟩≥0. Also, observe that the sequence {unk} is bounded, and then so is {unk−PΓ(unk)}. It follows from limk→∞‖wnk−unk‖=0, limk→∞ank=1/2 and (3.30) that
limk→∞‖unk+1−PΓ(unk+1)‖=0. | (3.31) |
Next, we show that {PΓ(unk)} is a Cauchy sequence. Indeed, for any m>k, we obtain
‖PΓ(unm)−PΓ(unk)‖2=‖PΓ(unm)−unm+unm−PΓ(unk)‖2=4‖(1/2)(PΓ(unm)−unm)+(1/2)(unm−PΓ(unk))‖2=2‖PΓ(unm)−unm‖2+2‖unm−PΓ(unk)‖2−4‖unm−(1/2)(PΓ(unm)+PΓ(unk))‖2≤2‖PΓ(unm)−unm‖2+2‖unm−PΓ(unk)‖2−4‖unm−PΓ(unm)‖2=2‖unm−PΓ(unk)‖2−2‖unm−PΓ(unm)‖2. | (3.32) |
Setting u∗=PΓ(unk) in (3.22), we have
‖unm−PΓ(unk)‖2≤‖unm−1−PΓ(unk)‖2+(1−anm−1)αnm−1c1‖unm−1−unm−2‖+Λnm−1⋮≤‖unk−PΓ(unk)‖2+nm−1∑i=nk(1−ai)αic1‖ui−ui−1‖+nm−1∑i=nkΛi. | (3.33) |
From (3.32) and (3.33), we have
‖PΓ(unm)−PΓ(unk)‖2≤2‖unk−PΓ(unk)‖2+2nm−1∑i=nk(1−ai)αic1‖ui−ui−1‖+2nm−1∑i=nkΛi−2‖unm−PΓ(unm)‖2. |
It follows from (3.31) and the facts that limk→∞nm−1∑i=nkΛi=0 and limk→∞nm−1∑i=nk(1−ai)αic1‖ui−ui−1‖=0 that {PΓ(unk)} is a Cauchy sequence. Hence {PΓ(unk)} strongly converges to some u∈Γ. Noting limk→∞‖unk+1−PΓ(unk+1)‖=0, we know that {unk} also strongly converges to u∈Γ. Since unk⇀ˉu, we have u=ˉu, and since {‖un−ˉu‖} is convergent by Step 1, it follows that limn→∞un=ˉu∈Γ, which completes the proof.
As consequences of Theorem 3.1 with suitable choices of Bi,Dj(i∈{1,2,⋅⋅⋅,s},j∈{1,2,⋅⋅⋅,t}) and f, we derive several interesting corollaries as follows.
Corollary 3.1. Suppose Assumptions (A1–A4) hold with s=1,t=1 in (A2). Then the sequence {un} generated by Algorithm 3.3 strongly converges to a solution of Problem (1.8).
Corollary 3.2. Suppose Assumptions (A1–A4) hold. Then the sequence {un} generated by Algorithm 3.4 strongly converges to a solution of Problem (1.9).
Corollary 3.3. Suppose Assumptions (A1–A4) hold. Then the sequence {un} generated by Algorithm 3.5 strongly converges to a solution of Problem (1.4).
Corollary 3.4. Suppose Assumptions (A1–A4) hold. Then the sequence {un} generated by Algorithm 3.6 strongly converges to a solution of Problem (1.5).
Remark 3.1. (ⅰ) Suppose αn=0 in Algorithm 3.1, then Algorithm 3.1 reduces to self-adaptive viscosity-type Algorithm 3.2 for solving Problem (1.7).
(ⅱ) Suppose s=1,t=1, then Algorithm 3.1 reduces to Algorithm 3.3 for solving Problem (1.8).
(ⅲ) Suppose Bi=NCi(i=1,2,⋯,s),Dj=NQj(j=1,2,⋯,t) in Problem (1.7), then Algorithm 3.1 reduces to Algorithm 3.4 for solving Problem (1.9).
(ⅳ) Suppose f=0 in Problem (1.7), then Problem (1.7) reduces to Problem (1.4) studied by Ogbuisi et al. in [18] and Algorithm 3.1 reduces to Algorithm 3.5 for solving Problem (1.4). So our Algorithm 3.1 and Theorem 3.1 generalize the corresponding results in [18].
(ⅴ) Suppose Bi=0,Dj=0 in Problem (1.7), then Problem (1.7) reduces to Problem (1.5) studied by Santos et al. in [27] and Algorithm 3.1 reduces to Algorithm 3.6 for solving Problem (1.5). So our Algorithm 3.1 and Theorem 3.1 generalize the corresponding results in [27].
Finally, we give two examples to illustrate the validity of the considered common solution Problem (1.7). In both examples, we take s=1,t=1 in Problem (1.7).
Example 3.1. Let H1=H2=R2,C={u∈R2|−10e1≤u≤10e1},e1=(1,1). We define the operators B:H1→2H1,D:H2→2H2,A:H1→H2 by
B(u,v)=(5u,2v),D(u,v)=(3u,6v),A(u,v)=(u+2v,3u+4v), |
respectively. Define the bifunction f(u,v)=u1^5(v1−u1)+u2^3(v2−u2),∀u,v∈C. Let us observe that (A1)–(A4) hold and A is a bounded linear mapping. In addition, u∗=(0,0) is the unique solution of SVIP(B,D). Furthermore, EP(f) has the unique solution u∗=(0,0): f(v,u∗)=−v1^6−v2^4≤0 for all v∈C, and f(u∗,ˉu)=0=f(ˉu,u∗)=−ˉu1^6−ˉu2^4 implies ˉu=(0,0)∈EP(f). Hence, Γ=SVIP(B,D)∩EP(f)={(0,0)}.
Example 3.2. Let H1=R2,H2=R3,C={u∈R2+|u1+u2=1}⊂H1. We define B1:R2→R2 and B2:R3→R3 by
B1(u,v)=(2u−4v+1,−4u+2v+1),B2(u,v,w)=(2u+2,2v+2,2w+2), |
respectively. Let A(u,v)=(2u−4v,−4u+2v,2u−4v). We consider the equilibrium problem with the bifunction f(u,v)=2|v1|−|u1|+2v2^2−u2^2,∀u,v∈C. Suppose (A1)–(A4) hold; the solution of EP(f) is u∗=(1/2,1/2), and the partial subdifferential of f is given by
∂2f(u,u)={(2,4u2),if u1>0;[−2,2]×{4u2},if u1=0;(−2,4u2),if u1<0. |
Furthermore, we aim to find u∗=(u∗1,u∗2)∈R2 such that B1(u∗)=(0,0) and B2(Au∗)=(0,0,0). It is easy to see that x∗=(1/2,1/2)∈SVIP(B1,B2). Hence, Γ=SVIP(B1,B2)∩EP(f)={(1/2,1/2)}.
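A quick numerical check of the claims in Example 3.2 is sketched below, assuming the explicit forms of B1, B2, A and f written above; it verifies B1(x∗)=0, B2(Ax∗)=0, and the equilibrium inequality f(x∗,v)≥0 at sampled points of C.

```python
import numpy as np

B1 = lambda u: np.array([[2.0, -4.0], [-4.0, 2.0]]) @ u + np.array([1.0, 1.0])
B2 = lambda w: 2.0 * w + np.array([2.0, 2.0, 2.0])
A = np.array([[2.0, -4.0], [-4.0, 2.0], [2.0, -4.0]])
f = lambda u, v: 2 * abs(v[0]) - abs(u[0]) + 2 * v[1] ** 2 - u[1] ** 2

x_star = np.array([0.5, 0.5])
assert np.allclose(B1(x_star), 0) and np.allclose(B2(A @ x_star), 0)
# Sample C = {u in R^2_+ : u1 + u2 = 1} and check the equilibrium inequality at x*.
for t in np.linspace(0.0, 1.0, 101):
    assert f(x_star, np.array([t, 1.0 - t])) >= 0
print("x* = (1/2, 1/2) satisfies the SVIP and EP conditions of Example 3.2")
```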
In this paper, a new inertial-type algorithm is introduced to approximate the common solutions of the generalized split variational inclusion problem and the paramonotone equilibrium problem in real Hilbert spaces. The design of the algorithm is self-adaptive, the inertial term can speed up its convergence, and the strong convergence analysis does not require a prior estimate of the norm of the bounded linear operator. Under the assumption of generalized monotonicity of the related mappings, we prove the strong convergence of our iterative algorithms. The results presented here improve and generalize many known results in [18,27].
It should be noted that the way of choosing the inertial parameter αn in our Algorithm 3.1 is known as the on-line rule. As part of our future work, following the method in [47], we will consider the strong convergence of the proposed algorithm under conditions on the iterative parameters that do not require the on-line rule assumption.
Yali Zhao: Supervision, Conceptualization, Writing-review & editing; Qixin Dong: Writing-review & editing, Project administration; Xiaoqing Huang: Writing-original draft, Formal analysis. All authors have read and agreed to the published version of the manuscript.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research was supported by Liaoning Provincial Department of Education under project No.LJKMZ20221491.
And the authors would like to thank the reviewers for their valuable comments and suggestions, which have helped to improve the quality of our paper.
The authors declare that they have no competing interests.