Research article

An almost second order uniformly convergent method for a two-parameter singularly perturbed problem with a discontinuous convection coefficient and source term

  • In this paper, we discuss a higher-order convergent numerical method for a two-parameter singularly perturbed differential equation with a discontinuous convection coefficient and a discontinuous source term. The presence of perturbation parameters generates boundary layers, and the discontinuous terms produce interior layers on both sides of the discontinuity. In order to obtain a higher-order convergent solution, a hybrid monotone finite difference scheme is constructed on a piecewise uniform Shishkin mesh, which is adapted inside the boundary and interior layers. On this mesh (including the point of discontinuity), the present method is almost second-order parameter-uniform convergent. The current scheme is compared with the standard upwind scheme, which is used at the point of discontinuity. The numerical experiments based on the proposed scheme show higher-order (almost second-order) accuracy compared to the standard upwind scheme, which provides almost first-order parameter-uniform convergence.

    Citation: M. Chandru, T. Prabha, V. Shanthi, H. Ramos. An almost second order uniformly convergent method for a two-parameter singularly perturbed problem with a discontinuous convection coefficient and source term[J]. AIMS Mathematics, 2024, 9(9): 24998-25027. doi: 10.3934/math.20241219




    Dedicated to Professor Neil S. Trudinger on occasion of his 80th birthday.

    One classical problem in convex geometry is the Minkowski problem, which is to find convex hypersurfaces in $\mathbb{R}^{n+1}$ whose Gaussian curvature is prescribed as a function defined on $S^n$ in terms of the inverse Gauss map. It was settled by the works of Minkowski [23], Alexandrov [1], Fenchel and Jessen [28], Nirenberg [25], Pogorelov [26], Cheng and Yau [3], etc. In the smooth category, the Minkowski problem is equivalent to solving the following Monge-Ampère equation:

    $$\det(\nabla^2u + u\,g_{S^n}) = f \quad \text{on } S^n,$$

    where $u$ is the support function of the convex hypersurface and $\nabla^2u + u\,g_{S^n}$ is the spherical Hessian of $u$. If we take an orthonormal frame on $S^n$, the spherical Hessian of $u$ is $W_u(x) := \big(u_{ij}(x) + u(x)\delta_{ij}\big)$, whose eigenvalues are actually the principal radii of the hypersurface.

    The general problem of finding a convex hypersurface whose $k$-th symmetric function of the principal radii is a prescribed function of its outer normals, for $1 \le k < n$, is often called the Christoffel-Minkowski problem. It corresponds to finding convex solutions of the nonlinear Hessian equation

    $$\sigma_k(W_u) = f \quad \text{on } S^n.$$

    This problem was settled by Guan et al. [14,15]. In [16], Guan and Zhang considered a mixed Hessian equation as follows:

    $$\sigma_k(W_u(x)) + \alpha(x)\,\sigma_{k-1}(W_u(x)) = \sum_{l=0}^{k-2}\alpha_l(x)\,\sigma_l(W_u(x)), \quad x \in S^n, \tag{1.1}$$

    where $\alpha(x)$ and $\alpha_l(x)$ $(0 \le l \le k-2)$ are given functions on $S^n$. By imposing some group-invariant conditions on these coefficient functions as in [11], the authors proved the existence of solutions.

    Let $M$ be a hypersurface of the Euclidean space $\mathbb{R}^{n+1}$ and write $M = \mathrm{graph}\,u$ in a neighbourhood of the point at which we calculate. Let $A$ be the second fundamental form of $M$, let $\lambda(A) = (\lambda_1,\dots,\lambda_n) \in \mathbb{R}^n$ be the eigenvalues of $A$ with respect to the induced metric of $M \subset \mathbb{R}^{n+1}$, i.e., the principal curvatures of $M$, and let $\sigma_k(\lambda)$ be the $k$-th elementary symmetric function, with $\sigma_0(\lambda) = 1$. It is natural to study prescribing curvature problems in this setting. In the 1980s, Caffarelli, Nirenberg and Spruck studied the prescribing Weingarten curvature problem, which is equivalent to solving the following equation:

    $$\sigma_k(\lambda)(X) = f(X), \quad X \in M.$$

    When $k = n$, the problem is just the Minkowski problem; when $k = 1$, it is the prescribing mean curvature problem, cf. [30,33]. The prescribing Weingarten curvature problem has been studied by many authors; we refer to [2,9,11,12,13,29,37] and the references therein for related works. Recently, Zhou [36] generalised the above mixed prescribed Weingarten curvature equation. He obtained interior gradient estimates for

    $$\sigma_k(A) + \alpha(x)\,\sigma_{k-1}(A) = \sum_{l=0}^{k-2}\alpha_l(x)\,\sigma_l(A), \quad x \in B_r(0) \subset \mathbb{R}^n, \tag{1.2}$$

    where $\sigma_k(A) := \sigma_k(\lambda(A))$, and the coefficients satisfy $\alpha_{k-2} > 0$ and $\alpha_l \ge 0$ for $0 \le l \le k-3$.
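    The elementary symmetric functions $\sigma_k$ appearing throughout can be evaluated directly from their definition. A minimal sketch (the helper name `sigma` is ours, not notation from the paper):

```python
from itertools import combinations
from math import prod

def sigma(k, lam):
    """k-th elementary symmetric function of the entries of lam (sigma_0 = 1)."""
    if k == 0:
        return 1.0
    return float(sum(prod(c) for c in combinations(lam, k)))

# For the round sphere of radius R in R^{n+1}, every principal curvature is 1/R,
# so sigma_k equals binom(n, k) * R^{-k}; here n = 3 and R = 2.
lam_sphere = [0.5, 0.5, 0.5]
print(sigma(1, lam_sphere))  # 3 * (1/2)   = 1.5
print(sigma(2, lam_sphere))  # 3 * (1/2)^2 = 0.75
print(sigma(3, lam_sphere))  # 1 * (1/2)^3 = 0.125
```

    The same helper is reused in the numerical checks below; it is $O\binom{n}{k}$ and intended only for small illustrative examples.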

    Mixed Hessian type equations arise naturally from many important geometric problems. One example is the so-called Fu-Yau equation arising from the study of the Hull-Strominger system in theoretical physics, which can be written as a linear combination of the first and second elementary symmetric functions:

    $$\sigma_1\big(i\partial\bar\partial(e^u + \alpha e^{-u})\big) + \alpha\,\sigma_2(i\partial\bar\partial u) = \phi \tag{1.3}$$

    on $n$-dimensional compact Kähler manifolds. There have been many recent works related to this equation; see [6,7,27] for example. Another important example is the class of special Lagrangian equations introduced by Harvey and Lawson [18], which can be written as alternating combinations of elementary symmetric functions:

    $$\sin\theta\left(\sum_{k=0}^{[\frac{n}{2}]}(-1)^k\sigma_{2k}(D^2u)\right) + \cos\theta\left(\sum_{k=0}^{[\frac{n-1}{2}]}(-1)^k\sigma_{2k+1}(D^2u)\right) = 0.$$

    This equation is equivalent to

    $$F(D^2u) := \arctan\lambda_1 + \cdots + \arctan\lambda_n = \theta,$$

    where the $\lambda_i$'s are the eigenvalues of $D^2u$. It is called supercritical if $\theta \in \left((n-2)\frac{\pi}{2}, n\frac{\pi}{2}\right)$ and hypercritical if $\theta \in \left((n-1)\frac{\pi}{2}, n\frac{\pi}{2}\right)$. The Lagrangian phase operator $F$ is concave in the hypercritical case and has convex level sets in the supercritical case, while in general $F$ fails to be concave. In the subcritical case, i.e., $0 \le \theta < (n-2)\frac{\pi}{2}$, solutions of the special Lagrangian equation can fail to have interior estimates [24,35]. Jacob-Yau [20] initiated the study of the deformed Hermitian Yang-Mills (dHYM) equation on a compact Kähler manifold $(M,\omega)$:

    $$\mathrm{Re}\,(\chi_u + \sqrt{-1}\,\omega)^n = \cot\theta_0\,\mathrm{Im}\,(\chi_u + \sqrt{-1}\,\omega)^n,$$

    where $\chi$ is a closed real $(1,1)$-form, $\chi_u = \chi + \sqrt{-1}\,\partial\bar\partial u$, $\theta_0$ is the angle of the complex number $\int_M(\chi + \sqrt{-1}\,\omega)^n$, and $u$ is the unknown real smooth function on $M$. Jacob-Yau showed that the dHYM equation has an equivalent form as a special Lagrangian equation. Collins-Jacob-Yau [5] solved the dHYM equation by the continuity method, and Fu-Zhang [8] gave an alternative approach via the dHYM flow; both works are in the supercritical case. For more results concerning the dHYM equation and the special Lagrangian equation, one can consult Han-Jin [17], Chu-Lee [4] and the references therein. Note that for $n = 3$ and hypercritical $\theta \in \left(\pi, \frac{3\pi}{2}\right)$, the special Lagrangian equation is

    $$\sigma_3(D^2u) + \tan\theta\,\sigma_2(D^2u) = \sigma_1(D^2u) + \tan\theta\,\sigma_0(D^2u),$$

    which is included in (1.1).
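    The rearrangement above can be checked numerically: for $n = 3$, dividing the alternating-sum form by $-\cos\theta$ gives the $\sigma_3 + \tan\theta\,\sigma_2 = \sigma_1 + \tan\theta\,\sigma_0$ form. A small sketch (sample values chosen arbitrarily):

```python
import math
from itertools import combinations
from math import prod

def sigma(k, lam):
    return 1.0 if k == 0 else float(sum(prod(c) for c in combinations(lam, k)))

lam = (0.7, -1.3, 2.1)   # sample eigenvalues of D^2 u, n = 3
theta = 1.1              # sample phase with cos(theta) != 0
s = [sigma(k, lam) for k in range(4)]

r_alt = math.sin(theta) * (s[0] - s[2]) + math.cos(theta) * (s[1] - s[3])
r_sig = s[3] + math.tan(theta) * s[2] - s[1] - math.tan(theta) * s[0]

# the two residuals agree after dividing the alternating form by -cos(theta)
assert abs(r_sig - (-r_alt / math.cos(theta))) < 1e-12
```

    The identity holds for any eigenvalues and any $\theta$ with $\cos\theta \ne 0$; the sample values merely witness it.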

    In this paper we derive interior curvature bounds for admissible solutions of a class of curvature equations subject to affine Dirichlet data. Let $\Omega$ be a bounded domain in $\mathbb{R}^n$, and let $u \in C^4(\Omega)\cap C^{0,1}(\bar\Omega)$ be an admissible solution of

    $$\begin{cases}\sigma_k(\lambda) + g(x,u)\,\sigma_{k-1}(\lambda) = \sum_{l=0}^{k-2}\alpha_l(x,u)\,\sigma_l(\lambda) & \text{in } \Omega,\\ u = \phi & \text{on } \partial\Omega,\end{cases} \tag{1.4}$$

    where $g(x,u)$ and $\alpha_l(x,u) > 0$, $l = 0,1,\dots,k-2$, are given smooth functions on $\bar\Omega\times\mathbb{R}$, $\phi$ is affine, and $\lambda = (\lambda_1,\dots,\lambda_n)$ is the vector of the principal curvatures of $\mathrm{graph}\,u$. Here $u$ is admissible in the sense that $\lambda \in \Gamma_k$ at every point of the graph of $u$, where

    $$\Gamma_k = \{\lambda \in \mathbb{R}^n \mid \sigma_1(\lambda) > 0,\dots,\sigma_k(\lambda) > 0\}.$$
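    Membership in the cone $\Gamma_k$ is straightforward to test numerically. A minimal sketch (helper names are ours):

```python
from itertools import combinations
from math import prod

def sigma(k, lam):
    return 1.0 if k == 0 else float(sum(prod(c) for c in combinations(lam, k)))

def in_gamma_k(lam, k):
    """True iff sigma_1(lam) > 0, ..., sigma_k(lam) > 0."""
    return all(sigma(j, lam) > 0.0 for j in range(1, k + 1))

lam = (1.0, 1.0, -0.1)           # one slightly negative principal curvature
print(in_gamma_k(lam, 2))        # True:  sigma_1 = 1.9 > 0, sigma_2 = 0.8 > 0
print(in_gamma_k(lam, 3))        # False: sigma_3 = -0.1 < 0
```

    The example illustrates that $\Gamma_n \subsetneq \cdots \subsetneq \Gamma_1$: a $2$-admissible vector need not be $3$-admissible.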

    For simplicity we denote $F = G_k - \sum_{l=0}^{k-2}\alpha_lG_l$ and $G_l = \sigma_l(\lambda)/\sigma_{k-1}(\lambda)$ for $l = 0,1,\dots,k-2,k$. The ellipticity and concavity properties of the operator $F$ have been proved in [16]. Our main result is as follows.

    Theorem 1.1. Assume that for every $l$ $(0 \le l \le k-2)$, $\alpha_l, g \in C^{1,1}(\bar\Omega\times\mathbb{R})$, $\alpha_l > 0$, and either $g > 0$ or $g < 0$; $\phi$ in (1.4) is affine. For any fixed $\beta > 0$, if $u \in C^4(\Omega)\cap C^{0,1}(\bar\Omega)$ is an admissible solution of (1.4), then there exists a constant $C$, depending only on $n, k, \beta, \|u\|_{C^1(\bar\Omega)}$, $\alpha_l$, $g$ and their first and second derivatives, such that the second fundamental form $A$ of $\mathrm{graph}\,u$ satisfies

    $$|A| \le \frac{C}{(\phi - u)^\beta}.$$

    Remark 1.1. Compared with [16], here we additionally require $g > 0$ or $g < 0$. Our curvature estimates still hold if $\alpha_l \equiv 0$ for some $0 \le l \le k-2$. Moreover, if $\alpha_l \equiv 0$ for all $l = 0,1,\dots,k-2$, Eq (1.4) becomes the Hessian quotient equation and the result follows from [29].

    To see that this is an interior curvature estimate, we need to verify that $\phi - u > 0$ in $\Omega$. We apply the strong maximum principle for the minimal graph equation. For a function $v$ on $\Omega$, consider the minimal graph operator

    $$Qv := (1+|Dv|^2)\Delta v - v_iv_jv_{ij} = nH(1+|Dv|^2)^{3/2},$$

    where $H$ is the mean curvature of $\mathrm{graph}\,v$; since $\phi$ is affine, $Q\phi = 0$ on $\Omega$. Since $u$ is a $k$-admissible solution and $n \ge k \ge 2$, the graph of $u$ is mean-convex and $Qu > 0 = Q\phi$. By the comparison principle for quasilinear equations (Theorem 10.1 in [10]), we then have $\phi > u$ in $\Omega$.

    The main application of the curvature bound of Theorem 1.1 is to extend various existence results for the Dirichlet problem for curvature equations of mixed Hessian type.

    Theorem 1.2. Let $\Omega$ be a bounded domain in $\mathbb{R}^n$, and let $\alpha_l, g \in C^{1,1}(\bar\Omega\times\mathbb{R})$ satisfy $\inf|g| > 0$, $\partial_ug(x,u) \ge 0$, $\alpha_l > 0$ and $\partial_u\alpha_l(x,u) \le 0$. Suppose there is an admissible function $\underline{u} \in C^2(\Omega)\cap C^{0,1}(\bar\Omega)$ satisfying

    $$F[\underline{u}] \ge g(x,\underline{u}) \quad \text{in } \Omega, \qquad \underline{u} = 0 \quad \text{on } \partial\Omega. \tag{1.5}$$

    Then the problem

    $$F[u] = g(x,u) \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega, \tag{1.6}$$

    has a unique admissible solution $u \in C^{3,\alpha}(\Omega)\cap C^{0,1}(\bar\Omega)$ for all $\alpha \in (0,1)$.

    Remark 1.2. The conditions $\partial_ug \ge 0$, $\partial_u\alpha_l(x,u) \le 0$ and the existence of a subsolution are required for the $C^0$ estimate. The interior $C^1$ estimate is a slight modification of the result in Theorem 5.1.1 of [36], since the coefficients $g, \alpha_l$ of (1.2) are independent of $u$. We use the conditions $\partial_ug \ge 0$ and $\partial_u\alpha_l(x,u) \le 0$ again to eliminate extra terms in the $C^1$ estimate.

    As a further application of the a priori curvature estimate, we also consider a Plateau-type problem for locally convex Weingarten hypersurfaces. Let $\Sigma$ be a finite collection of disjoint, smooth, closed, codimension-2 submanifolds of $\mathbb{R}^{n+1}$. Suppose $\Sigma$ bounds a locally uniformly convex hypersurface $M_0$ with

    $$f^{(n)}(\lambda^0) := \frac{\sigma_n}{\sigma_{n-1}}(\lambda^0) - \sum_{l=0}^{n-2}\alpha_l\frac{\sigma_l}{\sigma_{n-1}}(\lambda^0) \ge c,$$

    where $\lambda^0 = (\lambda^0_1,\dots,\lambda^0_n)$ are the principal curvatures of $M_0$, the $\alpha_l$'s are positive constants, and $c \ge 0$ is a constant. Is there a locally convex hypersurface $M$ with boundary $\Sigma$ and $f^{(n)}(\lambda) = c$, where $\lambda = (\lambda_1,\dots,\lambda_n)$ are the principal curvatures of $M$?

    Theorem 1.3. Let $\Sigma$ and $f^{(n)}(\lambda)$ be as above. If $\Sigma$ bounds a locally uniformly convex hypersurface $M_0$ with $f^{(n)}(\lambda^0) \ge c$ at each point of $M_0$, then $\Sigma$ bounds a smooth, locally convex hypersurface $M$ with $f^{(n)}(\lambda) = c$ at each point of $M$.

    We compute using a local orthonormal frame field $\hat e_1,\dots,\hat e_n$ defined on $M = \mathrm{graph}\,u$ in a neighbourhood of the point at which we are computing. The standard basis of $\mathbb{R}^{n+1}$ is denoted by $e_1,\dots,e_{n+1}$. Covariant differentiation on $M$ in the direction $\hat e_i$ is denoted by $\nabla_i$. The components of the second fundamental form $A$ of $M$ in the basis $\hat e_1,\dots,\hat e_n$ are denoted by $(h_{ij})$. Thus

    $$h_{ij} = \big\langle D_{\hat e_i}\hat e_j,\ \nu\big\rangle,$$

    where $D$ and $\langle\cdot,\cdot\rangle$ denote the usual connection and inner product on $\mathbb{R}^{n+1}$, and $\nu$ denotes the upward unit normal

    $$\nu = \frac{(-Du, 1)}{\sqrt{1+|Du|^2}}.$$

    The differential equation in (1.4) can then be expressed as

    $$F(A, X) = g(X). \tag{2.1}$$

    As usual we denote the first and second partial derivatives of $F$ with respect to $h_{ij}$ by $F^{ij}$ and $F^{ij,rs}$. We assume summation from $1$ to $n$ over repeated Latin indices unless otherwise indicated. The following two lemmas are similar to the ones in [29] with minor changes, so we omit the proofs.

    Lemma 2.1. The second fundamental form $h_{ab}$ satisfies

    $$F^{ij}\nabla_i\nabla_jh_{ab} = -F^{ij,rs}\nabla_ah_{ij}\nabla_bh_{rs} + F^{ij}h_{ij}h_{ap}h_{pb} - F^{ij}h_{ip}h_{pj}h_{ab} - \nabla_a\nabla_bg + \sum_{l=0}^{k-2}\big(\nabla_a\alpha_l\nabla_bG_l + \nabla_b\alpha_l\nabla_aG_l\big) + \sum_{l=0}^{k-2}\nabla_a\nabla_b\alpha_l\,G_l.$$

    Lemma 2.2. For any $\alpha = 1,\dots,n+1$, we have

    $$F^{ij}\nabla_i\nabla_j\nu^\alpha + F^{ij}h_{ip}h_{pj}\nu^\alpha = \langle\nabla g, e_\alpha\rangle - \sum_{l=0}^{k-2}\langle\nabla\alpha_l, e_\alpha\rangle\,G_l.$$

    Lemma 2.3. There is a constant $C > 0$, depending only on $n, k, \inf\alpha_l$ and $|g|_{C^0}$, such that for any $l = 0,1,\dots,k-2$,

    $$|G_l| \le C.$$

    Proof. We argue by contradiction. If the result is not true, then for any integer $i$ there are an admissible solution $u^{(i)}$, a point $x^{(i)} \in \Omega$ and an index $0 \le l(i) \le k-2$ such that

    $$\frac{\sigma_{l(i)}}{\sigma_{k-1}}\big(\lambda[u^{(i)}]\big) > i \quad \text{at } x^{(i)}.$$

    By passing to a subsequence, we may assume $l(i) \to l$ and $x^{(i)} \to x \in \bar\Omega$ as $i \to +\infty$. Therefore

    $$\lim_{i\to+\infty}\frac{\sigma_l}{\sigma_{k-1}}\big(\lambda[u^{(i)}]\big)(x^{(i)}) = +\infty,$$

    or we may simply write $\frac{\sigma_l}{\sigma_{k-1}} \to +\infty$ if no ambiguity arises. Since $\alpha_l > 0$ and $g$ is bounded, by (1.4) we have $\frac{\sigma_k}{\sigma_{k-1}} \to +\infty$. For $i$ large enough, $\sigma_k > 0$. By the Newton-MacLaurin inequalities, we have

    $$\frac{\sigma_l}{\sigma_{k-1}} = \frac{\sigma_l}{\sigma_{l+1}}\cdot\frac{\sigma_{l+1}}{\sigma_{l+2}}\cdots\frac{\sigma_{k-2}}{\sigma_{k-1}} \le C\left(\frac{\sigma_{k-1}}{\sigma_k}\right)^{k-1-l} \to 0.$$

    We therefore get a contradiction.
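    The Newton-MacLaurin step above rests on the fact that, after binomial normalization, the ratios of consecutive symmetric functions are monotone. A numerical sanity check of this monotonicity (a sketch with randomly chosen positive $\lambda$, which lie in every cone $\Gamma_k$):

```python
import random
from itertools import combinations
from math import comb, prod

def sigma(k, lam):
    return 1.0 if k == 0 else float(sum(prod(c) for c in combinations(lam, k)))

random.seed(0)
n = 5
lam = [random.uniform(0.1, 3.0) for _ in range(n)]

# normalized functions E_j = sigma_j / binom(n, j); Newton's inequalities give
# E_{j-1} E_{j+1} <= E_j^2, hence E_j / E_{j+1} is nondecreasing in j
E = [sigma(j, lam) / comb(n, j) for j in range(n + 1)]
ratios = [E[j] / E[j + 1] for j in range(n)]
assert all(ratios[j] <= ratios[j + 1] + 1e-12 for j in range(n - 1))
```

    Up to binomial constants this is exactly the bound $\sigma_l/\sigma_{l+1} \le C\,\sigma_{k-1}/\sigma_k$ used in the proof.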

    Proof of Theorem 1.1. The argument here comes from [29]. Let $\eta = \phi - u$, so that $\eta > 0$ in $\Omega$. For a function $\Phi$ to be chosen and a fixed constant $\beta > 0$, we consider the function

    $$\widetilde W(X,\xi) = \eta^\beta\exp\big(\Phi(\nu^{n+1})\big)\,h_{\xi\xi}$$

    for all $X \in M$ and all unit vectors $\xi \in T_XM$. Then $\widetilde W$ attains its maximum at an interior point $X_0 \in M$, in a direction $\xi_0 \in T_{X_0}M$ which we may take to be $\hat e_1$. We may assume that $(h_{ij})$ is diagonal at $X_0$, with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$. Without loss of generality we may assume that $\hat e_1,\dots,\hat e_n$ has been chosen so that $\nabla_i\hat e_j = 0$ at $X_0$ for all $i,j = 1,\dots,n$. Let $\tau = \hat e_1$. Then $W(X) = \widetilde W(X,\tau)$ is defined near $X_0$ and has an interior maximum at $X_0$. Let $Z := h_{ab}\tau^a\tau^b$. By the special choice of frame and the fact that $h_{ij}$ is diagonal at $X_0$ in this frame, we can see that

    $$\nabla_iZ = \nabla_ih_{11} \quad \text{and} \quad \nabla_i\nabla_jZ = \nabla_i\nabla_jh_{11} \quad \text{at } X_0.$$

    Therefore the scalar function Z satisfies the same equation as the component h11 of the tensor hij. Thus at X0, we have

    $$\frac{\nabla_iW}{W} = \beta\frac{\nabla_i\eta}{\eta} + \Phi'\nabla_i\nu^{n+1} + \frac{\nabla_ih_{11}}{h_{11}} = 0 \tag{2.2}$$

    and

    $$\frac{\nabla_i\nabla_jW}{W} - \frac{\nabla_iW\nabla_jW}{W^2} = \beta\left(\frac{\nabla_i\nabla_j\eta}{\eta} - \frac{\nabla_i\eta\nabla_j\eta}{\eta^2}\right) + \Phi''\nabla_i\nu^{n+1}\nabla_j\nu^{n+1} + \Phi'\nabla_i\nabla_j\nu^{n+1} + \frac{\nabla_i\nabla_jh_{11}}{h_{11}} - \frac{\nabla_ih_{11}\nabla_jh_{11}}{h_{11}^2} \tag{2.3}$$

    is nonpositive in the sense of matrices at $X_0$. By Lemmas 2.1 and 2.2, we have, at $X_0$,

    $$\begin{aligned}0 \ge{}& \beta F^{ij}\left(\frac{\nabla_i\nabla_j\eta}{\eta} - \frac{\nabla_i\eta\nabla_j\eta}{\eta^2}\right) + \Phi''F^{ij}\nabla_i\nu^{n+1}\nabla_j\nu^{n+1} - \big(\Phi'\nu^{n+1}+1\big)F^{ij}h_{ip}h_{pj} + F^{ij}h_{ij}h_{11} - \frac{\nabla_1\nabla_1g}{h_{11}} + \Phi'\langle\nabla g, e_{n+1}\rangle\\ &- \frac{1}{h_{11}}F^{ij,rs}\nabla_1h_{ij}\nabla_1h_{rs} - F^{ij}\frac{\nabla_ih_{11}\nabla_jh_{11}}{h_{11}^2} - \sum_{l=0}^{k-2}\Phi'\langle\nabla\alpha_l, e_{n+1}\rangle\frac{\sigma_l}{\sigma_{k-1}} + \sum_{l=0}^{k-2}\frac{1}{h_{11}}\left(2\nabla_1\alpha_l\nabla_1\frac{\sigma_l}{\sigma_{k-1}} + \nabla_1\nabla_1\alpha_l\,\frac{\sigma_l}{\sigma_{k-1}}\right).\end{aligned} \tag{2.4}$$

    Using Gauss's formula

    $$\nabla_i\nabla_jX^\alpha = h_{ij}\nu^\alpha,$$

    we have

    $$\nabla_1\nabla_1g(X) = \sum_{\alpha=1}^{n+1}\frac{\partial g}{\partial X^\alpha}\nabla_1\nabla_1X^\alpha + \sum_{\alpha,\beta=1}^{n+1}\frac{\partial^2g}{\partial X^\alpha\partial X^\beta}\nabla_1X^\alpha\nabla_1X^\beta = \sum_{\alpha=1}^{n+1}\frac{\partial g}{\partial X^\alpha}\nu^\alpha h_{11} + \sum_{\alpha,\beta=1}^{n+1}\frac{\partial^2g}{\partial X^\alpha\partial X^\beta}\nabla_1X^\alpha\nabla_1X^\beta.$$

    Consequently,

    $$\left|\frac{\nabla_1\nabla_1g}{h_{11}}\right| \le C.$$

    For the same reason, we have, for all $l = 0,\dots,k-2$,

    $$\left|\frac{\nabla_1\nabla_1\alpha_l}{h_{11}}\right| \le C.$$

    Taking Lemma 2.3 into account, we estimate the two sums in the last line of (2.4) as

    $$\left|-\sum_{l=0}^{k-2}\Phi'\langle\nabla\alpha_l, e_{n+1}\rangle\frac{\sigma_l}{\sigma_{k-1}} + \sum_{l=0}^{k-2}\frac{1}{h_{11}}\nabla_1\nabla_1\alpha_l\,\frac{\sigma_l}{\sigma_{k-1}}\right| \le C|\Phi'| + C.$$

    Recall that $F = G_k - \sum_{l=0}^{k-2}\alpha_lG_l$, and it is well known that the operator $\left(\frac{\sigma_{k-1}}{\sigma_l}\right)^{\frac{1}{k-1-l}}$ is concave for $0 \le l \le k-2$. It follows that

    $$\left(\frac{1}{G_l}\right)^{\frac{1}{k-1-l}} \text{ is a concave operator for } l = 0,1,\dots,k-2.$$

    For any symmetric matrix $(B_{ij}) \in \mathbb{R}^{n\times n}$, we have

    $$\left\{\left(\frac{1}{G_l}\right)^{\frac{1}{k-1-l}}\right\}^{ij,rs}B_{ij}B_{rs} \le 0.$$

    Direct computation shows that

    $$G_l^{ij,rs}B_{ij}B_{rs} \ge \frac{1}{G_l}\cdot\frac{k-l}{k-1-l}\big(G_l^{ij}B_{ij}\big)^2.$$

    Note that $G_k$ is also a concave operator. Hence

    $$\begin{aligned}&-\frac{1}{h_{11}}F^{ij,rs}\nabla_1h_{ij}\nabla_1h_{rs} + \sum_{l=0}^{k-2}\frac{2}{h_{11}}\nabla_1\alpha_l\nabla_1\frac{\sigma_l}{\sigma_{k-1}}\\ ={}&-\frac{1}{h_{11}}G_k^{ij,rs}\nabla_1h_{ij}\nabla_1h_{rs} + \sum_{l=0}^{k-2}\frac{\alpha_l}{h_{11}}G_l^{ij,rs}\nabla_1h_{ij}\nabla_1h_{rs} + \sum_{l=0}^{k-2}\frac{2}{h_{11}}\nabla_1\alpha_l\nabla_1\frac{\sigma_l}{\sigma_{k-1}}\\ \ge{}&\frac{1}{h_{11}}\sum_{l=0}^{k-2}G_l^{-1}\alpha_lC_l\left(\nabla_1G_l + \frac{\nabla_1\alpha_l}{C_l\alpha_l}G_l\right)^2 - \frac{1}{h_{11}}\sum_{l=0}^{k-2}\frac{(\nabla_1\alpha_l)^2}{C_l\alpha_l}G_l\\ \ge{}&-\frac{C}{h_{11}},\end{aligned}$$

    where $C_l = \frac{k-l}{k-1-l}$. By the homogeneity of the $G_l$'s, we see that

    $$F^{ij}h_{ij} = G_k + \sum_{l=0}^{k-2}\alpha_l(k-1-l)G_l \ge G_k + \sum_{l=0}^{k-2}\alpha_l\frac{\sigma_l}{\sigma_{k-1}} \ge \inf|g| > 0.$$

    Using Lemma 2.3 again, we have

    $$F^{ij}h_{ij} \le C.$$

    Next we assume that $\phi$ has been extended to be constant in the $e_{n+1}$ direction. Then

    $$\nabla_i\nabla_j\eta = \sum_{\alpha,\beta=1}^{n}\frac{\partial^2\phi}{\partial X^\alpha\partial X^\beta}\nabla_iX^\alpha\nabla_jX^\beta + \sum_{\alpha=1}^{n}\frac{\partial\phi}{\partial X^\alpha}\nabla_i\nabla_jX^\alpha - \nabla_i\nabla_jX^{n+1} = \sum_{\alpha=1}^{n}\frac{\partial\phi}{\partial X^\alpha}\nu^\alpha h_{ij} - h_{ij}\nu^{n+1}.$$

    Consequently,

    $$F^{ij}\nabla_i\nabla_j\eta = \left(\sum_{\alpha=1}^{n}\frac{\partial\phi}{\partial X^\alpha}\nu^\alpha - \nu^{n+1}\right)F^{ij}h_{ij}.$$

    Using the above estimates in (2.4), we have, at $X_0$,

    $$0 \ge -\frac{C\beta}{\eta} - \beta F^{ij}\frac{\nabla_i\eta\nabla_j\eta}{\eta^2} + \Phi''F^{ij}\nabla_i\nu^{n+1}\nabla_j\nu^{n+1} - F^{ij}\frac{\nabla_ih_{11}\nabla_jh_{11}}{h_{11}^2} - \big(\Phi'\nu^{n+1}+1\big)F^{ij}h_{ip}h_{pj} + \inf|g|\,h_{11} - C\big(1+|\Phi'|\big). \tag{2.5}$$

    Next, using (2.2), we have

    $$F^{ij}\frac{\nabla_ih_{11}\nabla_jh_{11}}{h_{11}^2} = F^{ij}\left(\beta\frac{\nabla_i\eta}{\eta} + \Phi'\nabla_i\nu^{n+1}\right)\left(\beta\frac{\nabla_j\eta}{\eta} + \Phi'\nabla_j\nu^{n+1}\right) \le (1+\gamma^{-1})\beta^2F^{ij}\frac{\nabla_i\eta\nabla_j\eta}{\eta^2} + (1+\gamma)(\Phi')^2F^{ij}\nabla_i\nu^{n+1}\nabla_j\nu^{n+1}$$

    for any $\gamma > 0$. Therefore at $X_0$ we have, since $|\nabla\eta| \le C$,

    $$0 \ge -\frac{C\beta}{\eta} - C\big[\beta + (1+\gamma^{-1})\beta^2\big]\frac{\sum_{i=1}^nF^{ii}}{\eta^2} + \big[\Phi'' - (1+\gamma)(\Phi')^2\big]F^{ij}\nabla_i\nu^{n+1}\nabla_j\nu^{n+1} - \big[\Phi'\nu^{n+1}+1\big]F^{ij}h_{ip}h_{pj} + \inf|g|\,h_{11} - C\big(1+|\Phi'|\big). \tag{2.6}$$

    We choose a positive constant $a$ such that

    $$a \le \frac{1}{2}\nu^{n+1} = \frac{1}{2\sqrt{1+|Du|^2}},$$

    which depends only on $\sup_\Omega|Du|$. Therefore

    $$\frac{1}{\nu^{n+1}-a} \le \frac{1}{a} \le C.$$

    We now choose

    $$\Phi(t) = -\log(t-a).$$

    Then

    $$\Phi'(t) = -\frac{1}{t-a}, \qquad \Phi''(t) = \frac{1}{(t-a)^2},$$

    and

    $$-(\Phi'\,t+1) = \frac{a}{t-a}, \qquad \Phi'' - (1+\gamma)(\Phi')^2 = -\frac{\gamma}{(t-a)^2}.$$

    By direct computation, we have $\nabla_i\nu^{n+1} = -h_{ip}\langle\hat e_p, e_{n+1}\rangle$, and therefore

    $$F^{ij}\nabla_i\nu^{n+1}\nabla_j\nu^{n+1} = F^{ij}h_{ip}h_{jq}\langle\hat e_p, e_{n+1}\rangle\langle\hat e_q, e_{n+1}\rangle \le F^{ij}h_{ip}h_{pj}.$$

    Next we choose $0 < \gamma \le \frac{a^2}{2}$; then we have

    $$-(\Phi'\,t+1) + \big[\Phi'' - (1+\gamma)(\Phi')^2\big] = \frac{a}{t-a} - \frac{\gamma}{(t-a)^2} \ge \frac{1}{2}\frac{a^2}{(t-a)^2} > 0.$$

    Thus we have

    $$0 \ge -\frac{C\beta}{\eta} - \frac{C(\beta,a)}{\eta^2}\left(\sum_{i=1}^nF^{ii}\right) + \inf|g|\,h_{11} - C(a). \tag{2.7}$$

    In the following we show that $\sum_{i=1}^nF^{ii} \le C$. By the definition of the operator $F$ and Lemma 2.3, we have

    $$\begin{aligned}\sum_{i=1}^nF^{ii} &= \sum_{i=1}^n\left(\frac{\sigma_k}{\sigma_{k-1}}\right)^{ii} - \sum_{i=1}^n\sum_{l=0}^{k-2}\alpha_l\left(\frac{\sigma_l}{\sigma_{k-1}}\right)^{ii}\\ &= \frac{\sum_{i=1}^n\sigma_{k-1}(\lambda|i)}{\sigma_{k-1}} - \frac{\sigma_k}{\sigma_{k-1}^2}\sum_{i=1}^n\sigma_{k-2}(\lambda|i) + \frac{\alpha_0}{\sigma_{k-1}^2}\sum_{i=1}^n\sigma_{k-2}(\lambda|i) + \sum_{l=1}^{k-2}\alpha_l\,\frac{\sigma_l\sum_i\sigma_{k-2}(\lambda|i) - \sigma_{k-1}\sum_i\sigma_{l-1}(\lambda|i)}{\sigma_{k-1}^2}\\ &= n-k+1 - (n-k+2)\frac{\sigma_k\sigma_{k-2}}{\sigma_{k-1}^2} + (n-k+2)\frac{\alpha_0\sigma_{k-2}}{\sigma_{k-1}^2} + \sum_{l=1}^{k-2}\alpha_l\,\frac{(n-k+2)\sigma_l\sigma_{k-2} - (n-l+1)\sigma_{k-1}\sigma_{l-1}}{\sigma_{k-1}^2}\\ &\le n-k+1 + (n-k+2)\left|\frac{\sigma_k}{\sigma_{k-1}}G_{k-2}\right| + (n-k+2)|\alpha_0|_{C^0}\,|G_{k-2}G_0| + (n-k+2)\,|G_{k-2}|\sum_{l=1}^{k-2}|\alpha_l|_{C^0}|G_l|.\end{aligned}$$

    From Eq (1.4) we also have $|G_k| \le C$; therefore $\sum_iF^{ii} \le C$. At $X_0$, we then get the upper bound
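    The trace computation above relies on the classical identity $\sum_{i=1}^n\sigma_m(\lambda|i) = (n-m)\,\sigma_m(\lambda)$ (with $m = k-1$ this produces the factor $n-k+1$). A quick numerical verification (sample values ours):

```python
from itertools import combinations
from math import prod

def sigma(k, lam):
    return 1.0 if k == 0 else float(sum(prod(c) for c in combinations(lam, k)))

lam = [0.9, 1.7, 0.4, 2.2]   # n = 4 sample curvature vector
n, m = len(lam), 2

# sigma_m(lambda | i): the m-th symmetric function with the i-th entry deleted;
# every m-subset of {1..n} survives the deletion for exactly n - m choices of i
lhs = sum(sigma(m, lam[:i] + lam[i + 1:]) for i in range(n))
assert abs(lhs - (n - m) * sigma(m, lam)) < 1e-12
```

    The counting argument in the comment is the whole proof of the identity.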

    $$\lambda_1 \le \frac{C(\beta,a)}{\eta^2}.$$

    Consequently, $W(X_0)$ satisfies an upper bound. Since $W(X) \le W(X_0)$, we get the required upper bound for the maximum principal curvature. Since $\lambda \in \Gamma_k$ and $n \ge k \ge 2$, $u$ is at least mean-convex and

    $$\sum_{i=1}^n\lambda_i > 0.$$

    Therefore $\lambda_n \ge -(n-1)\lambda_1$ and

    $$|A| = \sqrt{\sum_{i=1}^n\lambda_i^2} \le C(n)\,\lambda_1 \le \frac{C}{(\phi-u)^\beta}.$$

    In this section we prove Theorem 1.2. By the comparison principle, we have $0 \ge u \ge \underline{u}$; in particular, for any $\Omega' \Subset \Omega$, $\inf_{\Omega'}\underline{u} \le u \le c(\Omega') < 0$ on $\Omega'$. First we show the gradient bound for admissible solutions of (1.6). We need the following lemmas to prove the gradient estimate.

    Lemma 3.1. Suppose $A = \{a_{ij}\}_{n\times n}$ satisfies $\lambda(A) \in \Gamma_{k-1}$, $a_{11} < 0$, and $\{a_{ij}\}_{2\le i,j\le n}$ is diagonal. Then

    $$\sum_{i=2}^n\frac{\partial F}{\partial a_{1i}}\,a_{1i} \le 0. \tag{3.1}$$

    Proof. Let

    $$B = \begin{pmatrix} a_{11} & 0 & \cdots & 0\\ 0 & a_{22} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}, \qquad C = \begin{pmatrix} 0 & a_{12} & \cdots & a_{1n}\\ a_{21} & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & 0 & \cdots & 0 \end{pmatrix},$$

    and set $A(t) := B + tC$, $f(t) := F(A(t))$, where $a_{1i} = a_{i1}$ for all $2 \le i \le n$. Directly we have

    $$\sigma_k(A(t)) = \sigma_k(B) - t^2\sum_{i=2}^na_{1i}^2\,\sigma_{k-2}(B|1i),$$

    where $(B|ij)$ is the submatrix of $B$ formed by deleting the $i$-th and $j$-th rows and columns. One easily sees that for $t \in [-1,1]$, $\lambda(A(t)) \in \Gamma_{k-1}$ and $f$ is concave on $[-1,1]$. Since $f(-1) = f(1) = F(A)$, we get $f'(1) \le 0$, while

    $$f'(1) = 2\sum_{i=2}^n\frac{\partial F}{\partial a_{1i}}\,a_{1i}.$$
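    For $k = 2$ the expansion $\sigma_k(A(t)) = \sigma_k(B) - t^2\sum_{i\ge2}a_{1i}^2\,\sigma_{k-2}(B|1i)$ can be checked directly, since $\sigma_2$ of a symmetric matrix is expressible through traces. A small numeric sketch (entries chosen arbitrarily):

```python
from itertools import combinations
from math import prod

def sigma(k, lam):
    return 1.0 if k == 0 else float(sum(prod(c) for c in combinations(lam, k)))

def mat_sigma2(M):
    """sigma_2 of the eigenvalues of M, via sigma_2 = ((tr M)^2 - tr(M^2)) / 2."""
    n = len(M)
    tr = sum(M[i][i] for i in range(n))
    tr_sq = sum(M[i][j] * M[j][i] for i in range(n) for j in range(n))
    return 0.5 * (tr * tr - tr_sq)

diag = [2.0, 1.5, 1.0, 0.5]      # diagonal of B
a = [0.3, -0.2, 0.4]             # a_{12}, a_{13}, a_{14}
t = 0.7

# assemble A(t) = B + t C as a plain nested list
A = [[0.0] * 4 for _ in range(4)]
for i in range(4):
    A[i][i] = diag[i]
for i in range(1, 4):
    A[0][i] = A[i][0] = t * a[i - 1]

lhs = mat_sigma2(A)
# sigma_0 of the doubly-deleted diagonal is 1, matching (B|1i) for k = 2
rhs = sigma(2, diag) - t**2 * sum(
    a[i - 1]**2 * sigma(0, diag[1:i] + diag[i + 1:]) for i in range(1, 4)
)
assert abs(lhs - rhs) < 1e-12
```

    The same identity for general $k$ follows by expanding the principal minors that involve the first row and column.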

    Remark 3.1. By the concavity of $\frac{\sigma_k}{\sigma_{k-1}}$, we can prove the following inequality for $\lambda(B) \in \Gamma_{k-1}$:

    $$\sigma_{k-2}(B|1i)\,\sigma_{k-1}(B) - \sigma_{k-3}(B|1i)\,\sigma_k(B) \ge 0, \qquad 2 \le i \le n. \tag{3.2}$$

    Indeed, let $f(t) = \frac{\sigma_k}{\sigma_{k-1}}(A(t))$. Then

    $$f'(1) = -2\,\frac{\left(\sum_{i=2}^na_{1i}^2\,\sigma_{k-2}(B|1i)\right)\sigma_{k-1}(A) - \sigma_k(A)\left(\sum_{i=2}^na_{1i}^2\,\sigma_{k-3}(B|1i)\right)}{\sigma_{k-1}^2(A)} \le 0.$$

    Equivalently,

    $$\sigma_{k-1}(B)\left(\sum_{i=2}^na_{1i}^2\,\sigma_{k-2}(B|1i)\right) - \sigma_k(B)\left(\sum_{i=2}^na_{1i}^2\,\sigma_{k-3}(B|1i)\right) \ge 0. \tag{3.3}$$

    We can choose $a_{1i} > 0$ small enough and $a_{1j} = 0$ for $j \ne i$, $2 \le j \le n$, so that $\lambda(A) \in \Gamma_{k-1}$. Then (3.3) implies (3.2).

    Lemma 3.2. Let $\alpha_{k-2} > 0$ and $\alpha_l \ge 0$ for $0 \le l \le k-3$. Suppose the symmetric matrix $A = \{a_{ij}\}_{n\times n}$ satisfies

    $$\lambda(A) \in \Gamma_{k-1}, \qquad a_{11} < 0, \qquad \{a_{ij}\}_{2\le i,j\le n} \text{ is diagonal}.$$

    Then

    $$\frac{\partial F}{\partial a_{11}} \ge C_0\left(\sum_{i=1}^n\frac{\partial F}{\partial a_{ii}}\right), \tag{3.4}$$

    where $C_0$ depends on $n, k, |u|_{C^0}, |g|_{C^0}, \inf\alpha_{k-2}$.

    Proof. Note that

    $$\begin{aligned}\frac{\partial}{\partial a_{11}}\left(\frac{\sigma_l}{\sigma_{k-1}}(A)\right) &= \frac{\sigma_{l-1}(A|1)\,\sigma_{k-1}(A) - \sigma_l(A)\,\sigma_{k-2}(A|1)}{\sigma_{k-1}^2(A)}\\ &= \sum_{i=2}^n\frac{a_{1i}^2}{\sigma_{k-1}^2(A)}\Big[\sigma_{l-2}(A|1i)\,\sigma_{k-2}(A|1) - \sigma_{l-1}(A|1)\,\sigma_{k-3}(A|1i)\Big] + \frac{\sigma_{l-1}(A|1)\,\sigma_{k-1}(A|1) - \sigma_l(A|1)\,\sigma_{k-2}(A|1)}{\sigma_{k-1}^2(A)}.\end{aligned}$$

    For $0 \le l \le k-2$,

    $$\frac{\partial}{\partial a_{11}}\left(\frac{\sigma_l}{\sigma_{k-1}}(A)\right) \le -C_{n,l}\,\frac{\sigma_l(A|1)\,\sigma_{k-2}(A|1)}{\sigma_{k-1}^2(A)}.$$

    As for $l = k$,

    $$\frac{\partial}{\partial a_{11}}\left(\frac{\sigma_k}{\sigma_{k-1}}(A)\right) \ge -C_{n,k}\,\frac{\sigma_{k-1}^2(A|1)}{\sigma_{k-1}^2(A)} \ge -C_{n,k}.$$

    Therefore

    $$\frac{\partial F}{\partial a_{11}} \ge -C_{n,k} + C_{n,k}\,\inf\alpha_{k-2}\,\frac{\sigma_{k-2}^2(A|1)}{\sigma_{k-1}^2(A)}.$$

    Next we compute $\sum_{i=1}^n\frac{\partial F}{\partial a_{ii}}$ as

    $$\begin{aligned}\sum_{i=1}^n\frac{\partial F}{\partial a_{ii}} &= n-k+1 - (n-k+2)\frac{\sigma_k\sigma_{k-2}}{\sigma_{k-1}^2}(A) + (n-k+2)\frac{\alpha_0\sigma_{k-2}}{\sigma_{k-1}^2}(A) + \sum_{l=1}^{k-2}\alpha_l\,\frac{(n-k+2)\sigma_l(A)\sigma_{k-2}(A) - (n-l+1)\sigma_{k-1}(A)\sigma_{l-1}(A)}{\sigma_{k-1}^2(A)}\\ &\le n-k+1 - (n-k+2)\frac{\sigma_{k-2}(A)}{\sigma_{k-1}(A)}\left(\frac{\sigma_k}{\sigma_{k-1}}(A) - \sum_{l=0}^{k-2}\alpha_l\frac{\sigma_l}{\sigma_{k-1}}(A)\right)\\ &\le C_{n,k} + C_{n,k}\,|g|_{C^0}\,\frac{\sigma_{k-2}}{\sigma_{k-1}}(A)\\ &\le C(n,k,|g|_{C^0}) + C(n,k,|g|_{C^0})\left(\frac{\sigma_{k-2}}{\sigma_{k-1}}(A)\right)^2\\ &\le C(n,k,|g|_{C^0}) + C(n,k,|g|_{C^0})\,\frac{\sigma_{k-2}^2(A|1)}{\sigma_{k-1}^2(A)}\\ &\le C(n,k,|g|_{C^0},\inf\alpha_{k-2})\,\frac{\partial F}{\partial a_{11}}.\end{aligned}$$

    Lemma 3.3. For any $\Omega' \Subset \Omega$, there is a constant $C$ depending only on $\Omega', n, k, \alpha_l, g$ and their first derivatives, such that if $u$ is an admissible solution of (1.6), then

    $$|Du| \le C$$

    on $\Omega'$.

    Proof. Since we require that $\partial_ug \ge 0$ and $\partial_u\alpha_l \le 0$, we only need to modify equation (5.42) in [36] (i.e., (A.6)), where the extra terms $\sum_{l=0}^{k-2}(\alpha_l)_u\frac{u}{\log u - 1}\frac{\sigma_l(A)}{\sigma_{k-1}(A)} - g_u\frac{u}{\log u - 1}$ should be included. These are all good terms, and Zhou's proof also holds in our case. For the reader's convenience, we sketch the proof in the appendix below.

    Now we give the proof of Theorem 1.2.

    Proof of Theorem 1.2. The theorem can be proved by solving the uniformly elliptic approximating problems

    $$F_\epsilon[u_\epsilon] = g_\epsilon(x,u_\epsilon) \quad \text{in } \Omega, \qquad u_\epsilon = 0 \quad \text{on } \partial\Omega,$$

    for $\epsilon > 0$ small, for which $\underline{u}$ is an admissible subsolution of each of the approximating problems. By the comparison principle, Theorem 1.1 and the (modified) interior gradient estimates of [36], we have uniform interior $C^2$ estimates for $u_\epsilon$. Then Evans-Krylov theory, together with Schauder theory, implies uniform estimates for $\|u_\epsilon\|_{C^{3,\alpha}(\Omega')}$ for any $\Omega' \Subset \Omega$. Theorem 1.2 then follows by extracting a suitable subsequence as $\epsilon \to 0$.

    In this section we prove Theorem 1.3. The notion of locally convex hypersurface we use is the same as that in [29].

    Definition 4.1. A compact, connected, locally convex hypersurface $M$ (possibly with boundary) in $\mathbb{R}^{n+1}$ is an immersion of an $n$-dimensional, compact, oriented and connected manifold $N$ (possibly with boundary) into $\mathbb{R}^{n+1}$, that is, a mapping $T: N \to M \subset \mathbb{R}^{n+1}$, such that for any $p \in N$ there is a neighbourhood $\omega_p \subset N$ such that:

    $T$ is a homeomorphism from $\omega_p$ to $T(\omega_p)$;

    $T(\omega_p)$ is a convex graph;

    the convexity of $T(\omega_p)$ agrees with the orientation.

    Since $M$ is immersed, a point $x \in M$ may be the image of several points in $N$. Since $M$ and $N$ are compact, $T^{-1}(x)$ consists of only finitely many points. Let $r > 0$ and $x \in M$. For small enough $r$, $T^{-1}(M\cap B^{n+1}_r(x))$ consists of several disjoint open sets $U_1,\dots,U_s$ of $N$ such that $T|_{U_i}$ is a homeomorphism of $U_i$ onto $T(U_i)$ for each $i = 1,\dots,s$. By an $r$-neighbourhood $\omega_r(x)$ of $x$ in $M$ we mean any one of the sets $T(U_i)$. We say that $\omega_r(x)$ is convex if $\omega_r(x)$ lies on the boundary of its convex hull.

    We shall use the following lemma (see Theorem A in [32]) to prove Theorem 1.3.

    Lemma 4.1. Let $M_0 \subset B_R(0)$ be a locally convex hypersurface with $C^2$ boundary $\partial M_0$. Suppose that on $\partial M_0$ the principal curvatures $\lambda^0_1,\dots,\lambda^0_n$ of $M_0$ satisfy

    $$C_0^{-1} \le \lambda^0_i \le C_0, \qquad i = 1,2,\dots,n,$$

    for some $C_0 > 0$. Then there exist positive constants $r$ and $\alpha$, depending only on $n, C_0, R$ and $M_0$, such that for any point $p \in M_0$, each $r$-neighbourhood $\omega_r(p)$ of $p$ is convex, and there is a closed cone $C_{p,\alpha}$ with vertex $p$ and angle $\alpha$ such that $\omega_r(p)\cap C_{p,\alpha} = \{p\}$.

    Note that for any point $p \in M_0$, if one chooses the axial direction of the cone $C_{p,\alpha}$ as the $x_{n+1}$-axis, then each $\delta$-neighbourhood of $p$ can be represented as a graph

    $$x_{n+1} = u(x), \qquad |x| \le \delta,$$

    for any $\delta < r\sin(\alpha/2)$. The cone condition also implies

    $$|Du(x)| \le C, \qquad |x| < \delta,$$

    where $C > 0$ depends only on $\alpha$. Lemma 4.1 holds not just for $M_0$, but also for a family of locally convex hypersurfaces, with uniform $r$ and $\alpha$.

    For $2 \le k \le n$, denote

    $$f^{(k)}(\lambda) = \frac{\sigma_k}{\sigma_{k-1}}(\lambda) - \sum_{l=0}^{k-2}\alpha_l\frac{\sigma_l}{\sigma_{k-1}}(\lambda),$$

    where the $\alpha_l$'s are positive constants. With the aid of Lemma 4.1, we use the Perron method to obtain a viscosity solution of the Plateau problem for the curvature function $f^{(n)}$, using the following lemma.

    Lemma 4.2. Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ with Lipschitz boundary, and let $\phi \in C^{0,1}(\bar\Omega)$ be a $k$-convex viscosity subsolution of

    $$f^{(k)}(\lambda) = \frac{\sigma_k}{\sigma_{k-1}}(\lambda) - \sum_{l=0}^{k-2}\alpha_l\frac{\sigma_l}{\sigma_{k-1}}(\lambda) = c \quad \text{in } \Omega, \tag{4.1}$$

    where $\alpha_l > 0$ and $c \ge 0$ are constants. Then there is a viscosity solution $u$ of (4.1) such that $u = \phi$ on $\partial\Omega$.

    Proof. The proof uses the well-known Perron method. Let $\Psi$ denote the set of $k$-convex subsolutions $v$ of (4.1) with $v = \phi$ on $\partial\Omega$. Then $\Psi$ is not empty, and the required solution $u$ is given by

    $$u(x) = \sup\{v(x) : v \in \Psi\}.$$

    The argument is standard. The key ingredient that needs to be mentioned is the solvability of the Dirichlet problem

    $$f^{(k)}(\lambda) = c \quad \text{in } B_r, \qquad u = u_0 \quad \text{on } \partial B_r, \tag{4.2}$$

    in small enough balls $B_r \subset \mathbb{R}^n$, whenever $u_0$ is a Lipschitz viscosity subsolution of (4.2). This is a consequence of Theorem 6.2 in [31], with slight modification.

    Using Lemma 4.2 and the argument of [32], we conclude that there is a locally convex hypersurface $M$ with boundary $\Sigma$ which satisfies the equation $f^{(n)}(\lambda) = c$ in the viscosity sense; that is, for any point $p \in M$, if $M$ is locally represented as the graph of a convex function $u$ (by Lemma 4.1), then $u$ is a viscosity solution of $f^{(n)}(\lambda) = c$.

    Next we discuss the regularity of $M$. The interior regularity follows in the same way as in [29].

    Boundary regularity

    The boundary regularity of $M$ is a local property. The boundary estimates we need are contained in [19,21]; however, they cannot be applied directly to $M$. Since we work in a neighbourhood of a boundary point $p_0 \in \partial M$, which we may take to be the origin, we may assume that for a smooth bounded domain $\Omega \subset \mathbb{R}^n$ with $0 \in \partial\Omega$ and small enough $\rho > 0$ we have

    $$M\cap(B_\rho\times\mathbb{R}) = \mathrm{graph}\,u, \qquad M_0\cap(B_\rho\times\mathbb{R}) = \mathrm{graph}\,u_0,$$

    where, with $\Omega_\rho := \Omega\cap B_\rho$, the functions $u \in C^\infty(\Omega_\rho)\cap C^{0,1}(\bar\Omega_\rho)$ and $u_0 \in C^\infty(\bar\Omega_\rho)$ are $k$-convex solutions of

    $$f^{(k)}[u] = c \quad \text{in } \Omega_\rho, \qquad f^{(k)}[u_0] \ge c \quad \text{in } \Omega_\rho,$$

    with

    $$u \ge u_0 \quad \text{in } \Omega_\rho, \qquad u = u_0 \quad \text{on } \partial\Omega\cap B_\rho.$$

    We may choose the coordinate system in $\mathbb{R}^n$ in such a way that $\Omega$ is uniformly convex and, moreover, so that for some $\epsilon_0 > 0$ we have

    $$\frac{\sigma_{k-1}}{\sigma_{k-2}}(\kappa) \ge \epsilon_0 > 0 \tag{4.3}$$

    on $\partial\Omega\cap B_\rho$, where $\kappa = (\kappa_1,\dots,\kappa_{n-1})$ denotes the vector of principal curvatures of $\partial\Omega$. We recall that the principal curvatures of $\mathrm{graph}\,u$ are the eigenvalues of the matrix

    $$\left(I - \frac{Du\otimes Du}{1+|Du|^2}\right)\frac{D^2u}{\sqrt{1+|Du|^2}}.$$
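    For $n = 1$ this matrix reduces to the scalar $u''/(1+u'^2)^{3/2}$, the curvature of a plane curve, which gives a quick consistency check: the circle graph $u(x) = \sqrt{R^2 - x^2}$ has curvature of magnitude $1/R$. A small sketch:

```python
import math

R, x = 2.0, 0.6
up = -x / math.sqrt(R * R - x * x)          # u'(x) for u = sqrt(R^2 - x^2)
upp = -R * R / (R * R - x * x) ** 1.5       # u''(x)

w = 1.0 + up * up
# n = 1 version of (I - Du Du^T / (1+|Du|^2)) D^2u / sqrt(1+|Du|^2)
kappa = (1.0 - up * up / w) * upp / math.sqrt(w)

assert abs(kappa - upp / w ** 1.5) < 1e-12   # reduces to u'' / (1+u'^2)^{3/2}
assert abs(abs(kappa) - 1.0 / R) < 1e-12     # circle of radius R
```

    The sign of $\kappa$ here is negative because the upper semicircle is concave with respect to the upward normal convention.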

    We denote by $\sigma_k(p,r)$ the $k$-th elementary symmetric function of the eigenvalues of the matrix

    $$\left(I - \frac{p\otimes p}{1+|p|^2}\right)r, \qquad p = (p_1,\dots,p_n), \quad r = (r_{ij})_{n\times n}.$$

    Let $f^{(k)}(p,r) = \frac{\sigma_k}{\sigma_{k-1}}(p,r) - \sum_{l=0}^{k-2}\alpha_l(1+|p|^2)^{\frac{k-l}{2}}\frac{\sigma_l}{\sigma_{k-1}}(p,r)$, and let $\lambda(r)$ denote the vector of eigenvalues of $r$. For any $p \in \mathbb{R}^n$ and symmetric matrices $r, s$ with $\lambda(r), \lambda(s) \in \Gamma_k$, we have

    $$\sum_{i,j}\frac{\partial f^{(k)}}{\partial r_{ij}}(p,r)\,s_{ij} \ge f^{(k)}(p,s) + \sum_{l=0}^{k-2}(k-l)\,\alpha_l(1+|p|^2)^{\frac{k-l}{2}}\frac{\sigma_l}{\sigma_{k-1}}(p,r). \tag{4.4}$$

    For later purposes we note the simple estimate, valid if $r \ge 0$,

    $$\frac{1}{1+|p|^2}\,\sigma_k(0,r) \le \sigma_k(p,r) \le \sigma_k(0,r),$$

    and the development

    $$\sigma_k(p,r) = \frac{1+|\tilde p|^2}{1+|p|^2}\,r_{nn}\,\sigma_{k-1}(\tilde p,\tilde r) + O\big((|r_{st}|^k)_{(s,t)\neq(n,n)}\big),$$

    where $p = (p_1,\dots,p_n) \in \mathbb{R}^n$, $r = (r_{ij})_{n\times n}$, $\tilde p = (p_1,\dots,p_{n-1}) \in \mathbb{R}^{n-1}$, and $\tilde r = (r_{ij})_{i,j=1,\dots,n-1}$.

    We suppose that $\partial\Omega$ is the graph of $\omega: B^{n-1}_\rho(0) \subset \mathbb{R}^{n-1} \to \mathbb{R}$ and that $u(\tilde x, \omega(\tilde x)) = \varphi(\tilde x)$. Furthermore, $\omega(0) = 0$, $D\omega(0) = 0$, $D\varphi(0) = 0$, and $\omega$ is a strictly convex function of $\tilde x$. The curvature equation is equivalent to

    $$f^{(k)}(Du, D^2u) = c\sqrt{1+|Du|^2}, \tag{4.5}$$

    defined in some domain $\Omega \subset \mathbb{R}^n$. We have the following boundary estimates for the second derivatives of $u$.

    Lemma 4.3. Let $u \in C^3(\bar\Omega)$ be a $k$-convex solution of (4.5), and assume (4.3) holds with $\epsilon_0 > 0$. Then the estimate

    $$|D^2u(0)| \le C\big(n, k, \alpha_l, c, \epsilon_0, \|\omega\|_{C^3}, \|\varphi\|_{C^4}, \|u\|_{C^1}, \lambda_{\min}(D^2\omega(0))\big) \tag{4.6}$$

    holds, where $\lambda_{\min}$ denotes the smallest eigenvalue.

    Remark 4.1. On $\partial\Omega$ we have, for $i,j = 1,\dots,n-1$,

    $$u_i + u_n\omega_i = \varphi_i, \qquad u_{ij} + u_{in}\omega_j + u_{nj}\omega_i + u_{nn}\omega_i\omega_j + u_n\omega_{ij} = \varphi_{ij}.$$

    Therefore $|u_{ij}(0)| = |\varphi_{ij}(0) - u_n(0)\omega_{ij}(0)| \le C$. It remains to show that $|u_{in}(0)| \le C$ and $|u_{nn}(0)| \le C$. We follow [19,21] to obtain the mixed second-derivative boundary estimates and the double normal second-derivative boundary estimate.

    Proof. Let

    $$\Omega_{d,\kappa} = \left\{x = (\tilde x, x_n) \in \Omega \;:\; |\tilde x| < d,\ \omega(\tilde x) < x_n < \tilde\omega(\tilde x) + \tfrac{\kappa}{2}d^2\right\},$$

    where $0 < d < \rho$, $\tilde\omega(\tilde x) := \omega(\tilde x) - \frac{\kappa}{2}|\tilde x|^2$, and $\kappa > 0$ is chosen small enough that $\tilde\omega$ is still strictly convex. We decompose $\partial\Omega_{d,\kappa} = \partial_1\Omega_{d,\kappa}\cup\partial_2\Omega_{d,\kappa}\cup\partial_3\Omega_{d,\kappa}$ with

    $$\partial_1\Omega_{d,\kappa} = \{x \in \partial\Omega_{d,\kappa} : x_n = \omega(\tilde x)\}, \quad \partial_2\Omega_{d,\kappa} = \left\{x \in \partial\Omega_{d,\kappa} : x_n = \tilde\omega(\tilde x) + \tfrac{\kappa}{2}d^2\right\}, \quad \partial_3\Omega_{d,\kappa} = \{x \in \partial\Omega_{d,\kappa} : |\tilde x| = d\}.$$

    Our lower barrier function $v$ will be of the form

    $$v(x) = \theta(\tilde x) + h(\rho(x)), \tag{4.7}$$

    where $\theta(\tilde x)$ is an arbitrary $C^2$ function, $h(\rho) = \exp\{B\rho\} - \exp\left\{\frac{\kappa}{2}Bd^2\right\}$ and $\rho(x) = \frac{\kappa}{2}d^2 + \tilde\omega(\tilde x) - x_n$. Denote $F^{ij} = \frac{\partial f^{(k)}(Du, D^2u)}{\partial u_{ij}}$.

    Mixed second derivative boundary estimates

    By (4.4) and Lemma 2.3, we have

    $$F^{ij}v_{ij} \ge f^{(k)}(Du, D^2v) + C,$$

    where $C$ depends only on $n, k$, the $\alpha_l$'s, $c$ and $\|Du\|_{C^0}$. We choose an orthonormal frame $\{b_i\}_{i=1}^n$ with $b_n = \frac{D\rho}{|D\rho|}$ and denote $v_{(s)} = v_{b_s}$. Directly, we have

    $$v_{(s)} = \theta_{(s)} + h'\rho_{(s)} \quad (1 \le s \le n-1); \qquad v_{(n)} = \theta_{(n)} - h'\sqrt{1+|D\tilde\omega|^2}; \qquad v_{(st)} = \theta_{(st)} + h'\tilde\omega_{(st)} \quad \big((s,t)\neq(n,n)\big); \qquad v_{(nn)} = \theta_{(nn)} + h'\tilde\omega_{(nn)} + h''\big(1+|D\tilde\omega|^2\big).$$

    We may choose $d$ small so that $|Du|$ is also small. Note that $|D\tilde\omega|$ is small since we can choose $d, \kappa$ small. Choosing $B$ large enough, we calculate

    $$\begin{aligned}f^{(k)}(Du, D^2v) &= \frac{\sigma_k}{\sigma_{k-1}}(Du, D^2v) - \sum_{l=0}^{k-2}\alpha_l(1+|Du|^2)^{\frac{k-l}{2}}\frac{\sigma_l}{\sigma_{k-1}}(Du, D^2v)\\ &\ge (1-\epsilon)\frac{\sigma_k}{\sigma_{k-1}}(0, D^2v) - 2\sum_{l=0}^{k-2}\alpha_l\frac{\sigma_l}{\sigma_{k-1}}(0, D^2v)\\ &\ge (1-\epsilon)^2h'\,\frac{\sigma_{k-1}}{\sigma_{k-2}}\big(0, \tilde\omega_{(st)}\big) - 2\sum_{l=1}^{k-2}\alpha_l(h')^{\,l-k+1}\frac{\sigma_{l-1}}{\sigma_{k-2}}\big(0, \tilde\omega_{(st)}\big) - o(B),\end{aligned}$$

    where in the last line $1 \le s,t \le n-1$. Finally, we see that for large enough $B$ and small enough $d$ and $\kappa$ the estimate

    $$(1-\delta)h' \le |Dv| \le (1+\delta)h'$$

    is valid for small $\delta$. Therefore

    $$F^{ij}v_{ij} \ge (1-\epsilon)\frac{\sigma_{k-1}}{\sigma_{k-2}}\big(0, \tilde\omega_{(st)}\big)\,|Dv| + C. \tag{4.8}$$

    Let τ be a C2-smooth vector field which is tangential along Ω. Following [19,21] we then introduce the function

    w=1exp(a˜w)b|x|2

    where ˜w=uτ12n1i=1u2s and a,b are positive constants. Since on 1Ωd,κ, u=φ, and

    w|1Ωd,κaφτc|˜x|2,w(0)=0,w|2Ωd,κ3Ωd,κM

    for suitable constants c,M depending on a,b,||u||C1 and ||φ||C1. By differentiation of Eq (4.5), we obtain

    Fijuijp+Fiuip=cˉvp

    where Fi:=f(k)ui and ˉv:=1+|Du|2.

    Fij˜wij=Fijuijpτp+Fij(upjτpi+upiτpj)+Fijτijpupn1s=1Fij(uisujs+usijus)=c(ˉvpτpn1s=1ˉvsus)Fiuipτp+n1s=1Fiuisus+Fij(upjτpi+upiτpj)+Fijτijpupn1s=1Fijuisujs. (4.9)

    By the definition of ˜w, we have

    c(ˉvpτpn1s=1ˉvsus)=cˉv(D˜w,DuHess(τ)(Du,Du)). (4.10)

    Then we compute Fi. Denote bij=δijuiujˉv2 and cij=bipupj. f(k) can be rewritten as

    f(k)=f(k)(cij,ˉv)=σkσk1(cij)k2l=0αlˉvklσlσk1(cij).

    Directly we have

    Fi=f(k)ui=f(k)cpqcpqui+f(k)ˉvˉvui=1ˉv2fiq(k)uqlul1ˉv2fpq(k)uiqup+2ˉv3fpq(k)upululquik2l=0αl(kl)ˉvkl2σlσk1(cij)ui

    where fpq(k):=f(k)cpq. Therefore

    Fiuipτp+n1s=1Fiuisus=(1ˉv2fiq(k)uqlul1ˉv2fpq(k)uiqup+2ˉv3fpq(k)upululquik2l=0αl(kl)ˉvkl2σlσk1(cij)ui)(˜wi+upτpi). (4.11)

    In order to derive the right hand side of (4.11), we use the same coordinate system as [21], which corresponds to the projection of principal curvature directions of the graph of u onto RnΩ. Fixing a point yΩ, we choose a basis of eigenvectors ˆe1,,ˆen of the matrix (cij) at y, corresponding to the eigenvalues λ1,,λn and orthonormal with respect to the inner product given by the matrix I+DuDu. Using a subscript α to denote differentiation with respect to ˆeα, α=1,,n, so that

    uα=ˆeiαui=Du,ˆeα,uαα=λα=ˆeiαˆejαuij.

    Then we obtain

    1ˉv2fiq(k)uqlul(˜wiupτpi)=1ˉv2f(k)λαλαuα(˜ωαHess(τ)(Du,ˆeα))δf(k)λαλ2α+C(δ)f(k)λα˜w2α+C(δ)nα=1f(k)λα.

    The second term of (4.11) can be estimated in the same way as above. As for the third term of (4.11), we calculate as

    |fpq(k)upululq|=|fpq(k)(upqcpq)|C|Du|2.

    Thus

    Fiuipτp+n1s=1Fiuisus2δf(k)λαλ2α+C(δ)f(k)λα˜w2α+C(δ)nα=1f(k)λα+C|˜wiuiτijuiuj|. (4.12)

    Let (ηαi) denote the inverse matrix to (ˆeiα), we write

    usα=ˆeiαuis=λαηαs.

    Furthermore,

    \begin{align*} \sum\limits_{s = 1}^{n-1}\frac{\partial f^{(k)}}{\partial\lambda_\alpha}u_{s\alpha}^2 = \frac{\partial f^{(k)}}{\partial\lambda_\alpha}\lambda_\alpha^2\sum\limits_{s = 1}^{n-1}(\eta^\alpha_s)^2. \end{align*}

    Now we reason similarly to [21]. If for all $\alpha = 1, \dots, n$ we have

    \begin{align} \sum\limits_{s = 1}^{n-1}(\eta^\alpha_s)^2\geq\epsilon > 0 \end{align} (4.13)

    where $\epsilon$ is a small positive number, then we clearly have

    \begin{align} \sum\limits_{s = 1}^{n-1}\frac{\partial f^{(k)}}{\partial\lambda_\alpha}u_{s\alpha}^2\geq\epsilon\frac{\partial f^{(k)}}{\partial\lambda_\alpha}\lambda_\alpha^2. \end{align} (4.14)

    On the other hand, if (4.13) is not true, then

    \begin{align*} \sum\limits_{s = 1}^{n-1}(\eta^\gamma_s)^2 < \epsilon \end{align*}

    for some γ, which implies

    \begin{align*} \sum\limits_{s = 1}^{n-1}(\eta^\alpha_s)^2\geq\delta_0 > 0 \end{align*}

    for all $\alpha\neq\gamma$. Hence

    \begin{align} \sum\limits_{s = 1}^{n-1}\frac{\partial f^{(k)}}{\partial\lambda_\alpha}u_{s\alpha}^2\geq\delta_0\sum\limits_{\alpha\neq\gamma}\frac{\partial f^{(k)}}{\partial\lambda_\alpha}\lambda_\alpha^2. \end{align} (4.15)

    Then we use Theorems 3 and 4 in [22] to deduce that

    \begin{align*} \sum\limits_{\alpha\neq\gamma}\Big(\frac{\sigma_k}{\sigma_{k-1}}\Big)_{,\alpha}\lambda_\alpha^2&\geq\frac{1}{C(n,k)}\sum\limits_{\alpha}\Big(\frac{\sigma_k}{\sigma_{k-1}}\Big)_{,\alpha}\lambda_\alpha^2,\\ \sum\limits_{\alpha\neq\gamma}\Big(-\frac{\sigma_l}{\sigma_{k-1}}\Big)_{,\alpha}\lambda_\alpha^2&\geq\frac{1}{C(n,k,l)}\sum\limits_{\alpha}\Big(-\frac{\sigma_l}{\sigma_{k-1}}\Big)_{,\alpha}\lambda_\alpha^2,\\ \sum\limits_{\alpha\neq\gamma}\Big(-\frac{1}{\sigma_{k-1}}\Big)_{,\alpha}\lambda_\alpha^2&\geq\frac{1}{C(n,k,0)}\sum\limits_{\alpha}\Big(-\frac{1}{\sigma_{k-1}}\Big)_{,\alpha}\lambda_\alpha^2\geq-\frac{1}{C(n,k,0)}\frac{\sigma_1}{\sigma_{k-1}} \end{align*}

    where the subscript "$,\alpha$" denotes differentiation with respect to $\lambda_\alpha$. Therefore,

    \begin{align} \sum\limits_{s = 1}^{n-1}\frac{\partial f^{(k)}}{\partial\lambda_\alpha}u_{s\alpha}^2\geq\delta'\frac{\partial f^{(k)}}{\partial\lambda_\alpha}\lambda_\alpha^2-C. \end{align} (4.16)

    Combining (4.9), (4.10), (4.12), and (4.16), we have

    \begin{align} F^{ij}\tilde w_{ij}\leq C\big|\langle D\tilde w, Du\rangle\big|+CF^{ij}\tilde w_i\tilde w_j+C\sum\limits_{i = 1}^nF^{ii} \end{align} (4.17)

    where we have chosen $\delta\ll\delta'$, so that the term $\frac{\partial f^{(k)}}{\partial\lambda_\alpha}\lambda_\alpha^2$ can be discarded. Note that in (4.17), we have also used the fact that $\sum\limits_{i = 1}^nF^{ii}\geq C_0 > 0$. By choosing $a, b$ large, we conclude that

    \begin{align} F^{ij}w_{ij}\leq C\big|\langle Dw, Du\rangle\big|. \end{align} (4.18)

    From (4.8) and (4.18), by the comparison principle, we have at $0$,

    \begin{align*} u_{\tau n}(0) = \frac{1}{a}w_n(0)\geq\frac{1}{a}v_n(0). \end{align*}

    Since $\tau$ is an arbitrary tangential direction at $0\in\partial\Omega$, replacing $\tau$ by $-\tau$ gives an upper bound for $u_{\tau n}(0)$.

    Double normal second derivative boundary estimate

    We turn to estimate $|u_{nn}(0)|$. The idea is to estimate $u_{nn}$ in a first step at some optimally chosen point $y$, and in a second step to conclude from this the estimate at the given point. We introduce a smooth moving orthonormal frame $\{b_1, \dots, b_n\}$ with $b_n = (-\omega_{\tilde x}, 1)/\sqrt{1+|\omega_{\tilde x}|^2}$ being the upward normal to $\partial\Omega$. Here $\omega_{\tilde x}$ is the gradient of $\omega(\tilde x)$. Let

    \begin{align*} G = \frac{\sigma_{k-1}}{\sigma_{k-2}}\big(u_{(\tilde x)}, u_{(\tilde x\tilde x)}\big)-\sum\limits_{l = 1}^{k-2}\alpha_l\sqrt{1+|Du|^2}^{\,k-l}\frac{\sigma_{l-1}}{\sigma_{k-2}}\big(u_{(\tilde x)}, u_{(\tilde x\tilde x)}\big)-c\sqrt{1+|Du|^2} \end{align*}

    on $\partial\Omega$, where $u_{(\tilde x)} = (u_{b_1}, \dots, u_{b_{n-1}})$ and $u_{(\tilde x\tilde x)} = \big(\nabla^2_{b_ib_j}u\big)_{1\leq i, j\leq n-1}$. For simplicity, we denote $\tilde p = u_{(\tilde x)}$, $\tilde r = u_{(\tilde x\tilde x)}$, $\bar v = \sqrt{1+|Du|^2}$. First we observe that

    \begin{align*} f^{(k)}(p, r) < \lim\limits_{r_{nn}\to+\infty}f^{(k)}(p, r) = \frac{\sigma_{k-1}}{\sigma_{k-2}}(\tilde p, \tilde r)-\sum\limits_{l = 1}^{k-2}\alpha_l\bar v^{k-l}\frac{\sigma_{l-1}}{\sigma_{k-2}}(\tilde p, \tilde r) \end{align*}
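    The limit above can be seen from the expansion $\sigma_m(\tilde r, r_{nn}) = \sigma_m(\tilde r)+r_{nn}\sigma_{m-1}(\tilde r)$: as $r_{nn}\to+\infty$ the quotient $\sigma_k/\sigma_{k-1}$ tends to $\sigma_{k-1}/\sigma_{k-2}$ of the remaining data, and the $l = 0$ term of $f^{(k)}$ drops out. A quick numerical illustration (the eigenvalue values are arbitrary placeholders):

```python
from itertools import combinations
from math import prod

def sigma(k, lam):
    """Elementary symmetric polynomial sigma_k."""
    if k == 0:
        return 1.0
    return sum(prod(c) for c in combinations(lam, k))

lam_reduced = (1.0, 2.0, 3.0)   # plays the role of the first n-1 eigenvalues
k = 3
target = sigma(k - 1, lam_reduced) / sigma(k - 2, lam_reduced)  # sigma_2/sigma_1 = 11/6

# let the last eigenvalue blow up and watch the quotient approach the target
for R in (1e2, 1e4, 1e6):
    lam = lam_reduced + (R,)
    print(R, sigma(k, lam) / sigma(k - 1, lam), target)
```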

    from which we see that $G > 0$. Hence the function

    \begin{align*} \tilde G = G(x)+\frac{4|\tilde x|^2}{\bar\rho^2}\bar G \end{align*}

    with $\bar G = \max\{G(x)\, |\, x\in\partial\Omega, |\tilde x| < \rho\}$ and $0 < \bar\rho < \rho$ attains its minimum over $\partial\Omega\cap B_\rho(0)$ at some point $y\in\partial\Omega\cap B_{\bar\rho/2}(0)$. If $|u_{nn}(y)| < C$, then $G(y) > C_1 > 0$, and

    \begin{align*} G(0) = \tilde G(0)\geq\tilde G(y)\geq G(y) > C_1 > 0. \end{align*}

    Therefore $G(0)$ is strictly positive and we have $|u_{nn}(0)| < +\infty$.

    To check that $|u_{nn}(y)| < +\infty$, we proceed in essentially the same way as in the mixed second derivative estimates. The point $y$ plays the role of the origin and the function $\tilde w$ is defined as

    \begin{align*} \tilde w(x) = \big(u_n(x)-u_n(y)\big)-K|Du(x)-Du(y)|^2 \end{align*}

    where $K$ is a sufficiently large constant. In order to apply the comparison principle, we need to obtain that

    \begin{align*} \tilde w(x)\leq\tilde\theta(\tilde x)+C|\tilde x-\tilde y|^2\quad(x\in\partial\Omega\cap B_\rho(0)) \end{align*}

    where $\tilde\theta$ is some $C^2$-smooth function. We reason similarly to Lemma 2.5 in [19]. The choice of the moving frame gives

    \begin{align*} u_{(s)} = \varphi_{(s)}, \quad u_{(st)} = \varphi_{(st)}-u_n\omega_{(st)}\quad(s, t = 1, \dots, n-1). \end{align*}

    By the concavity of $\frac{\sigma_{k-1}}{\sigma_{k-2}}(\tilde p, \tilde r)$ and of $-\frac{\sigma_l}{\sigma_{k-2}}(\tilde p, \tilde r)$ $(l = 0, \dots, k-3)$ in $\tilde r$, and the convexity of $\sqrt{1+|\tilde p|^2}$ in $\tilde p$, we compute

    \begin{align} 0\leq\tilde G(x)-\tilde G(y)\leq g(y, x)\big(u_n(y)-u_n(x)\big)+h(y, x) \end{align} (4.19)

    with

    \begin{align*} g(y, x) = &\Big(\frac{\sigma_{k-1}}{\sigma_{k-2}}\Big)^{st}\big(\tilde p(x), \tilde r(y)\big)\omega_{(st)}(x)-\sum\limits_{l = 1}^{k-2}\alpha_l\bar v^{k-l}(x)\Big(\frac{\sigma_{l-1}}{\sigma_{k-2}}\Big)^{st}\big(\tilde p(x), \tilde r(y)\big)\omega_{(st)}(x)\\ &+\Big(-\sum\limits_{l = 1}^{k-2}\alpha_l(k-l)\bar v^{k-l-2}(y)\frac{\sigma_{l-1}}{\sigma_{k-2}}\big(\tilde p(y), \tilde r(y)\big)+c\bar v^{-1}(y)\Big)\big(u_n(y)-u_i(y)\omega_i(x)\big) \end{align*}

    and

    \begin{align*} h(y, x) = &\; \frac{\sigma_{k-1}}{\sigma_{k-2}}\big(\varphi_{(\tilde x)}(x), \tilde r(y)\big)-\frac{\sigma_{k-1}}{\sigma_{k-2}}\big(\varphi_{(\tilde x)}(y), \tilde r(y)\big)+\Big(\frac{\sigma_{k-1}}{\sigma_{k-2}}\Big)^{st}\big(\varphi_{(\tilde x)}(x), \tilde r(y)\big)\Psi_{st}(y, x)\\ &-\sum\limits_{l = 1}^{k-2}\alpha_l\bar v^{k-l}\Big(\frac{\sigma_{l-1}}{\sigma_{k-2}}\Big)^{st}\big(\varphi_{(\tilde x)}(x), \tilde r(y)\big)\Psi_{st}(y, x)\\ &+\sum\limits_{l = 1}^{k-2}\alpha_l\bar v^{k-l}\Big(\frac{\sigma_{l-1}}{\sigma_{k-2}}\big(\varphi_{(\tilde x)}(y), \tilde r(y)\big)-\frac{\sigma_{l-1}}{\sigma_{k-2}}\big(\varphi_{(\tilde x)}(x), \tilde r(y)\big)\Big)\\ &+\Big[c\bar v^{-1}-\sum\limits_{l = 1}^{k-2}\alpha_l(k-l)\bar v^{k-l-2}\frac{\sigma_{l-1}}{\sigma_{k-2}}\big(\tilde p(y), \tilde r(y)\big)\Big]A+\frac{4\bar G}{\bar\rho^2}\big(|\tilde x|^2-|\tilde y|^2\big) \end{align*}

    where $\Psi_{st}(y, x) = \varphi_{(st)}(x)-\varphi_{(st)}(y)-u_n(y)\big(\omega_{(st)}(x)-\omega_{(st)}(y)\big)$ and $A = \big[\varphi_i(y)-\varphi_i(x)-u_n(y)(\omega_i(y)-\omega_i(x))\big]u_i(y)$. We may take $\tilde\theta(\tilde x) = \frac{h}{g}(y, x)$ provided we can show that $g(y, x) > 0$. This is true since $|Du|$ is small, $\big(\frac{\sigma_{l-1}}{\sigma_{k-2}}\big)^{st}$ is positive semi-definite, and condition (4.3) holds. This completes the proof of the boundary regularity.

    The authors were supported by NSFC, grant nos. 12031017 and 11971424.

    The authors declare no conflict of interest.

    In this appendix, we sketch the proof of Lemma 3.3 for the reader's convenience. For the original proof, see [36].

    Without loss of generality, we assume $\Omega = B_r(0)$. Let $\rho = r^2-|x|^2$, $M = \operatorname{osc}_{B_r}u$, $\tilde g(u) = \frac{1}{M}\big(M+u-\inf_{B_r}u\big)$, and $\phi(x, \xi) = \rho(x)\tilde g(u)\log u_\xi(x)$. This auxiliary function $\phi$ comes from [34]. Suppose $\phi$ attains its maximum at $(x_0, e_1)$. Furthermore, by rotating $e_2, \dots, e_n$, we can assume that $\{u_{ij}(x_0)\}_{2\leq i, j\leq n}$ is diagonal. Thus $\varphi(x) = \log\rho(x)+\log\tilde g(u(x))+\log\log u_1$ also attains a local maximum at $x_0\in B_r(0)$. At $x_0$, we have

    \begin{align} 0 = \varphi_i = \frac{\rho_i}{\rho}+\frac{\tilde g_i}{\tilde g}+\frac{u_{1i}}{u_1\log u_1}, \end{align} (A.1)
    \begin{align} 0\geq\varphi_{ij} = \frac{\rho_{ij}}{\rho}-\frac{\rho_i\rho_j}{\rho^2}+\frac{\tilde g_{ij}}{\tilde g}-\frac{\tilde g_i\tilde g_j}{\tilde g^2}+\frac{u_{1ij}}{u_1\log u_1}-\Big(1+\frac{1}{\log u_1}\Big)\frac{u_{1i}u_{1j}}{u_1^2\log u_1}. \end{align} (A.2)

    Only in this proof we denote $F^{ij}: = \frac{\partial F}{\partial u_{ij}}$; the matrix $(F^{ij})$ is positive definite. Taking the trace of (A.2) against $F^{ij}$ and using (A.1), we have

    \begin{align} 0\geq F^{ij}\varphi_{ij} = F^{ij}\Big(\frac{\rho_{ij}}{\rho}+\frac{2\rho_i}{\rho}\frac{\tilde g_j}{\tilde g}+\frac{\tilde g_{ij}}{\tilde g}\Big)+F^{ij}\Big(\frac{u_{1ij}}{u_1\log u_1}-\Big(1+\frac{2}{\log u_1}\Big)\frac{u_{1i}u_{1j}}{u_1^2\log u_1}\Big): = \mathcal{A}+\mathcal{B}. \end{align} (A.3)

    It is well-known that the principal curvatures of the graph of $u$ are the eigenvalues of the matrix $A = (a_{ij})_{n\times n}$:

    \begin{align*} a_{ij} = \frac{1}{W}\Big(u_{ij}-\frac{u_iu_lu_{lj}}{W(W+1)}-\frac{u_ju_lu_{li}}{W(W+1)}+\frac{u_iu_ju_pu_qu_{pq}}{W^2(W+1)^2}\Big) \end{align*}

    where $W = \sqrt{1+|Du|^2}$. Next we compute $F^{ij}$ at $x_0$. Since $\phi$ is maximized over $\xi$ at $e_1$, we have $Du(x_0) = (u_1, 0, \dots, 0)$, and hence

    \begin{align*} \frac{\partial a_{ij}}{\partial u_{ij}} = \begin{cases} \frac{1}{W^3} & i = j = 1, \\ \frac{1}{W^2} & i = 1, j\geq 2\ \text{or}\ i\geq 2, j = 1, \\ \frac{1}{W} & i\geq 2, j\geq 2. \end{cases} \end{align*}

    For two different index pairs $\{p, q\}\neq\{i, j\}$, $\frac{\partial a_{pq}}{\partial u_{ij}} = 0$. Therefore

    \begin{align*} F^{ij} = \frac{\partial F}{\partial a_{ij}}\frac{\partial a_{ij}}{\partial u_{ij}} = \begin{cases} \frac{1}{W^3}\frac{\partial F}{\partial a_{11}} & i = j = 1, \\ \frac{1}{W^2}\frac{\partial F}{\partial a_{ij}} & i = 1, j\geq 2\ \text{or}\ i\geq 2, j = 1, \\ \frac{1}{W}\frac{\partial F}{\partial a_{ij}} & i\geq 2, j\geq 2. \end{cases} \end{align*}
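    The weights $1/W^3$, $1/W^2$, $1/W$ above can be checked directly from the matrix $A = (a_{ij})$: at a point where $Du = (u_1, 0, \dots, 0)$ one finds $a_{11} = u_{11}/W^3$, $a_{1i} = u_{1i}/W^2$ and $a_{ij} = u_{ij}/W$ for $i, j\geq 2$. A small numerical sketch (the function name is hypothetical, the test values arbitrary):

```python
import numpy as np

def curvature_matrix(Du, D2u):
    """The matrix A = (a_ij) from the text, whose eigenvalues are the
    principal curvatures of the graph of u."""
    W = np.sqrt(1.0 + Du @ Du)
    c = np.outer(Du, Du @ D2u) / (W * (W + 1.0))   # u_i u_l u_{lj} / (W(W+1))
    quad = Du @ D2u @ Du                            # u_p u_q u_{pq}
    return (D2u - c - c.T + np.outer(Du, Du) * quad / (W * (W + 1.0)) ** 2) / W

u1 = 0.7                                 # arbitrary test value
W = np.sqrt(1.0 + u1 ** 2)
Du = np.array([u1, 0.0, 0.0])            # gradient aligned with e_1, as at x_0
D2u = np.array([[0.5, 0.3, 0.1],
                [0.3, -0.2, 0.4],
                [0.1, 0.4, 0.9]])
A = curvature_matrix(Du, D2u)

# recover the stated weights: a_11 = u_11/W^3, a_12 = u_12/W^2, a_23 = u_23/W
print(A[0, 0] * W ** 3, A[0, 1] * W ** 2, A[1, 2] * W)
```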

    Direct computation shows that

    \begin{align*} \mathcal{A} = -\frac{2}{\rho}\Big(\sum\limits_{i = 1}^nF^{ii}\Big)+\frac{1}{M\tilde g}\Big(\sum\limits_{i, j = 1}^nF^{ij}u_{ij}\Big)+\frac{2u_1}{M\rho\tilde g}\sum\limits_{i = 1}^nF^{1i}\rho_i, \end{align*}
    \begin{align*} \sum\limits_{i = 1}^nF^{ii} = \frac{1}{W^3}\frac{\partial F}{\partial a_{11}}+\frac{1}{W}\sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{ii}}\leq\frac{1}{W}\sum\limits_{i = 1}^n\frac{\partial F}{\partial a_{ii}}, \end{align*}
    \begin{align*} \sum\limits_{i, j = 1}^nF^{ij}u_{ij} = \sum\limits_{i, j = 1}^n\frac{\partial F}{\partial a_{ij}}a_{ij} = \frac{\sigma_k}{\sigma_{k-1}}(A)-\sum\limits_{l = 0}^{k-2}\alpha_l(l-k+1)\frac{\sigma_l}{\sigma_{k-1}}(A). \end{align*}
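    The last identity is Euler's relation for homogeneous functions: $a_{ij}$ is linear in $D^2u$, and $\sigma_l/\sigma_{k-1}$ is homogeneous of degree $l-k+1$ in the eigenvalues. A numerical check of this homogeneity (helper names hypothetical, using $\partial\sigma_m/\partial\lambda_i = \sigma_{m-1}(\lambda|i)$):

```python
from itertools import combinations
from math import prod

def sigma(m, lam):
    """Elementary symmetric polynomial sigma_m (0 outside the valid range)."""
    if m < 0 or m > len(lam):
        return 0.0
    if m == 0:
        return 1.0
    return sum(prod(c) for c in combinations(lam, m))

def euler_sum(l, k, lam):
    """sum_i lam_i * d(sigma_l/sigma_{k-1})/d lam_i, with
    d sigma_m / d lam_i = sigma_{m-1}(lam | i)."""
    s = sigma(k - 1, lam)
    total = 0.0
    for i in range(len(lam)):
        rest = lam[:i] + lam[i + 1:]
        total += lam[i] * (sigma(l - 1, rest) * s - sigma(l, lam) * sigma(k - 2, rest)) / s ** 2
    return total

lam, k = (1.0, 2.0, 4.0), 2
# sigma_l/sigma_{k-1} is homogeneous of degree l-k+1, so the Euler sum
# equals (l-k+1)*sigma_l/sigma_{k-1}; for l = k it is sigma_k/sigma_{k-1} itself
for l in (0, 1, 2):
    print(l, euler_sum(l, k, lam), (l - k + 1) * sigma(l, lam) / sigma(k - 1, lam))
```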

    By (A.1), if $u_1$ is sufficiently large, then $u_{11} < 0$ and

    \begin{align*} \frac{2u_1}{M\rho\tilde g}\sum\limits_{i = 1}^nF^{1i}\rho_i = &\; \frac{2u_1}{M\rho\tilde g}\Big(F^{11}\rho_1+\sum\limits_{i = 2}^nF^{1i}\rho_i\Big)\\ = &\; \frac{2u_1}{M\rho\tilde g}\Big(\frac{\partial F}{\partial a_{11}}\frac{\rho_1}{W^3}-\sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{1i}}\frac{u_{1i}\rho}{W^2u_1\log u_1}\Big)\\ \geq &-\frac{4ru_1}{MW^3\rho\tilde g}\frac{\partial F}{\partial a_{11}}-\frac{2}{M\tilde g\log u_1}\sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{1i}}a_{1i}\geq-\frac{4ru_1}{MW^3\rho\tilde g}\sum\limits_{i = 1}^n\frac{\partial F}{\partial a_{ii}} \end{align*}

    where we have used (3.1). Therefore

    \begin{align} \mathcal{A}\geq\Big(-\frac{2}{W\rho}-\frac{Cu_1}{W^3}\Big)\Big(\sum\limits_{i = 1}^n\frac{\partial F}{\partial a_{ii}}\Big)+\frac{1}{M\tilde g}\Big(-g+\sum\limits_{l = 0}^{k-2}\alpha_l(k-l)\frac{\sigma_l}{\sigma_{k-1}}(A)\Big). \end{align} (A.4)

    In the following we turn to estimate B. By the definition of aij, we have at x0,

    \begin{align*} \frac{\partial a_{11}}{\partial x_1} = \frac{1}{W^3}u_{111}-\frac{3u_1}{W^5}u_{11}^2-\frac{2u_1}{W^3(W+1)}\sum\limits_{k = 2}^nu_{k1}^2, \end{align*}

    for $i\geq 2$,

    \begin{align*} \frac{\partial a_{1i}}{\partial x_1} = \frac{1}{W^2}u_{1i1}-\frac{2u_1}{W^4}u_{11}u_{1i}-\frac{u_1}{W^2(W+1)}u_{1i}u_{ii}-\frac{u_1}{W^3(W+1)}u_{11}u_{1i}, \end{align*}
    \begin{align*} \frac{\partial a_{ii}}{\partial x_1} = \frac{1}{W}u_{ii1}-\frac{u_1}{W^3}u_{11}u_{ii}-\frac{2u_1}{W^2(W+1)}u_{1i}^2, \end{align*}

    for $i\geq 2$, $j\geq 2$, $i\neq j$,

    \begin{align*} \frac{\partial a_{ij}}{\partial x_1} = \frac{1}{W}u_{ij1}-\frac{2u_1u_{i1}u_{j1}}{W^2(W+1)}. \end{align*}

    Taking derivatives with respect to x1 on both sides of (1.4), we have

    \begin{align*} \sum\limits_{i, j = 1}^n\frac{\partial F}{\partial a_{ij}}\frac{\partial a_{ij}}{\partial x_1}-\sum\limits_{l = 0}^{k-2}(\alpha_l)_{,1}\frac{\sigma_l}{\sigma_{k-1}} = g_{,1}. \end{align*}

    For the first term of B, we calculate as

    \begin{align*} \sum\limits_{i, j = 1}^n\frac{\partial F}{\partial u_{ij}}\frac{u_{ij1}}{u_1\log u_1} = \frac{1}{u_1\log u_1}\Big(\frac{\partial F}{\partial a_{11}}\frac{u_{111}}{W^3}+2\sum\limits_{i\geq 2}\frac{\partial F}{\partial a_{1i}}\frac{u_{1i1}}{W^2}+\sum\limits_{i, j\geq 2}\frac{\partial F}{\partial a_{ij}}\frac{u_{ij1}}{W}\Big), \end{align*}
    \begin{align*} F^{ij}u_{ij1} = &\; \frac{\partial F}{\partial a_{11}}\Big(\frac{\partial a_{11}}{\partial x_1}+\frac{3u_1}{W^5}u_{11}^2+\frac{2u_1}{W^3(W+1)}\sum\limits_{k\geq 2}u_{k1}^2\Big)\\ &+2\sum\limits_{i\geq 2}\frac{\partial F}{\partial a_{1i}}\Big(\frac{\partial a_{1i}}{\partial x_1}+\frac{2u_1}{W^4}u_{11}u_{1i}+\frac{u_1u_{1i}u_{ii}}{W^2(W+1)}+\frac{u_1u_{11}u_{1i}}{W^3(W+1)}\Big)\\ &+\sum\limits_{i\neq j\geq 2}\frac{\partial F}{\partial a_{ij}}\Big(\frac{\partial a_{ij}}{\partial x_1}+\frac{2u_1u_{i1}u_{j1}}{W^2(W+1)}\Big)+\sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{ii}}\Big(\frac{\partial a_{ii}}{\partial x_1}+\frac{u_1u_{11}u_{ii}}{W^3}+\frac{2u_1u_{1i}^2}{W^2(W+1)}\Big)\\ = &\; g_{,1}+\sum\limits_{l = 0}^{k-2}(\alpha_l)_{,1}\frac{\sigma_l}{\sigma_{k-1}}(A)+\frac{u_1u_{11}}{W^2}\Big(-g+\sum\limits_{l = 0}^{k-2}(k-l)\alpha_l\frac{\sigma_l}{\sigma_{k-1}}(A)\Big)\\ &+\frac{\partial F}{\partial a_{11}}\Big(\frac{2u_1}{W^5}u_{11}^2+\frac{2u_1}{W^3(W+1)}\sum\limits_{k\geq 2}u_{k1}^2\Big)\\ &+2\sum\limits_{i\geq 2}\frac{\partial F}{\partial a_{1i}}\Big(\frac{u_1}{W^4}u_{11}u_{1i}+\frac{u_1u_{1i}u_{ii}}{W^2(W+1)}+\frac{u_1u_{11}u_{1i}}{W^3(W+1)}\Big)\\ &+2\sum\limits_{i\neq j\geq 2}\frac{\partial F}{\partial a_{ij}}\frac{u_1u_{i1}u_{j1}}{W^2(W+1)}+2\sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{ii}}\frac{u_1u_{1i}^2}{W^2(W+1)}. \end{align*}

    For the second term of \mathcal{B} , we calculate

    \sum\limits_{i,j = 1}^n\frac{\partial F}{\partial u_{ij}}u_{1i}u_{1j} = \frac{\partial F}{\partial a_{11}}\frac{u_{11}^2}{W^3}+2\sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{1i}}\frac{u_{11}u_{1i}}{W^2}+\sum\limits_{2\leq i,j\leq n}\frac{\partial F}{\partial a_{ij}}\frac{u_{1i}u_{1j}}{W}.

    Therefore

    \begin{align*} \mathcal{B} = &\frac{1}{u_1\log u_1}\Big(-g_{,1}+\sum\limits_{l = 0}^{k-2}(\alpha_l)_{,1}\frac{\sigma_{l}}{\sigma_{k-1}}(A)\Big)\\ &+\frac{u_{11}}{W^2\log u_1}\Big(-g+\sum\limits_{l = 0}^{k-2}(k-l)\alpha_l\frac{\sigma_l}{\sigma_{k-1}}(A)\Big)\\ &+\Big(\frac{2}{W^5\log u_1}-(1+\frac{2}{\log u_1})\frac{1}{u_1^2\log u_1W^3}\Big)\\ &\frac{\partial F}{\partial a_{11}}u_{11}^2+\frac{2}{W^3(W+1)\log u_1}\sum\limits_{k\geq 2} \frac{\partial F}{\partial a_{11}}u_{k1}^2\\ &+\Big(\frac{2}{W^4\log u_1}+\frac{2}{W^3(W+1)\log u_1}\\ &-(1+\frac{2}{\log u_1})\frac{2}{W^2u_1^2\log u_1}\Big)\sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{1i}}u_{11}u_{1i}\\ &+\frac{2}{W^2(W+1)\log u_1}\sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{1i}}u_{1i}u_{ii}\\ &+\Big(\frac{2}{W^2(W+1)\log u_1}-\frac{1+2/\log u_1}{Wu_1^2\log u_1}\Big)\times\\ &\sum\limits_{2\leq i,j\leq n}\frac{\partial F}{\partial a_{ij}}u_{1i}u_{1j}. \end{align*}

    Since \{\frac{\partial F}{\partial a_{ij}}\}_{1\leq i, j\leq n} is positive definite, so is \{\frac{\partial F}{\partial a_{ij}}\}_{2\leq i, j\leq n} . W = \sqrt{1+u_1^2}\approx u_1 . Therefore

    \begin{align} \mathcal{B}\geq&\frac{1}{u_1\log u_1}\Big(-g_{,1}+\sum\limits_{l = 0}^{k-2}(\alpha_l)_{,1}\frac{\sigma_{l}}{\sigma_{k-1}}(A)\Big)\\&+\frac{u_{11}}{W^2\log u_1}\Big(-g+\sum\limits_{l = 0}^{k-2}(k-l)\alpha_l\frac{\sigma_l}{\sigma_{k-1}}(A)\Big)\\ &+\frac{1-\delta}{W^5\log u_1}\frac{\partial F}{\partial a_{11}}u_{11}^2+\frac{2}{W^2(W+1)\log u_1}\big(\sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{1i}}u_{1i}u_{ii}\big) \end{align} (A.5)

    where \delta > 0 is a small constant, depending only on u_1 . By (A.3), (A.4), (A.5), we have

    \begin{align} 0\geq& (-\frac{2}{W\rho}-\frac{Cu_1}{W^3})(\sum\limits_{i = 1}^n\frac{\partial F}{\partial a_{ii}})+(\frac{1}{M\tilde g}+\frac{u_{11}}{W^2\log u_1})(-g+\sum\limits_{l = 0}^{k-2}\alpha_l(k-l)\frac{\sigma_l}{\sigma_{k-1}})\\ &+\frac{1}{u_1\log u_1}(-g_{,1}+\sum\limits_{l = 0}^{k-2}(\alpha_l)_{,1}\frac{\sigma_l}{\sigma_{k-1}})+\frac{1-\delta}{W^5\log u_1}\frac{\partial F}{\partial a_{11}}u_{11}^2\\ &+\frac{2}{W^2(W+1)\log u_1}\big(\sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{1i}}u_{1i}u_{ii}\big). \end{align} (A.6)

    Since we require that g_u\leq 0 and (\alpha_l)_u\geq 0 ,

    -g_{,1}+\sum\limits_{l = 0}^{k-2}(\alpha_l)_{,1}\frac{\sigma_l}{\sigma_{k-1}}\geq-\frac{\partial g}{\partial x_1}+\sum\limits_{l = 0}^{k-2}\frac{\partial \alpha_l}{\partial x_1}\frac{\sigma_l}{\sigma_{k-1}}.

    We claim that

    \begin{equation} \sum\limits_{i = 2}^n\frac{\partial F}{\partial a_{1i}}u_{1i}u_{ii}\geq -C\frac{u_1^2\log^2 u_1}{W}\frac{|D\rho|^2}{\rho^2}\frac{\partial F}{\partial a_{11}}. \end{equation} (A.7)

    We defer the proof of (A.7). By (A.1), we see that the leading term in (A.6) is \frac{1-\delta}{W^5\log u_1}\frac{\partial F}{\partial a_{11}}u_{11}^2\approx\frac{\log u_1}{W}\frac{\partial F}{\partial a_{11}} > 0 . Other terms have order at most O(W^{-1}) , therefore

    \log u_1\leq C.

    The interior gradient estimate is proved after we check (A.7). Let \Upsilon = \{2\leq j\leq n|a_{jj}\geq 0\} . Note that a_{11} < 0 and \lambda(A)\in\Gamma_k .

    \begin{align*} \sum\limits_{i = 2}^{n} \frac{\partial F}{\partial a_{1 i}} u_{1 i} u_{i i} = &-\sum\limits_{i = 2}^{n}\left[\frac{\sigma_{k-2}(A|1i) \sigma_{k-1}(A)-\sigma_{k-3}(A|1 i) \sigma_{k}(A)}{\sigma_{k-1}^{2}(A)}\right.\\ &\left.+\sum\limits_{l = 1}^{k-2} \alpha_{l}\frac{\sigma_{k-3}(A|1 i) \sigma_{l}(A)-\sigma_{l-2}(A|1 i) \sigma_{k-1}(A)}{\sigma_{k-1}^{2}(A)}\right] a_{i 1} u_{1 i} u_{i i} \\ \geq &-\sum\limits_{i \in \Upsilon}\left[\frac{\sigma_{k-2}(A|1 i) \sigma_{k-1}(A)-\sigma_{k-3}(A|1 i) \sigma_{k}(A)}{\sigma_{k-1}^{2}(A)}\right.\\ &\left.+\sum\limits_{l = 1}^{k-2} \alpha_{l}\frac{\sigma_{k-3}(A|1 i) \sigma_{l}(A)-\sigma_{l-2}(A|1 i) \sigma_{k-1}(A)}{\sigma_{k-1}^{2}(A)}\right] a_{i i} \frac{u_{1 i}^{2}}{W} \\ \geq &-\sum\limits_{i \in \Upsilon}\left[\frac{a_{i i} \sigma_{k-2}(A|1 i) \sigma_{k-1}(A)}{\sigma_{k-1}^{2}(A)}+\sum\limits_{l = 1}^{k-2} \alpha_{l}\frac{a_{i i} \sigma_{k-3}(A|1 i) \sigma_{l}(A)}{\sigma_{k-1}^{2}(A)}\right] \frac{u_{1 i}^{2}}{W} \\ \geq &-\sum\limits_{i \in \Upsilon}\left[\frac{C_{n,k}\sigma_{k-1}(A|1) \sigma_{k-1}(A)}{\sigma_{k-1}^{2}(A)}+\sum\limits_{l = 1}^{k-2} \alpha_{l}\frac{C_{n,k}\sigma_{k-2}(A|1) \sigma_{l}(A)}{\sigma_{k-1}^{2}(A)}\right] \frac{u_{1 i}^{2}}{W} \\ \geq &-\left[\frac{C_{n,k}\sigma_{k-1}(A|1) \sigma_{k-1}(A)}{\sigma_{k-1}^{2}(A)}+\sum\limits_{l = 1}^{k-2} \alpha_{l}\frac{C_{n,k}\sigma_{k-2}(A|1) \sigma_{l}(A)}{\sigma_{k-1}^{2}(A)}\right] \sum\limits_{i = 2}^{n} \frac{u_{1 i}^{2}}{W} \\ \geq &-\left[C_{n,k}\frac{\sigma_{k-1}(A|1) \sigma_{k-1}(A)-\sigma_{k-2}(A|1) \sigma_{k}(A)}{\sigma_{k-1}^{2}(A)}\right.\\ &+\left.\sum\limits_{l = 0}^{k-2}\alpha_lC_{n,k}\frac{\sigma_{k-2}(A|1)\sigma_l(A)-\sigma_{l-1}(A)\sigma_{k-1}(A)}{\sigma_{k-1}^2(A)}\right]\sum\limits_{i = 2}^n\frac{u_{1i}^2}{W}\\ \geq &-C(n,k)\sum\limits_{i = 2}^n\frac{u_{1i}^2}{W}\frac{\partial F}{\partial a_{11}}\\ \geq &-C(n,k)\frac{u_1^2\log^2u_1}{W}\frac{|D\rho|^2}{\rho^2}\frac{\partial F}{\partial a_{11}}. \end{align*}
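    The chain of inequalities above repeatedly trades minors $\sigma_m(A|1i)$ against $\sigma_m(A|1)$; the underlying tool is the one-row expansion $\sigma_m(\lambda) = \sigma_m(\lambda|i)+\lambda_i\sigma_{m-1}(\lambda|i)$. A quick check of this identity (the sample eigenvalues are arbitrary):

```python
from itertools import combinations
from math import prod

def sigma(m, lam):
    """Elementary symmetric polynomial sigma_m (0 outside the valid range)."""
    if m < 0 or m > len(lam):
        return 0.0
    if m == 0:
        return 1.0
    return sum(prod(c) for c in combinations(lam, m))

lam = (2.0, -1.0, 3.0, 0.5)
for m in range(1, len(lam) + 1):
    for i in range(len(lam)):
        rest = lam[:i] + lam[i + 1:]
        # sigma_m(lam) = sigma_m(lam|i) + lam_i * sigma_{m-1}(lam|i)
        assert abs(sigma(m, lam) - sigma(m, rest) - lam[i] * sigma(m - 1, rest)) < 1e-12
print("expansion identity verified")
```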

    Thus (A.7) holds and the gradient estimate is proved.



    [1] Z. Cen, A hybrid difference scheme for a singularly perturbed convection-diffusion problem with discontinuous convection coefficient, Appl. Math. Comput., 169 (2005), 689–699. https://doi.org/10.1016/j.amc.2004.08.051 doi: 10.1016/j.amc.2004.08.051
    [2] M. Chandru, P. Das, H. Ramos, Numerical treatment of two-parameter singularly perturbed parabolic convection diffusion problems with non-smooth data, Math. Methods Appl. Sci., 41 (2018), 5359–5387. https://doi.org/10.1002/mma.5067 doi: 10.1002/mma.5067
    [3] M. Chandru, T. Prabha, P. Das, V. Shanthi, A numerical method for solving boundary and interior layers dominated parabolic problems with discontinuous convection coefficient and source terms, Differ. Equ. Dyn. Syst., 27 (2019), 91–112. https://doi.org/10.1007/s12591-017-0385-3 doi: 10.1007/s12591-017-0385-3
    [4] M. Chandru, T. Prabha, V. Shanthi, A parameter robust higher order numerical method for singularly perturbed two parameter problems with non-smooth data, J. Comput. Appl. Math., 309 (2017), 11–27. https://doi.org/10.1016/j.cam.2016.06.009 doi: 10.1016/j.cam.2016.06.009
    [5] P. Das, V. Mehrmann, Numerical solution of singularly perturbed convection-diffusion-reaction problems with two small parameters, BIT Numer. Math., 56 (2016), 51–76. https://doi.org/10.1007/s10543-015-0559-8 doi: 10.1007/s10543-015-0559-8
    [6] E. P. Doolan, J. J. H. Miller, W. H. Schilders, Uniform numerical methods for problems with initial and boundary layers, Vol. 1, Boole Press, 1980.
    [7] P. A. Farrell, A. F. Hegarty, J. J. H. Miller, E. O'Riordan, G. I. Shishkin, Singularly perturbed convection–diffusion problems with boundary and weak interior layers, J. Comput. Appl. Math., 166 (2004), 133–151. https://doi.org/10.1016/j.cam.2003.09.033 doi: 10.1016/j.cam.2003.09.033
    [8] P. A. Farrell, J. J. H. Miller, E. O'Riordan, G. I. Shishkin, Singularly perturbed differential equations with discontinuous source terms, In: Analytical and numerical methods for convection-dominated and singularly perturbed problems, Nova Publishers, 2000.
    [9] P. A. Farrell, A. Hegarty, J. J. H. Miller, E. O'Riordan, G. I. Shishkin, Robust computational techniques for boundary layers, CRC Press, 2000. https://doi.org/10.1201/9781482285727
    [10] P. A. Farrell, A. F. Hegarty, J. J. H. Miller, E. O'Riordan, G. I. Shishkin, Global maximum norm parameter-uniform numerical method for a singularly perturbed convection-diffusion problem with discontinuous convection coefficient, Math. Comput. Model., 40 (2004), 1375–1392. https://doi.org/10.1016/j.mcm.2005.01.025 doi: 10.1016/j.mcm.2005.01.025
    [11] J. L. Gracia, E. O'Riordan, M. L. Pickett, A parameter robust second order numerical method for a singularly perturbed two-parameter problem, Appl. Numer. Math., 56 (2006), 962–980. https://doi.org/10.1016/j.apnum.2005.08.002 doi: 10.1016/j.apnum.2005.08.002
    [12] N. Kumari, S. Gowrisankar, A robust B-spline method for two parameter singularly perturbed parabolic differential equations with discontinuous initial condition, J. Appl. Math. Comput., 2024, 1–25. https://doi.org/10.1007/s12190-024-02168-3
    [13] J. Mohapatra, Equidistribution grids for two-parameter convection–diffusion boundary-value problems, J. Math. Model., 2 (2014), 1–21.
    [14] E. O'Riordan, Opposing flows in a one dimensional convection-diffusion problem, Centr. Eur. J. Math., 10 (2012), 85–100. https://doi.org/10.2478/s11533-011-0121-0 doi: 10.2478/s11533-011-0121-0
    [15] E. O'Riordan, M. L. Pickett, G. I. Shishkin, Singularly perturbed problems modeling reaction-convection-diffusion processes, Comput. Methods Appl. Math., 3 (2003), 424–442. https://doi.org/10.2478/cmam-2003-0028 doi: 10.2478/cmam-2003-0028
    [16] K. C. Patidar, A robust fitted operator finite difference method for a two-parameter singular perturbation problem, J. Differ. Equ. Appl., 14 (2008), 1197–1214. https://doi.org/10.1080/10236190701817383 doi: 10.1080/10236190701817383
    [17] T. Prabha, M. Chandru, V. Shanthi, Hybrid difference scheme for singularly perturbed reaction-convection-diffusion problem with boundary and interior layers, Appl. Math. Comput., 314 (2017), 237–256. https://doi.org/10.1016/j.amc.2017.06.029 doi: 10.1016/j.amc.2017.06.029
    [18] T. Prabha, M. Chandru, V. Shanthi, H. Ramos, Discrete approximation for a two-parameter singularly perturbed boundary value problem having discontinuity in convection coefficient and source term, J. Comput. Appl. Math., 359 (2019), 102–118. https://doi.org/10.1016/j.cam.2019.03.040 doi: 10.1016/j.cam.2019.03.040
    [19] S. C. S. Rao, A. K. Chaturvedi, Parameter-uniform numerical method for a two-dimensional singularly perturbed convection–reaction–diffusion problem with interior and boundary layers, Math. Comput. Simul., 187 (2021), 656–686. https://doi.org/10.1016/j.matcom.2021.03.016 doi: 10.1016/j.matcom.2021.03.016
    [20] H. G. Roos, M. Stynes, L. Tobiska, Numerical methods for singularly perturbed differential equations, Springer, 1996.
    [21] N. Roy, A. Jha, A parameter uniform method for two-parameter singularly perturbed boundary value problems with discontinuous data, MethodsX, 10 (2023), 102004. https://doi.org/10.1016/j.mex.2023.102004 doi: 10.1016/j.mex.2023.102004
    [22] H. Schlichting, K. Gersten, Boundary-layer theory, Springer Science & Business Media, 2003.
    [23] V. Shanthi, N. Ramanujam, S. Natesan, Fitted mesh method for singularly perturbed reaction-convection-diffusion problems with boundary and interior layers, J. Appl. Math. Comput., 22 (2006), 49–65. https://doi.org/10.1007/BF02896460 doi: 10.1007/BF02896460
    [24] M. Shivhare, P. Pramod Chakravarthy, D. Kumar, Quadratic B-spline collocation method for two-parameter singularly perturbed problem on exponentially graded mesh, Int. J. Comput. Math., 98 (2021), 2461–2481. https://doi.org/10.1080/00207160.2021.1901277 doi: 10.1080/00207160.2021.1901277
    [25] G. Singh, S. Natesan, Study of the NIPG method for two-parameter singular perturbation problems on several layer adapted grids, J. Appl. Math. Comput., 63 (2020), 683–705. https://doi.org/10.1007/s12190-020-01334-7 doi: 10.1007/s12190-020-01334-7
    [26] A. B. Vasil'eva, Asymptotic methods in the theory of ordinary differential equations containing small parameters in front of the higher derivatives, USSR Comput. Math. Math. Phys., 3 (1963), 823–863.
    [27] R. Vulanović, A higher-order scheme for quasilinear boundary value problems with two small parameters, Computing, 67 (2001), 287–303.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
