Research article

Constructing hidden differential equations using a data-driven approach with the alternating direction method of multipliers (ADMM)

  • Received: 04 September 2024; Revised: 02 January 2025; Accepted: 15 January 2025; Published: 14 February 2025
  • This paper adopts the alternating direction method of multipliers (ADMM) to study data-driven discovery of differential equations. ADMM is an optimization method designed for convex problems whose objective splits into separable parts. The paper illustrates the conceptual ideas and the parameter discovery for a linear coupled first-order ODE. The coefficients of the underlying equation are estimated by combining physics-informed neural networks (PINNs) with sparse optimization, iterating both methods until the best combination of coefficients is found. The PINNs step updates the weights and biases; the updated trainable variables are then passed to the sparse optimization step. Within sparse optimization, ADMM restructures the constrained optimization problem into an unconstrained one, which typically consists of a smooth (differentiable) and a non-smooth (non-differentiable) component. By means of the augmented Lagrangian method, both components can be optimized to suggest the best combination of coefficients. ADMM has found applications in fields involving decomposable structures, such as signal processing, machine learning, and image reconstruction. The proposed algorithm provides a way to discover sparse approximations of differential equations from data, together with a step-by-step algorithmic guide that opens research opportunities to explore representing physical phenomena by differential equations.

    Citation: Jye Ying Sia, Yong Kheng Goh, How Hui Liew, Yun Fah Chang. Constructing hidden differential equations using a data-driven approach with the alternating direction method of multipliers (ADMM)[J]. Electronic Research Archive, 2025, 33(2): 890-906. doi: 10.3934/era.2025040




This paper is the continuation of [7], where the authors proposed and analyzed a finite element scheme for the computation of fractional minimal graphs of order $s \in (0,1/2)$ over bounded domains. That problem can be interpreted as a nonhomogeneous Dirichlet problem involving a nonlocal, nonlinear, degenerate operator of order $s + 1/2$. In this paper, we discuss computational aspects of such a formulation and perform several numerical experiments illustrating interesting phenomena arising in fractional Plateau problems and prescribed nonlocal mean curvature problems.

    The notion of fractional perimeter was introduced in the seminal papers by Imbert [22] and by Caffarelli, Roquejoffre and Savin [12]. These works were motivated by the study of interphases that arise in classical phase field models when very long space correlations are present. On the one hand, [22] was motivated by stochastic Ising models with Kač potentials with slow decay at infinity, that give rise (after a suitable rescaling) to problems closely related to fractional reaction-diffusion equations such as

$$\partial_t u_\varepsilon + (-\Delta)^s u_\varepsilon + \frac{f(u_\varepsilon)}{\varepsilon^{1+2s}} = 0,$$

where $(-\Delta)^s$ denotes the fractional Laplacian of order $s \in (0,1/2)$ and $f$ is a bistable nonlinearity. On the other hand, reference [12] showed that certain threshold dynamics-type algorithms, in the spirit of [26] but corresponding to the fractional Laplacian of order $s \in (0,1/2)$, converge (again, after rescaling) to motion by fractional mean curvature. Fractional minimal sets also arise in the $\Gamma$-limit of nonlocal Ginzburg-Landau energies [28].

We now make the definition of fractional perimeter precise. Let $s \in (0,1/2)$ and let $A, B \subset \mathbb{R}^d$, $d \ge 1$, be two sets. Then, the fractional perimeter of order $s$ of $A$ in $B$ is

$$P_s(A;B) := \frac12 \iint_{Q_B} \frac{|\chi_A(x) - \chi_A(y)|}{|x-y|^{d+2s}} \, dy \, dx,$$

where $Q_B = (\mathbb{R}^d \times \mathbb{R}^d) \setminus (B^c \times B^c)$ and $B^c = \mathbb{R}^d \setminus B$. Given some set $A_0 \subset \mathbb{R}^d \setminus B$, a natural problem is how to extend $A_0$ into $B$ while minimizing the $s$-perimeter of the resulting set. This is the fractional Plateau problem, and it is known that, if $B$ is a bounded set, it admits a unique solution. Interestingly, in such a case it may happen that the minimizing set $A$ is either empty in $B$ or completely fills $B$. This is known as a stickiness phenomenon [15].

In this work, we analyze finite element methods to compute fractional minimal graphs on bounded domains. Thus, we consider $s$-minimal sets on a cylinder $B = \Omega \times \mathbb{R}$, where $\Omega$ is a bounded and sufficiently smooth domain, with exterior data being a subgraph,

$$A_0 = \{(x, x_{d+1}) : x_{d+1} < g(x), \ x \in \mathbb{R}^d \setminus \Omega\},$$

for some continuous function $g : \mathbb{R}^d \setminus \Omega \to \mathbb{R}$. We briefly remark on some key features of this problem:

A technical difficulty arises immediately: all sets $A$ that coincide with $A_0$ in $\mathbb{R}^{d+1} \setminus B$ have infinite $s$-perimeter in $B$. To remedy this issue, one needs to introduce the notion of locally minimal sets [24].

There exists a unique locally $s$-minimal set, and it is given by the subgraph of a certain function $u$, cf. [14,25]. Thus, one can restrict the minimization problem to the class of subgraphs of functions that coincide with $g$ on $\Omega^c$.

If the exterior datum $g$ is a bounded function, then one can replace the infinite cylinder $B = \Omega \times \mathbb{R}$ by a truncated cylinder $B_M = \Omega \times (-M, M)$ for some $M > 0$ sufficiently large [25, Proposition 2.5].

Let $A$ be the subgraph of a certain function $v$ that coincides with $g$ on $\Omega^c$. One can rewrite $P_s(A, B_M)$ as

$$P_s(A, B_M) = I_s[v] + C(M, d, s, \Omega, g),$$

where $I_s$ is the nonlocal energy functional defined in (1.1) below [25, Proposition 4.2.8], [7, Proposition 2.3].

Therefore, an equivalent formulation of the Plateau problem for nonlocal minimal graphs consists in finding a function $u : \mathbb{R}^d \to \mathbb{R}$, with the constraint $u = g$ in $\Omega^c$, that minimizes the strictly convex energy

$$I_s[u] := \iint_{Q_\Omega} F_s\!\left(\frac{u(x)-u(y)}{|x-y|}\right) \frac{1}{|x-y|^{d+2s-1}} \, dx \, dy, \quad (1.1)$$

where $F_s$ is defined as

$$F_s(\rho) := \int_0^\rho \frac{\rho - r}{(1+r^2)^{(d+1+2s)/2}} \, dr. \quad (1.2)$$

A remarkable difference between nonlocal minimal surface problems and their local counterparts is the emergence of stickiness phenomena [15]. In the setting of this paper, this means that the minimizer may be discontinuous across $\partial\Omega$. As shown by Dipierro, Savin and Valdinoci [17], stickiness is indeed the typical behavior of nonlocal minimal graphs in case $\Omega \subset \mathbb{R}$. When $\Omega \subset \mathbb{R}^2$, reference [16] proves that, at any boundary point at which stickiness does not happen, the tangent planes of the traces from the interior necessarily coincide with those of the exterior datum. Such a hard geometric constraint is in sharp contrast with the case of classical minimal graphs. In spite of their boundary behavior, fractional minimal graphs are smooth in the interior of the domain. Indeed, with the notation and assumptions from above, it holds that $u \in C^\infty(\Omega)$; see [11, Theorem 1.1], and [5,19].

Our previous work [7] introduced and studied a finite element scheme for the computation of fractional minimal graphs. We proved convergence of the discrete minimizers as the mesh size tends to zero, both in suitable Sobolev norms and with respect to a novel geometric notion of error [7]. Stickiness phenomena were apparent in the experiments displayed in [7], even though the finite element spaces consisted of continuous, piecewise linear functions. We also refer the reader to [8] for further numerical examples and discussion of computational aspects of fractional minimal graph problems.

This article expands on our previous research [7,8] by discussing the convergence properties of a semi-implicit gradient flow and a damped Newton algorithm for computing discrete minimizers. Moreover, we prove convergence of our finite element scheme in $W^{2r}_1(\Omega)$, $r \in [0, s)$, for unboundedly supported data and for graphs with prescribed non-zero nonlocal mean curvature. We perform various numerical experiments illustrating special (and sometimes counterintuitive) behaviors of solutions, especially near $\partial\Omega$. We hope these experiments will help understand the peculiarities of graphs with prescribed nonlocal mean curvature.

    This paper is organized as follows. Section 2 gives the formulation of the minimization problem we aim to solve, and compares it with the classical minimal graph problem. Afterwards, in Section 3 we introduce our finite element method and review theoretical results from [7] regarding its convergence. Section 4 discusses computational aspects of the discrete problem, including the evaluation of the nonlocal form that it gives rise to, and the solution of the resulting discrete nonlinear equation via a semi-implicit gradient flow and a damped Newton method. Because the Dirichlet data may have unbounded support, we discuss the effect of data truncation and derive explicit bounds on the error decay with respect to the diameter of the computational domain in Section 5. Section 6 is concerned with the prescribed nonlocal mean curvature problem. Finally, Section 7 presents a number of computational experiments that explore qualitative and quantitative features of nonlocal minimal graphs and functions of prescribed fractional mean curvature, the conditioning of the discrete problems and the effect of exterior data truncation.

We now specify the problem we aim to solve in this paper and pose its variational formulation. Let $s \in (0,1/2)$ and $g \in L^\infty(\Omega^c)$ be given. We consider the space

$$V^g := \{v : \mathbb{R}^d \to \mathbb{R} \ : \ v|_\Omega \in W^{2s}_1(\Omega), \ v = g \text{ in } \Omega^c\},$$

equipped with the norm

$$\|v\|_{V^g} := \|v\|_{L^1(\Omega)} + |v|_{V^g},$$

where

$$|v|_{V^g} := \iint_{Q_\Omega} \frac{|v(x)-v(y)|}{|x-y|^{d+2s}} \, dx \, dy,$$

and $Q_\Omega = (\mathbb{R}^d \times \mathbb{R}^d) \setminus (\Omega^c \times \Omega^c)$. The space $V^g$ can be understood as that of functions in $W^{2s}_1(\Omega)$ with 'boundary value' $g$. The seminorm in $V^g$ does not take into account interactions over $\Omega^c \times \Omega^c$, because these are fixed for the class of functions we consider; therefore, we do not need to assume $g$ to be a function in $W^{2s}_1(\Omega^c)$. In particular, $g$ may not decay at infinity. In case $g$ is the zero function, the space $V^g$ coincides with the standard zero-extension Sobolev space $\widetilde{W}^{2s}_1(\Omega)$; for consistency of notation, we denote such a space by $V^0$.

For convenience, we introduce the following notation: given a function $u \in V^g$, the form $a_u : V^g \times V^0 \to \mathbb{R}$ is

$$a_u(w,v) := \iint_{Q_\Omega} \widetilde{G}_s\!\left(\frac{u(x)-u(y)}{|x-y|}\right) \frac{(w(x)-w(y))(v(x)-v(y))}{|x-y|^{d+1+2s}} \, dx \, dy, \quad (2.1)$$

where

$$\widetilde{G}_s(\rho) := \int_0^1 (1 + \rho^2 r^2)^{-(d+1+2s)/2} \, dr. \quad (2.2)$$

It is worth noticing that $\widetilde{G}_s(\rho) \to 0$ as $|\rho| \to \infty$. Thus, the weight in (2.1) degenerates whenever the difference quotient $\frac{|u(x)-u(y)|}{|x-y|}$ blows up.
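These properties can be checked numerically. The sketch below is our illustration (not code from the paper), using a hypothetical choice $d = 1$, $s = 0.25$ and NumPy Gauss-Legendre quadrature: it verifies the relation $F_s'(\rho) = \rho\,\widetilde{G}_s(\rho)$, which follows from (1.2) and (2.2) by the substitution $r \mapsto \rho r$, and the decay of $\widetilde{G}_s$, i.e., the degeneracy of the weight in (2.1).

```python
import numpy as np

# Hypothetical parameters (not from the text): d = 1, s = 0.25.
d, s = 1, 0.25
p = (d + 1 + 2 * s) / 2.0                    # exponent in the weight (1+r^2)^(-p)
x, w = np.polynomial.legendre.leggauss(200)  # nodes/weights on [-1, 1]

def F(rho):
    # F_s(rho) = \int_0^rho (rho - r)(1+r^2)^(-p) dr, cf. (1.2)
    r = 0.5 * rho * (x + 1.0)                # map [-1,1] -> [0, rho]
    return 0.5 * rho * np.sum(w * (rho - r) * (1.0 + r**2) ** (-p))

def Gtilde(rho):
    # Gtilde_s(rho) = \int_0^1 (1 + rho^2 r^2)^(-p) dr, cf. (2.2)
    r = 0.5 * (x + 1.0)                      # map [-1,1] -> [0, 1]
    return 0.5 * np.sum(w * (1.0 + (rho * r) ** 2) ** (-p))

rho, eps = 1.7, 1e-5
# F_s'(rho) = rho * Gtilde_s(rho)  (central-difference check)
assert abs((F(rho + eps) - F(rho - eps)) / (2 * eps) - rho * Gtilde(rho)) < 1e-8
# Gtilde_s(0) = 1 and Gtilde_s decays: the weight in (2.1) degenerates
assert abs(Gtilde(0.0) - 1.0) < 1e-12
assert Gtilde(100.0) < Gtilde(1.0) < Gtilde(0.0)
```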

The weak formulation of the fractional minimal graph problem can be obtained by taking the first variation of $I_s[u]$ in (1.1) in the direction $v$. As described in [7], that problem reads: find $u \in V^g$ such that

$$a_u(u,v) = 0 \quad \forall v \in V^0. \quad (2.3)$$

In light of the previous considerations, Eq (2.3) can be regarded as a fractional diffusion problem of order $s + 1/2$ in $\mathbb{R}^d$, with weights depending on the solution $u$ and fixed nonhomogeneous boundary data $g$.

Remark 2.1 (comparison with local problems). Roughly, in the classical minimal graph problem, given some boundary data $g$, one seeks a function $u \in g + H^1_0(\Omega)$ such that

$$\int_\Omega \frac{\nabla u \cdot \nabla v}{\sqrt{1 + |\nabla u|^2}} \, dx = 0, \quad \forall v \in H^1_0(\Omega).$$

The integral above can be interpreted as a weighted $H^1$-form, where the weight depends on $u$ and degenerates as $|\nabla u| \to \infty$.

In a similar way, problem (2.3) involves a weighted $H^{s+1/2}$-form, in which the weight depends on $u$ and degenerates as $\frac{|u(x)-u(y)|}{|x-y|} \to \infty$. In this sense, it is not surprising that the fractional-order problems converge to the local ones as $s \to 1/2$. We refer to [7, Section 5] for a discussion on this matter.

In this section we first introduce the finite element spaces and the discrete formulation of problem (2.3). Afterwards, we briefly outline the key ingredients in the convergence analysis for this scheme. For the moment, we shall assume that $g$ has bounded support:

$$\mathrm{supp}(g) \subset \Lambda, \quad \text{for some bounded set } \Lambda.$$

    The discussion of approximations in case of unboundedly supported data is postponed to Section 5.

We consider a family $\{\mathcal{T}_h\}_{h>0}$ of conforming and simplicial triangulations of $\Lambda$, and we assume that all triangulations in $\{\mathcal{T}_h\}_{h>0}$ mesh $\Omega$ exactly. Moreover, we assume $\{\mathcal{T}_h\}_{h>0}$ to be shape-regular, namely:

$$\sigma = \sup_{h>0} \max_{T \in \mathcal{T}_h} \frac{h_T}{\rho_T} < \infty,$$

where $h_T = \mathrm{diam}(T)$ and $\rho_T$ is the diameter of the largest ball contained in the element $T \in \mathcal{T}_h$. The vertices of $\mathcal{T}_h$ will be denoted by $\{x_i\}$, and the star or patch of $x_i$ is defined as

$$S_i := \mathrm{supp}(\varphi_i),$$

where $\varphi_i$ is the nodal basis function corresponding to the node $x_i$.

To impose the condition $u = g$ in $\Omega^c$ at the discrete level, we introduce the exterior interpolation operator

$$\Pi_h^c g := \sum_{x_i \in \mathcal{N}_h^c} (\Pi_h^{x_i} g)(x_i) \, \varphi_i, \quad (3.1)$$

where $\Pi_h^{x_i} g$ is the $L^2$-projection of $g|_{S_i \cap \Omega^c}$ onto $\mathcal{P}_1(S_i \cap \Omega^c)$. Thus, $\Pi_h^c g(x_i)$ coincides with the standard Clément interpolation of $g$ at $x_i$ for all nodes $x_i$ such that $S_i \subset \mathbb{R}^d \setminus \overline{\Omega}$. On the other hand, for nodes $x_i \in \partial\Omega$, $\Pi_h^c$ only averages over the elements in $S_i$ that lie in $\Omega^c$.

We consider discrete spaces consisting of piecewise linear functions over $\mathcal{T}_h$,

$$V_h := \{v \in C(\Lambda) : v|_T \in \mathcal{P}_1 \ \forall T \in \mathcal{T}_h\}.$$

To account for the exterior data, we define the discrete counterpart of $V^g$,

$$V_h^g := \{v \in V_h : \ v|_{\Lambda \setminus \Omega} = \Pi_h^c g\}.$$

With the same convention as before, we denote by $V_h^0$ the corresponding space in case $g \equiv 0$. Therefore, the discrete weak formulation reads: find $u_h \in V_h^g$ such that

$$a_{u_h}(u_h, v_h) = 0 \quad \text{for all } v_h \in V_h^0. \quad (3.2)$$

Remark 3.1 (well-posedness of discrete problem). Existence and uniqueness of solutions to the discrete problem (3.2) is an immediate corollary of our assumption $g \in L^\infty(\Omega^c)$. Indeed, from this condition it follows that $u_h$ is a solution of (3.2) if and only if $u_h$ minimizes the strictly convex energy $I_s$ over the discrete space $V_h^g$.

In [7], we have proved that solutions to (3.2) converge to the fractional minimal graph as the maximum element diameter tends to 0. An important tool in that proof is a quasi-interpolation operator $I_h : V^g \to V_h^g$ that combines the exterior Clément interpolation (3.1) with an interior interpolation operator. More precisely, we set

$$I_h v := \Pi_h (v|_\Omega) + \Pi_h^c g, \quad (3.3)$$

where $\Pi_h$ involves averaging over element stars contained in $\Omega$. Because the minimizer $u$ is smooth in the interior of $\Omega$, but we have no control on its boundary behavior other than the global bound $u \in W^{2s}_1(\Omega)$, we can only assert convergence of the interpolation operator in a $W^{2s}_1$-type seminorm without rates.

Proposition 3.2 (interpolation error). Let $s \in (0,1/2)$, $\Omega$ be a bounded domain, $g \in C(\Omega^c)$, and $u$ be the solution to (2.3). Then, the interpolation operator (3.3) satisfies

$$\iint_{Q_\Omega} \frac{|(I_h u - u)(x) - (I_h u - u)(y)|}{|x-y|^{d+2s}} \, dx \, dy \to 0 \quad \text{as } h \to 0.$$

Once we have proved the convergence of $I_h u$ to $u$, energy consistency follows immediately. Since the energy dominates the $W^{2s}_1(\Omega)$-norm [7, Lemma 2.5], we can prove convergence in $W^{2r}_1(\Omega)$ for all $r \in [0, s)$ by arguing by compactness.

Theorem 3.3 (convergence). Assume $s \in (0,1/2)$, $\Omega \subset \mathbb{R}^d$ is a bounded Lipschitz domain, and $g \in C_c(\mathbb{R}^d)$. Let $u$ be the minimizer of $I_s$ on $V^g$ and $u_h$ be the minimizer of $I_s$ on $V_h^g$. Then, it holds that

$$\lim_{h \to 0} \|u - u_h\|_{W^{2r}_1(\Omega)} = 0, \quad \forall r \in [0, s).$$

We finally point out that [7, Section 5] introduces a geometric notion of error that mimics a weighted $L^2$ discrepancy between the normal vectors to the graphs of $u$ and $u_h$. We refer to that paper for further details.

As mentioned in the introduction, fractional minimal surfaces are smooth in the interior of $\Omega$. The main challenge in their approximation arises from their boundary behavior and, concretely, from the genericity of stickiness phenomena, i.e., discontinuity of the solution $u$ across $\partial\Omega$. Thus, it is convenient to a priori adapt meshes to better capture the jump of $u$ near $\partial\Omega$.

In our discretizations, we use the following construction [21], which gives rise to shape-regular meshes. Let $h > 0$ be a mesh-size parameter and $\mu \ge 1$. Then, we consider meshes $\mathcal{T}_h$ such that every element $T \in \mathcal{T}_h$ satisfies

$$h_T \le \begin{cases} C(\sigma) \, h^\mu, & \overline{T} \cap \partial\Omega \ne \emptyset, \\ C(\sigma) \, h \, \mathrm{dist}(T, \partial\Omega)^{(\mu-1)/\mu}, & \overline{T} \cap \partial\Omega = \emptyset. \end{cases} \quad (3.4)$$

These meshes, typically with $\mu = 2$, give rise to optimal convergence rates for homogeneous problems involving the fractional Laplacian in 2d [2,6,9,10]. We point out that in our problem the computational domain strictly contains $\Omega$, because we need to impose the exterior condition $u = g$ on $\Omega^c$. As shown in [4,9], the construction (3.4) leads to

$$\dim V_h^g \simeq \begin{cases} h^{(1-d)\mu}, & \mu > \frac{d}{d-1}, \\ h^{-d} \, |\log h|, & \mu = \frac{d}{d-1}, \\ h^{-d}, & \mu < \frac{d}{d-1}. \end{cases}$$

In our applications, because Theorem 3.3 gives no theoretical convergence rates, we are not restricted to the choice $\mu = 2$ in two dimensions: a higher $\mu$ allows a better resolution of stickiness. However, our numerical experiments indicate that the condition number of the matrix arising at the last step of the Newton iteration deteriorates as $\mu$ increases, cf. Section 7.2.
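A one-dimensional analog of the grading (3.4) is easy to construct explicitly. The sketch below is our illustration (NumPy assumed; the cited works use analogous simplicial constructions in 2d): it builds a mesh on $(0,1)$ graded toward the boundary point $x = 1$ via the map $x = 1 - (1-t)^\mu$ and checks the two regimes of (3.4).

```python
import numpy as np

# 1d graded mesh on Omega = (0,1), refined toward the boundary point x = 1
# (illustrative sketch of the grading rule (3.4)).
N, mu = 64, 2.0
h = 1.0 / N
t = np.linspace(0.0, 1.0, N + 1)
nodes = 1.0 - (1.0 - t) ** mu            # nodes cluster near x = 1
sizes = np.diff(nodes)                   # element diameters h_T

# element touching the boundary: h_T = h^mu
assert abs(sizes[-1] - h**mu) < 1e-14
# interior elements: h_T <= C h * dist(T, boundary)^((mu-1)/mu), where dist
# is measured from the element endpoint closest to the boundary
dist = 1.0 - nodes[1:-1]                 # distances of elements 0..N-2 to x = 1
ratio = sizes[:-1] / (h * dist ** ((mu - 1.0) / mu))
assert np.all(ratio <= 3.0 + 1e-9)       # here the constant is <= 3 for mu = 2
```

With $\mu = 2$ the boundary element has size $h^2$ while interior elements grow toward the interior, keeping the total number of degrees of freedom proportional to $h^{-1}$ in 1d.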

Having at hand a finite element formulation of the nonlocal minimal graph problem and having proven its convergence as the mesh size tends to zero, we now address the issue of how to compute discrete minimizers in 1d and 2d. First, we discuss the computation of the matrices associated with the bilinear form $a_{u_h}(\cdot,\cdot)$ and related quantities. We then propose two schemes for the solution of the nonlinear discrete problem (3.2): a semi-implicit gradient flow and a damped Newton method. In this section we also discuss the convergence of these two algorithms.

We now consider the evaluation of the forms $a_{u_h}(\cdot,\cdot)$ appearing in (3.2). We point out that, following the implementation techniques from [1,2], if we are given $u_h \in V_h^g$ and $v_h \in V_h^0$, then we can compute $a_{u_h}(u_h, v_h)$. Indeed, since $a_{u_h}(u_h, v_h)$ is linear in $v_h$ and the latter function can be written in the form $v_h(x) = \sum_{x_i \in \mathcal{N}_h} v_i \varphi_i(x)$, we only need to evaluate

$$a_i := a_{u_h}(u_h, \varphi_i) = \iint_{Q_\Omega} \widetilde{G}_s\!\left(\frac{u_h(x)-u_h(y)}{|x-y|}\right) \frac{(u_h(x)-u_h(y))(\varphi_i(x)-\varphi_i(y))}{|x-y|^{d+1+2s}} \, dx \, dy.$$

We split $Q_\Omega = (\Omega \times \Omega) \cup (\Omega \times \Omega^c) \cup (\Omega^c \times \Omega)$ and, because $\widetilde{G}_s$ is an even function (cf. (2.2)), we can take advantage of the fact that the integrand is symmetric with respect to $x$ and $y$ to obtain

$$a_i = \iint_{\Omega \times \Omega} \widetilde{G}_s\!\left(\frac{u_h(x)-u_h(y)}{|x-y|}\right) \frac{(u_h(x)-u_h(y))(\varphi_i(x)-\varphi_i(y))}{|x-y|^{d+1+2s}} \, dx \, dy + 2 \iint_{\Omega \times \Omega^c} \widetilde{G}_s\!\left(\frac{u_h(x)-g_h(y)}{|x-y|}\right) \frac{(u_h(x)-g_h(y)) \, \varphi_i(x)}{|x-y|^{d+1+2s}} \, dx \, dy =: a_{i,1} + 2 a_{i,2}.$$

We assume that the elements are sorted in such a way that the first $N_\Omega$ elements mesh $\Omega$, while the remaining $N_\Lambda - N_\Omega$ mesh $\Lambda \setminus \Omega$, that is,

$$\bigcup_{1 \le i \le N_\Omega} \overline{T_i} = \overline{\Omega}, \qquad \bigcup_{N_\Omega + 1 \le i \le N_\Lambda} \overline{T_i} = \overline{\Lambda \setminus \Omega}.$$

By doing a loop over the elements of the triangulation, the integrals $a_{i,1}$ and $a_{i,2}$ can be written as:

$$a_{i,1} = \sum_{l,m=1}^{N_\Omega} \iint_{T_l \times T_m} \widetilde{G}_s\!\left(\frac{u_h(x)-u_h(y)}{|x-y|}\right) \frac{(u_h(x)-u_h(y))(\varphi_i(x)-\varphi_i(y))}{|x-y|^{d+1+2s}} \, dx \, dy,$$

$$a_{i,2} = \sum_{l=1}^{N_\Omega} \sum_{m=N_\Omega+1}^{N_\Lambda} \iint_{T_l \times T_m} \widetilde{G}_s\!\left(\frac{u_h(x)-g_h(y)}{|x-y|}\right) \frac{(u_h(x)-g_h(y)) \, \varphi_i(x)}{|x-y|^{d+1+2s}} \, dx \, dy + \sum_{l=1}^{N_\Omega} \iint_{T_l \times \Lambda^c} \widetilde{G}_s\!\left(\frac{u_h(x)}{|x-y|}\right) \frac{u_h(x) \, \varphi_i(x)}{|x-y|^{d+1+2s}} \, dx \, dy.$$

For the double integrals on $T_l \times T_m$ appearing in the definitions of $a_{i,1}$ and $a_{i,2}$, we apply the same type of transformations described in [1,13,27] to convert the integral into an integral over $[0,1]^{2d}$, in which variables can be separated and the singular part can be computed analytically. The integrals over $T_l \times \Lambda^c$ are of the form

$$\int_{T_l} \varphi_i(x) \, \omega(x) \, dx,$$

where the weight function $\omega$ is defined as

$$\omega(x) := \int_{\Lambda^c} G_s\!\left(\frac{u_h(x)}{|x-y|}\right) \frac{1}{|x-y|^{d+2s}} \, dy, \qquad G_s(\rho) := \int_0^\rho (1+r^2)^{-(d+1+2s)/2} \, dr = \rho \, \widetilde{G}_s(\rho). \quad (4.1)$$

Since the only restriction on the set $\Lambda$ is that $\mathrm{supp}(g) \subset \Lambda$, without loss of generality we assume that $\Lambda = B_R$ is a $d$-dimensional ball with radius $R$. In such a case, the integral over $\Lambda^c$ can be transformed using polar coordinates into:

$$\omega(x) = \int_{\partial B_1} dS(e) \int_{\rho_0(e,x)}^{\infty} G_s\!\left(\frac{u_h(x)}{\rho}\right) \rho^{-1-2s} \, d\rho,$$

where $\rho_0(e,x)$ is the distance from $x$ to $\partial B_R$ in the direction of $e$, which is given by the formula

$$\rho_0(e,x) = \sqrt{R^2 - |x|^2 + (e \cdot x)^2} - e \cdot x.$$

The integral over $(\rho_0(e,x), \infty)$ can be transformed into an integral over $(0,1)$ by means of the change of variable $\rho = \rho_0(e,x) \, \widetilde{\rho}^{\,-1/(2s)}$, and then approximated by Gaussian quadrature. Combining this approach with suitable quadrature over $\partial B_1$ and $T_l$, we numerically compute the integral over $T_l \times \Lambda^c$ for a given $u_h$.
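The change of variable removes the algebraic tail exactly: with $\rho = \rho_0 \widetilde{\rho}^{\,-1/(2s)}$, the factor $\rho^{-1-2s} d\rho$ becomes $\frac{\rho_0^{-2s}}{2s} d\widetilde{\rho}$, so the transformed integrand is smooth on $(0,1)$. The sketch below is our illustration (hypothetical values $d = 1$, $s = 0.25$, a frozen value $c = u_h(x)$; NumPy assumed): it applies Gauss-Legendre quadrature to the transformed integral and checks its rapid convergence.

```python
import numpy as np

# Tail integral \int_{rho0}^infty G_s(c/rho) rho^(-1-2s) drho via the
# substitution rho = rho0 * t^(-1/(2s)), which maps (0,1] onto [rho0, infty)
# and yields (rho0^(-2s)/(2s)) * \int_0^1 G_s(c * t^(1/(2s)) / rho0) dt.
# Hypothetical parameters: d = 1, s = 0.25, c = u_h(x) frozen.
d, s, c, rho0 = 1, 0.25, 1.3, 2.0
p = (d + 1 + 2 * s) / 2.0

def Gs(rho):
    # G_s(rho) = \int_0^rho (1 + r^2)^(-p) dr, cf. (4.1)
    x, w = np.polynomial.legendre.leggauss(60)
    r = 0.5 * rho * (x + 1.0)
    return 0.5 * rho * np.sum(w * (1.0 + r**2) ** (-p))

def tail(n):
    # n-point Gauss-Legendre on the transformed integral over (0, 1)
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (x + 1.0)
    vals = np.array([Gs(c * ti ** (1.0 / (2 * s)) / rho0) for ti in t])
    return rho0 ** (-2 * s) / (2 * s) * 0.5 * np.sum(w * vals)

# the transformed integrand is smooth, so few quadrature points suffice
assert abs(tail(20) - tail(120)) < 1e-10
# with G_s replaced by 1, the transformed integrand is the constant
# rho0^(-2s)/(2s), so the rule is exact in that case
```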

Although we can compute $a_{u_h}(u_h, v_h)$ for any given $u_h \in V_h^g$, $v_h \in V_h^0$, the nonlinearity of $a_{u_h}(u_h, v_h)$ with respect to $u_h$ still brings difficulties in finding the discrete solution to (3.2). Since $a_{u_h}(u_h, v_h) = \frac{\delta I_s[u_h]}{\delta u_h}(v_h)$ and $u_h$ minimizes the convex functional $I_s$ in the space $V_h^g$, a gradient flow is a feasible approach to solve for the unique minimizer $u_h$.

Given $\alpha \in [0,1)$, and with the convention that $H^0 = L^2$, we first consider a time-continuous $H^\alpha$-gradient flow for $u_h(t)$, namely

$$\langle \partial_t u_h, v_h \rangle_{H^\alpha(\Omega)} = -\frac{\delta I_s}{\delta u_h}(v_h) = -a_{u_h}(u_h, v_h), \quad \forall v_h \in V_h^0, \quad (4.2)$$

where $u_h(0) = u_h^0 \in V_h^g$ (and thus $I_s[u_h^0] < \infty$). Writing $u_h(t) = \sum_{x_j \in \mathcal{N}_h} u_j(t) \varphi_j$, local existence and uniqueness of solutions in time for (4.2) follow from the fact that $a_{u_h}(u_h, \varphi_i)$ is Lipschitz with respect to $u_j$ for any $\varphi_i$. Noticing that the gradient flow (4.2) satisfies the energy decay property

$$\frac{d}{dt} I_s[u_h] = \frac{\delta I_s[u_h]}{\delta u_h}(\partial_t u_h) = a_{u_h}(u_h, \partial_t u_h) = -\langle \partial_t u_h, \partial_t u_h \rangle_{H^\alpha(\Omega)} \le 0,$$

global existence and uniqueness of solutions in time can also be proved.

Similarly to the classical mean curvature flow of surfaces [18], there are three standard ways to discretize (4.2) in time: fully implicit, semi-implicit and fully explicit. Like in the classical case, the fully implicit scheme requires solving a nonlinear equation at every time step, which is not efficient in practice, while the fully explicit scheme is only conditionally stable, and hence requires the choice of very small time steps. We thus focus on a semi-implicit scheme: given the step size $\tau > 0$ and iteration counter $k \ge 0$, find $u_h^{k+1} \in V_h^g$ that solves

$$\frac{1}{\tau} \langle u_h^{k+1} - u_h^k, v_h \rangle_{H^\alpha(\Omega)} = -a_{u_h^k}(u_h^{k+1}, v_h), \quad \forall v_h \in V_h^0. \quad (4.3)$$

The linearity of $a_{u_h^k}(u_h^{k+1}, v_h)$ with respect to $u_h^{k+1}$ makes (4.3) amenable to computational solution. The following proposition proves the stability of the semi-implicit scheme. Its proof mimics that of the classical mean curvature flow [18].

Proposition 4.1 (stability of $H^\alpha$-gradient flow). Assume $u_h^{k+1}, u_h^k \in V_h^g$ satisfy (4.3). Then,

$$I_s[u_h^{k+1}] + \frac{1}{\tau} \|u_h^{k+1} - u_h^k\|^2_{H^\alpha(\Omega)} \le I_s[u_h^k].$$

Proof. Choose $v_h = u_h^{k+1} - u_h^k \in V_h^0$ in (4.3) to obtain

$$\frac{1}{\tau} \|u_h^{k+1} - u_h^k\|^2_{H^\alpha(\Omega)} = -a_{u_h^k}(u_h^{k+1}, u_h^{k+1} - u_h^k). \quad (4.4)$$

Next, we claim that for every pair of real numbers $r_0, r_1$, it holds that

$$(r_1^2 - r_1 r_0) \, \widetilde{G}_s(r_0) \ge F_s(r_1) - F_s(r_0). \quad (4.5)$$

We recall that $F_s$ is defined according to (1.2), that $\widetilde{G}_s$ satisfies $\widetilde{G}_s(r) = \frac{1}{r} G_s(r)$, and that $G_s = F_s'$. Since $F_s$ is a convex and even function, we deduce

$$F_s(r_1) - F_s(r_0) = F_s(|r_1|) - F_s(|r_0|) \le F_s(|r_1|) - \left[F_s(|r_1|) + (|r_0| - |r_1|) \, G_s(|r_1|)\right] = (|r_1| - |r_0|) \, |r_1| \, \widetilde{G}_s(|r_1|).$$

We add and subtract $(|r_1| - |r_0|) \, |r_1| \, \widetilde{G}_s(|r_0|)$ above and use that $\widetilde{G}_s$ is even, decreasing on $[0,\infty)$ and non-negative, to obtain

$$F_s(r_1) - F_s(r_0) \le (|r_1| - |r_0|) \, |r_1| \, \widetilde{G}_s(|r_0|) + |r_1| \, (|r_1| - |r_0|) \left(\widetilde{G}_s(|r_1|) - \widetilde{G}_s(|r_0|)\right) \le (|r_1| - |r_0|) \, |r_1| \, \widetilde{G}_s(|r_0|) = (r_1^2 - |r_0| \, |r_1|) \, \widetilde{G}_s(|r_0|) \le (r_1^2 - r_0 r_1) \, \widetilde{G}_s(r_0).$$

This proves (4.5). Finally, define $d^k(x,y) := \frac{u_h^k(x) - u_h^k(y)}{|x-y|}$ and set $r_0 = d^k$ and $r_1 = d^{k+1}$ in (4.5) to deduce that

$$-a_{u_h^k}(u_h^{k+1}, u_h^{k+1} - u_h^k) = -\iint_{Q_\Omega} \widetilde{G}_s(d^k(x,y)) \, \frac{d^{k+1}(x,y) \left(d^{k+1}(x,y) - d^k(x,y)\right)}{|x-y|^{d-1+2s}} \, dx \, dy \le -\iint_{Q_\Omega} \frac{F_s(d^{k+1}(x,y)) - F_s(d^k(x,y))}{|x-y|^{d-1+2s}} \, dx \, dy = I_s[u_h^k] - I_s[u_h^{k+1}].$$

Combining this with (4.4) finishes the proof.

Upon writing $w_h^{k+1} := (u_h^{k+1} - u_h^k)/\tau$, the semi-implicit scheme (4.3) becomes (4.6), which is the crucial step of Algorithm 1 to solve (3.2).

Algorithm 1 Semi-implicit gradient flow
1: Select an arbitrary initial $u_h^0 \in V_h^g$, let $k = 0$, and set $\|w_h^0\|_{H^\alpha(\Omega)} = \infty$. Choose a time step $\tau > 0$ and a small number $\varepsilon > 0$.
2: while $\|w_h^k\|_{H^\alpha(\Omega)} > \varepsilon$ do
3:  Find $w_h^{k+1} \in V_h^0$ such that
            $$\langle w_h^{k+1}, v_h \rangle_{H^\alpha(\Omega)} + \tau \, a_{u_h^k}(w_h^{k+1}, v_h) = -a_{u_h^k}(u_h^k, v_h) \quad \forall v_h \in V_h^0. \quad (4.6)$$
4:  Set $u_h^{k+1} = u_h^k + \tau w_h^{k+1}$ and $k = k+1$.
5: end while


Equation (4.6) boils down to solving the linear system $(M + \tau K^k) W^k = F^k$. In case $\alpha = 0$, the matrix $M = (M_{ij})$ is just a mass matrix, while if $\alpha > 0$, $M$ is the stiffness matrix for the linear fractional diffusion problem of order $\alpha$, given by

$$M_{ij} := \iint_{Q_\Omega} \frac{(\varphi_i(x) - \varphi_i(y))(\varphi_j(x) - \varphi_j(y))}{|x-y|^{d+2\alpha}} \, dx \, dy \quad (\alpha > 0).$$

The matrix $K^k = (K^k_{ij})$ is the stiffness matrix for a weighted linear fractional diffusion of order $s + \frac12$, whose elements $K^k_{ij} := a_{u_h^k}(\varphi_i, \varphi_j)$ are given by

$$K^k_{ij} = \iint_{Q_\Omega} \widetilde{G}_s\!\left(\frac{u_h^k(x) - u_h^k(y)}{|x-y|}\right) \frac{(\varphi_i(x) - \varphi_i(y))(\varphi_j(x) - \varphi_j(y))}{|x-y|^{d+1+2s}} \, dx \, dy,$$

and can be computed as described in Section 4.1. The right-hand side vector is $F^k = -K^k U^k$, where $U^k = (U_i^k)$ is the vector $U_i^k = u_h^k(x_i)$; that is, $f_i^k = -a_{u_h^k}(u_h^k, \varphi_i)$.
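To make the linear algebra concrete, the sketch below implements the analogous semi-implicit iteration for the local model of Remark 2.1 (arc-length minimization in 1d with weight $(1+|u'|^2)^{-1/2}$): each step solves $(M + \tau K^k) W^k = -K^k U^k$ with the weight frozen at the previous iterate. This is our illustration on a toy problem (NumPy assumed), not the paper's nonlocal code; the nonlocal case replaces $K^k$ by the fractional stiffness matrix of Section 4.1.

```python
import numpy as np

# Semi-implicit gradient flow for the local analog of (4.6): minimize the
# arc-length energy J(u) = sum_T h*sqrt(1+|u'|^2) on a uniform 1d mesh with
# boundary values u(0)=0, u(1)=1. M is the lumped mass matrix (alpha = 0);
# K is the stiffness matrix weighted by (1+|u'|^2)^(-1/2), frozen per step.
n = 32
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
u = x + 0.3 * np.sin(2 * np.pi * x)      # initial guess with correct boundary values

def energy(u):
    return np.sum(h * np.sqrt(1.0 + (np.diff(u) / h) ** 2))

def stiffness(u):
    K = np.zeros((n + 1, n + 1))
    loc = np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e, rho in enumerate(np.diff(u) / h):
        K[e:e+2, e:e+2] += (1.0 + rho**2) ** (-0.5) / h * loc
    return K

tau = 0.5
for _ in range(400):
    K = stiffness(u)
    A = h * np.eye(n - 1) + tau * K[1:-1, 1:-1]     # M + tau*K on interior nodes
    w = np.linalg.solve(A, -(K @ u)[1:-1])          # F = -K U
    e_old = energy(u)
    u[1:-1] += tau * w                              # u^{k+1} = u^k + tau*w^{k+1}
    assert energy(u) <= e_old + 1e-12               # energy decay (Prop. 4.1 analog)

# the discrete minimizer is the straight line between the boundary values
assert np.max(np.abs(u - x)) < 1e-6
```

The energy-decay assertion mirrors Proposition 4.1: it holds for any step size $\tau$, which is the point of the semi-implicit choice.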

Because of Proposition 4.1 (stability of $H^\alpha$-gradient flow), the loop in Algorithm 1 terminates in finitely many steps. Moreover, using the continuity of $a_{u_h^k}(\cdot,\cdot)$ in $[H^{\frac12+s}(\Omega)]^2$, which is uniform in $u_h^k$, together with an inverse estimate and $0 \le \alpha \le \frac12 + s$, gives

$$|a_{u_h^k}(w_h^{k+1}, v_h)| \lesssim |w_h^{k+1}|_{H^{\frac12+s}(\Omega)} \, |v_h|_{H^{\frac12+s}(\Omega)} \lesssim h_{\min}^{-1-2s+2\alpha} \, |w_h^{k+1}|_{H^\alpha(\Omega)} \, |v_h|_{H^\alpha(\Omega)},$$

where the hidden constant depends on the mesh shape-regularity and $h_{\min}$ is the minimum element size. Therefore, the last iterate $u_h^k$ of Algorithm 1 satisfies the residual estimate

$$\max_{v_h \in V_h^0} \frac{|a_{u_h^k}(u_h^k, v_h)|}{\|v_h\|_{H^\alpha(\Omega)}} \lesssim \varepsilon \left(1 + \tau \, h_{\min}^{-1-2s+2\alpha}\right).$$

Since the semi-implicit gradient flow is a first-order method for finding the minimizer of the discrete energy, it may converge slowly in practice. Therefore, it is worth having an alternative algorithm to solve (3.2) faster. With that goal in mind, we present in the following a damped Newton scheme, which is a second-order method and thus improves the speed of computation.

Algorithm 2 Damped Newton Algorithm
1: Select an arbitrary initial $u_h^0 \in V_h^g$ and let $k = 0$. Choose a small number $\varepsilon > 0$.
2: while $\|\{a_{u_h^k}(u_h^k, \varphi_i)\}_{i=1}^m\|_{\ell^2} > \varepsilon$ do
3:  Find $w_h^k \in V_h^0$ such that
        $$\frac{\delta a_{u_h}(u_h^k, v_h)}{\delta u_h}(w_h^k) = -a_{u_h^k}(u_h^k, v_h), \quad \forall v_h \in V_h^0. \quad (4.7)$$
4:  Determine the minimum $n \in \mathbb{N}$ such that $u_h^{k,n} := u_h^k + 2^{-n} w_h^k$ satisfies
        $$\|\{a_{u_h^{k,n}}(u_h^{k,n}, \varphi_i)\}_{i=1}^m\|_{\ell^2} \le (1 - 2^{-n-1}) \, \|\{a_{u_h^k}(u_h^k, \varphi_i)\}_{i=1}^m\|_{\ell^2}.$$
5:  Let $u_h^{k+1} = u_h^{k,n}$ and $k = k+1$.
6: end while


To compute the first variation of $a_u(u,v)$ in (2.1) with respect to $u$, which is also the second variation of $I_s[u]$, we make use of the identity $\frac{d}{dr}\left(r \, \widetilde{G}_s(r)\right) = G_s'(r)$ and obtain

$$\frac{\delta a_u(u,v)}{\delta u}(w) = \iint_{Q_\Omega} G_s'\!\left(\frac{u(x)-u(y)}{|x-y|}\right) \frac{(w(x)-w(y))(v(x)-v(y))}{|x-y|^{d+1+2s}} \, dx \, dy.$$

The identity $G_s'(a) = (1 + a^2)^{-(d+1+2s)/2}$ follows easily from (4.1). Even though this first variation is not well-defined for arbitrary $u \in V^g$ and $v, w \in V^0$, its discrete counterpart $\frac{\delta a_{u_h}(u_h, v_h)}{\delta u_h}(w_h)$ is well-defined for all $u_h \in V_h^g$, $v_h, w_h \in V_h^0$, because discrete functions are Lipschitz. Our damped Newton algorithm for (3.2) is presented in Algorithm 2.

Lemma 4.2 (convergence of Algorithm 2). The iterates $u_h^k$ of Algorithm 2 converge quadratically to the unique solution of (3.2) from any initial condition.

Proof. Since $I_s$ is strictly convex on $V_h^g$, the convergence of $u_h^k$ to the solution of the discrete problem (3.2) is guaranteed by the theory of numerical optimization in finite-dimensional spaces (see [23], for example).

The critical step in Algorithm 2 is to solve Eq (4.7). Due to the linearity of $\frac{\delta a_{u_h^k}(u_h^k, v_h)}{\delta u_h^k}(w_h^k)$ with respect to $v_h$ and $w_h^k$, we just need to solve a linear system $\widetilde{K}^k W^k = F^k$, where the right-hand side $F^k = (f_i^k)$ is the same as the one in (4.6), namely, $f_i^k = -a_{u_h^k}(u_h^k, \varphi_i)$. The matrix $\widetilde{K}^k = (\widetilde{K}^k_{ij})$, given by

$$\widetilde{K}^k_{ij} = \iint_{Q_\Omega} G_s'\!\left(\frac{u_h^k(x) - u_h^k(y)}{|x-y|}\right) \frac{(\varphi_i(x) - \varphi_i(y))(\varphi_j(x) - \varphi_j(y))}{|x-y|^{d+1+2s}} \, dx \, dy,$$

is the stiffness matrix for a weighted linear fractional diffusion of order $s + \frac12$. Since the only difference with the semi-implicit gradient flow algorithm is the weight, the elements of $\widetilde{K}^k$ can be computed by using the same techniques as for $K^k$.
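For illustration, the sketch below applies the damped Newton iteration of Algorithm 2 to a 1d local model problem (cf. Remark 2.1): arc-length minimization, where the residual uses the weight $(1+\rho^2)^{-1/2}$ and the Jacobian the weight $(1+\rho^2)^{-3/2}$, the local counterparts of $\widetilde{G}_s$ and $G_s'$. This is our toy example (NumPy assumed), not the paper's implementation.

```python
import numpy as np

# Damped Newton for a 1d local model: minimize arc length of a graph with
# boundary values u(0)=0, u(1)=1. Residual R_i = a_u(u, phi_i) with weight
# (1+rho^2)^(-1/2); Jacobian with weight (1+rho^2)^(-3/2). Step halving
# 2^(-n) enforces the residual decrease (1 - 2^(-n-1)) of Algorithm 2.
n = 32
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
u = x + 0.3 * np.sin(2 * np.pi * x)              # initial guess
loc = np.array([[1.0, -1.0], [-1.0, 1.0]])

def assemble(u, expo):
    # element-wise weighted stiffness matrix, weight (1+slope^2)^expo / h
    A = np.zeros((n + 1, n + 1))
    for e, rho in enumerate(np.diff(u) / h):
        A[e:e+2, e:e+2] += (1.0 + rho**2) ** expo / h * loc
    return A

def residual(u):
    return (assemble(u, -0.5) @ u)[1:-1]         # R_i = a_u(u, phi_i)

for _ in range(200):
    r = residual(u)
    if np.linalg.norm(r) < 1e-12:
        break
    w = np.linalg.solve(assemble(u, -1.5)[1:-1, 1:-1], -r)   # Newton step, cf. (4.7)
    nstep = 0
    while True:                                  # damping: minimal n in step 4
        trial = u.copy(); trial[1:-1] += 2.0**(-nstep) * w
        if np.linalg.norm(residual(trial)) <= (1.0 - 2.0**(-nstep-1)) * np.linalg.norm(r):
            break
        nstep += 1
    u = trial

# the discrete minimizer is the straight line between the boundary values
assert np.linalg.norm(residual(u)) < 1e-12
assert np.max(np.abs(u - x)) < 1e-8
```

Near the solution the full step $n = 0$ is always accepted and the iteration converges quadratically, which is why far fewer iterations are needed than for the gradient flow.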

Thus far, we have taken for granted that $g$ has bounded support, and that the computational domain covers $\mathrm{supp}(g)$. We point out that most of the theoretical estimates only require $g$ to be locally bounded. Naturally, in case $g$ does not have compact support, one could simply multiply $g$ by a cutoff function and consider discretizations using this truncated exterior condition. Here we quantify the consistency error arising in this approach. More precisely, given $H > 0$, we consider $\Omega_H$ to be a bounded open domain containing $\Omega$ and such that $d(x, \overline{\Omega}) \ge H$ for all $x \in \partial\Omega_H$, and choose a cutoff function $\eta_H \in C^\infty(\Omega^c)$ satisfying

$$0 \le \eta_H \le 1, \qquad \mathrm{supp}(\eta_H) \subset \overline{\Omega_{H+1}} \setminus \Omega, \qquad \eta_H(x) = 1 \ \text{in} \ \Omega_H \setminus \Omega.$$

We replace $g$ by $g_H := g \, \eta_H$, and consider problem (2.3) using $g_H$ as Dirichlet condition. Let $u^H \in V^{g_H}$ be the solution of such a problem, and $u_h^H$ be the solution of its discrete counterpart over a certain mesh with element size $h$. Because of Theorem 3.3 we know that, for all $r \in [0, s)$,

$$u_h^H \to u^H \quad \text{in } W^{2r}_1(\Omega) \quad \text{as } h \to 0.$$

Therefore we only need to show that, in turn, the minimizers of the truncated problems satisfy $u^H \to u$ as $H \to \infty$ in the same norm. As a first step, we compare the differences in the energy between truncated and extended functions. For that purpose, we define the following truncation and extension operators:

$$T_H : V^g \to V^{g_H}, \quad T_H v = v \, \eta_H, \qquad E_H : V^{g_H} \to V^g, \quad E_H w = w + (1 - \eta_H) \, g.$$

Proposition 5.1 (truncation and extension). The following estimates hold for every $v \in V^g \cap L^\infty(\mathbb{R}^d)$ and $w \in V^{g_H} \cap L^\infty(\mathbb{R}^d)$:

$$|I_s[v] - I_s[T_H v]| \lesssim H^{-1-2s}, \qquad |I_s[w] - I_s[E_H w]| \lesssim H^{-1-2s}.$$

Proof. We prove only the first estimate, as the second one follows in the same fashion. Because $v = T_H v$ in $\Omega_H$, we have

$$|I_s[v] - I_s[T_H v]| \le 2 \int_\Omega \int_{\Omega_H^c} \left| F_s\!\left(\frac{v(x)-v(y)}{|x-y|}\right) - F_s\!\left(\frac{v(x)-T_H v(y)}{|x-y|}\right) \right| \frac{dy\,dx}{|x-y|^{d+2s-1}}.$$

From definition (1.2), it follows immediately that $F_s(0) = F_s'(0) = 0$, and thus $F_s(\rho) \le C\rho^2$ if $\rho \le 1$. Combining this with the fact that $|v(x) - v(y)| \le 2\|v\|_{L^\infty(\mathbb R^d)}$ and $|v(x) - T_H v(y)| \le 2\|v\|_{L^\infty(\mathbb R^d)}$ for a.e. $x \in \Omega$, $y \in \Omega^c$, and integrating in polar coordinates, we conclude

$$|I_s[v] - I_s[T_H v]| \lesssim \|v\|^2_{L^\infty(\Omega^c)} \int_\Omega \int_{\Omega_H^c} \frac{dx\,dy}{|x-y|^{d+2s+1}} \lesssim H^{-1-2s}.$$

    This concludes the proof.
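As a sanity check of the rate (an illustrative computation, not from the paper): in $d = 1$ with $\Omega = (-1,1)$ and $\Omega_H = (-1-H, 1+H)$, the tail integral in the last bound has a closed form whose decay in $H$ can be evaluated directly.

```python
import math

def tail_integral(H, s):
    """Closed form of the 1d tail integral
       int_{-1}^{1} int_{|y| >= 1+H} |x-y|^(-(2+2s)) dy dx
    (the bound above with d = 1): inner integral over each tail is
    (1+H -/+ x)^(-(1+2s)) / (1+2s); integrating in x and summing both
    tails gives the expression below."""
    return 2.0 * (H**(-2 * s) - (H + 2.0)**(-2 * s)) / (2 * s * (1 + 2 * s))

s = 0.25
# log-log slope between H = 1000 and H = 2000: close to -(1+2s) = -1.5
slope = math.log(tail_integral(2000.0, s) / tail_integral(1000.0, s)) / math.log(2.0)
```

The measured slope matches the $H^{-1-2s}$ rate claimed in Proposition 5.1 and Corollary 5.2.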

    The previous result leads immediately to an energy consistency estimate for the truncated problem.

Corollary 5.2 (energy consistency). The minimizers of the original and truncated problems satisfy

$$|I_s[u] - I_s[u^H]| \lesssim H^{-1-2s}.$$

Proof. Since $u^H$ is the minimizer over $V^{g_H}$ and $T_H u \in V^{g_H}$, we deduce

$$I_s[u^H] - I_s[u] \le I_s[T_H u] - I_s[u] \lesssim H^{-1-2s}.$$

Conversely, using that $u$ is the minimizer over $V^g$ and $E_H u^H \in V^g$, we obtain

$$I_s[u] - I_s[u^H] \le I_s[E_H u^H] - I_s[u^H] \lesssim H^{-1-2s},$$

    and thus conclude the proof.

The energy $I_s$ is closely related to the $W^{2s}_1(\Omega)$-norm, in the sense that one is finite if and only if the other one is finite [7, Lemma 2.5]. Thus, in the same way as in Theorem 3.3 (convergence), energy consistency yields convergence in $W^{2r}_1(\Omega)$ for all $r\in[0,s)$.

Proposition 5.3 (convergence). Let $\Omega \subset \mathbb R^d$ be a bounded Lipschitz domain and $g \in L^\infty(\mathbb R^d)$. Let $u$ and $u^H$ be minimizers of $I_s$ over $V^g$ and $V^{g_H}$, respectively. Then for all $r \in [0,s)$, it holds that

$$\lim_{H\to\infty} \|u - u^H\|_{W^{2r}_1(\Omega)} = 0.$$

Proof. The proof proceeds using the same arguments as in [7, Theorem 4.3]. In fact, from Corollary 5.2 we deduce that $\{I_s[u^H]\}$ is uniformly bounded, and therefore $\{u^H\}$ is bounded in $W^{2s}_1(\Omega)$. It follows that, up to a subsequence, $u^H$ converges in $L^1(\Omega)$ to a limit $\widetilde u$. Also, because $u^H = g$ in $\Omega_H \setminus \Omega$, we can extend $\widetilde u$ by $g$ on $\Omega^c$, and have $u^H \to \widetilde u$ a.e. in $\mathbb R^d$. We then can invoke Fatou's lemma and Corollary 5.2 to deduce that

$$I_s[\widetilde u] \le \liminf_{H\to\infty} I_s[u^H] \le \liminf_{H\to\infty} \left( I_s[u] + C H^{-1-2s} \right) = I_s[u].$$

Because $\widetilde u \in V^g$, we deduce that $\widetilde u = u$, whence $u^H \to u$ in $L^1(\Omega)$ as $H \to \infty$. By interpolation, we conclude that convergence in $W^{2r}_1(\Omega)$ holds for all $r\in[0,s)$.

In this section, we briefly introduce the problem of computing graphs with prescribed nonlocal mean curvature. More specifically, we address the computation of a function $u$ such that, for a.e. $x\in\Omega$, a certain nonlocal mean curvature at $(x,u(x))$ is equal to a given function $f(x)$. For a set $E \subset \mathbb R^{d+1}$ and $\widetilde x \in \partial E$, such a nonlocal mean curvature operator is defined as [12]

$$H_s[E](\widetilde x) := \mathrm{P.V.} \int_{\mathbb R^{d+1}} \frac{\chi_{E^c}(\widetilde y) - \chi_E(\widetilde y)}{|\widetilde x - \widetilde y|^{d+1+2s}} \, d\widetilde y.$$

In turn, for $\widetilde x = (x, u(x))$ on the graph of $u$, this can be written as [25, Chapter 4]

$$H_s[u](x) = \mathrm{P.V.} \int_{\mathbb R^d} G_s\!\left(\frac{u(x)-u(y)}{|x-y|}\right) \frac{dy}{|x-y|^{d+2s}}.$$

To recover the classical mean curvature in the limit $s \to \frac12^-$, it is necessary to normalize the operator $H_s$ accordingly. Let $\alpha_d$ denote the volume of the $d$-dimensional unit ball, and consider the prescribed nonlocal mean curvature problem

$$\begin{cases} \dfrac{1-2s}{d\alpha_d} H_s[u](x) = f(x), & x \in \Omega, \\ u(x) = g(x), & x \in \mathbb R^d \setminus \Omega. \end{cases} \qquad (6.1)$$

The scaling factor $\frac{1-2s}{d\alpha_d}$ yields [7, Lemma 5.8]

$$\lim_{s\to\frac12^-} \frac{1-2s}{d\alpha_d}\, H_s[E](x) = H[E](x), \qquad (6.2)$$

where $H[E]$ denotes the classical mean curvature operator. Therefore, in the limit $s\to\frac12^-$, formula (6.1) formally becomes the following Dirichlet problem for graphs of prescribed classical mean curvature:

$$\begin{cases} -\dfrac1d \operatorname{div}\!\left(\dfrac{\nabla u(x)}{(1+|\nabla u(x)|^2)^{1/2}}\right) = f(x), & x\in\Omega, \\ u(x) = g(x), & x \in \partial\Omega. \end{cases} \qquad (6.3)$$

An alternative formulation of the prescribed nonlocal mean curvature problem for graphs is to find $u \in V^g$ minimizing the functional

$$K_s[u; f] := I_s[u] - \frac{d\alpha_d}{1-2s} \int_\Omega f(x)\,u(x)\,dx. \qquad (6.4)$$

Because $I_s[u]$ is convex and the second term in the right-hand side above is linear, it follows that this functional is also convex. Then, by taking the first variation of (6.4), we see that $u \in V^g$ is the minimizer of $K_s[\cdot\,; f]$ if and only if it satisfies

$$0 = a_u(u,v) - \frac{d\alpha_d}{1-2s} \int_\Omega f(x)\,v(x)\,dx = \iint_{Q_\Omega} G_s\!\left(\frac{u(x)-u(y)}{|x-y|}\right) \frac{v(x)-v(y)}{|x-y|^{d+2s}}\,dx\,dy - \frac{d\alpha_d}{1-2s}\int_\Omega f(x)\,v(x)\,dx \qquad (6.5)$$

for every $v \in V^0$. Formally, (6.1) coincides with (6.5): one can multiply (6.1) by a test function $v$, integrate by parts, and take advantage of the fact that $G_s$ is an odd function to arrive at (6.5) up to a constant factor.

One intriguing question regarding the energy $K_s[u;f]$ in (6.4) is what conditions on $f$ are needed to guarantee that it is bounded below. In fact, for the variational formulation of the classical mean curvature problem (6.3), Giaquinta [20] proves the following necessary and sufficient condition for well-posedness: there exists some $\varepsilon_0 > 0$ such that for every measurable set $A \subset \Omega$,

$$\left| \int_A f(x)\,dx \right| \le \frac{1-\varepsilon_0}{d}\, \mathcal H^{d-1}(\partial A), \qquad (6.6)$$

where $\mathcal H^{d-1}$ denotes the $(d-1)$-dimensional Hausdorff measure. In some sense, this condition ensures that the function $f$ be suitably small.

    Although we are not aware of such a characterization for prescribed nonlocal mean curvature problems, a related sufficient condition for Ks[u;f] to have a lower bound can be easily derived. We discuss this next.

Proposition 6.1 (convergence). Let $s \in (0, 1/2)$, $\Omega \subset \mathbb R^d$ be a bounded Lipschitz domain, $f \in L^{d/(2s)}(\Omega)$ be sufficiently small, and $g \in C_c(\mathbb R^d)$. Then, for all $h > 0$ there exist unique minimizers $u \in V^g$ and $u_h \in V^g_h$ of $K_s[\cdot\,;f]$, and they satisfy

$$\lim_{h\to0} \|u - u_h\|_{W^{2r}_1(\Omega)} = 0, \qquad r \in [0,s).$$

Proof. Exploiting [7, Lemma 2.5 and Proposition 2.7], we deduce that

$$I_s[u] + C_0 \ge C_1 \|u\|_{W^{2s}_1(\Omega)},$$

for suitable constants $C_0 = C_0(d, \Omega, s, \|g\|_{L^\infty(\Omega^c)})$ and $C_1$. On the other hand, Hölder's inequality and the Sobolev embedding $W^{2s}_1(\Omega) \subset L^{d/(d-2s)}(\Omega)$ give

$$\int_\Omega f(x)\,u(x)\,dx \le \|u\|_{L^{d/(d-2s)}(\Omega)} \|f\|_{L^{d/(2s)}(\Omega)} \le C_2 \|u\|_{W^{2s}_1(\Omega)} \|f\|_{L^{d/(2s)}(\Omega)},$$

whence $K_s[u;f]$ is bounded from below provided $\|f\|_{L^{d/(2s)}(\Omega)}$ is suitably small:

$$K_s[u;f] \ge \|u\|_{W^{2s}_1(\Omega)} \left( C_1 - C_2 \|f\|_{L^{d/(2s)}(\Omega)} \right) - C_0. \qquad (6.7)$$

Thus, the existence of minimizers of $K_s[\cdot\,;f]$ on either $V^g$ or $V^g_h$ follows by standard arguments. To prove the convergence of the discrete minimizers $u_h$, it suffices to extend the arguments of Section 3.2 to $f \not\equiv 0$, following [7, Theorem 4.2].

Remark 6.2 (consistency with local problems). The requirement that $\|f\|_{L^{d/(2s)}(\Omega)}$ be sufficiently small and estimate (6.7) are to some extent consistent with (6.6), because it holds that

$$\left| \int_A f(x)\,dx \right| \le \left( \int_A 1\,dx \right)^{\frac{d-1}{d}} \left( \int_A |f(x)|^d\,dx \right)^{\frac1d} \lesssim \mathcal H^{d-1}(\partial A)\, \|f\|_{L^d(\Omega)},$$

due to Hölder's inequality and the isoperimetric inequality, and formally the case $2s = 1$ corresponds to the classical prescribed mean curvature problem (cf. (6.2)).

    This section presents a variety of numerical experiments that illustrate some of the main features of fractional minimal graphs discussed in this paper. From a quantitative perspective, we explore stickiness and the effect of truncating the computational domain. Moreover, we report on the conditioning of the matrices arising in the iterative resolution of the nonlinear discrete equations. Our experiments also illustrate that nonlocal minimal graphs may change their concavity inside the domain Ω, and we show that graphs with prescribed fractional mean curvature may be discontinuous in Ω.

    In all the experiments displayed in this section we use the damped Newton algorithm from §4.3. We refer to [7] for experiments involving the semi-implicit gradient flow algorithm and illustrating its energy-decrease property.

We first consider the example studied in [15, Theorem 1.2]. We solve (3.2) for $\Omega = (-1,1) \subset \mathbb R$ and $g(x) = M\,\mathrm{sign}(x)$, where $M > 0$. Reference [15] proves that, for every $s \in (0,1/2)$, stickiness (i.e., the solution being discontinuous at $\partial\Omega$) occurs if $M$ is big enough and, denoting the corresponding solution by $u_M$, that there exists an optimal constant $c_0$ such that

$$\sup_{x\in\Omega} u_M(x) < c_0\, M^{\frac{1+2s}{2+2s}}, \qquad \inf_{x\in\Omega} u_M(x) > -c_0\, M^{\frac{1+2s}{2+2s}}. \qquad (7.1)$$

In our experiments, we consider $s = 0.1, 0.25, 0.4$ and use graded meshes (cf. Section 3.3) with parameters $\mu = 2$, $h = 10^{-3}$ to better resolve the boundary discontinuity. The mesh size $h$ here is taken in such a way that the resulting mesh partitions $\Omega = (-1,1)$ into $|\Omega|^{1/\mu}/h$ subintervals, the smallest ones having size $h^\mu$. Moreover, since this is a one-dimensional example and the unboundedly supported datum $g$ is piecewise constant, we can use quadrature to approximate the integrals over $\Omega^c$ rather than directly truncating $g$. The left panel in Figure 1 shows the computed solutions with $M = 16$.
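A graded mesh of this type can be generated in a few lines. The construction below is an assumption patterned on standard boundary grading (nodes behave like $\bigl(i h\bigr)^\mu$ in distance from the endpoints), not the paper's exact mesh routine.

```python
import numpy as np

def graded_mesh(h, mu):
    """Mesh on Omega = (-1, 1) graded toward both endpoints:
    on [0, 1] the nodes are 1 - ((N - i)/N)**mu, i = 0..N with N ~ 1/h,
    mirrored to [-1, 0]; the smallest elements scale like h**mu."""
    N = int(round(1.0 / h))
    right = 1.0 - ((N - np.arange(N + 1)) / N) ** mu   # 0 = x_0 < ... < x_N = 1
    return np.concatenate([-right[::-1][:-1], right])  # mirror, drop duplicate 0

nodes = graded_mesh(h=1e-3, mu=2)
sizes = np.diff(nodes)
# smallest elements ~ h^mu = 1e-6 next to x = +-1, largest ~ mu*h near the origin
```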

Figure 1.  Stickiness in 1d. In the setting of Section 7.1, the left panel displays the finite element solutions $u^M_h$ for $M=16$ computed over graded meshes with parameters $\mu=2$, $h=10^{-3}$ and $s\in\{0.1, 0.25, 0.4\}$. The right panel shows the value of $u^M_h(x_1)$ as a function of $M$ for $s\in\{0.1, 0.25, 0.4\}$, which is expected to behave according to (7.1).

In all cases we observe that the discrete solutions $u_h$ are monotonically increasing in $\Omega$, so we let $x_1$ be the free node closest to $1$ and use $u^M_h(x_1)$ as an approximation of $\sup_{x\in\Omega} u_M(x)$. The right panel in Figure 1 shows how $u^M_h(x_1)$ varies with respect to $M$ for different values of $s$.

For $s=0.1$ and $s=0.25$, the slopes of the curves are slightly larger than the theoretical rate $M^{\frac{1+2s}{2+2s}}$ whenever $M$ is small. However, as $M$ increases, we see good agreement with theory. Comparing results for $M=2^{7.5}$ and $M=2^{8}$, we observe approximate rates $0.553$ for $s=0.1$ and $0.602$ for $s=0.25$, where the expected rates are $6/11 \approx 0.545$ and $3/5 = 0.600$, respectively. However, the situation is different for $s=0.4$: the plotted curve does not correspond to a flat line, and the last two nodes plotted, with $M=2^{7.5}$ and $M=2^{8}$, show a relative slope of about $0.57$, which is off the expected $9/14 \approx 0.643$.

We believe this discrepancy is due to the mesh size $h$ not being small enough to resolve the boundary behavior. We ran the same experiment on a finer mesh, namely with $h=10^{-4}$, $\mu=2$; Table 1 reports our findings for $s=0.4$ and compares them with the ones for the coarser mesh. The results are closer to the predicted rate.

Table 1.  Comparison between computational results for the problem described in Section 7.1 over two different meshes for $s=0.4$. Let $M_i$ be the value of $M$ in the $i$-th row. In this table, by the slope at $M_i$ we refer to $\frac{\log(u^{M_i}_h(x_1)) - \log(u^{M_{i-1}}_h(x_1))}{\log(M_i) - \log(M_{i-1})}$, which, according to (7.1), is expected to be equal to $9/14 \approx 0.643$.

               Example with h = 10^{-3}      Example with h = 10^{-4}
    log2(M)    u^M_h(x_1)     slope          u^M_h(x_1)     slope
    6.0        26.1545        N/A            26.7488        N/A
    6.5        32.4687        0.624          33.4057        0.641
    7.0        40.0845        0.608          41.5497        0.629
    7.5        49.1873        0.590          51.4627        0.617
    8.0        59.9410        0.571          63.4528        0.604

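The slope column of Table 1 can be reproduced directly from the tabulated values; here for the finer mesh ($h = 10^{-4}$) with $s=0.4$, where the expected rate is $(1+2s)/(2+2s) = 9/14$.

```python
import math

# u_h^M(x_1) for s = 0.4 on the finer mesh (h = 1e-4, mu = 2), from Table 1
log2_M = [6.0, 6.5, 7.0, 7.5, 8.0]
u_vals = [26.7488, 33.4057, 41.5497, 51.4627, 63.4528]

# slope at M_i: (log u(M_i) - log u(M_{i-1})) / (log M_i - log M_{i-1})
slopes = [
    (math.log(u_vals[i]) - math.log(u_vals[i - 1]))
    / ((log2_M[i] - log2_M[i - 1]) * math.log(2.0))
    for i in range(1, len(u_vals))
]
expected = (1 + 2 * 0.4) / (2 + 2 * 0.4)   # (1 + 2s)/(2 + 2s) = 9/14 ~ 0.643
```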

To solve the linear systems arising in our discrete formulations, we use the conjugate gradient method. Therefore, the number of iterations needed to reach a fixed tolerance scales like $\sqrt{\kappa(K)}$, where $\kappa(K)$ is the condition number of the stiffness matrix $K$. For linear problems of order $s$ involving the fractional Laplacian $(-\Delta)^s$, the condition number of $K$ satisfies [3]

$$\kappa(K) = O\!\left( N^{2s/d} \left( \frac{h_{\max}}{h_{\min}} \right)^{d-2s} \right).$$

Reference [3] also shows that diagonal preconditioning yields $\kappa(K) = O(N^{2s/d})$, where $N$ is the dimension of the finite element space.

    Using the Matlab function condest, we estimate the condition number of the Jacobian matrix in the last Newton iteration in the example from Section 7.1 with M=1, with and without diagonal preconditioning. Figure 2 summarizes our findings.

Figure 2.  Condition numbers of the Jacobian matrix $\widetilde K$ at the last step of Algorithm 2 for the problem described in Section 7.1 with $s=0.1$ (left), $s=0.4$ (right) and meshes with grading parameters $\mu\in\{1,2,3\}$. The condition number on quasi-uniform meshes ($\mu=1$) scales as $N^{2s+1}$, in agreement with the $s$-fractional mean curvature operator being an operator of order $s+1/2$ (cf. (2.3)). While the conditioning for graded meshes is significantly poorer, when $\mu=2$ diagonal preconditioning recovers condition numbers comparable to the ones on quasi-uniform meshes.

Let $N = \dim V^g_h$ be the number of degrees of freedom. For a fixed $s$ and using uniform meshes, we observe that the condition number behaves like $N^{2s+1} \simeq h^{-2s-1}$: this is consistent with the $s$-fractional mean curvature operator being an operator of order $s + 1/2$. For graded meshes (with $\mu=2$, $\mu=3$), the behavior is less clear. When using diagonal preconditioning for $\mu=2$, we observe that the condition number also behaves like $N^{2s+1}$.
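The effect of diagonal (Jacobi) preconditioning on graded meshes can be illustrated with a local stand-in: a P1 stiffness matrix for $-u''$ over a mesh graded toward one endpoint. This is an illustrative toy, not the fractional matrix $\widetilde K$; the qualitative behavior (grading degrades $\kappa$, diagonal scaling recovers much of it) is what the experiment above measures.

```python
import numpy as np

def stiffness_1d(nodes):
    """P1 stiffness matrix of -u'' with zero Dirichlet BCs on a 1d mesh."""
    h = np.diff(nodes)
    n = len(nodes) - 2                      # number of interior nodes
    K = np.zeros((n, n))
    for e in range(len(h)):                 # element e joins nodes e, e+1
        for a in range(2):
            for b in range(2):
                i, j = e - 1 + a, e - 1 + b # interior index = global - 1
                if 0 <= i < n and 0 <= j < n:
                    K[i, j] += (1.0 if a == b else -1.0) / h[e]
    return K

nodes = np.linspace(0.0, 1.0, 65) ** 2      # mesh graded toward x = 0 (mu = 2)
K = stiffness_1d(nodes)
D = np.diag(1.0 / np.sqrt(np.diag(K)))      # Jacobi scaling D^{-1/2} K D^{-1/2}
kappa, kappa_prec = np.linalg.cond(K), np.linalg.cond(D @ K @ D)
```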

In Section 5, we studied the effect of truncating unboundedly supported data and proved the convergence of the discrete solutions $u^H_h$ of the truncated problems towards $u$ as $h\to0$, $H\to\infty$.

Here, we study numerically the effect of data truncation by running experiments on a simple two-dimensional problem. Consider $\Omega = B_1 \subset \mathbb R^2$ and $g \equiv 1$; then, the nonlocal minimal graph $u$ is a constant function. For $H > 0$, we set $\Omega_H = B_{H+1}$ and compute nonlocal minimal graphs on $\Omega$ with Dirichlet data $g_H = \chi_{\Omega_H}$, which is a truncation of $g \equiv 1$. Clearly, if there were no truncation, then $u_h$ would be constantly $1$; the effect of the truncation of $g$ is that the minimum value of $u^H_h$ inside $\Omega$ is strictly less than $1$. For $s=0.25$, we plot the $L^1(\Omega)$ and $L^\infty(\Omega)$ norms of $u_h - u^H_h$ as functions of $H$ in Figure 3. The slope of the curves is close to $-1.5$ for large $H$, which is in agreement with the $O(H^{-1-2s})$ consistency error for the energy $I_s$ we proved in Corollary 5.2.

Figure 3.  Effects of truncation in 2d for $s=0.25$: for $g_H = \chi_{\Omega_H}$, we compute the $L^1$ and $L^\infty$ discrepancies between $u_h \equiv 1$ and $u^H_h$ as functions of $H$. For both norms we observe a discrepancy of order $H^{-1-2s}$, in agreement with Corollary 5.2.

Change of convexity is a peculiar behavior of fractional minimal graphs. We consider $\Omega = (-1,1)$, $s=0.02$, $g(x) = 1$ for $a \le |x| \le 2$ and $g(x) = 0$ otherwise, and denote by $u_a$ the solution of (3.2). For $a=1$, it is apparent from Figure 4 (left panel) that the solution $u_1$ is convex in $\Omega$ and has stickiness on the boundary. In addition, the figure confirms that $\lim_{x\to1^-} u_a'(x) = \infty$, which is asserted in [17, Corollary 1.3]. On the contrary, for $1 < a < 2$, as can be seen from Figure 4 (right panel), [17, Corollary 1.3] implies that $\lim_{x\to1^-} u_a'(x) = -\infty$, since $g(x) = 0$ near the boundary of $\Omega$. This fact implies that $u_a$ cannot be convex near $x=1$ for $1<a<2$. Furthermore, as $a \to 1^+$ one expects that $u_a(x) \to u_1(x)$, and thus that $u_a$ be convex in the interior of $\Omega = (-1,1)$ for $a$ close to $1$. Therefore it is natural that, for some values of $a > 1$ sufficiently close to $1$, the solution $u_a$ changes the sign of its second derivative inside $\Omega$. In fact, we see from the right panel in Figure 4 that the nonlocal minimal graph in $\Omega$ continuously changes from a convex curve into a concave one as $a$ varies from $1$ to $1.5$.

Figure 4.  Change of convexity: one-dimensional experiment for $s=0.02$ with $a=1$ (left panel) and $a = 1.0001, 1.01, 1.05, 1.1, 1.2, 1.5$ (right panel). The solutions $u_a$ exhibit a transition from being convex in $\Omega$ for $a=1$ to being concave for $a=1.5$.

This change of convexity is not restricted to one-dimensional problems. Let $\Omega \subset \mathbb R^2$ be the unit ball, $s = 0.25$, and $g(x) = 1$ for $\frac{129}{128} \le |x| \le 1.5$ and $g(x)=0$ otherwise. Figure 5 (right panel) shows a radial slice of the discrete minimal graph, which is a convex function near the origin but concave near $\partial\Omega$. An argument analogous to the one we discussed in the previous paragraph also explains this behavior in the two-dimensional experiment.

    Figure 5.  Change of convexity: one-dimensional experiment with s=0.02 (left panel) and two-dimensional experiment with s=0.25 (right panel). The piecewise constant boundary data vanish near the boundary of Ω and at infinity and are equal to 1 on an intermediate annulus.

Stickiness is one of the intrinsic and distinctive features of nonlocal minimal graphs. It can be delicate, especially in dimensions higher than one. We now analyze a problem studied in [16] that illustrates the following fact: for $\Omega \subset \mathbb R^2$, if a nonlocal minimal graph is continuous at some point $x \in \partial\Omega$, then it must also have continuous tangential derivatives at that point. This geometric rigidity stands in sharp contrast with both fractional-order linear problems and classical minimal graphs.

Specifically, we consider $\Omega = (0,1) \times (-1,1)$ and the Dirichlet data

$$g(x,y) = \gamma \left( \chi_{(-1,-a)\times(0,1)}(x,y) - \chi_{(-1,-a)\times(-1,0)}(x,y) \right),$$

where $a \in [0,1]$ and $\gamma > 0$ are parameters to be chosen. We construct graded meshes with $\mu = 2$ and smallest mesh size $h_\mu = 2^{-7}$; see Section 3.3. Figure 6 (left panel) displays the numerical solution $u_h$ associated with $s = 0.25$, $\gamma = 2$ and $a = 1/8$.

Figure 6.  Plot of $u_h$ in Section 7.5 for $\gamma=2$, $a=1/8$ and $s=0.25$. Left panel: top view of the solution. Right panel: slices at $x = 2^{-3}, 2^{-6}$ and $2^{-9}$. The fractional minimal graph flattens as $x \to 0^+$, in agreement with the fact that for such a minimizer being continuous at some point $x \in \partial\Omega$ implies having continuous tangential derivatives at that point.

If one defines the function $u_0(y) = \lim_{x\to0^+} u(x,y)$, then according to [16, Theorem 1.4] one has $u_0'(0) = 0$ for $a > 0$. We ran a sequence of experiments to computationally verify this theoretical result. For meshes with $\mu=2$ and $h_\mu = 2^{-7}, 2^{-8}, 2^{-9}$, the slopes of $u_h$ in the $y$-direction at $(x,0)$ for $x = 2^{-6}, 2^{-7}, 2^{-8}, 2^{-9}$ are recorded in Table 2 below for $s = 0.1, 0.25, 0.4$. Because computing the slope of $u_h$ at $(x,0)$ would be meaningless when $x$ is smaller than $h_\mu$, we write an N/A symbol in those cases. Our experiments show that the slopes decrease as $x$ approaches $0$.

Table 2.  Example of Section 7.5: experimental slopes $\partial_y u_h(x,0)$ for $x = 2^{-k}$ and $k = 6, \ldots, 9$. As $x \to 0^+$, these slopes become smaller; this geometric rigidity is easier to capture for larger $s$.

    s = 0.10
    h_mu      x = 2^{-9}      x = 2^{-8}      x = 2^{-7}      x = 2^{-6}
    2^{-7}    N/A             N/A             8.546×10^{-2}   1.1945×10^{-1}
    2^{-8}    N/A             5.856×10^{-2}   8.406×10^{-2}   1.2140×10^{-1}
    2^{-9}    3.940×10^{-2}   5.730×10^{-2}   8.572×10^{-2}   1.2332×10^{-1}

    s = 0.25
    h_mu      x = 2^{-9}      x = 2^{-8}      x = 2^{-7}      x = 2^{-6}
    2^{-7}    N/A             N/A             3.466×10^{-2}   5.473×10^{-2}
    2^{-8}    N/A             2.135×10^{-2}   3.469×10^{-2}   5.551×10^{-2}
    2^{-9}    1.289×10^{-2}   2.126×10^{-2}   3.543×10^{-2}   5.640×10^{-2}

    s = 0.40
    h_mu      x = 2^{-9}      x = 2^{-8}      x = 2^{-7}      x = 2^{-6}
    2^{-7}    N/A             N/A             8.605×10^{-3}   1.509×10^{-2}
    2^{-8}    N/A             4.763×10^{-3}   8.613×10^{-3}   1.540×10^{-2}
    2^{-9}    2.578×10^{-3}   4.739×10^{-3}   8.886×10^{-3}   1.574×10^{-2}

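The monotone decay reported in Table 2 can be checked directly from the tabulated values (finest mesh, $s = 0.25$):

```python
# slopes d/dy u_h(x, 0) for s = 0.25 on the finest mesh (h_mu = 2^-9), Table 2
x_vals = [2**-9, 2**-8, 2**-7, 2**-6]
slopes = [1.289e-2, 2.126e-2, 3.543e-2, 5.640e-2]

# the tangential slopes decrease monotonically as x -> 0+, consistent with
# u_0'(0) = 0 predicted by [16, Theorem 1.4]
monotone = all(a < b for a, b in zip(slopes, slopes[1:]))
```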

To further illustrate this behavior, in Figure 6 (right panel) we display the computed solutions $u_h(x,y)$ at $x = 2^{-3}, 2^{-6}, 2^{-9}$, for $s=0.25$ over a mesh with $h_\mu = 2^{-9}$. The flattening of the curves as $x \to 0^+$ is apparent.

This section presents experiments involving graphs with nonzero prescribed mean curvature. We run experiments that indicate the need for a compatibility condition such as (6.6), illustrate that solutions may develop discontinuities in the interior of the domain, and point to the relation between stickiness and the nonlocal mean curvature of the domain.

As discussed in Section 6, the prescribed nonlocal mean curvature problem (6.5) may not have solutions for some functions $f$. To verify this, in Figure 7 we consider $\Omega = B(0,1) \subset \mathbb R^2$, $s = 0.25$, $g = 0$ and two choices of $f$. For the picture on the right ($f = 10$), the residual does not converge to $0$, and the energy $K_s[u;f]$ goes from $0$ initially down to $-6.6\times10^{6}$ after 16 Newton iterations.

Figure 7.  Compatibility of data: plots of $u_h$ for $s=0.25$, $f=1$ in $\Omega$ (left) and after 16 Newton iterations for $f=10$ in $\Omega$ (right). The right-hand side $f=10$ turns out to be incompatible for the prescribed nonlocal mean curvature problem in $\Omega = B(0,1)$.

Another interesting phenomenon we observe is that, for a discontinuous $f$, the solution $u$ may also develop discontinuities inside $\Omega$. We present the following two examples, for $d=1$ and $d=2$.

First, let $\Omega = (-1,1) \subset \mathbb R$, $s = 0.01$, $g = 0$, and consider $f(x) = 1.5\,\mathrm{sign}(x)$. We use a mesh graded toward $x = 0, \pm1$ with $N = 2000$ degrees of freedom and plot the numerical solution $u_h$ in Figure 8. The behavior of $u_h$ indicates that the solution $u$ has discontinuities both at $x = \pm1$ and at $x = 0$.

Figure 8.  Nonlocal minimal graph with prescribed discontinuous nonlocal mean curvature. Left: plot of $u_h$ in $[-1.5, 1.5]$. Right: plot of $u_h$ near the origin.

As a second illustration of interior discontinuities, let $\Omega = (-1,1)^2 \subset \mathbb R^2$, $s = 0.01$, $g = 0$, and consider $f(x,y) = 4\,\mathrm{sign}(xy)$. We use a mesh graded toward the axes and the boundary with $N = 4145$ degrees of freedom and plot the numerical solution $u_h$ in Figure 9. The behavior of $u_h$ shows that the solution $u$ has discontinuities near the boundary and across the edges inside $\Omega$ where $f$ is discontinuous.

    Figure 9.  A graph with prescribed discontinuous nonlocal mean curvature in the square Ω=(1,1)2. The left panel displays a top view, while the right panel shows a side view along the slice {y=1/2}. The solution to (6.1) is discontinuous inside Ω.

Next, we numerically address the effect of boundary curvature on nonlocal minimal graphs. For this purpose, we present examples of graphs with prescribed nonlocal mean curvature in several two-dimensional domains, in which we fix $g = 0$ and $f = 1$.

Consider the annulus $\Omega = B(0,1) \setminus \overline{B(0,1/4)}$ and $s = 0.25$. The top row in Figure 10 offers a top view of the discrete solution $u_h$ and a radial slice of it. We observe that the discrete solution is about three times stickier at the inner boundary than at the outer one. The middle and bottom rows in Figure 10 display different views of the solution in the square $\Omega = (-1,1)^2$ for $s = 0.01$. Near the boundary of the domain $\Omega$, we observe a steep slope in the middle of the edges; however, stickiness is not observed at the convex corners of $\Omega$.

Figure 10.  Top and side views of functions with prescribed fractional mean curvature $f=1$ in $\Omega$ that vanish in $\Omega^c$. Here, $\Omega$ is either an annulus (top row) or a square (middle and bottom rows). The plot in the top-right panel corresponds to a radial slice ($y=0$, $0.25 \le x \le 1$) of the annulus, while the ones in the bottom-left and bottom-right show slices along the diagonal ($0 \le y = x \le 1$) and perpendicular to an edge of the square ($y=0.5$, $0 \le x \le 1$), respectively. We observe that stickiness is larger near the concave portions of the boundary than near the convex ones, and that it is absent at the corners of the square.

We finally investigate stickiness at the boundary of the L-shaped domain $\Omega = (-1,1)^2 \setminus \left( (0,1)\times(-1,0) \right)$ with $s=0.25$, $g=0$, $f=1$. We observe in Figure 11 that stickiness is most pronounced at the reentrant corner but absent at the convex corners of $\Omega$.

Figure 11.  Stickiness on the L-shaped domain $\Omega = (-1,1)^2 \setminus ((0,1)\times(-1,0))$ with prescribed fractional mean curvature $f=1$ in $\Omega$ and Dirichlet condition $g=0$ in $\Omega^c$. The plots in the middle row correspond to slices along $y=x$ and $y=-x$, respectively, while the ones in the bottom row are slices along $x=0$ and $y=0.5$, respectively. We see that the largest stickiness takes place at the reentrant corner, while there is no stickiness at the convex corners.

From these examples we conjecture that there is a relation between the amount of stickiness on $\partial\Omega$ and the nonlocal mean curvature of $\partial\Omega$. Heuristically, let us assume that the Euler–Lagrange equation is satisfied at some point $x \in \partial\Omega$:

$$H_s[u](x) = \mathrm{P.V.} \int_{\mathbb R^d} G_s\!\left( \frac{u(x)-u(y)}{|x-y|} \right) \frac{dy}{|x-y|^{d+2s}} = f(x),$$

where we recall that $G_s$ is defined in (4.1). This fact is not necessarily true, because (6.1) guarantees this identity to hold in $\Omega$ only. Above, we assume that the minimizer is continuous in $\overline\Omega$, so that we can set $u(x) := \lim_{\Omega \ni y \to x} u(y)$. Thus, we can define the stickiness at $x \in \partial\Omega$ as

$$M_s(x) := \lim_{\Omega^c \ni y \to x} u(y) - u(x).$$

We point out that in these examples, because the minimizer $u$ attains its maximum on $\Omega^c$ and is constant in that region, we have $M_s \ge 0$. Let $r > 0$ be small, and let us assume that the prescribed curvature is $f(x) = 0$, that we can split the principal value integral in the definition of $H_s$, and that the contribution of the integral on $\mathbb R^d \setminus B_r(x)$ is negligible compared with that on $B_r(x)$. Then, we must have

$$\int_{\Omega \cap B_r(x)} G_s\!\left( \frac{u(x)-u(y)}{|x-y|} \right) \frac{dy}{|x-y|^{d+2s}} \approx \int_{\Omega^c \cap B_r(x)} G_s\!\left( \frac{u(y)-u(x)}{|x-y|} \right) \frac{dy}{|x-y|^{d+2s}}.$$

If the solution is sticky at $x$, namely $M_s > 0$, then we can approximate

$$\int_{\Omega^c \cap B_r(x)} G_s\!\left( \frac{u(y)-u(x)}{|x-y|} \right) \frac{dy}{|x-y|^{d+2s}} \approx \int_{\Omega^c \cap B_r(x)} G_s\!\left( \frac{M_s}{|x-y|} \right) \frac{dy}{|x-y|^{d+2s}}.$$

Because $G_s(M_s/|x-y|)$ is strictly increasing with respect to $M_s$, we can heuristically argue that the stickiness $M_s(x)$ grows with the ratio

$$R(x) := \frac{|\Omega \cap B_r(x)|}{|\Omega^c \cap B_r(x)|}$$

in order to maintain the balance between the integral over $\Omega \cap B_r(x)$ and the one over $\Omega^c \cap B_r(x)$. Actually, if $R(x) < 1$, as happens at convex corners $x \in \partial\Omega$, it might not be possible for these integrals to balance unless $M_s(x) = 0$. This supports the conjecture that the minimizers are not sticky at convex corners.
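For a planar corner of interior opening angle $\theta$, the ratio $R(x)$ tends to $\theta/(2\pi-\theta)$ as $r \to 0$, which makes the heuristic quantitative (an illustrative computation):

```python
import math

def corner_ratio(theta):
    """R(x) = |Omega ∩ B_r(x)| / |Omega^c ∩ B_r(x)| in the small-r limit
    at a planar corner of interior opening angle theta."""
    return theta / (2.0 * math.pi - theta)

R_convex = corner_ratio(math.pi / 2)        # square's convex corner: 1/3 < 1
R_reentrant = corner_ratio(3 * math.pi / 2) # L-shaped reentrant corner: 3 > 1
```

The values $1/3 < 1 < 3$ are consistent with the experiments: no stickiness at the convex corners and the largest stickiness at the reentrant corner.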

This paper discusses finite element discretizations of the fractional Plateau problem and the prescribed fractional mean curvature problem of order $s \in (0,1/2)$ on bounded domains $\Omega$, subject to exterior data being a subgraph. Both of these can be interpreted as energy minimization problems in spaces closely related to $W^{2s}_1(\Omega)$.

We discuss two converging approaches for computing discrete minimizers: a semi-implicit gradient flow scheme and a damped Newton method. Both of these algorithms require the computation of a matrix related to weighted linear fractional diffusion problems of order $s + \frac12$. We employ the latter, the damped Newton method, for our computations.

A salient feature of nonlocal minimal graphs is their stickiness, namely that they are generically discontinuous across the domain boundary. Because our theoretical results do not require meshes to be quasi-uniform, we resort to graded meshes to better capture this phenomenon. Although the discrete spaces consist of continuous functions, our experiments in Section 7.1 show the method's capability of accurately estimating the jump of solutions across the boundary. In Section 7.5 we illustrate a geometric rigidity result: wherever nonlocal minimal graphs are continuous at the boundary of the domain, they must also match the slope of the exterior data. Fractional minimal graphs may change their convexity within $\Omega$, as indicated by our experiments in Section 7.4.

    The use of graded meshes gives rise to poor conditioning, which in turn affects the performance of iterative solvers. Our experimental findings reveal that using diagonal preconditioning alleviates this issue, particularly when the grading is not too strong. Preconditioning of the resulting linear systems is an open problem.

Because in practice it is not always feasible to exactly impose the Dirichlet condition on $\mathbb R^d \setminus \Omega$, we study the effect of data truncation, and show that the finite element minimizers $u^H_h$, computed on meshes $\mathcal T_h$ over computational domains $\Omega_H$, converge to the minimal graphs as $h \to 0$, $H \to \infty$ in $W^{2r}_1(\Omega)$ for $r \in [0,s)$. This is confirmed by our numerical experiments.

Our results extend to prescribed mean curvature problems, in which one needs some assumptions on the given curvature $f$ in order to guarantee the existence of solutions. We present an example of an ill-posed problem due to data incompatibility. Furthermore, our computational results indicate that graphs with discontinuous prescribed mean curvature may be discontinuous in the interior of the domain. We explore the relation between the curvature of the domain and the amount of stickiness, observe that discrete solutions are stickier on concave boundaries than on convex ones, and conjecture that they are continuous at convex corners.

    JPB has been supported in part by NSF grant DMS-1411808 and Fondo Vaz Ferreira grant 2019-068. WL has been supported in part by NSF grant DMS-1411808 and the Patrick and Marguerite Sung Fellowship in Mathematics of the University of Maryland. RHN has been supported in part by NSF grants DMS-1411808 and DMS-1908267.

    The authors declare no conflict of interest.



  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)