Research article

An explicit numerical method for the conservative Allen–Cahn equation on a cubic surface

  • We introduce a fully explicit finite difference method (FDM) for numerically solving the conservative Allen–Cahn (CAC) equation on a cubic surface. In this context, the cubic surface refers to the union of the six square faces that enclose the volume of a cube. The proposed numerical approach is structured into two sequential steps. First, the Allen–Cahn (AC) equation is solved by applying the fully explicit FDM, which is computationally efficient. Then, the conservation term is resolved using the updated solution of the AC equation to ensure consistency with the underlying conservation principles. To evaluate the effectiveness of the proposed scheme, computational tests are performed to verify that the numerical solution of the CAC equation conserves the discrete mass. Additionally, the solution is examined for its ability to exhibit constrained motion by mass-conserving mean curvature, a critical characteristic of the CAC equation. These two properties are fundamental to the integrity and accuracy of the CAC equation.

    Citation: Youngjin Hwang, Jyoti, Soobin Kwak, Hyundong Kim, Junseok Kim. An explicit numerical method for the conservative Allen–Cahn equation on a cubic surface[J]. AIMS Mathematics, 2024, 9(12): 34447-34465. doi: 10.3934/math.20241641




    Working with large amounts of data is one of the main challenges we face today. With the rise of social networks and rapid technological advances, we must develop tools that allow us to work with so much information. At this point, the use of tensor products comes into play, since it reduces the number of operations to be carried out and speeds them up. Evidence of this is the recent article [1], where tensor products are used to speed up the calculation of matrix products. Other articles that exemplify the usefulness of this operation include [2], where the solution of two- and three-dimensional optimal control problems with spectral fractional Laplacian-type operators is studied, and [3], where high-order problems are studied through the use of proper generalized decomposition methods.

    When we try to solve a linear system of the form $Ax=b$, in addition to the classical methods, there are methods based on tensors that can be more efficient [4], since the classical methods face the curse of dimensionality, which makes them lose effectiveness as the size of the problem increases. The tensor methods look for the solution in separated form, that is, as the tensor combination

    $$x=\sum_{j=1}^{\infty}x_j^1\otimes\cdots\otimes x_j^d,$$

    where $x_j^i\in\mathbb{R}^{N_i}$, $d$ is the dimension of the problem, and $\otimes$ is the Kronecker product, as reviewed in the next section. The main family of methods that solves this problem is the proper generalized decomposition family [5], which is based on the greedy rank-one update (GROU) algorithm [6,7]. This algorithm calculates the solution of the linear system $Ax=b$ in separated form; for this, in each iteration, it updates the approximation of the solution with the term resulting from minimizing the remaining residual. Furthermore, there are certain square matrices for which the GROU algorithm improves its convergence, i.e., matrices of the form

    $$A=\sum_{i=1}^{d}\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{i-1}}\otimes A_i\otimes\mathrm{id}_{N_{i+1}}\otimes\cdots\otimes\mathrm{id}_{N_d},$$

    where $\mathrm{id}_{N_k}$ is the identity matrix of size $N_k\times N_k$, and $A_k\in\mathbb{R}^{N_k\times N_k}$ for $1\le k\le d$. These matrices are called Laplacian-like matrices, due to their relationship with the Laplace operator written as

    $$\sum_{i=1}^{d}\frac{\partial^2}{\partial x_i^2}=\sum_{i=1}^{d}\frac{\partial^0}{\partial x_1^0}\otimes\cdots\otimes\frac{\partial^0}{\partial x_{i-1}^0}\otimes\left(\frac{\partial^2}{\partial x_i^2}\right)\otimes\frac{\partial^0}{\partial x_{i+1}^0}\otimes\cdots\otimes\frac{\partial^0}{\partial x_d^0}.$$

    It is not easy to decide when a given matrix $A$ can be represented in that form. To do this, we can use some of the previous results obtained by the authors in [8]. In that paper, it is proved that the set of Laplacian-like matrices is a linear subspace of the space of square matrices with a particular decomposition of its dimension. Moreover, a greedy algorithm is provided that computes the best Laplacian approximation $L_A$ of a given matrix $A$, as well as its residue, $R_A=A-L_A$. However, an iterative algorithm is not as useful as a direct solution algorithm. The main goal of this paper is to provide a direct algorithm that constructs the best Laplacian-like approximation by using only a particular block decomposition of the matrix $A$. It can be considered as a pre-processing procedure that represents a given matrix in its best Laplacian-like form, and if the residual is equal to zero, we definitively have its Laplacian-like representation. Hence, we can efficiently use the GROU algorithm to solve the high-dimensional linear system associated with the matrix $A$.
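    To make the Laplacian-like structure concrete, the following is a minimal NumPy sketch (our illustration, not part of the original paper; the helper name laplacian_like is hypothetical) that assembles such a matrix from its one-dimensional factors via Kronecker products:

```python
import numpy as np

def laplacian_like(factors):
    """Assemble sum_i id_{N_1} (x) ... (x) A_i (x) ... (x) id_{N_d}
    from the list of factors [A_1, ..., A_d]."""
    dims = [A.shape[0] for A in factors]
    N = int(np.prod(dims))
    L = np.zeros((N, N))
    for i, Ai in enumerate(factors):
        left = int(np.prod(dims[:i]))     # N_1 ... N_{i-1}
        right = int(np.prod(dims[i+1:]))  # N_{i+1} ... N_d
        L += np.kron(np.eye(left), np.kron(Ai, np.eye(right)))
    return L

# For instance, the 2D finite-difference Laplacian arises from two
# 1D second-difference matrices: A = D_4 (x) id_5 + id_4 (x) D_5.
def second_diff(n):
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))

A = laplacian_like([second_diff(4), second_diff(5)])
```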

    We remark that, by using the decomposition $A=L_A+R_A$, we can rewrite the linear system as $(L_A+R_A)x=b$, and when the value of the remainder is small, we can approximate the solution $x$ of the system by the solution $x_L$ of the Laplacian system. This fact is especially interesting in the case of the discretization of some partial differential equations. We also study the Laplacian decomposition of the matrix that comes from the discretization of a general second-order partial differential equation of the form

    $$\alpha\frac{\partial^2 u}{\partial x^2}+\beta\frac{\partial^2 u}{\partial y^2}+\gamma\frac{\partial u}{\partial x}+\delta\frac{\partial u}{\partial y}+\mu u=f,$$

    with homogeneous boundary conditions. Besides, to compare different numerical methods for solving partial differential equations, we consider two particular cases: first, the Helmholtz equation, which solves an eigenvalue problem for the Laplace operator; and second, to illustrate that it is not necessary to be limited to second-order problems, the fourth-order Swift-Hohenberg equation

    $$\frac{\partial u}{\partial t}=\varepsilon u-\left(1+\frac{\partial^2}{\partial x^2}\right)^2u.$$

    This equation is noted for its pattern-forming behavior, and it was derived from the equations for thermal convection [9].

    The paper is organized as follows. We begin by recalling some preliminary definitions and results used throughout the paper in Section 2. Section 3 is devoted to the statement and proof of the main result of this paper, which allows one to construct explicitly the best approximation of a given matrix in the linear space of Laplacian-like matrices. After that, in Section 4, we discuss how we apply this result to compute the best Laplacian approximation for the discretization of a second-order partial differential equation without mixed derivatives. Finally, some numerical examples are given in Section 5.

    First of all, we introduce some notation that we use throughout the paper. We denote by $\mathbb{R}^{N\times M}$ the set of $N\times M$ matrices and by $A^T$ the transpose of a given matrix $A$. As usual, we use

    $$\langle x,y\rangle_2=\langle x,y\rangle_{\mathbb{R}^N}=x^Ty=y^Tx$$

    to denote the Euclidean inner product in $\mathbb{R}^N$, and its corresponding 2-norm by $\|x\|_2=\|x\|_{\mathbb{R}^N}=\langle x,x\rangle_2^{1/2}$.

    Given a sequence $\{u_j\}_{j=0}^{\infty}\subset\mathbb{R}^N$, we say that a vector $u\in\mathbb{R}^N$ can be written as

    $$u=\sum_{j=0}^{\infty}u_j$$

    if and only if

    $$\lim_{n\to\infty}\sum_{j=0}^{n}u_j=u$$

    in the $\|\cdot\|_2$-topology.

    The Kronecker product of two matrices $A\in\mathbb{R}^{N_1\times M_1}$ and $B\in\mathbb{R}^{N_2\times M_2}$ is defined by

    $$A\otimes B=\begin{pmatrix}A_{1,1}B & A_{1,2}B & \cdots & A_{1,M_1}B\\ A_{2,1}B & A_{2,2}B & \cdots & A_{2,M_1}B\\ \vdots & \vdots & \ddots & \vdots\\ A_{N_1,1}B & A_{N_1,2}B & \cdots & A_{N_1,M_1}B\end{pmatrix}\in\mathbb{R}^{N_1N_2\times M_1M_2}.$$

    We can see some of the well-known properties of the Kronecker product in [7].
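    One property worth keeping in mind is the mixed-product rule $(A\otimes B)(x\otimes y)=(Ax)\otimes(By)$, which is what makes the factor-wise updates used later cheap. A quick numerical check with NumPy's kron (our illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
x, y = rng.standard_normal(3), rng.standard_normal(4)

# mixed-product property: (A (x) B)(x (x) y) = (Ax) (x) (By)
assert np.allclose(np.kron(A, B) @ np.kron(x, y), np.kron(A @ x, B @ y))
```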

    As we already said, we are interested in solving the high-dimensional linear system $Ax=b$ obtained from the discretization of a partial differential equation. We want to solve it by using a tensor-based algorithm, so we look for an approximation of the solution in separated form. To this end, we assume that the coefficient matrix $A$ is an $(N_1\cdots N_d)\times(N_1\cdots N_d)$-dimensional invertible matrix for some $N_1,\dots,N_d\in\mathbb{N}$. Next, we look for an approximation (of rank $n$) of $A^{-1}b$ of the form

    $$A^{-1}b\approx\sum_{j=1}^{n}x_j^1\otimes\cdots\otimes x_j^d. \qquad (2.1)$$

    To do this, given $x\in\mathbb{R}^{N_1\cdots N_d}$, we say that $x\in\mathcal{R}_1=\mathcal{R}_1(N_1,N_2,\dots,N_d)$ if $x=x^1\otimes x^2\otimes\cdots\otimes x^d$, where $x^i\in\mathbb{R}^{N_i}$ for $i=1,\dots,d$. For $n\ge 2$, we define, inductively, $\mathcal{R}_n=\mathcal{R}_n(N_1,N_2,\dots,N_d)=\mathcal{R}_{n-1}+\mathcal{R}_1$, that is,

    $$\mathcal{R}_n=\left\{x:x=\sum_{i=1}^{k}x(i),\ x(i)\in\mathcal{R}_1\text{ for }1\le i\le k\le n\right\}.$$

    Note that $\mathcal{R}_n\subset\mathcal{R}_{n+1}$ for all $n\ge 1$.

    To perform (2.1), what we will do is minimize the difference

    $$\left\|b-A\left(\sum_{j=1}^{n}x_j^1\otimes\cdots\otimes x_j^d\right)\right\|_2,$$

    that is, solve the problem

    $$\operatorname*{argmin}_{u\in\mathcal{R}_n}\|b-Au\|_2. \qquad (2.2)$$

    Here, $\|\cdot\|_2$ is the 2-norm, or Frobenius norm, defined by

    $$\|A\|_2=\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}|a_{i,j}|^2}=\sqrt{\operatorname{tr}(A^TA)},\quad\text{for }A\in\mathbb{R}^{m\times n}.$$

    Unfortunately, from Proposition 4.1(a) of [10], we have that the set $\mathcal{R}_n$ is not necessarily (or even usually) closed for each $n\ge 2$. In consequence, no best rank-$n$ approximation exists, that is, (2.2) has no solution. However, from Proposition 4.2 of [10], it follows that $\mathcal{R}_1$ is a closed set in any norm-topology. This fact allows us to introduce the following algorithm.

    The GROU algorithm is an iterative method to solve linear systems of the form $Ax=b$ by using only rank-one updates. Thus, given $A\in GL(\mathbb{R}^{N\times N})$ with $N=N_1\cdots N_d$, and $b\in\mathbb{R}^N$, we can obtain an approximation of the form

    $$A^{-1}b\approx u_n=\sum_{j=1}^{n}x_j^1\otimes\cdots\otimes x_j^d$$

    for some $n\ge 1$, and $x_j^i\in\mathbb{R}^{N_i}$ for $i=1,2,\dots,d$ and $j=1,2,\dots,n$ [7]. We proceed with the following iterative procedure (see Algorithm 1 below): let $u_0=y_0=0$, and, for each $n\ge 1$, take

    $$r_{n-1}=b-Au_{n-1}, \qquad (2.3)$$
    $$u_n=u_{n-1}+y_n,\quad\text{where}\quad y_n\in\operatorname*{argmin}_{u\in\mathcal{R}_1}\|r_{n-1}-Au\|_2. \qquad (2.4)$$

    Since $u_n\to A^{-1}b$, we can define the rank of $A^{-1}b$ obtained by the GROU algorithm as

    $$\operatorname{rank}(A^{-1}b)=\begin{cases}\infty & \text{if }\{j\ge 1:y_j=0\}=\emptyset,\\ \min\{j\ge 1:y_j=0\}-1 & \text{otherwise.}\end{cases}$$

    The next result, presented in [7], gives the convergence of the sequence $\{u_n\}_{n\ge 0}$ to the solution $A^{-1}b$ of the linear system.

    Theorem 2.1. Let $b\in\mathbb{R}^{N_1\cdots N_d}$ and $A\in\mathbb{R}^{N_1\cdots N_d\times N_1\cdots N_d}$ be an invertible matrix. Then, by using the iterative scheme described by (2.3) and (2.4), we obtain that the sequence $\{\|r_n\|_2\}_{n=0}^{\operatorname{rank}(A^{-1}b)}$ is strictly decreasing and

    $$A^{-1}b=\lim_{n\to\infty}u_n=\sum_{j=0}^{\operatorname{rank}(A^{-1}b)}y_j. \qquad (2.5)$$

    Note that the updates in the previous scheme work under the assumption that, in line 5 of Algorithm 1, we have a way to obtain

    $$y\in\operatorname*{argmin}_{x\in\mathcal{R}_1}\|r_i-Ax\|_2^2. \qquad (2.6)$$

    To compute y, we can use an alternating least squares (ALS) approach (see [7,11]).

    Algorithm 1 GROU algorithm
    1: procedure GROU($f,A,\varepsilon$, tol, rank_max)
    2:   $r_0=f$
    3:   $u=0$
    4:   for $i=0,1,2,\dots$, rank_max do
    5:     $y=\operatorname{procedure}\left(\min_{x\in\mathcal{R}_1}\|r_i-Ax\|_2^2\right)$
    6:     $r_{i+1}=r_i-Ay$
    7:     $u\leftarrow u+y$
    8:     if $\|r_{i+1}\|_2<\varepsilon$ or $\left|\,\|r_{i+1}\|_2-\|r_i\|_2\,\right|<$ tol then goto 13
    9:     end if
    10:   end for
    11:   return $u$ and $\|r_{\mathrm{rank\_max}}\|_2$
    12:   break
    13:   return $u$ and $\|r_{i+1}\|_2$
    14: end procedure
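    As a complement to the pseudocode, the following is a minimal Python sketch of Algorithm 1 for the case $d=2$; it is our illustration, under the assumption that the rank-one subproblem of line 5 is solved by a plain two-factor alternating least-squares sweep (the function name grou is hypothetical):

```python
import numpy as np

def grou(A, b, n1, n2, rank_max=10, eps=1e-14, tol=1e-14, als_iters=25):
    """GROU (Algorithm 1) for A x = b with x in R^{n1*n2}, d = 2.
    The rank-one subproblem of line 5 is solved by an ALS sweep:
    with one factor frozen, the other solves a least-squares problem."""
    rng = np.random.default_rng(0)
    u, r = np.zeros(n1 * n2), b.astype(float).copy()
    res_old = np.linalg.norm(r)
    for _ in range(rank_max):
        x1, x2 = rng.standard_normal(n1), rng.standard_normal(n2)
        for _ in range(als_iters):
            # fix x1: A (x1 (x) x2) = [A (x1 (x) id_{n2})] x2
            x2 = np.linalg.lstsq(A @ np.kron(x1[:, None], np.eye(n2)), r, rcond=None)[0]
            # fix x2: A (x1 (x) x2) = [A (id_{n1} (x) x2)] x1
            x1 = np.linalg.lstsq(A @ np.kron(np.eye(n1), x2[:, None]), r, rcond=None)[0]
        y = np.kron(x1, x2)              # rank-one update (line 5)
        r, u = r - A @ y, u + y          # lines 6-7
        res = np.linalg.norm(r)
        if res < eps or abs(res - res_old) < tol:   # stopping rule (line 8)
            break
        res_old = res
    return u, np.linalg.norm(r)
```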

    The idea behind the ALS strategy to solve (2.6) is as follows: for each $1\le k\le d$, assume that the values $x_1,\dots,x_{k-1},x_{k+1},\dots,x_d$ are given. Then, we look for the unknown $x_k$ satisfying

    $$x_k\in\operatorname*{argmin}_{z_k\in\mathbb{R}^{N_k}}\|b-A(x_1\otimes\cdots\otimes x_{k-1}\otimes z_k\otimes x_{k+1}\otimes\cdots\otimes x_d)\|_2,$$

    where we can write

    $$A(x_1\otimes\cdots\otimes x_{k-1}\otimes z_k\otimes x_{k+1}\otimes\cdots\otimes x_d)=A(x_1\otimes\cdots\otimes x_{k-1}\otimes\mathrm{id}_{N_k}\otimes x_{k+1}\otimes\cdots\otimes x_d)z_k.$$

    In consequence, by using a least squares approach [11], we can obtain $x_k$ by solving the following $N_k\times N_k$-dimensional linear system:

    $$Z_kz_k=b_k, \qquad (2.7)$$

    where

    $$Z_k:=(x_1^T\otimes\cdots\otimes x_{k-1}^T\otimes\mathrm{id}_{N_k}\otimes x_{k+1}^T\otimes\cdots\otimes x_d^T)A^TA(x_1\otimes\cdots\otimes x_{k-1}\otimes\mathrm{id}_{N_k}\otimes x_{k+1}\otimes\cdots\otimes x_d)$$

    and

    $$b_k:=(x_1^T\otimes\cdots\otimes x_{k-1}^T\otimes\mathrm{id}_{N_k}\otimes x_{k+1}^T\otimes\cdots\otimes x_d^T)A^Tb.$$

    Clearly,

    $$\|b-A(x_1\otimes\cdots\otimes x_{k-1}\otimes z_k\otimes x_{k+1}\otimes\cdots\otimes x_d)\|_2\ge\|b-A(x_1\otimes\cdots\otimes x_{k-1}\otimes x_k\otimes x_{k+1}\otimes\cdots\otimes x_d)\|_2$$

    holds for all $z_k\in\mathbb{R}^{N_k}$. However, it is well known (see Section 4 in [11]) that the performance of the ALS strategy can be improved (see Algorithm 2 below) when the matrix $A^TA\in\mathbb{R}^{N\times N}$, with $N=N_1\cdots N_d$, can be written in the form

    $$A^TA=\sum_{i=1}^{r}\bigotimes_{j=1}^{d}A_j^{(i)}, \qquad (2.8)$$

    where $\bigotimes_{j=1}^{d}A_j^{(i)}=A_1^{(i)}\otimes\cdots\otimes A_d^{(i)}$; here, $A_j^{(i)}\in\mathbb{R}^{N_j\times N_j}$ for $1\le j\le d$ and $1\le i\le r$. In particular, when the matrix $A$ is given by

    $$A=\sum_{i=1}^{d}\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{i-1}}\otimes A_i\otimes\mathrm{id}_{N_{i+1}}\otimes\cdots\otimes\mathrm{id}_{N_d},$$

    then the matrix $A^TA$ can be easily written in the form of (2.8). These matrices were introduced in [8] as Laplacian-like matrices, since they can be easily related to the classical Laplacian operator [2,12]. The next section is devoted to the study of this class of matrices.

    As we said in the introduction, proper generalized decomposition is a popular numerical strategy in engineering for solving high-dimensional problems. It is based on the GROU algorithm (2.3) and (2.4), and it can be considered as a tensor-based decomposition algorithm.

    There is a particular type of matrix for which these methods work particularly well when solving high-dimensional linear systems, namely, matrices satisfying property (2.8). To this end, we introduce the following definition.

    Algorithm 2 An ALS algorithm for matrices in the form of (2.8) [11, Algorithm 2]
    1: Given $A^TA=\sum_{i=1}^{r}\bigotimes_{j=1}^{d}A_j^{(i)}\in\mathbb{R}^{N\times N}$ and $b\in\mathbb{R}^N$.
    2: Initialize $x_i^{(0)}\in\mathbb{R}^{N_i}$ for $i=1,2,\dots,d$.
    3: Introduce $\varepsilon>0$ and iter_max; iter = 1.
    4: while distance $>\varepsilon$ and iter $<$ iter_max do
    5:   for $k=1,2,\dots,d$ do
    6:     $x_k^{(1)}=x_k^{(0)}$
    7:     for $i=1,2,\dots,r$ do
    8:       $\alpha_k^{(i)}=\left(\prod_{j=1}^{k-1}(x_j^{(0)})^TA_j^{(i)}x_j^{(0)}\right)\left(\prod_{j=k+1}^{d}(x_j^{(1)})^TA_j^{(i)}x_j^{(1)}\right)$
    9:     end for
    10:     $x_k^{(0)}$ solves $\left(\sum_{i=1}^{r}\alpha_k^{(i)}A_k^{(i)}\right)x_k=(x_1^{(0)}\otimes\cdots\otimes x_{k-1}^{(0)}\otimes\mathrm{id}_{N_k}\otimes x_{k+1}^{(0)}\otimes\cdots\otimes x_d^{(0)})^Tb$
    11:   end for
    12:   iter = iter + 1.
    13:   distance $=\max_{1\le i\le d}\|x_i^{(0)}-x_i^{(1)}\|_2$.
    14: end while
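    For completeness, the following is a minimal NumPy sketch of Algorithm 2, assuming the structured form (2.8) is available as a list of factor lists (our illustration; the function name als_structured is hypothetical):

```python
import numpy as np

def als_structured(factors, Atb, dims, eps=1e-12, iter_max=50, rng=None):
    """ALS solve of the rank-one problem, given A^T A = sum_i kron(A_1^(i), ..., A_d^(i))
    (factors[i] = [A_1^(i), ..., A_d^(i)]) and the vector A^T r (here Atb),
    following Algorithm 2."""
    rng = np.random.default_rng(0) if rng is None else rng
    d = len(dims)
    x = [rng.standard_normal(n) for n in dims]
    B = Atb.reshape(dims)                       # A^T r as a d-way tensor
    for _ in range(iter_max):
        x_old = [xi.copy() for xi in x]
        for k in range(d):
            # alpha_k^(i): products of quadratic forms of the other factors
            M = sum(np.prod([x[j] @ Ai[j] @ x[j] for j in range(d) if j != k]) * Ai[k]
                    for Ai in factors)
            # contract A^T r against every factor except the k-th
            rhs = B
            for j in range(d - 1, -1, -1):      # high axes first keeps indices valid
                if j != k:
                    rhs = np.tensordot(rhs, x[j], axes=(j, 0))
            x[k] = np.linalg.solve(M, rhs)      # line 10 of Algorithm 2
        if max(np.linalg.norm(x[i] - x_old[i]) for i in range(d)) < eps:
            break
    return x
```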

    Definition 3.1. Given a matrix $A\in\mathbb{R}^{N\times N}$, where $N=N_1\cdots N_d$, we say that $A$ is a Laplacian-like matrix if there exist matrices $A_i\in\mathbb{R}^{N_i\times N_i}$ for $1\le i\le d$ such that

    $$A=\sum_{i=1}^{d}A_i\otimes\mathrm{id}_{[N_i]}:=\sum_{i=1}^{d}\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{i-1}}\otimes A_i\otimes\mathrm{id}_{N_{i+1}}\otimes\cdots\otimes\mathrm{id}_{N_d}, \qquad (3.1)$$

    where $\mathrm{id}_{N_j}$ is the identity matrix of size $N_j\times N_j$.

    It is not difficult to see that the set of Laplacian-like matrices is a linear subspace of $\mathbb{R}^{N\times N}$ whose elements satisfy property (2.8). From now on, we denote by $\mathcal{L}(\mathbb{R}^{N\times N})$ the subspace of Laplacian-like matrices in $\mathbb{R}^{N\times N}$ for a fixed decomposition $N=N_1\cdots N_d$.

    Now, given a matrix $A\in\mathbb{R}^{N\times N}$, our goal is to solve the following optimization problem:

    $$\min_{L\in\mathcal{L}(\mathbb{R}^{N\times N})}\|A-L\|_2. \qquad (3.2)$$

    Clearly, if we denote by $\Pi_{\mathcal{L}(\mathbb{R}^{N\times N})}$ the orthogonal projection onto the linear subspace $\mathcal{L}(\mathbb{R}^{N\times N})$, then $L_A:=\Pi_{\mathcal{L}(\mathbb{R}^{N\times N})}(A)$ is the solution of (3.2). Observe that $\|A-L_A\|_2=0$ if and only if $A\in\mathcal{L}(\mathbb{R}^{N\times N})$.

    We are interested in achieving a structure similar to (3.1) in order to study the matrices of large-dimensional problems. We seek an algorithm that constructs, for a given matrix $A$, its best Laplacian-like approximation $L_A$.

    To do this, we will use the following theorem, which describes a particular decomposition of the space of matrices $\mathbb{R}^{N\times N}$. Observe that the linear subspace $\operatorname{span}\{\mathrm{id}_N\}$ in $\mathbb{R}^{N\times N}$ has, as its orthogonal complement, the null-trace matrices:

    $$\operatorname{span}\{\mathrm{id}_N\}^{\perp}=\{A\in\mathbb{R}^{N\times N}:\operatorname{tr}(A)=0\},$$

    with respect to the inner product $\langle A,B\rangle_{\mathbb{R}^{N\times N}}=\operatorname{tr}(A^TB)$.

    Theorem 3.2. Consider $(\mathbb{R}^{N\times N},\|\cdot\|_2)$ as a Hilbert space, where $N=N_1\cdots N_d$. Then, there exists a decomposition

    $$\mathbb{R}^{N\times N}=\operatorname{span}\{\mathrm{id}_N\}\oplus h_N=\mathcal{L}(\mathbb{R}^{N\times N})\oplus\mathcal{L}(\mathbb{R}^{N\times N})^{\perp},$$

    where $h_N=\operatorname{span}\{\mathrm{id}_N\}^{\perp}$ is the orthogonal complement of the linear subspace generated by the identity matrix. Moreover,

    $$\mathcal{L}(\mathbb{R}^{N\times N})=\operatorname{span}\{\mathrm{id}_N\}\oplus\Delta, \qquad (3.3)$$

    where $\Delta=h_N\cap\mathcal{L}(\mathbb{R}^{N\times N})$. Furthermore, $\mathcal{L}(\mathbb{R}^{N\times N})^{\perp}$ is a subspace of $h_N$ and

    $$\Delta=\bigoplus_{i=1}^{d}\operatorname{span}\{\mathrm{id}_{N_1}\}\otimes\cdots\otimes\operatorname{span}\{\mathrm{id}_{N_{i-1}}\}\otimes\operatorname{span}\{\mathrm{id}_{N_i}\}^{\perp}\otimes\operatorname{span}\{\mathrm{id}_{N_{i+1}}\}\otimes\cdots\otimes\operatorname{span}\{\mathrm{id}_{N_d}\}.$$

    Proof. It follows from Lemma 3.1, Theorem 3.1 and Theorem 3.2 in [8].

    The above theorem allows us to compute the projection of the matrix $A$ onto $\mathcal{L}(\mathbb{R}^{N\times N})$ as follows. Denote by $\Pi_i$ the orthogonal projection of $\mathbb{R}^{N\times N}$ onto the linear subspace

    $$\operatorname{span}\{\mathrm{id}_{N_1}\}\otimes\cdots\otimes\operatorname{span}\{\mathrm{id}_{N_{i-1}}\}\otimes\operatorname{span}\{\mathrm{id}_{N_i}\}^{\perp}\otimes\operatorname{span}\{\mathrm{id}_{N_{i+1}}\}\otimes\cdots\otimes\operatorname{span}\{\mathrm{id}_{N_d}\}$$

    for $1\le i\le d$. Thus, $\sum_{i=1}^{d}\Pi_i$ is the orthogonal projection of $\mathbb{R}^{N\times N}$ onto the linear subspace $\Delta$. In consequence, by using (3.3), we have

    $$\frac{\operatorname{tr}(A)}{N}\mathrm{id}_N+\sum_{i=1}^{d}\Pi_i(A)=\operatorname*{argmin}_{L\in\mathcal{L}(\mathbb{R}^{N\times N})}\|A-L\|_2. \qquad (3.4)$$

    If we further analyze (3.4), we observe that the second term on the left-hand side is of the form

    $$\sum_{i=1}^{d}\Pi_i(A)=\sum_{i=1}^{d}\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{i-1}}\otimes X_i\otimes\mathrm{id}_{N_{i+1}}\otimes\cdots\otimes\mathrm{id}_{N_d},$$

    and that it has only $(N_1^2+\cdots+N_d^2-d)$ degrees of freedom (recall that $\dim\operatorname{span}\{\mathrm{id}_{N_i}\}^{\perp}=N_i^2-1$). In addition, due to the tensor structure of the products, the unknowns $x_l$ of $X_k$ are distributed in blocks, so we can calculate which entries of the matrix $A$ they approximate. Therefore, to obtain the value of each $x_l$, we only need to calculate the value that best approximates the entries $(i,j)$ of the original matrix that are in the same position as $x_l$.

    In our next result, we will see how to carry out this procedure. To do this, we make the following observation. Given a matrix $A=(a_{i,j})\in\mathbb{R}^{KL\times KL}$ for some integers $K,L>1$, we can write $A$ as a block matrix:

    $$A=\begin{pmatrix}A_{1,1}^{(K,L)} & A_{1,2}^{(K,L)} & \cdots & A_{1,L}^{(K,L)}\\ A_{2,1}^{(K,L)} & A_{2,2}^{(K,L)} & \cdots & A_{2,L}^{(K,L)}\\ \vdots & \vdots & \ddots & \vdots\\ A_{L,1}^{(K,L)} & A_{L,2}^{(K,L)} & \cdots & A_{L,L}^{(K,L)}\end{pmatrix}, \qquad (3.5)$$

    where the block $A_{i,j}^{(K,L)}\in\mathbb{R}^{K\times K}$ for $1\le i,j\le L$ is given by

    $$A_{i,j}^{(K,L)}=\begin{pmatrix}a_{(i-1)K+1,(j-1)K+1} & \cdots & a_{(i-1)K+1,jK}\\ \vdots & \ddots & \vdots\\ a_{iK,(j-1)K+1} & \cdots & a_{iK,jK}\end{pmatrix}.$$

    Moreover,

    $$\|A\|^2_{\mathbb{R}^{KL\times KL}}=\sum_{i=1}^{KL}\sum_{j=1}^{KL}a_{i,j}^2=\sum_{r=1}^{L}\sum_{s=1}^{L}\|A_{r,s}^{(K,L)}\|^2_{\mathbb{R}^{K\times K}}.$$

    Observe that $K$ and $L$ can be easily interchanged. To simplify the notation, from now on, given $N=N_1N_2\cdots N_d$, we write $N_{[k]}=N_1\cdots N_{k-1}N_{k+1}\cdots N_d$ for each $1\le k\le d$.

    Theorem 3.3. Let $A\in\mathbb{R}^{N\times N}$, with $N=N_1\cdots N_d$. For each fixed $1\le k\le d$, consider the linear function $P_k:\mathbb{R}^{N_k\times N_k}\to\mathbb{R}^{N\times N}$ given by

    $$P_k(X_k):=\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{k-1}}\otimes X_k\otimes\mathrm{id}_{N_{k+1}}\otimes\cdots\otimes\mathrm{id}_{N_d}.$$

    Then, the solution of the minimization problem

    $$\min_{X_k\in\mathbb{R}^{N_k\times N_k}}\|A-P_k(X_k)\|_2 \qquad (3.6)$$

    is given by

    $$(X_k)_{i,j}=\begin{cases}\dfrac{1}{N_{[1]}}\displaystyle\sum_{n=1}^{N_{[1]}}a_{(i-1)N_{[1]}+n,\,(j-1)N_{[1]}+n} & \text{if }k=1,\\[3mm] \dfrac{1}{N_{[k]}}\displaystyle\sum_{m=1}^{N_{k+1}\cdots N_d}\left(\sum_{n=1}^{N_1\cdots N_{k-1}}A_{n,n}^{(N_k\cdots N_d,\,N_1\cdots N_{k-1})}\right)_{(i-1)N_{k+1}\cdots N_d+m,\,(j-1)N_{k+1}\cdots N_d+m} & \text{if }1<k<d,\\[3mm] \dfrac{1}{N_{[d]}}\left(\displaystyle\sum_{n=1}^{N_{[d]}}A_{n,n}^{(N_d,N_{[d]})}\right)_{i,j} & \text{if }k=d.\end{cases}$$

    Proof. First, let us observe that $\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_k}=\mathrm{id}_{N_1\cdots N_k}$; so, we can find three different situations in the calculation of the projections:

    (1). $P_1(X_1)=X_1\otimes\mathrm{id}_{N_{[1]}}$; in this case,

    $$P_1(X_1)=\begin{pmatrix}(X_1)_{1,1}\mathrm{id}_{N_{[1]}} & (X_1)_{1,2}\mathrm{id}_{N_{[1]}} & \cdots & (X_1)_{1,N_1}\mathrm{id}_{N_{[1]}}\\ (X_1)_{2,1}\mathrm{id}_{N_{[1]}} & (X_1)_{2,2}\mathrm{id}_{N_{[1]}} & \cdots & (X_1)_{2,N_1}\mathrm{id}_{N_{[1]}}\\ \vdots & \vdots & \ddots & \vdots\\ (X_1)_{N_1,1}\mathrm{id}_{N_{[1]}} & (X_1)_{N_1,2}\mathrm{id}_{N_{[1]}} & \cdots & (X_1)_{N_1,N_1}\mathrm{id}_{N_{[1]}}\end{pmatrix}\in\mathbb{R}^{N_{[1]}N_1\times N_{[1]}N_1}.$$

    (2). $P_d(X_d)=\mathrm{id}_{N_{[d]}}\otimes X_d$; in this case,

    $$P_d(X_d)=\begin{pmatrix}X_d & O_d & \cdots & O_d\\ O_d & X_d & \cdots & O_d\\ \vdots & \vdots & \ddots & \vdots\\ O_d & O_d & \cdots & X_d\end{pmatrix}\in\mathbb{R}^{N_dN_{[d]}\times N_dN_{[d]}},$$

    where $O_d$ denotes the zero matrix in $\mathbb{R}^{N_d\times N_d}$.

    (3). $P_i(X_i)=\mathrm{id}_{N_1\cdots N_{i-1}}\otimes X_i\otimes\mathrm{id}_{N_{i+1}\cdots N_d}$ for $i=2,\dots,d-1$; in this case, for a fixed $2\le i\le d-1$, we write $N_l=N_1\cdots N_{i-1}$ and $N_r=N_{i+1}\cdots N_d$. Thus,

    $$P_i(X_i)=\mathrm{id}_{N_l}\otimes X_i\otimes\mathrm{id}_{N_r}=\mathrm{id}_{N_l}\otimes\begin{pmatrix}(X_i)_{1,1}\mathrm{id}_{N_r} & \cdots & (X_i)_{1,N_i}\mathrm{id}_{N_r}\\ \vdots & \ddots & \vdots\\ (X_i)_{N_i,1}\mathrm{id}_{N_r} & \cdots & (X_i)_{N_i,N_i}\mathrm{id}_{N_r}\end{pmatrix}=\begin{pmatrix}X_i\otimes\mathrm{id}_{N_r} & O_i\otimes\mathrm{id}_{N_r} & \cdots & O_i\otimes\mathrm{id}_{N_r}\\ O_i\otimes\mathrm{id}_{N_r} & X_i\otimes\mathrm{id}_{N_r} & \cdots & O_i\otimes\mathrm{id}_{N_r}\\ \vdots & \vdots & \ddots & \vdots\\ O_i\otimes\mathrm{id}_{N_r} & O_i\otimes\mathrm{id}_{N_r} & \cdots & X_i\otimes\mathrm{id}_{N_r}\end{pmatrix}\in\mathbb{R}^{(N_iN_r)N_l\times(N_iN_r)N_l}.$$

    In either case, a difference of the form

    $$\|A-P_k(X_k)\|_2$$

    must be minimized. To this end, in each case, we will consider $A$ as a block matrix $A\in\mathbb{R}^{KL\times KL}$ in the form of (3.5).

    Case 1: For $P_1(X_1)$, we take $K=N_{[1]}$ and $L=N_1$; hence,

    $$A-P_1(X_1)=\begin{pmatrix}A_{1,1}^{(K,L)}-(X_1)_{1,1}\mathrm{id}_{N_{[1]}} & A_{1,2}^{(K,L)}-(X_1)_{1,2}\mathrm{id}_{N_{[1]}} & \cdots & A_{1,N_1}^{(K,L)}-(X_1)_{1,N_1}\mathrm{id}_{N_{[1]}}\\ \vdots & \vdots & \ddots & \vdots\\ A_{N_1,1}^{(K,L)}-(X_1)_{N_1,1}\mathrm{id}_{N_{[1]}} & A_{N_1,2}^{(K,L)}-(X_1)_{N_1,2}\mathrm{id}_{N_{[1]}} & \cdots & A_{N_1,N_1}^{(K,L)}-(X_1)_{N_1,N_1}\mathrm{id}_{N_{[1]}}\end{pmatrix}.$$

    In this situation, we have

    $$\|A-P_1(X_1)\|^2_{\mathbb{R}^{N\times N}}=\sum_{i=1}^{N_1}\sum_{j=1}^{N_1}\|A_{i,j}^{(K,L)}-(X_1)_{i,j}\mathrm{id}_{N_{[1]}}\|^2_{\mathbb{R}^{N_{[1]}\times N_{[1]}}};$$

    hence, for each $1\le i,j\le N_1$, we seek

    $$(X_1)_{i,j}=\operatorname*{argmin}_{x\in\mathbb{R}}\|A_{i,j}^{(K,L)}-x\,\mathrm{id}_{N_{[1]}}\|^2_{\mathbb{R}^{N_{[1]}\times N_{[1]}}}=\operatorname*{argmin}_{x\in\mathbb{R}}\sum_{n=1}^{N_{[1]}}\left(a_{(i-1)N_{[1]}+n,(j-1)N_{[1]}+n}-x\right)^2.$$

    Thus, it is not difficult to see that

    $$(X_1)_{i,j}=\frac{1}{N_{[1]}}\sum_{n=1}^{N_{[1]}}a_{(i-1)N_{[1]}+n,(j-1)N_{[1]}+n}$$

    for $1\le i,j\le N_1$.

    Case 2: For $P_d(X_d)$, we take $K=N_d$ and $L=N_{[d]}$; hence,

    $$A-P_d(X_d)=\begin{pmatrix}A_{1,1}^{(K,L)}-X_d & A_{1,2}^{(K,L)} & \cdots & A_{1,N_{[d]}}^{(K,L)}\\ A_{2,1}^{(K,L)} & A_{2,2}^{(K,L)}-X_d & \cdots & A_{2,N_{[d]}}^{(K,L)}\\ \vdots & \vdots & \ddots & \vdots\\ A_{N_{[d]},1}^{(K,L)} & A_{N_{[d]},2}^{(K,L)} & \cdots & A_{N_{[d]},N_{[d]}}^{(K,L)}-X_d\end{pmatrix}.$$

    Now, we have

    $$\|A-P_d(X_d)\|^2_{\mathbb{R}^{N\times N}}=\sum_{i=1}^{N_{[d]}}\|A_{i,i}^{(K,L)}-X_d\|^2_{\mathbb{R}^{N_d\times N_d}}+\sum_{\substack{i,j=1\\ i\ne j}}^{N_{[d]}}\|A_{i,j}^{(K,L)}\|^2_{\mathbb{R}^{N_d\times N_d}}.$$

    Thus, $X_d\in\mathbb{R}^{N_d\times N_d}$ minimizes $\|A-P_d(X_d)\|^2_{\mathbb{R}^{N\times N}}$ if and only if

    $$X_d\in\operatorname*{argmin}_{X\in\mathbb{R}^{N_d\times N_d}}\sum_{i=1}^{N_{[d]}}\|A_{i,i}^{(K,L)}-X\|^2_{\mathbb{R}^{N_d\times N_d}}.$$

    In consequence,

    $$X_d=\frac{1}{N_{[d]}}\sum_{i=1}^{N_{[d]}}A_{i,i}^{(K,L)}.$$

    Case 3: For $P_i(X_i)$, we take $K=N_iN_r$ and $L=N_l$; hence,

    $$A-P_i(X_i)=\begin{pmatrix}A_{1,1}^{(K,L)}-X_i\otimes\mathrm{id}_{N_r} & A_{1,2}^{(K,L)} & \cdots & A_{1,N_l}^{(K,L)}\\ A_{2,1}^{(K,L)} & A_{2,2}^{(K,L)}-X_i\otimes\mathrm{id}_{N_r} & \cdots & A_{2,N_l}^{(K,L)}\\ \vdots & \vdots & \ddots & \vdots\\ A_{N_l,1}^{(K,L)} & A_{N_l,2}^{(K,L)} & \cdots & A_{N_l,N_l}^{(K,L)}-X_i\otimes\mathrm{id}_{N_r}\end{pmatrix}.$$

    In this case,

    $$\|A-P_i(X_i)\|^2_{\mathbb{R}^{N\times N}}=\sum_{n=1}^{N_l}\|A_{n,n}^{(K,L)}-X_i\otimes\mathrm{id}_{N_r}\|^2_{\mathbb{R}^{N_iN_r\times N_iN_r}}+\sum_{\substack{n,j=1\\ n\ne j}}^{N_l}\|A_{n,j}^{(K,L)}\|^2_{\mathbb{R}^{N_iN_r\times N_iN_r}},$$

    so we need to solve the following problem:

    $$\min_{X\in\mathbb{R}^{N_i\times N_i}}\sum_{n=1}^{N_l}\|A_{n,n}^{(K,L)}-X\otimes\mathrm{id}_{N_r}\|^2_{\mathbb{R}^{N_iN_r\times N_iN_r}}. \qquad (3.7)$$

    Since $X\otimes\mathrm{id}_{N_r}\in\mathbb{R}^{N_i\times N_i}\otimes\operatorname{span}\{\mathrm{id}_{N_r}\}$, we can write (3.7) as

    $$\min_{Z\in\mathbb{R}^{N_i\times N_i}\otimes\operatorname{span}\{\mathrm{id}_{N_r}\}}\sum_{n=1}^{N_l}\|A_{n,n}^{(K,L)}-Z\|^2_{\mathbb{R}^{N_iN_r\times N_iN_r}}. \qquad (3.8)$$

    Observe that

    $$\bar{A}=(\bar{a}_{u,v})=\frac{1}{N_l}\sum_{n=1}^{N_l}A_{n,n}^{(K,L)}=\operatorname*{argmin}_{U\in\mathbb{R}^{N_iN_r\times N_iN_r}}\sum_{n=1}^{N_l}\|A_{n,n}^{(K,L)}-U\|^2_{\mathbb{R}^{N_iN_r\times N_iN_r}}.$$

    To simplify the notation, we write $\mathbb{U}:=\mathbb{R}^{N_i\times N_i}\otimes\operatorname{span}\{\mathrm{id}_{N_r}\}$. Then, we have the orthogonal decomposition $\mathbb{R}^{N_iN_r\times N_iN_r}=\mathbb{U}\oplus\mathbb{U}^{\perp}$. Denote by $\Pi_{\mathbb{U}}$ the orthogonal projection onto the linear subspace $\mathbb{U}$. Then, for each $Z\in\mathbb{U}$, we have

    $$\|A_{n,n}^{(K,L)}-Z\|^2=\|(\mathrm{id}-\Pi_{\mathbb{U}})(A_{n,n}^{(K,L)})+\Pi_{\mathbb{U}}(A_{n,n}^{(K,L)})-Z\|^2=\|(\mathrm{id}-\Pi_{\mathbb{U}})(A_{n,n}^{(K,L)})\|^2+\|\Pi_{\mathbb{U}}(A_{n,n}^{(K,L)})-Z\|^2,$$

    because $(\mathrm{id}-\Pi_{\mathbb{U}})(A_{n,n}^{(K,L)})\in\mathbb{U}^{\perp}$ and $\Pi_{\mathbb{U}}(A_{n,n}^{(K,L)})-Z\in\mathbb{U}$. In consequence, solving (3.8) is equivalent to solving the following optimization problem:

    $$\min_{Z\in\mathbb{U}}\sum_{n=1}^{N_l}\|\Pi_{\mathbb{U}}(A_{n,n}^{(K,L)})-Z\|^2_{\mathbb{R}^{N_iN_r\times N_iN_r}}. \qquad (3.9)$$

    Thus,

    $$Z=\frac{1}{N_l}\sum_{n=1}^{N_l}\Pi_{\mathbb{U}}(A_{n,n}^{(K,L)})=\operatorname*{argmin}_{Z\in\mathbb{U}}\sum_{n=1}^{N_l}\|\Pi_{\mathbb{U}}(A_{n,n}^{(K,L)})-Z\|^2_{\mathbb{R}^{N_iN_r\times N_iN_r}},$$

    that is, $Z=\Pi_{\mathbb{U}}(\bar{A})$; hence,

    $$Z=\operatorname*{argmin}_{Z\in\mathbb{U}}\|\bar{A}-Z\|_2=X_i\otimes\mathrm{id}_{N_r},\quad\text{where}\quad X_i=\operatorname*{argmin}_{X\in\mathbb{R}^{N_i\times N_i}}\|\bar{A}-X\otimes\mathrm{id}_{N_r}\|^2_{\mathbb{R}^{N_iN_r\times N_iN_r}}.$$

    Proceeding in a similar way as in Case 1, we obtain

    $$(X_i)_{u,v}=\frac{1}{N_r}\sum_{m=1}^{N_r}\bar{a}_{(u-1)N_r+m,(v-1)N_r+m}=\frac{1}{N_rN_l}\sum_{m=1}^{N_r}\left(\sum_{n=1}^{N_l}A_{n,n}^{(K,L)}\right)_{(u-1)N_r+m,(v-1)N_r+m}$$

    for $1\le u,v\le N_i$. This concludes the proof of the theorem.

    To conclude, we obtain the following useful corollary.

    Corollary 3.4. Let $A\in\mathbb{R}^{N\times N}$, with $N=N_1\cdots N_d$. For each fixed $1\le k\le d$, consider the linear function $P_k:\mathbb{R}^{N_k\times N_k}\to\mathbb{R}^{N\times N}$ given by

    $$P_k(X_k):=\mathrm{id}_{N_1}\otimes\cdots\otimes\mathrm{id}_{N_{k-1}}\otimes X_k\otimes\mathrm{id}_{N_{k+1}}\otimes\cdots\otimes\mathrm{id}_{N_d}.$$

    For each $1\le k\le d$, let $X_k\in\mathbb{R}^{N_k\times N_k}$ be the solution of the optimization problem (3.6). Then,

    $$L_A=\frac{\operatorname{tr}(A)}{N}\mathrm{id}_N+\sum_{k=1}^{d}P_k\left(X_k-\frac{\operatorname{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right)=\operatorname*{argmin}_{L\in\mathcal{L}(\mathbb{R}^{N\times N})}\|A-L\|_2. \qquad (3.10)$$

    Proof. Observe that, for $1\le k\le d$, the matrix $X_k$ satisfies

    $$P_k(X_k)=\operatorname*{argmin}_{Z\in h(k)}\|A-Z\|_2,$$

    where

    $$h(k):=\operatorname{span}\{\mathrm{id}_{N_1}\}\otimes\cdots\otimes\operatorname{span}\{\mathrm{id}_{N_{k-1}}\}\otimes\mathbb{R}^{N_k\times N_k}\otimes\operatorname{span}\{\mathrm{id}_{N_{k+1}}\}\otimes\cdots\otimes\operatorname{span}\{\mathrm{id}_{N_d}\}$$

    is a linear subspace of $\mathbb{R}^{N\times N}$ that is linearly isomorphic to $\mathbb{R}^{N_k\times N_k}$. Since $\mathbb{R}^{N_k\times N_k}=\operatorname{span}\{\mathrm{id}_{N_k}\}\oplus\operatorname{span}\{\mathrm{id}_{N_k}\}^{\perp}$, then

    $$X_k=\frac{\operatorname{tr}(X_k)}{N_k}\mathrm{id}_{N_k}+\left(X_k-\frac{\operatorname{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right);$$

    hence

    $$P_k(X_k)=P_k\left(\frac{\operatorname{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right)+P_k\left(X_k-\frac{\operatorname{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right)=\frac{\operatorname{tr}(X_k)}{N_k}\mathrm{id}_N+P_k\left(X_k-\frac{\operatorname{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right).$$

    We can conclude that $\Pi_k(A)=P_k\left(X_k-\frac{\operatorname{tr}(X_k)}{N_k}\mathrm{id}_{N_k}\right)$; recall that $\Pi_k$ is the orthogonal projection of $\mathbb{R}^{N\times N}$ onto the linear subspace

    $$\operatorname{span}\{\mathrm{id}_{N_1}\}\otimes\cdots\otimes\operatorname{span}\{\mathrm{id}_{N_{k-1}}\}\otimes\operatorname{span}\{\mathrm{id}_{N_k}\}^{\perp}\otimes\operatorname{span}\{\mathrm{id}_{N_{k+1}}\}\otimes\cdots\otimes\operatorname{span}\{\mathrm{id}_{N_d}\}.$$

    From (3.4), the corollary is proved.
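    Corollary 3.4 translates directly into a procedure: extract each $X_k$ by averaging the matched diagonal blocks of $A$ (Theorem 3.3), remove its trace, and assemble $L_A$. A minimal NumPy sketch of this construction (our illustration; the function name best_laplacian is hypothetical) is:

```python
import numpy as np
from string import ascii_lowercase

def best_laplacian(A, dims):
    """Best Laplacian-like approximation (Corollary 3.4):
    L_A = tr(A)/N id_N + sum_k P_k(X_k - tr(X_k)/N_k id_{N_k})."""
    d, N = len(dims), int(np.prod(dims))
    T = A.reshape(*dims, *dims)                  # A as a 2d-way tensor
    L = (np.trace(A) / N) * np.eye(N)
    for k in range(d):
        # X_k from Theorem 3.3: average over matched indices of the other modes
        row = list(ascii_lowercase[:d]); col = list(ascii_lowercase[:d])
        row[k], col[k] = 'y', 'z'
        Xk = np.einsum(''.join(row + col) + '->yz', T) / (N // dims[k])
        Xk -= (np.trace(Xk) / dims[k]) * np.eye(dims[k])   # traceless part
        # P_k(X_k - tr/N_k id): id (x) X_k (x) id
        left, right = int(np.prod(dims[:k])), int(np.prod(dims[k+1:]))
        L += np.kron(np.eye(left), np.kron(Xk, np.eye(right)))
    return L

# Sanity check: a matrix already in Laplacian-like form is reproduced exactly.
rng = np.random.default_rng(1)
A1, A2 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
A = np.kron(A1, np.eye(4)) + np.kron(np.eye(3), A2)
assert np.allclose(best_laplacian(A, (3, 4)), A)
```

    The residual $\|A-L_A\|_2$ of this construction then tells us directly whether $A$ admits an exact Laplacian-like representation.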

    In this section, we consider a generic second-order partial differential equation without mixed derivatives and with homogeneous boundary conditions. More precisely, let

    $$\alpha u_{xx}+\beta u_{yy}+\gamma u_x+\delta u_y+\mu u=f \quad\text{for }(x,y)\in(0,1)\times(0,1), \qquad (4.1)$$
    $$u(x,0)=u(x,1)=u(0,y)=u(1,y)=0 \quad\text{for all }0\le x\le 1\text{ and }0\le y\le 1. \qquad (4.2)$$

    We discretize (4.1) with the help of the following derivative approximations:

    $$u_x(x_i,y_j)\approx\frac{u(x_{i+1},y_j)-u(x_{i-1},y_j)}{2h},\qquad u_y(x_i,y_j)\approx\frac{u(x_i,y_{j+1})-u(x_i,y_{j-1})}{2k},$$

    and

    $$u_{xx}(x_i,y_j)\approx\frac{u(x_{i+1},y_j)-2u(x_i,y_j)+u(x_{i-1},y_j)}{h^2},\qquad u_{yy}(x_i,y_j)\approx\frac{u(x_i,y_{j+1})-2u(x_i,y_j)+u(x_i,y_{j-1})}{k^2}$$

    for $i=1,\dots,N$ and $j=1,\dots,M$. From (4.2), we have that $u(x,y_0)=u(x,y_{M+1})=u(x_0,y)=u(x_{N+1},y)=0$ for all $0\le x\le 1$ and $0\le y\le 1$.

    Next, in order to obtain a linear system, we put $u_\ell:=u(x_i,y_j)$ and $f_\ell:=f(x_i,y_j)$, where $\ell:=(i-1)M+j$ for $1\le i\le N$ and $1\le j\le M$. In this way, the mesh is traversed as shown in Figure 1, and the elements $U=(u_\ell)_{\ell=1}^{MN}$ and $F=(f_\ell)_{\ell=1}^{MN}$ are column vectors. This allows us to represent (4.1) and (4.2) as the linear system $AU=F$, where $A$ is the $MN\times MN$ block matrix

    $$A=\begin{pmatrix}T & D_1 & & \\ D_2 & T & D_1 & \\ & \ddots & \ddots & \ddots\\ & & D_2 & T\end{pmatrix} \qquad (4.3)$$
    Figure 1. Proceeding from $(1,1)$ to $(1,M)$; $(2,1),\dots,(2,M)$; and ending at $(N,1),\dots,(N,M)$.

    for $T\in\mathbb{R}^{M\times M}$, given by

    $$T=\begin{pmatrix}2\mu h^2k^2-4\alpha k^2-4\beta h^2 & 2\beta h^2+\delta h^2k & & 0\\ 2\beta h^2-\delta h^2k & 2\mu h^2k^2-4\alpha k^2-4\beta h^2 & 2\beta h^2+\delta h^2k & \\ & \ddots & \ddots & \ddots\\ 0 & & 2\beta h^2-\delta h^2k & 2\mu h^2k^2-4\alpha k^2-4\beta h^2\end{pmatrix},$$

    and $D_1,D_2\in\mathbb{R}^{M\times M}$ are the diagonal matrices:

    $$D_1=(2\alpha k^2+\gamma hk^2)\,\mathrm{id}_M,\qquad D_2=(2\alpha k^2-\gamma hk^2)\,\mathrm{id}_M.$$

    In this case, $\operatorname{tr}(A)=NM(2\mu h^2k^2-4\alpha k^2-4\beta h^2)$; so, instead of looking for $L_A$, as in (3.10), we will look for $L_{\hat{A}}$, where

    $$\hat{A}=A-\frac{\operatorname{tr}(A)}{NM}\mathrm{id}_{NM}$$

    has a null trace. Proceeding according to Theorem 3.3 for sizes $N_1=N$ and $N_2=M$, we obtain the following decomposition:

    $$X_1=\begin{pmatrix}0 & 2\alpha k^2+\gamma hk^2 & & 0\\ 2\alpha k^2-\gamma hk^2 & 0 & 2\alpha k^2+\gamma hk^2 & \\ & \ddots & \ddots & \ddots\\ 0 & & 2\alpha k^2-\gamma hk^2 & 0\end{pmatrix}\in\mathbb{R}^{N\times N}$$

    and

    $$X_2=\begin{pmatrix}0 & 2\beta h^2+\delta h^2k & & 0\\ 2\beta h^2-\delta h^2k & 0 & 2\beta h^2+\delta h^2k & \\ & \ddots & \ddots & \ddots\\ 0 & & 2\beta h^2-\delta h^2k & 0\end{pmatrix}\in\mathbb{R}^{M\times M}.$$

    We remark that $\operatorname{tr}(X_1)=\operatorname{tr}(X_2)=0$. Moreover, the residual of the approximation $L_{\hat{A}}$ of $\hat{A}$ is $\hat{A}-L_{\hat{A}}=0$. In consequence, we can write the original matrix $A$ as

    $$A=\frac{\operatorname{tr}(A)}{NM}\mathrm{id}_{NM}+X_1\otimes\mathrm{id}_M+\mathrm{id}_N\otimes X_2.$$

    Recall that the first term is

    $$\frac{\operatorname{tr}(A)}{NM}\mathrm{id}_{NM}=(2\mu h^2k^2-4\alpha k^2-4\beta h^2)\,\mathrm{id}_{NM}=(2\mu h^2k^2-4\alpha k^2-4\beta h^2)\,\mathrm{id}_N\otimes\mathrm{id}_M;$$

    hence, A can be written as

    $$A=Z_1\otimes\mathrm{id}_M+\mathrm{id}_N\otimes Z_2,$$

    where

    $$Z_1=\begin{pmatrix}\mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\alpha k^2+\gamma hk^2 & & 0\\ 2\alpha k^2-\gamma hk^2 & \mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\alpha k^2+\gamma hk^2 & \\ & \ddots & \ddots & \ddots\\ 0 & & 2\alpha k^2-\gamma hk^2 & \mu h^2k^2-2\alpha k^2-2\beta h^2\end{pmatrix}$$

    is an $N\times N$ matrix and

    $$Z_2=\begin{pmatrix}\mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\beta h^2+\delta h^2k & & 0\\ 2\beta h^2-\delta h^2k & \mu h^2k^2-2\alpha k^2-2\beta h^2 & 2\beta h^2+\delta h^2k & \\ & \ddots & \ddots & \ddots\\ 0 & & 2\beta h^2-\delta h^2k & \mu h^2k^2-2\alpha k^2-2\beta h^2\end{pmatrix}$$

    is an $M\times M$ matrix.

    Now, we can use this representation of A to implement the GROU Algorithm 1, together with the ALS strategy given by Algorithm 2, to solve the following linear system:

    $$AU=(Z_1\otimes\mathrm{id}_M+\mathrm{id}_N\otimes Z_2)U=F.$$
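    As an illustration of this representation, the sketch below assembles $Z_1$ and $Z_2$ for given coefficients. It is our code, not the paper's; it assumes the interior grid spacings $h=1/(N+1)$ and $k=1/(M+1)$ (the paper leaves them generic), and recall that the right-hand side must carry the same $2h^2k^2$ scaling used to build $A$:

```python
import numpy as np

def tridiag(n, diag, upper, lower):
    """Tridiagonal Toeplitz matrix with the given constant diagonals."""
    return (np.diag(np.full(n, diag)) + np.diag(np.full(n - 1, upper), 1)
            + np.diag(np.full(n - 1, lower), -1))

def laplacian_factors(N, M, alpha, beta, gamma, delta, mu):
    """Z_1, Z_2 with A = Z_1 (x) id_M + id_N (x) Z_2 for the scaled
    five-point discretization of (4.1)."""
    h, k = 1.0 / (N + 1), 1.0 / (M + 1)
    dZ = mu * h**2 * k**2 - 2 * alpha * k**2 - 2 * beta * h**2  # shared diagonal
    Z1 = tridiag(N, dZ, 2 * alpha * k**2 + gamma * h * k**2,
                        2 * alpha * k**2 - gamma * h * k**2)
    Z2 = tridiag(M, dZ, 2 * beta * h**2 + delta * h**2 * k,
                        2 * beta * h**2 - delta * h**2 * k)
    return Z1, Z2

Z1, Z2 = laplacian_factors(50, 50, 1.0, 1.0, 0.0, 0.0, 4.0)  # e.g. Helmholtz with c = 2
A = np.kron(Z1, np.eye(50)) + np.kron(np.eye(50), Z2)        # explicit form, if needed
```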

    This study can be extended to high-dimensional equations, as occurs in [8] with the three-dimensional Poisson equation.

    Next, we are going to consider some particular equations to analyze their numerical behavior. In all cases, the characteristics of the computer used are as follows: 11th Gen Intel(R) Core(TM) i7-11370H @ 3.30GHz, RAM 16 GB, 64-bit operating system; and, Matlab version R2021b [13].

    Let us consider the particular case of the second-order partial differential equation with $\alpha=\beta=1$, $\mu=c^2$, and $f=0$, that is,

    $$u_{xx}+u_{yy}+c^2u=0.$$

    This is the 2D-Helmholtz equation. To obtain the linear system associated with the discrete problem, we need some boundary conditions; for example,

    $$\begin{cases}u(x,0)=\sin(\omega x)+\cos(\omega x) & \text{for }0\le x\le L,\\ u(0,y)=\sin(\omega y)+\cos(\omega y) & \text{for }0\le y\le T,\end{cases}$$

    and

    $$\begin{cases}u(x,T)=\sin(\omega(x+T))+\cos(\omega(x+T)) & \text{for }0\le x\le L,\\ u(L,y)=\sin(\omega(y+L))+\cos(\omega(y+L)) & \text{for }0\le y\le T.\end{cases}$$

    This initial value problem has a closed-form solution for $\omega=c/\sqrt{2}$,

    $$u(x,y)=\sin(\omega(x+y))+\cos(\omega(x+y)).$$
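    This choice of $\omega$ can be checked symbolically; a short SymPy snippet (our illustration) confirms that $u$ satisfies the Helmholtz equation exactly when $\omega=c/\sqrt{2}$:

```python
import sympy as sp

x, y, c = sp.symbols('x y c', positive=True)
w = c / sp.sqrt(2)
u = sp.sin(w * (x + y)) + sp.cos(w * (x + y))
# u_xx + u_yy + c^2 u vanishes identically for w = c/sqrt(2)
assert sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2) + c**2 * u) == 0
```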

    From the above operations, and by taking $h=k$ for simplicity, we can write the matrix of the discrete linear system associated with the Helmholtz equation as

    $$A=\begin{pmatrix}2c^2h^4-8h^2 & 2h^2 & & 0\\ 2h^2 & 2c^2h^4-8h^2 & 2h^2 & \\ & \ddots & \ddots & \ddots\\ 0 & & 2h^2 & 2c^2h^4-8h^2\end{pmatrix}\otimes\mathrm{id}_M+\mathrm{id}_N\otimes\begin{pmatrix}0 & 2h^2 & & 0\\ 2h^2 & 0 & 2h^2 & \\ & \ddots & \ddots & \ddots\\ 0 & & 2h^2 & 0\end{pmatrix}$$

    or, equivalently,

    $$A=\begin{pmatrix}c^2h^4-4h^2 & 2h^2 & & 0\\ 2h^2 & c^2h^4-4h^2 & 2h^2 & \\ & \ddots & \ddots & \ddots\\ 0 & & 2h^2 & c^2h^4-4h^2\end{pmatrix}\otimes\mathrm{id}_M+\mathrm{id}_N\otimes\begin{pmatrix}c^2h^4-4h^2 & 2h^2 & & 0\\ 2h^2 & c^2h^4-4h^2 & 2h^2 & \\ & \ddots & \ddots & \ddots\\ 0 & & 2h^2 & c^2h^4-4h^2\end{pmatrix}.$$

    If we solve this linear system $AU=\hat{F}$ for the case $c=2$, with $L=T$ and $N=M$, we obtain the timing results shown in Figure 2. To carry out this experiment, we used the following parameter values for the GROU Algorithm 1: tol $=2.2204\mathrm{e}{-16}$; $\varepsilon=2.2204\mathrm{e}{-16}$; rank_max $=10$ (iter_max $=5$ and $\varepsilon=2.2204\mathrm{e}{-16}$ were used to perform Algorithm 2); and the number of nodes in $(0,1)^2$ (that is, the number of rows or columns of the matrix $A$) was increased from $10^2$ to $200^2$.

    Figure 2. CPU time, in seconds, employed to solve the discrete Helmholtz initial value problem by using the Matlab command A\b, the GROU Algorithm 1, and the GROU Algorithm 1 with $A$ written as $L_A$, as obtained from Corollary 3.4.

    To measure the goodness of the approximations obtained, we calculated the normalized errors, that is, the absolute value of the difference between the results obtained and the exact solution, divided by the length of the solution vector, i.e.,

    $$\varepsilon=\frac{|\text{exact solution}-\text{approximate solution}|}{N^2}$$

    for the different approximations obtained. The values of these errors were of the order of $10^{-4}$ and can be seen in Figure 3.

    Figure 3. Normalized error between the solution of the discrete Helmholtz initial value problem and the solutions obtained by using the Matlab command A\b, the GROU Algorithm 1, and the GROU Algorithm 1 with $A$ written as $L_A$, as obtained from Corollary 3.4.

    Now, let us consider the following fourth-order partial differential equation:

    $$\frac{\partial u}{\partial t}=\varepsilon u-\left(1+\frac{\partial^2}{\partial x^2}\right)^2u, \qquad (5.1)$$

    with the boundary conditions

    $$\begin{cases}u(x,0)=\sin(kx),\\ u(x,T)=\sin(kx)e^{-T}\end{cases}\quad\text{for }0\le x\le L, \qquad (5.2)$$

    and

    $$u(0,t)=u(L,t)=0\quad\text{for }0\le t\le T. \qquad (5.3)$$

    For $k=\sqrt{1+\sqrt{\varepsilon+1}}$ and $L=2\pi/k$, the initial value problem (5.1)–(5.3) has the following solution:

    $$u(x,t)=\sin(kx)e^{-t}.$$
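    Again, the claim can be checked symbolically; the following SymPy snippet (our illustration) verifies that $u(x,t)=\sin(kx)e^{-t}$ satisfies (5.1) when $k=\sqrt{1+\sqrt{\varepsilon+1}}$:

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon', positive=True)
k = sp.sqrt(1 + sp.sqrt(eps + 1))
u = sp.sin(k * x) * sp.exp(-t)
# (1 + d^2/dx^2)^2 u = u + 2 u_xx + u_xxxx
rhs = eps * u - (u + 2 * sp.diff(u, x, 2) + sp.diff(u, x, 4))
assert sp.simplify(sp.diff(u, t) - rhs) == 0
```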

    If we discretize the problem described by (5.1)–(5.3) as in the previous example, with the same step size $h$ for both variables, we obtain a linear system of the form $AU=\hat{F}$, where $A$, in Laplacian-like form, is the matrix

    $$A=\begin{pmatrix}12-8h^2+(2-2\varepsilon)h^4 & 4h^2-8 & 2 & & 0\\ 4h^2-8 & 12-8h^2+(2-2\varepsilon)h^4 & 4h^2-8 & 2 & \\ 2 & 4h^2-8 & \ddots & \ddots & \ddots\\ & 2 & \ddots & \ddots & 4h^2-8\\ 0 & & 2 & 4h^2-8 & 12-8h^2+(2-2\varepsilon)h^4\end{pmatrix}\otimes\mathrm{id}_M+\mathrm{id}_N\otimes\begin{pmatrix}0 & h^3 & & 0\\ -h^3 & 0 & h^3 & \\ & \ddots & \ddots & \ddots\\ 0 & & -h^3 & 0\end{pmatrix},$$

    and $\ell=(i-1)M+j$ is the ordering established for the indices, with $1\le i\le N$ and $1\le j\le M$.

    To perform a numerical experiment, we set $\varepsilon=2$, $L=T=2\pi$, and the same number of points for the two variables. At this point, we can solve the linear system associated with the discrete Swift-Hohenberg problem by using our tools: the Matlab command A\b, the GROU Algorithm 1, and the GROU Algorithm 1 together with the ALS Algorithm 2, with $A$ written in Laplacian-like form. In this case, we used the following parameter values in the algorithms: tol $=2.2204\mathrm{e}{-16}$; $\varepsilon=2.2204\mathrm{e}{-16}$; rank_max $=10$ for the GROU Algorithm 1, with iter_max $=5$ for the ALS step; and the number of nodes in $(0,2\pi)^2$ was increased from $10^2$ to $200^2$. Figure 4 shows the results obtained.

    Figure 4. CPU time, in seconds, employed to solve the discrete Swift-Hohenberg initial value problem by using the Matlab command A\b, the GROU Algorithm 1, and the GROU Algorithm 1 with $A$ written in Laplacian form.

    Again, we calculated the normalized errors to estimate the goodness of the approximations, the results of which are shown in Figure 5.

    Figure 5. Normalized error between the solution of the discrete Swift-Hohenberg initial value problem and the solutions obtained by using the Matlab command A\b, the GROU Algorithm 1, and the GROU Algorithm 1 with $A$ written in Laplacian form.

    In this work, we have studied the Laplacian decomposition algorithm, which, given any square matrix, calculates its best Laplacian-like approximation. Furthermore, in Theorem 3.3, we have shown how to implement it optimally.

    For us, the greatest interest in this algorithm lies in the computational improvement obtained by combining it with the GROU Algorithm 1 to solve linear systems arising from the discretization of a partial differential equation. This improvement can be seen in the different numerical examples shown, where we have compared this procedure with the standard Matlab resolution by means of the instruction A\b.

    This proposal is a new way of dealing with certain large-scale problems, where classical methods prove to be less efficient.

    The authors declare that they have not used artificial intelligence tools in the creation of this article.

    J. A. Conejero acknowledges funding from grant PID2021-124618NB-C21, funded by MCIN/AEI/ 10.13039/501100011033, and by "ERDF: A way of making Europe", by the "European Union"; M. Mora-Jiménez was supported by the Generalitat Valenciana and the European Social Fund under grant number ACIF/2020/269; A. Falcó was supported by the MICIN grant number RTI2018-093521-B-C32 and Universidad CEU Cardenal Herrera under grant number INDI22/15.

    The authors declare that they have no conflicts of interest.



© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)