1. Introduction
Over the past decades, owing to a broad variety of applications in engineering, the sciences and economics, the linear complementarity problem (LCP) has been an active topic in the optimization community and has attracted considerable interest. The LCP is a powerful mathematical model that is intimately related to many significant scientific problems, such as primal-dual linear programming, the bimatrix game, convex quadratic programming, the American option pricing problem and others; see, e.g., [1,2,3] for more details. The LCP consists of determining a vector z∈Rn such that

z≥0, Az+q≥0, z⊤(Az+q)=0, (1.1)

where A∈Rn×n and q∈Rn are given. We hereafter abbreviate the problem (1.1) by LCP(A,q).
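A candidate vector can be verified against the complementarity conditions z≥0, Az+q≥0 and z⊤(Az+q)=0 directly; the short sketch below does exactly this (the 2×2 data are illustrative, not from the paper):

```python
import numpy as np

def is_lcp_solution(A, q, z, tol=1e-10):
    """Check the LCP(A,q) conditions: z >= 0, w = Az + q >= 0 and z'w = 0."""
    w = A @ z + q
    return bool(np.all(z >= -tol) and np.all(w >= -tol) and abs(z @ w) <= tol)

# Illustrative instance: A = I, q = (-1, 1); z = (1, 0) is the unique solution,
# since w = Az + q = (0, 1) pairs a zero with each positive component.
A = np.eye(2)
q = np.array([-1.0, 1.0])
z = np.array([1.0, 0.0])
```

Here z and w are elementwise complementary: wherever zi>0 we have wi=0, and vice versa.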
The LCP(A,q) of the form (1.1), together with its extensions, has been extensively investigated in recent years, and designing efficient numerical algorithms that obtain the solution of the LCP(A,q) (1.1) fast and economically is of great significance. Numerous iterative algorithms have been developed for solving the LCP(A,q) (1.1) over the past decades, such as the pivot algorithms [1,2,4], the projected iterative methods [5,6,7,8], the multisplitting methods [9,10,11,12,13,14], the Newton-type iteration methods [15,16] and others; see, e.g., [17,18,19] and the references therein. The modulus-based matrix splitting (MMS) iteration method, first introduced in [20], is particularly attractive for solving the LCP(A,q) (1.1). Based on the general variable transformation, by setting z=(|x|+x)/γ and v=Ω(|x|−x)/γ, and letting A=M−N, Bai reformulated the LCP(A,q) (1.1) into the following equivalent form [20]

(Ω+M)x=Nx+(Ω−A)|x|−γq,
where γ>0 and Ω∈Rn×n is a positive diagonal matrix. He then skillfully designed a general framework, the MMS iteration method, for solving the large-scale sparse LCP(A,q) (1.1), which takes the following form.
Algorithm 1.1. ([20]) (The MMS method) Let A=M−N be a splitting of the matrix A∈Rn×n. Assume that x0∈Rn is an arbitrary initial guess. For k=0,1,2,⋯, compute {xk+1} by solving the linear system

(Ω+M)xk+1=Nxk+(Ω−A)|xk|−γq,

and then set

zk+1=(|xk+1|+xk+1)/γ,

until the iteration sequence {zk} is convergent. Here, Ω∈Rn×n is a positive diagonal matrix and γ is a positive constant.
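For illustration, the MMS sweep can be coded in a few lines. The update below assumes the standard form (M+Ω)xk+1=Nxk+(Ω−A)|xk|−γq with zk+1=(|xk+1|+xk+1)/γ from [20]; the test matrix, right-hand side and the Jacobi-type choice M=Ω=DA are our own illustrative assumptions:

```python
import numpy as np

def mms(A, q, M, Omega, gamma=1.0, iters=100):
    """MMS sweep (standard form assumed):
    (M + Omega) x_{k+1} = N x_k + (Omega - A)|x_k| - gamma*q, z = (|x| + x)/gamma."""
    N = M - A
    x = np.zeros_like(q)
    for _ in range(iters):
        x = np.linalg.solve(M + Omega, N @ x + (Omega - A) @ np.abs(x) - gamma * q)
    return (np.abs(x) + x) / gamma

A = np.array([[4.0, -1.0], [-1.0, 4.0]])   # an H+-matrix
q = np.array([-1.0, 2.0])
z = mms(A, q, M=np.diag(np.diag(A)), Omega=np.diag(np.diag(A)))  # MJ-type variant
```

On this instance the iteration settles at z=(0.25,0), which satisfies the complementarity conditions of (1.1).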
The MMS iteration method not only covers some existing iteration methods, such as the nonstationary extrapolated modulus method [21] and the modified modulus method [22], as special cases, but also yields a series of modulus-based relaxation methods, such as the modulus-based Jacobi (MJ), modulus-based Gauss-Seidel (MGS), modulus-based successive overrelaxation (MSOR) and modulus-based accelerated overrelaxation (MAOR) methods. Owing to its promising performance and elegant mathematical properties, the MMS iteration scheme immediately received considerable attention, and diverse versions of it have emerged. For instance, Zheng and Yin [23] established a new class of accelerated MMS (AMMS) iteration methods for solving the large-scale sparse LCP(A,q) (1.1), and explored the convergence of the AMMS method when the system matrix A is a positive definite matrix or an H+-matrix. In order to further accelerate the MMS method, Zheng et al. [24] combined the relaxation strategy with the matrix splitting technique in the modulus equation of [25] and presented a relaxation MMS (RMMS) iteration method for solving the LCP(A,q) (1.1). The parameter selection strategies of the RMMS method were discussed in depth in [24]. In addition, the RMMS method covers the general MMS (GMMS) method [25] as a special case. Subsequently, by extending the two-sweep iteration methods [26,27], Wu and Li [28] developed a general framework of two-sweep MMS (TMMS) iteration methods to solve the LCP(A,q) (1.1), and established the convergence of the TMMS method when the system matrix A is either an H+-matrix or a positive-definite matrix. Ren et al. [29] proposed a class of general two-sweep MMS (GTMMS) iteration methods to solve the LCP(A,q) (1.1), which encompasses the TMMS method by selecting appropriate parameter matrices. Peng et al.
[30] presented a relaxation two-sweep MMS (RTMMS) iteration method for solving the LCP(A,q) (1.1) and established its convergence theory when the system matrix A is an H+-matrix or a positive-definite matrix. Huang et al. [31] combined the parametric strategy, the relaxation technique and the acceleration technique to construct an accelerated relaxation MMS (ARMMS) iteration method for solving the LCP(A,q) (1.1). The ARMMS method can be regarded as a generalization of some existing methods, such as the MMS [20], GMMS [25] and RMMS [24] methods. For more modulus-based matrix splitting type iteration methods, see [32,33,34,35,36,37,38,39,40,41] and the references therein.
On the other hand, Bai and Tong [42] equivalently transformed the LCP(A,q) (1.1) into a nonlinear equation without using any variable transformation and proposed an efficient iterative algorithm by means of matrix splittings and extrapolation acceleration techniques. Some relaxed versions of the method proposed in [42] were then constructed by Bai and Huang [43], and their convergence theories were established under mild conditions. Recently, Wu and Li [44] recast the LCP(A,q) (1.1) into the implicit fixed-point equation

(Ω+M)z=Nz+|(Ω−A)z−q|−q, (1.2)

where A=M−N. In fact, if M=A and Ω=I, then (1.2) reduces to the fixed-point equation proposed in [42]. Based on (1.2), the new MMS (NMMS) method for solving the LCP(A,q) (1.1) was constructed in [44].
Algorithm 1.2. ([44]) (The NMMS method) Let A=M−N be a splitting of the matrix A∈Rn×n such that Ω+M is nonsingular, where Ω∈Rn×n is a positive diagonal matrix. Given a nonnegative initial vector z0∈Rn, for k=0,1,2,⋯ until the iteration sequence {zk} is convergent, compute zk+1∈Rn by solving the linear system

(Ω+M)zk+1=Nzk+|(Ω−A)zk−q|−q.
It is obvious that the NMMS method does not need any variable transformation, which distinguishes it from the MMS type iteration methods mentioned above. Nevertheless, the NMMS method inherits the merits of the MMS type iteration methods, and several relaxation versions of it have been studied.
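As a sketch of Algorithm 1.2, assuming the fixed-point form (Ω+M)zk+1=Nzk+|(Ω−A)zk−q|−q of [44], the Gauss-Seidel variant (M=DA−LA, Ω=DA) reads as follows; the 2×2 data are illustrative:

```python
import numpy as np

def nmms(A, q, M, Omega, iters=100):
    """NMMS sweep (assumed form): (Omega + M) z_{k+1} = N z_k + |(Omega - A) z_k - q| - q."""
    N = M - A
    z = np.zeros_like(q)
    for _ in range(iters):
        z = np.linalg.solve(Omega + M, N @ z + np.abs((Omega - A) @ z - q) - q)
    return z

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, 2.0])
# NMGS-type choice: M is the lower triangle of A (i.e. D_A - L_A), Omega = D_A.
z = nmms(A, q, M=np.tril(A), Omega=np.diag(np.diag(A)))
```

No variable transformation is needed: the iterate z itself approximates the solution of the LCP.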
Remark 1.1. Let A=DA−LA−UA, where DA, −LA and −UA are the diagonal, strictly lower-triangular and strictly upper-triangular parts of A, respectively. It has been mentioned in [44] that Algorithm 1.2 reduces to the following methods.
(i) If M=A, Ω=I and N=0, then Algorithm 1.2 becomes the new modulus method:

zk+1=(I+A)−1(|(I−A)zk−q|−q).
(ii) If M=A, N=0 and Ω=αI, then Algorithm 1.2 turns into the new modified modulus iteration method:

zk+1=(αI+A)−1(|(αI−A)zk−q|−q).
(iii) Let M=(1/α)(DA−βLA) and N=(1/α)((1−α)DA+(α−β)LA+αUA), then Algorithm 1.2 reduces to the new MAOR iteration method:

(Ω+(1/α)(DA−βLA))zk+1=(1/α)((1−α)DA+(α−β)LA+αUA)zk+|(Ω−A)zk−q|−q. (1.3)
Evidently, based on (1.3), when (α,β) equals (α,α), (1,1) and (1,0), we obtain the new MSOR (NMSOR), the new MGS (NMGS) and the new MJ (NMJ) iteration methods, respectively.
The goal of this paper is to further improve the computational efficiency of Algorithm 1.2 for solving the LCP(A,q) (1.1). To this end, we utilize the two-sweep matrix splitting iteration technique of [28,29] together with the relaxation technique, and construct a new class of relaxed acceleration two-sweep MMS (NRATMMS) iteration methods for solving the LCP(A,q) (1.1). The convergence of the NRATMMS iteration method is analyzed in detail. By choosing suitable parameter matrices, the NRATMMS iteration method generates a series of relaxation versions. Numerical results are reported to demonstrate the efficiency of the NRATMMS iteration method.
The remainder of this paper is organized as follows. In Section 2, we present some notations and definitions used hereinafter. Section 3 is devoted to establishing the NRATMMS iteration method for solving the LCP(A,q) (1.1), and the global linear convergence of the proposed method is explored. Section 4 reports the numerical results. Finally, some concluding remarks are given in Section 5.
2. Preliminaries
In this section, we collect some notations, classical definitions and some auxiliary results which lay the foundation of our developments.
Rn×n denotes the set of all n×n real matrices and Rn=Rn×1. I is the identity matrix of suitable dimension. |⋅| denotes the absolute value of a real scalar or the modulus of a complex scalar. For x∈Rn, xi refers to its i-th entry, and |x|=(|x1|,|x2|,⋯,|xn|)⊤∈Rn represents the componentwise absolute value of x. tridiag(a,b,c) denotes a tridiagonal matrix with a, b and c as its subdiagonal, main diagonal and superdiagonal entries, respectively. Tridiag(A,B,C) denotes a block tridiagonal matrix with A, B and C as its subdiagonal, main diagonal and superdiagonal blocks, respectively.
For two matrices P=(pij)∈Rm×n and Q=(qij)∈Rm×n, we write P≥Q (P>Q) if pij≥qij (pij>qij) holds for all i and j. For A=(aij)∈Rm×n, A⊤ and |A| represent the transpose of A and the absolute value of A (|A|=(|aij|)∈Rm×n), respectively. For A=(aij)∈Rn×n, ρ(A) represents its spectral radius. Moreover, the comparison matrix ⟨A⟩=(⟨a⟩ij) is defined by

⟨a⟩ii=|aii| and ⟨a⟩ij=−|aij| for i≠j.
A matrix A∈Rn×n is called a Z-matrix if all of its off-diagonal entries are nonpositive, and a P-matrix if all of its principal minors are positive; a Z-matrix A is called an M-matrix if A−1≥0, and A is called an H-matrix if its comparison matrix ⟨A⟩ is an M-matrix. In particular, an H-matrix with positive diagonal entries is called an H+-matrix [9]. A sufficient condition for A to be a P-matrix is that A is an H+-matrix. A∈Rn×n is called a strictly diagonally dominant matrix if |aii|>∑j≠i|aij| for all 1≤i≤n.
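These classes are easy to test numerically; the following sketch (helper names are ours) checks the H+-matrix property through the comparison matrix:

```python
import numpy as np

def comparison_matrix(A):
    """<A>: keep |a_ii| on the diagonal, negate the moduli of the off-diagonal entries."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_m_matrix(A, tol=1e-12):
    """Z-matrix with a nonnegative inverse."""
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):
        return False                      # not a Z-matrix
    try:
        return bool(np.all(np.linalg.inv(A) >= -tol))
    except np.linalg.LinAlgError:
        return False                      # singular, hence not an M-matrix

def is_h_plus_matrix(A):
    """H-matrix (comparison matrix is an M-matrix) with positive diagonal."""
    return bool(np.all(np.diag(A) > 0) and is_m_matrix(comparison_matrix(A)))
```

For instance, tridiag(−1,4,−1) passes the test, while [[1,2],[2,1]] fails, since its comparison matrix has an inverse with negative entries.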
Let M be nonsingular. Then A=M−N is called an M-splitting if M is an M-matrix and N≥0, an H-splitting if ⟨M⟩−|N| is an M-matrix, and an H-compatible splitting if ⟨A⟩=⟨M⟩−|N| [45]. Finally, the following lemmas are needed in the convergence analysis of the proposed method.
Lemma 1. ([46]) Let A∈Rn×n be an H+-matrix, then the LCP(A,q) (1.1) has a unique solution for any q∈Rn.
Lemma 2. ([47]) Let B∈Rn×n be a strictly diagonally dominant matrix. Then, for all C∈Rn×n,

‖B−1C‖∞≤max1≤i≤n([|C|e]i/[⟨B⟩e]i)

holds, where e=(1,1,⋯,1)⊤.
Lemma 3. ([48]) Let A be an H-matrix, then |A−1|≤⟨A⟩−1.
Lemma 4. ([49]) If A is an M-matrix, there exists a positive diagonal matrix V such that AV is a strictly diagonally dominant matrix with positive diagonal entries.
Lemma 5. ([49]) Let A, B be two Z-matrices, A be an M-matrix, and B≥A. Then B is an M-matrix.
Lemma 6. ([26]) Let F,G∈Rn×n be nonnegative matrices satisfying ρ(F+G)<1, and let

A=(F G; I 0)∈R2n×2n,

then ρ(A)<1.
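A quick numerical illustration of Lemma 6 (the nonnegative matrices F and G are our own example):

```python
import numpy as np

F = np.array([[0.2, 0.1], [0.0, 0.3]])
G = np.array([[0.1, 0.2], [0.2, 0.1]])
n = F.shape[0]
# Block matrix (F G; I 0) of Lemma 6.
L = np.block([[F, G], [np.eye(n), np.zeros((n, n))]])
rho_FG = max(abs(np.linalg.eigvals(F + G)))   # equals 0.6 here
rho_L = max(abs(np.linalg.eigvals(L)))        # below one, as the lemma asserts
```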
Lemma 7. ([45]) If A=M−N is an M-splitting of A, then ρ(M−1N)<1 if and only if A is an M-matrix.
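Similarly, Lemma 7 can be observed on the Jacobi splitting of a small M-matrix (the matrix is our own example):

```python
import numpy as np

A = np.array([[4.0, -1.0], [-2.0, 5.0]])   # M-matrix: Z-matrix with inv(A) >= 0
M = np.diag(np.diag(A))                    # M is an M-matrix and N = M - A >= 0,
N = M - A                                  # so A = M - N is an M-splitting
rho = max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))   # sqrt(0.1) < 1
```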
3. The method and convergence
In this section, the NRATMMS iteration method for solving the LCP(A,q) (1.1) is developed, and its general convergence analysis is explored.
Let A=M1−N1=M2−N2 be two splittings of A and Ω=Ω1−Ω2=Ω3−Ω4 with Ωi (i=1,2,3,4) all being nonnegative diagonal matrices. Then (1.2) can be reformulated as the following fixed-point form:
where θ≥0 is a relaxation parameter. Based on (3.1), the NRATMMS iteration method is established as in the following Algorithm 3.1.
Algorithm 3.1. (The NRATMMS iteration method) Let A=M1−N1=M2−N2 be two splittings of A and Ω=Ω1−Ω2=Ω3−Ω4 with Ωi(i=1,2,3,4) being all nonnegative diagonal matrices such that M1+Ω1 is nonsingular. Given two initial guesses z0,z1∈Rn and a nonnegative relaxation parameter θ, the iteration sequence {zk} is generated by
for k=1,2,⋯ until convergence.
Algorithm 3.1 provides a general framework of NMMS iteration methods for solving the LCP(A,q) (1.1), and it can yield a series of NMMS type iteration methods with suitable choices of the matrix splittings and the relaxation parameter. For instance, when θ=1 and Ωi=0 (i=1,2,3,4), Algorithm 3.1 reduces to the new accelerated two-sweep MMS (NATMMS) iteration method
When θ=1, Ω1=Ω3=Ω, Ω2=Ω4=0, M2=A and N2=0, Algorithm 3.1 reduces to Algorithm 1.2. When M1=(1/α)(DA−βLA), N1=(1/α)[(1−α)DA+(α−β)LA+αUA], M2=DA−UA and N2=LA with α,β>0, Algorithm 3.1 gives the new relaxed acceleration two-sweep MAOR (NRATMAOR) iteration method. If (α,β) equals (α,α), (1,1) and (1,0), the NRATMAOR iteration method reduces to the new relaxed acceleration two-sweep MSOR (NRATMSOR), the new relaxed acceleration two-sweep MGS (NRATMGS) and the new relaxed acceleration two-sweep MJ (NRATMJ) iteration methods, respectively.
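The NRATMAOR choice above can be assembled programmatically; the sketch below (function name ours) also confirms that both pairs split the same matrix A:

```python
import numpy as np

def maor_splittings(A, alpha, beta):
    """M1 = (1/alpha)(D - beta*L), N1 = (1/alpha)((1-alpha)D + (alpha-beta)L + alpha*U),
    M2 = D - U, N2 = L, where A = D - L - U with D diagonal and
    -L, -U the strictly lower/upper triangular parts of A."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M1 = (D - beta * L) / alpha
    N1 = ((1 - alpha) * D + (alpha - beta) * L + alpha * U) / alpha
    M2 = D - U
    N2 = L
    return M1, N1, M2, N2

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
M1, N1, M2, N2 = maor_splittings(A, alpha=1.5, beta=1.0)
```

With α=β=1 the first splitting reduces to the Gauss-Seidel choice M1=DA−LA, matching the NRATMGS case.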
The convergence analysis for Algorithm 3.1 is investigated with the system matrix A of the LCP(A,q) (1.1) being an H+-matrix.
Lemma 8. Assume that A∈Rn×n is an H+-matrix. Let A=M1−N1 and A=M2−N2 be an H-splitting and a general splitting of A, respectively, and let Ωi (i=1,2,3,4) be four nonnegative diagonal matrices such that M1+Ω1 is nonsingular. Denote

˜L=(Ω1+⟨M1⟩)−1[(θ+|1−θ|)|N1+Ω2|+|M2−Ω3|+|Ω4−N2|],
then the iteration sequence {zk} generated by Algorithm 3.1 converges to the unique solution z∗ for any two initial vectors if ρ(˜L)<1.
Proof. Let z∗ be the exact solution of the LCP(A,q) (1.1), then it satisfies
Subtracting (3.3) from (3.2), we have
For simplicity, let
and
Then we have
Let
then the iteration sequence {zk} converges to the unique solution z∗ if ρ(L)<1. Since L≥0, according to Lemma 6, ρ(F+G)<1 implies ρ(L)<1. Therefore, to prove the convergence of Algorithm 3.1, it suffices to prove ρ(F+G)<1.
Under the conditions that A is an H+-matrix and A=M1−N1 is an H-splitting of A, i.e., ⟨M1⟩−|N1| is an M-matrix, it follows from Lemma 5 and ⟨M1⟩≥⟨M1⟩−|N1| that M1 is an H-matrix, and hence Ω1+M1 is also an H-matrix. In the light of Lemma 3, it follows that
Recalling (3.4) and (3.5), we obtain
which yields that
As a consequence, by the monotone property of the spectral radius, the iteration sequence {zk} generated by Algorithm 3.1 converges to the unique solution z∗ of the LCP(A,q) (1.1) if ρ(˜L)<1. The proof is completed.
Theorem 3.1. Assume that A∈Rn×n is an H+-matrix. Let A=M1−N1 be an H-compatible splitting and A=M2−N2 be an M-splitting of A, and Ωi(i=1,2,3,4) be four nonnegative diagonal matrices such that M1+Ω1 is nonsingular. Denote
then the iteration sequence {zk} generated by Algorithm 3.1 converges to the unique solution z∗ of the LCP(A,q) (1.1) for any two initial vectors if one of the following two conditions holds.
(i) 0<θ≤1 and Ωi(i=1,2,3,4) satisfy
(ii) θ>1 and Ωi(i=1,2,3,4) satisfy
Here, Ω=Ω1−Ω2=Ω3−Ω4 and V is an arbitrary positive diagonal matrix such that (Ω1+⟨M1⟩)V is a strictly diagonally dominant matrix.
Proof. According to Lemma 8, we only need to demonstrate ρ(˜L)<1. Then, on the basis of Lemma 2 and Lemma 4, it follows that
When 0<θ≤1, it holds that
Since A=M2−N2 is an M-splitting of A, M2 is an M-matrix. Let M2=DM2−BM2 be a splitting of M2, where DM2 is the (positive) diagonal part of M2.
If Ω3≥DM2, it can be concluded that
If Ω3<DM2, we get
According to (3.8), (3.9) and (3.10), we have ρ(˜L)<1 if (3.6) holds.
When θ>1, it follows that
If Ω3≥DM2, it can be derived that
from which we have
provided that 1<θ<min1≤i≤n{1+[(⟨A⟩−Ω4)Ve]i/[(|N1|+Ω2)Ve]i} and [(⟨A⟩−Ω4)Ve]i/[(|N1|+Ω2)Ve]i>0 (i=1,2,⋯,n).
If Ω3<DM2, it is implied that
from which we have
provided that 1<θ<min1≤i≤n{1+[(⟨A⟩+Ω−DM2)Ve]i/[(|N1|+Ω2)Ve]i} and [(⟨A⟩+Ω−DM2)Ve]i/[(|N1|+Ω2)Ve]i>0 (i=1,2,⋯,n).
According to (3.11), (3.12) and (3.13), we have ρ(˜L)<1 if (3.7) holds.
Theorem 3.2. Assume that A∈Rn×n is an H+-matrix. Let ϱ≐ρ(DA−1|BA|), where BA=LA+UA. Assume that the four nonnegative diagonal matrices Ωi (i=1,2,3,4) and the three positive parameters α,β,θ are chosen such that M1+Ω1 is nonsingular and either Ω3≤DA≤min{2Ω,2Ω−2(θ−1)Ω2} or max{2Ω4,2Ω4+2(θ−1)Ω2}≤DA<Ω3. Then the NRATMAOR iteration method converges for any two initial vectors if one of the following eight conditions holds:
(i) 0<θ≤1, 0<α≤1, 0<β≤α, ϱ<1/2;
(ii) 0<θ≤1, 1<α<2, 0<β≤α, ϱ<(2−α)/(2α);
(iii) 0<θ≤1, 0<α≤β, α≤1, ϱ<α/(2β);
(iv) 0<θ≤1, 1<α<2, α≤β, ϱ<(2−α)/(2β);
(v) 1<θ<(2−α)/(2αϱ−2α+2), 2(θ−1)/(2θ−1)<α≤1, 0<β≤α, ϱ<1/2;
(vi) 1<θ<α/(2αϱ+2α−2), 1<α<2θ/(2θ−1), 0<β≤α, ϱ<(2−α)/(2α);
(vii) 1<θ<(2−α)/(2βϱ−2α+2), 2(θ−1)/(2θ−1)<α≤1, α≤β, ϱ<α/(2β);
(viii) 1<θ<α/(2βϱ+2α−2), 1<α<2θ/(2θ−1), α≤β, ϱ<(2−α)/(2β).
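Since ϱ is cheap to compute, conditions (i)–(viii) can be screened numerically. An illustrative check (assuming BA=LA+UA, so that A=DA−BA):

```python
import numpy as np

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
D = np.diag(np.diag(A))
B = D - A                                  # B_A = L_A + U_A, so that A = D_A - B_A
varrho = max(abs(np.linalg.eigvals(np.linalg.solve(D, np.abs(B)))))
# varrho = sqrt(2)/4 ~ 0.354 < 1/2, so condition (i) admits, e.g., theta = alpha = beta = 1.
```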
Proof. For the NRATMAOR iteration method, we have A=M1−N1=M2−N2 with
and
where α,β>0 are parameters. In order to use the result of Lemma 8, we need A=M1−N1 to be an H-splitting of A. Since A is an H+-matrix, we have DA>0. It follows from (3.14) that
If 0<β≤α, then
and it follows from Lemma 7 that S is an M-matrix if
which is satisfied if
or
If 0<α≤β, then
It follows from Lemma 7 that ˉS is an M-matrix if
which is satisfied if
or
In this case, since S is a Z-matrix, it follows from Lemma 5 and (3.15) that S is an M-matrix if (3.16) or (3.17) holds.
In conclusion, A=M1−N1 is an H-splitting of A (or, equivalently, S is an M-matrix) if one of the following four conditions holds:
or
In the following, let ˆA=ˆM−ˆN with ˆM=Ω1+⟨M1⟩ and ˆN=(θ+|1−θ|)|N1+Ω2|+|M2−Ω3|+|Ω4−N2|, then ˜L=ˆM−1ˆN. In order to prove the convergence of the NRATMAOR iteration method, based on Lemma 8, it suffices to prove ρ(˜L)<1 provided that A=M1−N1 is an H-splitting of A.
Since
is a lower triangular matrix with positive diagonal entries and nonpositive off-diagonal entries, it is an M-matrix. In addition, ˆN≥0. According to Lemma 7, if ˆA is an M-matrix then ρ(˜L)<1. Thus, we prove in the following that the Z-matrix ˆA is an M-matrix.
Case I: 0<θ≤1. In this case, we have
from which we have
It is easy to prove that the first term of (3.22) is nonnegative if
or
Then it follows from (3.22) that
(i) If 0<β≤α, then it can be deduced from (3.25) that
from which, together with Lemma 5, we obtain that ˆA is an M-matrix whenever T is. It follows from Lemma 7 that T is an M-matrix if
which is satisfied if
or
(ii) If 0<α≤β, it can be deduced from (3.25) that
which is an M-matrix if (3.16) or (3.17) holds.
In Case I, it can be concluded from (i) and (ii) that ˆA is an M-matrix if one of the following four conditions holds:
or
Case II: θ>1. In this case, we have
from which we obtain
The first term of (3.30) is nonnegative if
or
Then it follows from (3.30) that
(a) If 0<β≤α, then it follows from (3.33) that
from which, together with Lemma 5, we find that ˆA is an M-matrix whenever R is. It follows from Lemma 7 that R is an M-matrix if
which is satisfied if
or
(b) If 0<α≤β, then
from which, together with Lemma 5, we find that ˆA is an M-matrix whenever ˜R is. It follows from Lemma 7 that ˜R is an M-matrix if
which is satisfied if
or
In Case II, it can be concluded from (a) and (b) that ˆA is an M-matrix if one of the following four conditions holds:
or
The proof is completed by combining (3.18)–(3.21), (3.23), (3.24), (3.26)–(3.29), (3.31), (3.32) and (3.34)–(3.37).
4. Numerical results
In this section, three numerical examples are performed to validate the effectiveness of the NRATMMS iteration method.
All test problems are conducted in MATLAB R2016a on a personal computer with a 1.19 GHz central processing unit (Intel(R) Core(TM) i5-1035U), 8.00 GB memory and the Windows 10 operating system. In the numerical results, we report the number of iteration steps (denoted by "IT"), the elapsed CPU time in seconds (denoted by "CPU") and the norm of the absolute residual vector (denoted by "RES"). Here, RES is defined by

RES(zk)≐‖min(zk,Azk+q)‖2.
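Assuming RES denotes the natural residual ‖min(zk,Azk+q)‖2 that is customary in this literature, it can be computed as follows:

```python
import numpy as np

def res(A, q, z):
    """RES(z) = ||min(z, Az + q)||_2; it vanishes exactly at a solution of LCP(A,q)."""
    return float(np.linalg.norm(np.minimum(z, A @ z + q)))

# Illustrative check: for A = I and q = (-1, 1), z = (1, 0) solves the LCP.
A = np.eye(2)
q = np.array([-1.0, 1.0])
```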
As in [44], the following three examples are used.
Example 4.1. ([20]) Consider the LCP(A,q), where the matrix A=ˆA+μIm2 (μ≥0) with

ˆA=Tridiag(−Im,S,−Im)∈Rm2×m2, S=tridiag(−1,4,−1)∈Rm×m,

and q=−Az∗∈Rm2 with z∗=(1,2,1,2,⋯,1,2,⋯)⊤ being the unique solution of the LCP(A,q) (1.1).
Example 4.2. ([20]) Consider the LCP(A,q), where the matrix A=ˆA+μIm2 (μ≥0) with

ˆA=Tridiag(−1.5Im,S,−0.5Im)∈Rm2×m2, S=tridiag(−1.5,4,−0.5)∈Rm×m,

and q=−Az∗∈Rm2 with z∗=(1,2,1,2,⋯,1,2,⋯)⊤ being the unique solution of the LCP(A,q) (1.1).
Example 4.3. (Black-Scholes American option pricing). In [50], the original Black-Scholes-Merton model changes to the following partial differential complementarity system
The initial and boundary conditions of the American put option price y(x,τ) become y(x,0)=g(x,0) and limx→±∞y(x,τ)=limx→±∞g(x,τ), where g(x,τ) is the transformed payoff function. For the price interval [a,b], (a,b) equals (−1.5,1.5), (−2,2) or (−4,4). Let ϑ and η be the number of time steps and the number of x-nodes, respectively, and let σ and T be the fixed volatility and the expiry time. According to [50], discretizing (4.1) yields the LCP:
with A=tridiag(−τ,1+2τ,−τ) and τ=Δt/(Δx)2, where Δt=0.5σ2T/ϑ and Δx=(b−a)/η denote the time step and the price step, respectively. Then, employing the transformation z=u−g and q=Ag−d in (4.2), the American option pricing problem can be rewritten as the LCP (1.1). In our numerical computations, we take g=0.5z∗ with z∗=(1,0,1,0,⋯,1,0,⋯)⊤, and the vector d is chosen such that d=Az∗−w∗, where w∗=(0,1,0,1,⋯,0,1,⋯)⊤. The involved parameters are detailed in Table 3.
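The LCP data of Example 4.3 can be assembled as follows (function name ours). With g=0.5z∗ and d=Az∗−w∗ as described above, q=Ag−d=−0.5Az∗+w∗, so z=0.5z∗ solves the resulting LCP(A,q), as the final assertion verifies:

```python
import numpy as np

def build_pricing_lcp(n, tau):
    """A = tridiag(-tau, 1 + 2*tau, -tau), q = A g - d with g = 0.5 z*,
    d = A z* - w*, z* = (1,0,1,0,...) and w* = (0,1,0,1,...)."""
    A = (1 + 2 * tau) * np.eye(n) - tau * np.eye(n, k=-1) - tau * np.eye(n, k=1)
    z_star = np.tile([1.0, 0.0], (n + 1) // 2)[:n]
    w_star = 1.0 - z_star
    d = A @ z_star - w_star
    q = A @ (0.5 * z_star) - d
    return A, q, 0.5 * z_star

A, q, z = build_pricing_lcp(8, tau=0.5)
assert np.allclose(np.minimum(z, A @ z + q), 0.0)   # z and Az + q are complementary
```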
As shown in [44], the NMGS method performs best among the six methods tested there when solving the LCP(A,q) in the three examples. Therefore, in this paper we focus on comparing the NMGS method of [44] with the NRATMGS method. For the NMGS iteration method, Ω=DA is used [44]. For the NRATMGS method, we take Ω1=Ω3=DA, Ω2=Ω4=0, M2=A, N2=0 and α=β=1. In addition, the experimentally optimal parameter θexp in the NRATMGS iteration method is obtained by minimizing the iteration count over θ ranging from 0 to 2, with step size 0.1 for Examples 4.1 and 4.2 and step size 0.01 for Example 4.3. For the sake of fairness, each method is run 10 times and the average computing time is reported as the CPU time. Both methods start from the initial vectors z0=z1=(1,0,1,0,⋯,1,0,⋯)⊤ and stop if RES(zk)<10−5 or the prescribed maximal iteration number kmax=500 is exceeded. The involved linear systems are solved by MATLAB's backslash operator "∖". Numerical results for Examples 4.1–4.3 are reported in Tables 1–3, from which it follows that the NRATMGS method outperforms the NMGS method (and the other methods tested in [44]) in terms of both iteration count and CPU time when the parameter θexp is selected appropriately.
5. Conclusions
In this paper, by applying matrix splitting, the relaxation technique and the two-sweep iteration form to the new modulus-based matrix splitting formula, we have proposed a new relaxed acceleration two-sweep modulus-based matrix splitting (NRATMMS) iteration method for solving the LCP(A,q) (1.1). We investigated the convergence of the NRATMMS iteration method when the system matrix A of the LCP(A,q) (1.1) is an H+-matrix. Numerical experiments illustrate that the NRATMMS iteration method is effective and can be superior to some existing methods. The theoretical choice of optimal parameters, however, requires further investigation.
Acknowledgments
The authors are grateful to the five reviewers and the editor for their helpful comments and suggestions that have helped to improve the paper. This research is supported by the National Natural Science Foundation of China (12201275, 12131004), the Ministry of Education in China of Humanities and Social Science Project (21YJCZH204), the Project of Liaoning Provincial Federation Social Science Circles (2023lslqnkt-044, 2022lslwtkt-069), the Natural Science Foundation of Fujian Province (2021J01661) and the Ministry of Science and Technology of China (2021YFA1003600).
Conflict of interest
The authors declare that there is no conflict of interest.