1. Introduction
The total least squares with linear equality constraint (TLSE) problem is stated as
where A∈Rq×n, b∈Rq, d∈Rp, the matrix Q∈Rp×n has full row rank and the stacked matrix [QT,AT]T has full column rank. As proved in Eq (17) in [1], if the following genericity condition holds
then the TLSE problem guarantees the existence of the unique solution
where
The TLSE problem reduces to the total least squares (TLS) problem [2,3] when Q=0 and d=0, to the least squares problem with equality constraint [4,5,6,7,8] when G=0, and to the mixed least squares-total least squares problem [3,9] when Q=0, d=0 and some columns of G are zero. The TLSE problem was first introduced by Dowling et al. [10] in 1992, who showed how to solve it by using the SVD and QR matrix factorizations, whereas Schaffrin and Felus [11,12] investigated an iterative method of constrained total least squares estimation and an algorithmic approach to the TLSE problem with linear and quadratic constraints by using the Euler-Lagrange theorem. Liu et al. [13] suggested a QR-based inverse iteration method, and Liu and colleagues [1,14] presented the perturbation analysis and condition numbers of the TLSE problem.
Condition numbers play a vital role in estimating the forward errors of algorithms [15,16,17]. Recent studies of condition numbers for related problems, such as the TLS, TLSE, multidimensional, mixed least squares, truncated and scaled TLS problems, can be found in [1,14,18,19,20,21,22,23,24]. Structured TLS problems have received significant attention in recent years (see [25,26,27]). This has shifted many authors' attention toward structured condition numbers, including those of the TLS problem [28,29], the truncated TLS problem [30], the scaled TLS problem [31] and the mixed LS-TLS problem [32]. For structured problems, the analysis of structured perturbations of the input data is essential: structure-preserving methods keep the underlying matrix structure intact, which can improve the accuracy and efficiency of the computation, and this motivates the study of the structured TLSE problem.
As far as we know, no work has been done on structured condition numbers for the TLSE problem. The main purpose of this work is therefore to study structured condition numbers for TLSE problems, their relationships to the unstructured condition numbers, and their statistical estimation. In certain situations, the exact computation of unstructured and structured condition numbers is impractical for analyzing the forward error bound, which makes a reliable statistical condition estimation of both attractive and interesting.
Particularly, we will derive the unstructured condition numbers for a linear function of the TLSE problem by using the dual technique in Section 3. The dual technique was first introduced in [33] for the mixed and componentwise condition numbers of the least squares problem. Later, it was applied to get the mixed and componentwise condition numbers of the total least squares problem [28], the weighted least squares problem [34] and the constrained and weighted least squares problem [35]. As described in [33], the dual technique allows us to derive condition numbers by maximizing a linear function over a space of smaller dimension than the data space. Furthermore, in Section 4, the explicit expressions of relevant structured condition numbers are provided, and the links between the unstructured condition numbers for the TLSE problem and their corresponding structured counterparts are investigated. We also discuss how to recover the expressions of structured condition numbers for the solution of the TLS problem with the help of the derivative of the TLSE problem. Considering that it is expensive to compute these condition numbers, we consider the statistical estimation of both structured and unstructured condition numbers by using the small-sample statistical condition estimation (SSCE) method [36] and design three algorithms in Section 5. Meanwhile, in Section 5, we also provide some numerical findings and demonstrate the accuracy of the proposed algorithms by considering numerical examples. Moreover, Section 2 describes some useful preliminaries, and Section 6 gives some brief conclusions.
2. Basic notations
In this section, we recall some necessary definitions and results about dual techniques, which will be used throughout the paper.
2.1. Dual techniques
Consider a linear operator J:X→Y, between two Euclidean spaces X and Y with the scalar products ⟨⋅,⋅⟩X and ⟨⋅,⋅⟩Y, respectively. Denote the corresponding norms by ‖⋅‖X and ‖⋅‖Y, respectively. Here we provide definitions for "adjoint operator" and "dual norm".
Definition 2.1. The adjoint operator J∗:Y→X of J is defined as follows:
where (x,y)∈X×Y.
Definition 2.2. The dual norm ‖⋅‖X∗ of ‖⋅‖X is
It is known that the dual norms of the standard vector norms for the canonical scalar product in Rn are provided by:
For the scalar product ⟨A,B⟩=trace(ATB) on Rm×n, the induced matrix norm is the Frobenius norm, which is self-dual: ‖A‖F∗=‖A‖F.
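Both facts are easy to check numerically; a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 3))

# Scalar product <A, B> = trace(A^T B), computable entrywise as sum(A * B)
assert np.isclose(np.trace(A.T @ B), np.sum(A * B))

# The induced matrix norm is the Frobenius norm, which is its own dual
assert np.isclose(np.sqrt(np.trace(A.T @ A)), np.linalg.norm(A, 'fro'))
```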
Suppose that ‖⋅‖X,Y is an operator norm induced by ‖⋅‖X and ‖⋅‖Y for the linear operator J from X to Y. Assume that ‖⋅‖Y∗,X∗ is an operator norm induced by dual norms ‖⋅‖X∗ and ‖⋅‖Y∗ for the linear operator from Y to X.
The above discussion implies the following results [33].
Lemma 2.3. Assume that J is a linear operator from X to Y; then,
If the Euclidean space Y∗ has fewer dimensions than X, it is preferable to compute ‖J∗‖Y∗,X∗ in place of ‖J‖X,Y, as noted in [33].
According to [15], the absolute condition number of φ at y∈X is defined as
where φ is Fréchet differentiable in a neighborhood of y∈X and dφ(y) denotes the Fréchet differential of φ at y. The relative normwise condition number for nonzero φ is given by
By using Lemma 2.3, we can define κ in terms of an adjoint operator and a dual norm as follows:
Let X=Rn be the data space for the componentwise metric, and let Xy denote the subset of all elements dy∈Rn satisfying dyi=0 whenever yi=0, 1≤i≤n, for a given input y∈Rn. The perturbation dy∈Xy of y can then be measured by applying the following componentwise norm with respect to y:
Equivalently,
By Eq (2.16) in [28], the dual norm of (2.4) can be written as
Using the above componentwise norm, we can rewrite the condition number κ.
Lemma 2.4. [28,33] Given the above assumptions and the componentwise norm defined in (2.4), the condition number κ can be expressed as
where ‖⋅‖c∗ is given by (2.5).
Next, we present some necessary results for the TLSE problem, which will be used throughout the paper.
Lemma 2.5. Let
Consider the following linear function φ of the TLSE solution:
where L∈Rl×n. By [14, Theorem 3.2], under the genericity assumption (1.2), φ:Rm×n×Rm→Rl is a continuous map. Further, φ is Fréchet differentiable at (K,f); its Fréchet derivative is
where dK∈Rm×n, df∈Rm×1, tT=rT[−(˜A˜Q†),Iq] and r=Ax−b.
Using the 'vec' operator and applying vec(AXB)=(BT⊗A)vec(X), we obtain
where
Using (2.3) and (2.8), the absolute normwise condition number of φ for the TLSE solution can be expressed as follows:
The relative normwise condition number corresponding to κ(K,f) is given by
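The identity vec(AXB)=(BT⊗A)vec(X) used above, with the column-stacking 'vec' operator, can be verified numerically; a minimal NumPy sketch with arbitrary illustrative sizes:

```python
import numpy as np

# Column-stacking 'vec' operator
vec = lambda M: M.reshape(-1, order='F')

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

# vec(A X B) = (B^T kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```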
3. Unstructured condition number of TLSE problem
When the data are sparse or poorly scaled, componentwise perturbation analysis is more appropriate for investigating the conditioning of the TLSE problem. Using the dual techniques discussed in the previous section, we derive explicit expressions for the unstructured mixed and componentwise condition numbers of the TLSE problem in this section. Additionally, we demonstrate the mathematical equivalence between the derived expressions and the earlier ones [1]. Before moving on to the main results, we first provide the following lemma.
Lemma 3.1. The adjoint operator of the Fréchet derivative ℵ(dK,df) in (2.7) is given by
Proof. Let ℵ1(u) and ℵ2(u) be the first and second terms in the sum (2.7), respectively. For any u∈Rl, using the scalar product in the matrix space, we get the following expression:
For ℵ2, we have
Let
then
which completes the proof.

Now, we present an explicit expression of the condition number κ in (2.3) by applying the dual norm in the solution space.
Theorem 3.2. The condition number (2.3) for the linear function φ of the TLSE solution is expressed as
where
and DX denotes the diagonal matrix diag(vec(X)) for any matrix X.
Proof. Let dkij and dfi be the entries of dK and df, respectively. Thus, using (2.5), we obtain
Using Lemma 3.1, we get the following:
where tj is the jth component of t. Considering (3.1), it can be verified that xj(2‖r‖−22HATrtT− [Q†A,HAT])ei−tiHTej is the (m(j−1)+i)th column of the n×(mn) matrix V. Thus, the above expression equals
Then, by Lemma 2.4, we get the required result.
Using Theorem 3.2, we can easily obtain the explicit expressions of the mixed condition number for the linear function φ of the TLSE solution.
Corollary 3.3. When the infinity norm is taken as the norm in the solution space Y, under the same assumption as in Theorem 3.2, we obtain
If the infinity norm is selected as the norm in the solution space Rn, the corresponding mixed condition number is given by
Using Theorem 3.2, we can also find the explicit expressions of the componentwise condition number for the linear function φ of the TLSE solution.
Corollary 3.4. Consider the componentwise norm on the solution space given by
The componentwise condition number for the linear function φ of the TLSE solution has the following expression:
By applying the 2-norm to the solution space, we get an upper bound for the relevant condition number in terms of the 2-norm.
Corollary 3.5. When the 2-norm is used in the solution space, we obtain
Proof. If ‖⋅‖Q=‖⋅‖2, then ‖⋅‖∗Q=‖⋅‖2. Utilizing Theorem 3.2, we obtain the following
According to [37], for any matrix W, ‖W‖2,1=max‖u‖2=1‖Wu‖1=‖Wˆu‖1, where ˆu∈Rl is a unit 2-norm vector. Using ‖ˆu‖1≤√l‖ˆu‖2, we get
Substituting the above W with [VDK,SDf]TLT, we have
which implies (3.5).
Additionally, we utilize the dual techniques to derive the condition number expressions, which allows us to reduce the computational complexity of the problem. This is possible because the number of columns in the matrix expression of ℵ is often less than the number of rows.
Remark 3.6. Using the Kronecker product property and the fact that
as shown in Eq (3.3) [14], we have
Applying these two facts together with (3.2) and (3.4) for the case where L=In allows us to recover the expressions of normwise, mixed and componentwise condition numbers of the TLSE problem, which are given in [1,Theorem 5].
The following corollary gives upper bounds for κm and κc that are free of Kronecker products; it follows from the triangle inequality, so its proof is omitted. Note that the following relationship holds for any matrix W∈Rp×q and diagonal matrix Dv∈Rq×q: ‖WDv‖∞=‖|WDv|‖∞=‖|W||Dv|‖∞=‖|W||Dv|e‖∞=‖|W||v|‖∞, where e=[1,…,1]T∈Rq.
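The identity ‖WDv‖∞=‖|W||v|‖∞ lets one evaluate such bounds without forming the diagonal matrix explicitly; a minimal NumPy sketch with random W and v for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 6))
v = rng.standard_normal(6)
Dv = np.diag(v)

# ||W Dv||_inf equals the largest entry of the vector |W| |v|,
# so the diagonal matrix Dv never needs to be formed.
lhs = np.linalg.norm(W @ Dv, np.inf)
rhs = np.linalg.norm(np.abs(W) @ np.abs(v), np.inf)
assert np.isclose(lhs, rhs)
```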
Corollary 3.7. The mixed and componentwise condition numbers for the linear function φ of the TLSE solution can be bounded as follows:
4. Structured condition numbers of TLSE problem
In this section, we study the sensitivity of a linear function of the structured TLSE solution to perturbations on the data k and f, which is given below:
such that φs(k,f)=Lx=L(Q†Ab+HATd) and x is the solution of the structured TLSE problem. Assume that K∈S is a linearly structured matrix (e.g., a Toeplitz matrix), where the set S of linearly structured matrices has dimension θ and there exists a unique vector k=[k1,…,kθ]T such that
where S1,…,Sθ form a basis of S. Note that
and
where ΦsK=[vec(S1),vec(S2),⋯,vec(Sθ)]; by [31, Theorem 4.1], ΦsK has full column rank with orthogonal columns and at most one nonzero entry in each row. Consider the fact that
when we restrict the perturbations [ΔK,Δf] to the structure of [K,f], i.e., vec([ΔK,Δf])=ΦsK,fϵ, where ϵ∈Rθ+m.
The following structured absolute normwise condition number of φs can be obtained by using (2.3) and (2.8)
where
The structured relative normwise condition number corresponding to κs(k,f) is expressed as
which is Kronecker-product-free and can be computed efficiently with less storage.
With the help of (2.7), we now show that φs given in (4.1) is Fréchet differentiable at (k,f) and derive its Fréchet derivative.
Lemma 4.1. Assume the following linear function φs of the TLSE solution
where L∈Rl×n. By [14, Theorem 3.2], under the genericity assumption (1.2), φs:Rθ×Rm→Rl is a continuous map. Further, φs is Fréchet differentiable at (k,f); its Fréchet derivative is
where U=[u1, …,uθ]∈Rn×θ, ui=(2‖r‖−22HATrtT−[Q†A,HAT])Six−HSTit,dk∈Rθ and df∈Rm.
Lemma 4.2. The adjoint operator of the Fréchet derivative ℵs(dk,df) in (4.4) is given by
Theorem 4.3. The condition number of the linear function φs for the structured TLSE problem can be deduced from (3.1) as follows:
where
With the help of Theorem 4.3, we can simply determine the structured mixed condition number for the linear function φs of the TLSE solution.
Corollary 4.4. When the infinity norm is taken as the norm in the solution space Y, under the same assumption as in Theorem 4.3, we get
If the infinity norm is selected as the norm in the solution space Rn, the corresponding structured mixed condition number is given by
Using the 2-norm on the solution space, we derive an upper bound for the associated structured condition number in terms of the 2-norm. Since the proof is similar to that of Corollary 3.5, we do not repeat it here.
Corollary 4.5. When the 2-norm is used in the solution space, we get
Corollary 4.6. Assume that (3.3) represents the componentwise norm in the solution space. The structured componentwise condition number for the linear function φs of the TLSE solution has the following two expressions:
For a linearly structured matrix given by (4.2), we verify that the structured absolute normwise condition number κs(k,f), mixed condition number κs,m(k,f) and componentwise condition number κs,c(k,f) are bounded by the unstructured condition numbers κ(K,f), κm(K,f) and κc(K,f), respectively.
Theorem 4.7. Using the notations above, we have that κs(k,f)≤κ(K,f). Moreover, suppose that the basis {S1,S2,…,Sθ} for S satisfies |K|=∑θi=1|ki||Si| for any K∈S; then,
Proof. The matrix ΦsK is column orthogonal according to [31,Theorem 4.1]. Therefore, ‖ΦsK‖2=1 and it is simple to observe that κs(k,f)≤κ(K,f) by comparing their expressions. By applying monotonicity of the infinity norm and using the assumption that
we get the following result.
Similarly, we can prove that κs,c(k,f)≤κc(K,f).
Remark 4.8. By utilizing the intermediate results of Lemma 4.1, we can retrieve the structured condition numbers for the TLS problem [28]. Suppose that Q=ΔQ=0 and d=Δd=0; we have
2‖r‖−22ATrrT=−2xˉrT/(1+xTx) and
Considering the above fact and using (4.4), we obtain
ˉU=[ˉu1,…,ˉuθ]∈Rn×θ, ˉui=−(AT+2xˉrT/(1+xTx))Six+STiˉr, da∈Rθ and db∈Rm, where the latter is just the result in [28, Lemma 3.2], with which we can recover the structured condition numbers for the TLS problem [28].
5. Numerical experiments
In this section, we study the statistical estimation of the unstructured and structured condition numbers for the TLSE problem before presenting specific examples. We first construct two algorithms to estimate the unstructured condition numbers. The first one, outlined in Algorithm A, is based on the SSCE method [36], which has been used for various matrix problems [29,30,38,39,40,41,42]; it estimates the unstructured normwise condition number of the TLSE problem. The second one, outlined in Algorithm B, is also based on [36] and provides statistical estimates of the unstructured mixed and componentwise condition numbers.
Denote by κTLSEi(A,d) the normwise condition number of the function zTix, where zi's are from the unit n-sphere Sn−1 and are orthogonal. From (2.9), we have
where
The analysis in [36] shows that
where NTLSE,(γ)SCE:=(ωγ/ωβ)√(‖σ1‖22+‖σ2‖22+⋯+‖σγ‖22)=‖κTLSE,(γ)abs‖F is a good estimate of the normwise condition number (2.9). In the above expression, ωβ is the Wallis factor, with ω1=1, ω2=2/π and, for β>2,
The Wallis factor can be approximated by
with high accuracy. As a matter of fact, we can devise Algorithm A.
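The Wallis factor and its approximation can be sketched in Python; the recurrence ωβ=ωβ−2·(β−2)/(β−1) with ω1=1 and ω2=2/π matches the definition above, and the closed form ωβ≈√(2/(π(β−1/2))) is the standard approximation from the SSCE literature [36]:

```python
import math

def wallis_exact(beta):
    # omega_1 = 1, omega_2 = 2/pi, omega_b = omega_{b-2} * (b-2)/(b-1) for b > 2
    w = 1.0 if beta % 2 else 2.0 / math.pi
    for b in range((3 if beta % 2 else 4), beta + 1, 2):
        w *= (b - 2) / (b - 1)
    return w

def wallis_approx(beta):
    # Closed-form approximation, accurate already for moderate beta
    return math.sqrt(2.0 / (math.pi * (beta - 0.5)))

for beta in (10, 100, 1000):
    exact, approx = wallis_exact(beta), wallis_approx(beta)
    print(beta, exact, approx, abs(exact - approx) / exact)
```

The relative error of the approximation decreases as β grows, which is why it can safely replace the exact recurrence in the algorithms below.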
Algorithm A (SSCE method for the unstructured normwise condition number)
(1) Generate matrices [ΔK1,Δf1],…,[ΔKγ,Δfγ] with each entry in N(0,1), and orthonormalize the matrix
to obtain [τ1,τ2,…,τγ] via the modified Gram-Schmidt orthogonalization process. Each τi can be converted into the corresponding matrices [ΔKi,Δfi] by applying the unvec operation.
(2) Let β=m+mn. Approximate ωβ and ωγ by using (5.3).
(3) For i=1,2,…,γ, compute
(4) Compute the absolute condition vector by using (5.1), where the square operation is applied to each entry of σi,i=1,2,…,γ and the square root is also applied componentwise.
(5) Estimate the normwise condition number (2.9) by using (5.2).
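The SSCE recipe behind Algorithm A can be sketched for a generic differentiable map; the following Python outline is ours (finite-difference directional derivatives stand in for the exact Fréchet derivatives of the TLSE solution, and the function names and step size are illustrative assumptions):

```python
import numpy as np

def wallis(b):
    # Wallis factor: w1 = 1, w2 = 2/pi, w_b = w_{b-2} * (b-2)/(b-1) for b > 2
    w = 1.0 if b % 2 else 2.0 / np.pi
    for k in range((3 if b % 2 else 4), b + 1, 2):
        w *= (k - 2) / (k - 1)
    return w

def ssce_normwise(g, y, gamma=3, delta=1e-7, seed=0):
    """Estimate the normwise condition of g at y from gamma random directions."""
    rng = np.random.default_rng(seed)
    beta, g0 = y.size, g(y)
    # Orthonormalize gamma random Gaussian directions (QR in place of
    # modified Gram-Schmidt)
    Q, _ = np.linalg.qr(rng.standard_normal((beta, gamma)))
    # sigma_i ~ derivative of g at y in direction tau_i (finite difference)
    sigmas = [(g(y + delta * Q[:, i]) - g0) / delta for i in range(gamma)]
    # Componentwise absolute condition vector, then its 2-norm
    cond_vec = (wallis(gamma) / wallis(beta)) * np.sqrt(sum(s**2 for s in sigmas))
    return np.linalg.norm(cond_vec)

# Toy check: for the linear map b -> A^{-1} b with gamma = beta, the random
# directions span the whole data space, so the estimate reproduces ||A^{-1}||_F
A = np.array([[3.0, 1.0], [1.0, 2.0]])
est = ssce_normwise(lambda b: np.linalg.solve(A, b), np.array([1.0, 1.0]), gamma=2)
assert np.isclose(est, np.linalg.norm(np.linalg.inv(A), 'fro'), rtol=1e-4)
```

In the TLSE setting one would instead evaluate the Fréchet derivative formula of Lemma 2.5 at the γ sampled directions, as Algorithm A prescribes.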
Algorithm B (SSCE method for the unstructured mixed and componentwise condition numbers)
(1) Generate matrices [ΔK1,Δf1],[ΔK2,Δf2],…,[ΔKγ,Δfγ] with each entry in N(0,1) and orthonormalize the matrix
to obtain [τ1,τ2,…,τγ] via the modified Gram-Schmidt orthogonalization process. Apply the unvec operation to convert each τi into the corresponding matrix [~ΔKi,~Δfi], and set [ΔKi,Δfi] to be [~ΔKi,~Δfi] multiplied componentwise by [K,f].
(2) Assume that β=m(n+1). Approximate ωβ and ωγ by using (5.3).
(3) For i=1,2,…,γ, compute
Using the approximations for ωβ and ωγ, compute the absolute condition vector
(4) Compute the relative condition vector CTLSE,(γ)rel=CTLSE,(γ)abs/x. Then compute the mixed and componentwise condition estimates mTLSE,(γ)SCE and cTLSE,(γ)SCE as follows:
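Step (4) can be sketched with toy numbers; the formulas below are a plausible reading of the estimators (mixed estimate ‖C_abs‖∞/‖x‖∞ and componentwise estimate ‖C_abs/x‖∞, the usual SSCE choices), not a verbatim transcription of the algorithm:

```python
import numpy as np

def mixed_and_componentwise(c_abs, x):
    # Mixed estimate: inf-norm of the absolute condition vector over ||x||_inf
    m_est = np.linalg.norm(c_abs, np.inf) / np.linalg.norm(x, np.inf)
    # Componentwise estimate: inf-norm of the entrywise ratio C_abs / |x|
    c_est = np.linalg.norm(c_abs / np.abs(x), np.inf)
    return m_est, c_est

# Toy absolute condition vector and solution (illustrative values only)
c_abs = np.array([0.3, 0.02, 1.5])
x = np.array([2.0, 0.1, 3.0])
m_est, c_est = mixed_and_componentwise(c_abs, x)
print(m_est, c_est)
```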
On the basis of the SSCE method [36], we propose Algorithm C to estimate the structured normwise, mixed and componentwise condition numbers.
Algorithm C (SSCE method for the structured condition numbers)
(1) Generate matrices [Δk1,Δk2,…,Δkγ] and [Δf1,Δf2,…,Δfγ] with entries in N(0,1), where Δki∈Rθ and Δfi∈Rm. Orthonormalize the below matrix
to get an orthonormal matrix [ξ1,ξ2,…,ξγ] by using the modified Gram-Schmidt orthogonalization technique, where each ξi can be converted into the corresponding vector [Δk⊤i,Δf⊤i]⊤ by applying the unvec operation.
(2) Let α=θ+m. Approximate ωα and ωγ by using (5.3).
(3) For j=1,2,…,γ, compute yj from (5.4). Estimate the absolute condition vector
(4) Estimate the structured normwise condition estimation as follows:
(5) Compute the structured mixed condition estimation mSTLSE,(γ)SCE and structured componentwise condition estimation cSTLSE,(γ)SCE as follows:
Moving forward, we illustrate four specific examples. The first compares the unstructured condition numbers with our SSCE-based estimates and examines the over-estimation ratios of Algorithms A and B. The second presents the efficiency of the statistical estimators of the structured normwise, mixed and componentwise condition numbers; the third compares the structured and unstructured condition numbers; and the fourth checks the over-estimation ratios obtained by applying Algorithm C in association with the structured condition numbers.
Example 5.1. To compare the unstructured normwise, mixed and componentwise condition numbers and interpret the effectiveness of Algorithms A and B, we employ random TLSE problems generated by the method given in [18]. We take a random matrix [A,b], and the matrix ˜Q=[Q,d] is constructed as follows:
where Y=Ip−2yyT, Z=In+1−2zzT, and y∈Rp, z∈Rn+1 are random unit vectors, and D denotes the p×(n+1) diagonal matrix with condition number κ˜Q. For Algorithms A and B, we set q=300, p=75 and n=225, and we create various TLSE problems for each chosen κ˜Q. The matrix L is used to select components of the solution. For instance, when L=In (l=n), all n components of the solution x are selected equally. When L=e⊤i (l=1), the ith row of In is chosen and only the ith component of the solution is selected. Assume that xmax and xmin denote the largest and smallest elements of x in absolute value, respectively. We select the following L matrices for our condition numbers.
Here, max and min are the indices for xmax and xmin, respectively. As a result, the components xmax and xmin, the whole x, and the subvector [x1 x2]T, are chosen in accordance with the following four matrices.
From Table 1, the mixed and componentwise condition numbers may convey the true conditioning of this TLSE problem more directly than the normwise condition number. We also find that the SSCE-based estimates produced by Algorithms A and B are accurate.
The rest of this part evaluates the efficiency of the over-estimation ratios of Algorithms A and B. Assume random perturbations
and fix
When the perturbations ΔQ, ΔA, Δd and Δb are small enough in the Frobenius norm, under the genericity condition (1.2), the perturbed TLSE problem has the following unique solution, denoted by x+Δx:
where the perturbations ΔA of A, ΔQ of Q, Δb of b, and Δd of d are represented by
To show the efficiency of the unstructured over-estimation ratios of Algorithms A and B, we determine the following over-estimation ratios:
To carry out the experiments, we generated 500 TLSE problems, where κTLSE,(γ)SCE, mTLSE,(γ)SCE and cTLSE,(γ)SCE are the outcomes of Algorithms A and B. Generally, ratios in (0.1,10) are acceptable [37, Chapter 19]. Figure 1 indicates that the mixed condition estimation rm and the componentwise condition estimation rc are more effective than rn, which may significantly overestimate the actual relative normwise error.
Regarding the structured TLSE problem, it is natural to require that the perturbation ΔK have the same structure as K. For Toeplitz matrices, the assumption
for θ=m+n−1 is satisfied, when
where the MATLAB-routine notation K=toeplitz(Tc,Tr)∈S denotes a Toeplitz matrix with Tc∈Rm as its first column and Tr∈Rn as its first row, and K=∑θi=1kiSi, where k=[TTc,Tr(2:end)]T∈Rm+n−1.
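The parametrization K=∑θi=1kiSi with k=[TTc,Tr(2:end)]T can be checked numerically; below is a small NumPy sketch with a hand-rolled MATLAB-style toeplitz builder for self-containment (each basis matrix Si carries ones on a single diagonal):

```python
import numpy as np

def matlab_toeplitz(tc, tr):
    # MATLAB-style toeplitz: tc is the first column, tr is the first row
    m, n = len(tc), len(tr)
    return np.array([[tc[i - j] if i >= j else tr[j - i]
                      for j in range(n)] for i in range(m)])

rng = np.random.default_rng(3)
m, n = 5, 4
tc = rng.standard_normal(m)
tr = rng.standard_normal(n)
tr[0] = tc[0]                                   # corner entries must agree
K = matlab_toeplitz(tc, tr)

# Parameter vector k = [tc; tr(2:end)] of length theta = m + n - 1
k = np.concatenate([tc, tr[1:]])
S = [np.eye(m, n, k=-i) for i in range(m)]      # main and subdiagonals
S += [np.eye(m, n, k=j) for j in range(1, n)]   # superdiagonals
K_rebuilt = sum(ki * Si for ki, Si in zip(k, S))
assert np.allclose(K, K_rebuilt)

# Phi_K^s = [vec(S_1), ..., vec(S_theta)] satisfies vec(K) = Phi_K^s k; its
# columns are orthogonal since each entry of K lies on exactly one diagonal
Phi = np.column_stack([Si.reshape(-1, order='F') for Si in S])
assert np.allclose(K.reshape(-1, order='F'), Phi @ k)
```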
Example 5.2. This example comes from a signal restoration application and is derived from [25]. Assume that η=1.21 and ν=4. The convolution matrix ˉK is an m×(m−2ν) Toeplitz matrix with the first column
and ki1=0 for the remaining entries of the first column; in the first row, only k11 is nonzero. Now, we generate a Toeplitz matrix and a corresponding right-hand side vector in this manner:
Here, G is a random Toeplitz matrix of the same structure as ˉK, and ˉf is the vector of all ones. The elements of G and h are drawn from a standard normal distribution and scaled such that
We take ω=0.001 and m=300 in our experiment. In this example, we compare the structured normwise, mixed and componentwise condition numbers and interpret the effectiveness of Algorithm C.
From Table 2, we conclude that Algorithm C provides accurate estimates of the structured mixed and componentwise condition numbers, whereas the structured normwise condition estimation may significantly overestimate the true relative structured normwise condition number.
Example 5.3. In this example, we consider the data matrix K and the vector f from [3]:
We note that the first m−2 singular values of [K,f] are equal but larger than the (m−1)th singular value σm−1. Clearly, K is a Toeplitz matrix. Here, we fix γ=2 in all calculations for Algorithm C. For the Toeplitz matrix K and the vector f, we find that Algorithm C gives reliable componentwise condition estimations. From Table 3, we conclude that, in accordance with Theorem 4.7, the structured normwise, mixed and componentwise condition numbers κn,s(k,f), κm,s(k,f) and κc,s(k,f), respectively, are consistently smaller than the corresponding unstructured ones κn(K,f), κm(K,f) and κc(K,f) for different values of m and n.
Example 5.4. Under structured perturbations, we check the efficiency of the structured condition estimations for the TLSE problem by taking 1000 samples of the Toeplitz matrix K and the vector f as in Example 5.3. We construct the componentwise structured perturbation matrix ΔK and the perturbation vector Δf for each sample as given below:
where ε=10−8, E is a random Toeplitz matrix and g is a random vector, whose entries are uniformly distributed in the open interval (−1,1). The over-estimation ratios with respect to the componentwise structured perturbations ΔK and Δf are given below:
Hence, the structured condition estimations mSTLSE,(γ)SCE and cSTLSE,(γ)SCE are very reliable, whereas the structured normwise condition estimation κSTLSE,(γ)SCE may seriously overestimate the true relative normwise error for γ=2, as shown in Figure 2.
6. Conclusions
Using a dual technique, we have obtained explicit expressions for both the structured and unstructured condition numbers of a linear function of the TLSE solution. Additionally, we investigated how the new results relate to earlier findings, and we compared the structured and unstructured condition numbers. We showed that the previous structured condition numbers of the TLS problem can be recovered from those of the TLSE problem. To efficiently estimate the structured and unstructured normwise, mixed and componentwise condition numbers for the TLSE problem, we applied the SSCE method and constructed three algorithms. Finally, the performance of the proposed algorithms was illustrated by numerical results. We found that the structured condition numbers for the TLSE problem can be smaller than their unstructured counterparts, and that the differences can be significant. In the future, we will continue our research on this problem.
Acknowledgments
The authors are grateful to the handling editor and the anonymous referees for their constructive feedback and helpful suggestions. This work was supported by the Zhejiang Normal University Postdoctoral Research Fund (Grant No. ZC304022938), the Natural Science Foundation of China (Project No. 61976196) and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LZ22F030003.
Conflict of interest
The authors declare no conflict of interest.