
This paper presents three types of tensor Conjugate-Gradient (tCG) methods for solving large-scale linear discrete ill-posed problems based on the t-product between third-order tensors. A strategy for automatically determining a suitable regularization parameter is proposed for the tCG method in the Fourier domain (A-tCG-FFT). An improved version and a preconditioned version of the tCG method are also presented. The discrepancy principle is employed to determine a suitable regularization parameter. Several numerical examples in image and video restoration show the effectiveness of the proposed tCG methods.
Citation: Hong-Mei Song, Shi-Wei Wang, Guang-Xin Huang. Tensor Conjugate-Gradient methods for tensor linear discrete ill-posed problems[J]. AIMS Mathematics, 2023, 8(11): 26782-26800. doi: 10.3934/math.20231371
In this paper, we consider the solution of large minimization problems of the form
$$\min_{X\in\mathbb{R}^{m\times p\times n}}\|A*X-B\|_F,\qquad A=[a_{ijk}]_{i,j,k=1}^{l,m,n}\in\mathbb{R}^{l\times m\times n},\quad B\in\mathbb{R}^{l\times p\times n}, \tag{1.1}$$
where the Frobenius norms of the singular tubes of $A$ decay rapidly to zero as the index increases. In particular, $A$ has ill-determined tubal rank: many of its singular tubes are nonvanishing but have tiny Frobenius norms of different orders of magnitude. Problem (1.1) with such a tensor is called a tensor linear discrete ill-posed problem. Such problems arise, for example, in the restoration of color images and videos; see, e.g., [1,2,3,4,5]. Throughout this paper, the operation $*$ denotes the tensor t-product introduced in [1] and $\|\cdot\|_F$ denotes the tensor Frobenius norm, defined by
$$\|A\|_F=\sqrt{\sum_{i=1}^{l}\sum_{j=1}^{m}\sum_{k=1}^{n}a_{ijk}^2}. \tag{1.2}$$
We assume that the observed tensor $B\in\mathbb{R}^{l\times p\times n}$ is contaminated by an error tensor $E\in\mathbb{R}^{l\times p\times n}$, i.e.,
$$B=B^*+E, \tag{1.3}$$
where $B^*\in\mathbb{R}^{l\times p\times n}$ is the unknown and unavailable error-free tensor associated with $B$. $B^*$ satisfies $A*X^*=B^*$, where $X^*$ denotes the exact solution of problem (1.1) that we wish to recover. We assume that an upper bound for the Frobenius norm of $E$ is known, i.e.,
$$\|E\|_F\le\delta. \tag{1.4}$$
A straightforward solution of (1.1) is generally not meaningful, because the ill-posedness of $A=[a_{ijk}]_{i,j,k=1}^{l,m,n}$ causes the error $E$ to propagate into, and be severely amplified in, the computed solution. In this paper we use Tikhonov regularization to reduce this effect; it replaces (1.1) with the penalized least-squares problem
$$\min_{X\in\mathbb{R}^{m\times p\times n}}\left\{\|A*X-B\|_F^2+\mu\|X\|_F^2\right\}, \tag{1.5}$$
where μ>0 is a regularization parameter. We assume that
$$\mathcal{N}(A)\cap\mathcal{N}(I)=\{O\}, \tag{1.6}$$
where $\mathcal{N}(A)$ denotes the null space of $A$ under $*$, i.e., the set of all tensors $X$ with $A*X=O$; here $I$ is the identity tensor and $O\in\mathbb{R}^{m\times p\times n}$ is the zero tensor.
The normal equation of the minimization problem (1.5) is
$$(A^T*A+\mu I)*X=A^T*B, \tag{1.7}$$
so that
$$X_\mu=(A^T*A+\mu I)^{-1}*A^T*B \tag{1.8}$$
is the unique solution of the Tikhonov minimization problem (1.5) under the assumption (1.6).
There are many methods for solving large-scale tensor linear discrete ill-posed problems (1.1). Recently, a tensor Golub–Kahan bidiagonalization method [4] and a GMRES-type method [5] were introduced for solving large-scale linear ill-posed problems (1.1) by iteratively solving (1.5). The randomized tensor singular value decomposition (rt-SVD) method in [6] was proposed for very large data sets and shows promise for image data compression and analysis. Ugwu and Reichel [7] proposed a new randomized tensor singular value decomposition (R-tSVD), which improves the truncated tensor singular value decomposition (T-tSVD) in [1]. Kilmer et al. [2] presented a tensor Conjugate-Gradient (tCG) method for tensor linear systems $A*X=B$ corresponding to the least-squares problem (1.1), in which the regularization parameter is user-specified.
This paper extends CG methods from matrix problems to tensor problems. Matrix-based approaches to image restoration and denoising flatten the three-dimensional data of a color image into matrices that are processed in turn. Based on the t-product, we extend the matrix methods to tensors, which preserves the correlation between the slices of the three-dimensional data. The tensor problem is projected into the Fourier domain, which exploits the particular structure of the t-product. We further discuss the tCG method for the approximate solution of (1.1) in the Fourier domain, where the discrepancy principle is used to determine a suitable regularization parameter. The proposed strategy is called the tCG method with automatic determination of regularization parameters (A-tCG-FFT). A least-squares variant based on the tCG method, called A-tCGLS-FFT, and a preconditioned version of A-tCG-FFT, abbreviated A-tPCG-FFT, are also provided for (1.1). The conjugate gradient method only needs to store the current and previous gradients, so it requires little memory; moreover, each iteration uses only previously computed gradient information, so the computation can be parallelized for better efficiency. Projecting the tensor problem into the Fourier domain also avoids the time and space overhead of flattening the tensor problem into a matrix problem.
The rest of this paper is organized as follows. Section 2 introduces some symbols and preliminary knowledge that will be used in the context. Section 3 presents the A-tCG-FFT, A-tCGLS-FFT and A-tPCG-FFT methods for solving the minimization problem (1.5). Section 4 gives several examples on image and video restoration and Section 5 draws some conclusions.
This section gives some notations and definitions, and briefly summarizes some results that will be used later. For a third-order tensor A∈Rl×m×n, Figure 1 shows the frontal slices A(:,:,k), lateral slices A(:,j,:) and tube fibers A(i,j,:), and we abbreviate Ak=A(:,:,k) for simplicity. An ln×m matrix is obtained by the operator unfold(A), whereas the operator fold folds this matrix back to the tensor A, i.e.,
$$\mathrm{unfold}(A)=\begin{bmatrix}A_1\\A_2\\\vdots\\A_n\end{bmatrix},\qquad \mathrm{fold}(\mathrm{unfold}(A))=A.$$
Definition 1. Let A∈Rl×m×n, then a block-circulant matrix of A is denoted by bcirc(A), i.e.,
$$\mathrm{bcirc}(A)=\begin{bmatrix}A_1&A_n&A_{n-1}&\cdots&A_2\\A_2&A_1&A_n&\cdots&A_3\\\vdots&\ddots&\ddots&\ddots&\vdots\\A_n&A_{n-1}&\cdots&A_2&A_1\end{bmatrix}.$$
Definition 2. [1] Given two tensors A∈Rl×m×n and B∈Rm×p×n, the t-product A∗B is defined as
$$A*B=\mathrm{fold}\big(\mathrm{bcirc}(A)\,\mathrm{unfold}(B)\big)=C, \tag{2.1}$$
where C∈Rl×p×n.
The following remarks will be used in Section 3.
Remark 2.1. [8] For suitable tensors A and B, it holds that
(1) $\mathrm{bcirc}(A*B)=\mathrm{bcirc}(A)\,\mathrm{bcirc}(B)$ (a matrix-matrix product).
(2) $\mathrm{bcirc}(A^T)=\mathrm{bcirc}(A)^T$.
(3) $\mathrm{bcirc}(A+B)=\mathrm{bcirc}(A)+\mathrm{bcirc}(B)$.
Let $F_n$ be the $n$-by-$n$ unitary discrete Fourier transform matrix, i.e.,
$$F_n=\frac{1}{\sqrt{n}}\begin{bmatrix}1&1&1&\cdots&1\\1&\omega&\omega^2&\cdots&\omega^{n-1}\\1&\omega^2&\omega^4&\cdots&\omega^{2(n-1)}\\\vdots&\vdots&\vdots&\ddots&\vdots\\1&\omega^{n-1}&\omega^{2(n-1)}&\cdots&\omega^{(n-1)(n-1)}\end{bmatrix},$$
where $\omega=e^{-2\pi i/n}$. Then the tensor $\hat{A}$ is generated by applying the FFT along each tube of $A$, i.e.,
$$\mathrm{bdiag}(\hat{A})=\begin{bmatrix}\hat{A}_1&&&\\&\hat{A}_2&&\\&&\ddots&\\&&&\hat{A}_n\end{bmatrix}=(F_n\otimes I_l)\,\mathrm{bcirc}(A)\,(F_n^H\otimes I_m), \tag{2.2}$$
where $\otimes$ denotes the Kronecker product, $F_n^H$ is the conjugate transpose of $F_n$ and $\hat{A}_i$ denotes the $i$-th frontal slice of $\hat{A}$.
We also need the following remark.
Remark 2.2. [9] For appropriately sized tensors A and B,
(1) $\mathrm{bdiag}(\widehat{A*B})=\mathrm{bdiag}(\hat{A})\,\mathrm{bdiag}(\hat{B})$.
(2) $\mathrm{bdiag}(\widehat{A+B})=\mathrm{bdiag}(\hat{A}+\hat{B})$.
(3) $\mathrm{bdiag}(\widehat{A^H})=\mathrm{bdiag}(\hat{A})^H$. Additionally, if $\mathrm{bdiag}(A)$ is symmetric, then $\mathrm{bdiag}(\hat{A})$ is also symmetric.
(4) $\mathrm{bdiag}(\widehat{A^{-1}})=\mathrm{bdiag}(\hat{A})^{-1}$.
Then the t-product of A and B in (2.1) can be expressed by
$$A*B=\mathrm{fold}\Big((F_n^H\otimes I_l)\big((F_n\otimes I_l)\,\mathrm{bcirc}(A)\,(F_n^H\otimes I_m)\big)(F_n\otimes I_m)\,\mathrm{unfold}(B)\Big), \tag{2.3}$$
and (2.1) is reformulated as
$$\begin{bmatrix}\hat{A}_1&&&\\&\hat{A}_2&&\\&&\ddots&\\&&&\hat{A}_n\end{bmatrix}\begin{bmatrix}\hat{B}_1\\\hat{B}_2\\\vdots\\\hat{B}_n\end{bmatrix}=\begin{bmatrix}\hat{C}_1\\\hat{C}_2\\\vdots\\\hat{C}_n\end{bmatrix}. \tag{2.4}$$
It is easy to implement (2.1) in MATLAB. Using MATLAB notation, let $\hat{M}$ = fft(M,[ ],3) denote the tensor obtained by applying the FFT along the third dimension. Then (2.1) can be computed by taking the FFT along the tubes of $A$ and $B$ to obtain $\hat{A}$ = fft(A,[ ],3) and $\hat{B}$ = fft(B,[ ],3), forming the matrix-matrix product of each pair of frontal slices of $\hat{A}$ and $\hat{B}$,
$$\hat{C}(:,:,i)=\hat{A}(:,:,i)\,\hat{B}(:,:,i),\quad i=1,2,\ldots,n,$$
and taking the inverse FFT along the third dimension to obtain C = ifft($\hat{C}$,[ ],3). We refer to Table 1 for more notation.
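The computation just described can be packaged in a few lines of MATLAB. The following is a minimal sketch of the t-product (2.1) via the FFT; the function name tprod is ours, not part of any particular toolbox.

```matlab
function C = tprod(A, B)
% Minimal t-product C = A*B of A (l x m x n) and B (m x p x n), cf. (2.1) and (2.4).
    n  = size(A, 3);
    Ah = fft(A, [], 3);                      % FFT along the tubes (third dimension)
    Bh = fft(B, [], 3);
    Ch = zeros(size(A, 1), size(B, 2), n);
    for i = 1:n
        Ch(:,:,i) = Ah(:,:,i) * Bh(:,:,i);   % facewise matrix-matrix products
    end
    C = real(ifft(Ch, [], 3));               % inverse FFT; real part for real input tensors
end
```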
Notation | Interpretation |
A | tensor |
AT | transpose of tensors |
A−1 | inverse of tensor, and A−T=(A−1)T=(AT)−1 |
A(:,:,k) | the k-th frontal slice of tensor A |
A(i,j,:) | the tube fibers of tensor A |
Ak | A(:,:,k) |
ˆA | FFT of A along the third mode |
ˆAH | conjugate transpose of the complex tensor ˆA
unfold(A) | the block column matrix of A |
bcirc(A) | the block-circulant matrix |
I | identity tensor |
A | matrix |
I | identity matrix |
‖A‖F | the Frobenius-Norm of tensor A |
‖A‖ | the 2-Norm of matrix A |
⟨A,B⟩ | the matrix inner product ⟨A,B⟩=tr(ATB) |
A | ˆAHkˆAk+μjI |
ˆAk | ˆA(:,:,k) |
ˆAHk | conjugate transpose of the complex matrix ˆAk
∗ | t-product |
Based on the tensor conjugate gradient (tCG) method proposed by Kilmer et al. [10], this section presents a method for solving the regularized problem (1.5) by the CG process in the Fourier domain, where a suitable regularization parameter is determined automatically by the discrepancy principle. This method is abbreviated A-tCG-FFT. It improves the tCG method presented in [2], where the regularization parameter was user-specified. The following result gives the equivalent form of (1.5) in the Fourier domain.
Theorem 3.1. Let ˆA = fft(A,[ ],3), ˆB = fft(B,[ ],3) and ˆX = fft(X,[ ],3). Then, solving the tensor regularization (1.5) is equivalent to solving a sequence of regularized least-squares problems of the matrix form
$$\min_{\hat{X}_k}\left\{\|\hat{A}_k\hat{X}_k-\hat{B}_k\|^2+\mu\|\hat{X}_k\|^2\right\},\quad k=1,2,\cdots,n, \tag{3.1}$$
where ‖⋅‖ represents the 2-norm of the matrix, and ˆXk=ˆX(:,:,k), ˆAk=ˆA(:,:,k) and ˆBk=ˆB(:,:,k).
The normal equation of (3.1) can be represented as
$$(\hat{A}_k^H\hat{A}_k+\mu I)\hat{X}_k=\hat{A}_k^H\hat{B}_k. \tag{3.2}$$
Once $\hat{X}$ is computed, the approximate solution of (1.1) is obtained as X = ifft($\hat{X}$,[ ],3).
Proof. Following Remarks 2.1 and 2.2, (1.8) is represented as
$$\begin{aligned}
\mathrm{unfold}(\hat{X})&=(F_n\otimes I_m)\,\mathrm{unfold}(X)\\
&=(F_n\otimes I_m)\,\mathrm{bcirc}(A^T*A+\mu I)^{-1}\,\mathrm{unfold}(A^T*B)\\
&=\mathrm{bdiag}\big(\widehat{A^T*A+\mu I}\big)^{-1}(F_n\otimes I_m)\,\mathrm{unfold}(A^T*B)\\
&=\mathrm{bdiag}\big(\widehat{A^T*A+\mu I}\big)^{-1}(F_n\otimes I_m)\,\mathrm{bcirc}(A^T)\,\mathrm{unfold}(B)\\
&=\mathrm{bdiag}\big(\widehat{A^T*A+\mu I}\big)^{-1}\mathrm{bdiag}(\widehat{A^H})\,\mathrm{unfold}(\hat{B})\\
&=\mathrm{bdiag}\big(\hat{A}^H\hat{A}+\mu I\big)^{-1}\mathrm{bdiag}(\widehat{A^H})\,\mathrm{unfold}(\hat{B})\\
&=\mathrm{bdiag}\big((\hat{A}^H\hat{A}+\mu I)^{-1}\hat{A}^H\big)\,\mathrm{unfold}(\hat{B}).
\end{aligned}$$
Thus, by using (2.4) we have
$$\hat{X}_k=\big(\hat{A}_k^H\hat{A}_k+\mu I\big)^{-1}\hat{A}_k^H\hat{B}_k,$$
which implies (3.2).
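For moderate slice sizes, Theorem 3.1 can be exercised directly. The following MATLAB sketch solves the $n$ normal equations (3.2) with a single fixed parameter mu via backslash, instead of the inner CG iteration used in Algorithm 1; the function name and the fixed-mu simplification are ours.

```matlab
function X = tikhonov_fft(A, B, mu)
% Slice-wise Tikhonov solve of (3.2) in the Fourier domain, cf. Theorem 3.1.
    Ah = fft(A, [], 3);
    Bh = fft(B, [], 3);
    [~, m, n] = size(A);
    Xh = zeros(m, size(B, 2), n);
    for k = 1:n
        Ak = Ah(:,:,k);
        % k-th normal equation (3.2), solved directly
        Xh(:,:,k) = (Ak' * Ak + mu * eye(m)) \ (Ak' * Bh(:,:,k));
    end
    X = real(ifft(Xh, [], 3));   % back-transform: X = ifft(Xhat,[ ],3)
end
```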
Now we discuss the determination of a suitable regularization parameter for (3.1). Let ˆB∗ = fft(B∗,[],3) and ˆB = fft(B,[],3), and denote ˆB∗k=ˆB∗(:,:,k) and ˆBk=ˆB(:,:,k). The assumption in (1.4) implies that
$$\|\hat{E}_k\|\le\delta_k, \tag{3.3}$$
where $\hat{E}_k=\hat{B}_k-\hat{B}_k^*$. The availability of the bound (3.3) allows us to determine $\mu$ by the discrepancy principle. Specifically, $\mu$ is chosen so that the solution $\hat{X}_k$ of (3.1) satisfies
$$\|\hat{A}_k\hat{X}_k-\hat{B}_k\|\le\eta\delta_k, \tag{3.4}$$
where η>1 is usually a user-specified constant and is independent of δk. For more details on the discrepancy principle, see e.g., [11].
In [12,13], a-priori strategies and a-posteriori parameter choice rules for selecting the regularization parameter are used to establish convergence estimates between the exact and regularized solutions. Here, we instead obtain the regularization parameter by automatically updating the geometric sequence
$$\mu_j=\mu_0\rho^j,\quad j=0,1,2,\cdots, \tag{3.5}$$
where $0<\rho<1$ and $\mu_0=\|\hat{A}\|$. A suitable regularization parameter is then obtained by iteratively applying (3.5) until (3.4) is satisfied.
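In outline, the parameter search for one frontal slice looks as follows. This is a sketch: solve_inner is a placeholder for the inner CG solve of (3.2) described below, and our slice-wise reading takes the 2-norm of the k-th slice as the starting value.

```matlab
% Sketch of (3.4)-(3.5) for the k-th slice; Ahk, Bhk, eta, delta_k are given.
mu  = norm(Ahk);                          % starting value mu_0, cf. (3.5)
rho = 0.5;                                % decay factor, 0 < rho < 1
Xk  = zeros(size(Ahk, 2), size(Bhk, 2));
while norm(Ahk * Xk - Bhk) >= eta * delta_k   % discrepancy principle (3.4)
    mu = mu * rho;                        % mu_j = mu_0 * rho^j
    Xk = solve_inner(Ahk, Bhk, mu, Xk);   % placeholder: inner CG for (3.2)
end
```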
Algorithm 1 summarizes the A-tCG-FFT method for solving (1.5). Algorithm 1 contains two nested iterations. The outer iteration updates μ by applying (3.5) so that the discrepancy principle is satisfied. The inner iteration is to use the CG method (CG-iteration) to solve the k-th normal equation (3.2) with the value of μj determined by the outer iteration, and ⟨M,N⟩ represents the inner product between matrices M and N. The inner iteration is stopped when the norm of the residual
$$r_{k,\mu_j,i}=\hat{A}_k^H\hat{B}_k-(\hat{A}_k^H\hat{A}_k+\mu_j I)X_{k,\mu_j,i} \tag{3.6}$$
of the i-th iterate Xk,μj,i is less than a tolerance tol.
Algorithm 1: The A-tCG-FFT method for solving (1.5) |
1: Input: A∈Rm×m×n, B∈Rm×m×n, δ1,...,δn, μ0, ρ, η>1. |
2: Output: Least-squares solution X of (1.5). |
3: ˆA=fft(A,[ ],3). |
4: ˆB=fft(B,[ ],3). |
5: for k=1,...,n do |
6: Let j=0,Xk,μ0=0. |
7: while ‖ˆAkXk,μj−ˆBk‖≥ηδk do |
8: j=j+1, μj=μ0ρj, A=ˆAHkˆAk+μjI. |
9: X0=0 or X0=Xk,μj−1; R0=ˆAHkˆBk-AX0; P0=R0. |
10: for i=0,1,...,until convergence do |
11: υi=⟨Ri,Ri⟩/⟨APi,Pi⟩. |
12: Xi+1=Xi+υiPi. |
13: Ri+1=Ri−υiAPi. |
14: ωi=⟨Ri+1,Ri+1⟩/⟨Ri,Ri⟩. |
15: Pi+1=Ri+1+ωiPi. |
16: end for |
17: Xk,μj=Xi+1. |
18: end while |
19: ˆX(:,:,k)=Xk,μj. |
20: end for |
21: X=ifft(ˆX,[],3). |
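The inner CG iteration (lines 10–16 of Algorithm 1) can be written in MATLAB as follows. This is a sketch under our own naming, with an explicit iteration cap maxit and the residual test on (3.6) described above; neither is spelled out in the pseudocode.

```matlab
function X = cg_inner(Ahk, Bhk, mu, X, tol, maxit)
% Inner CG iteration of Algorithm 1 for (Ahk'*Ahk + mu*I) X = Ahk'*Bhk, cf. (3.2).
    Aop = @(V) Ahk' * (Ahk * V) + mu * V;   % apply A = Ahat_k^H Ahat_k + mu*I
    R  = Ahk' * Bhk - Aop(X);               % residual, cf. (3.6)
    P  = R;
    rr = real(R(:)' * R(:));                % <R,R>; Hermitian inner product for complex slices
    for i = 1:maxit
        Q = Aop(P);
        v = rr / real(P(:)' * Q(:));        % step length upsilon_i = <R,R>/<AP,P>
        X = X + v * P;
        R = R - v * Q;
        rrNew = real(R(:)' * R(:));
        if sqrt(rrNew) < tol, break; end    % stop when the residual norm drops below tol
        P  = R + (rrNew / rr) * P;          % direction update with omega_i = <R+,R+>/<R,R>
        rr = rrNew;
    end
end
```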
The following Corollary 3.1 justifies the two choices of the initial iterate X0 in the inner iteration of Algorithm 1.
Corollary 3.1. Let X0=0 be the initial solution of the inner iteration of Algorithm 1; then the initial residual satisfies
$$\|R_0\|=\|\hat{A}_k^H\hat{B}_k\|\le\|\hat{A}_k^H\|\,\|\hat{B}_k\|.$$
Otherwise, if X0=Xk,μj−1, then we have
$$\|R_0\|\le\Big\|\Big(1-\frac{\mu_j}{\mu_{j-1}}\Big)I\Big\|\,\|\hat{A}_k^H\hat{B}_k\|+\mathrm{tol}.$$
Proof. Recall from (3.6) that
$$r_{k,\mu_j,i}=\hat{A}_k^H\hat{B}_k-(\hat{A}_k^H\hat{A}_k+\mu_j I)X_{k,\mu_j,i}.$$
It is easy to see that
$$\|R_0\|=\|\hat{A}_k^H\hat{B}_k-(\hat{A}_k^H\hat{A}_k+\mu_j I)X_0\|=\|\hat{A}_k^H\hat{B}_k\|\le\|\hat{A}_k^H\|\,\|\hat{B}_k\|$$
for $X_0=0$. Let $X_0=X_{k,\mu_{j-1}}$; then we have
$$\begin{aligned}
\|R_0\|&=\|\hat{A}_k^H\hat{B}_k-(\hat{A}_k^H\hat{A}_k+\mu_j I)X_{k,\mu_{j-1}}\|\\
&=\|\hat{A}_k^H\hat{B}_k-(\hat{A}_k^H\hat{A}_k+\mu_j I)(\hat{A}_k^H\hat{A}_k+\mu_{j-1} I)^{-1}\hat{A}_k^H\hat{B}_k\|\\
&=\|[I-(\hat{A}_k^H\hat{A}_k+\mu_j I)(\hat{A}_k^H\hat{A}_k+\mu_{j-1} I)^{-1}]\hat{A}_k^H\hat{B}_k\|\\
&\le\Big\|\Big(1-\frac{\mu_j}{\mu_{j-1}}\Big)I\,\hat{A}_k^H\hat{B}_k\Big\|+\mathrm{tol}\\
&\le\Big\|\Big(1-\frac{\mu_j}{\mu_{j-1}}\Big)I\Big\|\,\|\hat{A}_k^H\hat{B}_k\|+\mathrm{tol}.
\end{aligned}$$
Theorem 3.2. If $\hat{X}_k^*$ is the exact solution of the symmetric positive definite equation (3.2), and $X_{k,\mu_j,i}$ and $X_{k,\mu_j,i+1}$ are generated by the inner CG-iteration of Algorithm 1, then
$$\|X_{k,\mu_j,i+1}-\hat{X}_k^*\|\le\Big(1-\frac{1}{\kappa(A)}\Big)^{1/2}\|X_{k,\mu_j,i}-\hat{X}_k^*\|, \tag{3.7}$$
where $A=\hat{A}_k^H\hat{A}_k+\mu_j I$ and $\kappa(A)=\lambda_{\max}(A)/\lambda_{\min}(A)$, with $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ the largest and smallest eigenvalues of $A$, respectively.
Theorem 3.2 establishes the convergence of the inner iteration of Algorithm 1; see [14] for a detailed proof.
In actual numerical computations, if $\hat{A}_k^H\hat{A}_k$ is singular and $\mu$ is very small, then $\hat{A}_k^H\hat{A}_k+\mu_j I$ is ill-conditioned. Inspired by the approach in [15] for the numerical stability of linear systems in matrix form, in this section we use the CGLS method instead of the CG method in Algorithm 1.
The main difference between Algorithms 2 and 1 is that Algorithm 2 updates the data-space residual $z_i=\hat{B}_k-\hat{A}_kX_{k,\mu_j,i}$ rather than the normal-equation residual (3.6) of Algorithm 1. We refer to [16] for more details on the corresponding linear system in matrix form.
Algorithm 2: The A-tCGLS-FFT method for solving (1.7) |
1: Input: A∈Rm×m×n, B∈Rm×m×n, δ1,...,δn, μ0, ρ, η>1 |
2: Output: Least-squares solution X of (1.7) |
3: ˆA=fft(A,[ ],3). |
4: ˆB=fft(B,[ ],3). |
5: for k=1,...,n do |
6: j=0,Xk,μ0=0. |
7: while ‖ˆAkXk,μj−ˆBk‖≥ηδk do |
8: j=j+1, μj=μ0ρj. |
9: A=ˆAHkˆAk+μjI, B=ˆAHkˆBk. |
10: X0=0 or X0=Xk,μj−1; z0=ˆBk−ˆAkX0; |
11: R0=ˆAHkz0−μjX0; P0=R0. |
12: for i=0,1,...,until convergence do |
13: Qi=ˆAkPi. |
14: γi=⟨Ri,Ri⟩/(⟨Qi,Qi⟩+μj⟨Pi,Pi⟩). |
15: Xi+1=Xi+γiPi. |
16: zi+1=zi−γiQi. |
17: Ri+1=ˆAHkzi+1−μjXi+1. |
18: ωi=⟨Ri+1,Ri+1⟩/⟨Ri,Ri⟩. |
19: Pi+1=Ri+1+ωiPi. |
20: end for |
21: Xk,μj=Xi+1. |
22: end while |
23: ˆX(:,:,k)=Xk,μj. |
24: end for |
25: X=ifft(ˆX,[],3). |
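A sketch of the inner CGLS iteration of Algorithm 2 in the same MATLAB style follows; again, the function name and the tol/maxit handling are our additions.

```matlab
function X = cgls_inner(Ahk, Bhk, mu, X, tol, maxit)
% Inner CGLS iteration of Algorithm 2; updates z_i = Bhat_k - Ahat_k*X_i.
    Z  = Bhk - Ahk * X;                     % data-space residual z_0
    R  = Ahk' * Z - mu * X;                 % normal-equation residual R_0
    P  = R;
    rr = real(R(:)' * R(:));
    for i = 1:maxit
        Q = Ahk * P;
        g = rr / (real(Q(:)' * Q(:)) + mu * real(P(:)' * P(:)));  % gamma_i (line 14)
        X = X + g * P;
        Z = Z - g * Q;
        R = Ahk' * Z - mu * X;              % R_{i+1} (line 17)
        rrNew = real(R(:)' * R(:));
        if sqrt(rrNew) < tol, break; end
        P  = R + (rrNew / rr) * P;          % direction update with omega_i
        rr = rrNew;
    end
end
```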
In this subsection, we consider accelerating Algorithm 1 by preconditioning. In Algorithm 1, the coefficient matrix $\hat{A}_k^H\hat{A}_k+\mu I$ of the $k$-th normal equation (3.2) is symmetric positive definite. We set $M=\hat{A}_k^H\hat{A}_k+\mu I$ and compute a Cholesky factorization of $M$, which can be done directly with the MATLAB function chol, giving $M_{\mathrm{chol}}=\mathrm{chol}(M)$. Let $M_L=M_{\mathrm{chol}}^H$ and $M_R=M_{\mathrm{chol}}$; then $M=M_LM_R$, where $M_L$ is a nonsingular lower triangular matrix and $M_R=M_L^H$. We then solve the preconditioned normal equations
$$\tilde{A}_k\tilde{X}_k=\tilde{B}_k, \tag{3.8}$$
instead of (3.2) in Algorithm 1, where $\tilde{A}_k=M_L^{-1}(\hat{A}_k^H\hat{A}_k+\mu I)M_R^{-1}$, $\tilde{X}_k=M_R\hat{X}_k$ and $\tilde{B}_k=M_L^{-1}\hat{A}_k^H\hat{B}_k$.
Let $\hat{X}_{k,i}$ and $\tilde{X}_{k,i}$ denote the $i$-th iterates for (3.2) and (3.8), respectively, in the inner CG-iteration of Algorithm 1 with a fixed regularization parameter $\mu$. Then we have
$$\tilde{R}_{k,i}=\tilde{B}_k-\tilde{A}_k\tilde{X}_{k,i}=M_L^{-1}\hat{B}_k-M_L^{-1}\hat{A}_kM_R^{-1}M_R\hat{X}_{k,i}=M_L^{-1}(\hat{B}_k-\hat{A}_k\hat{X}_{k,i})=M_L^{-1}\hat{R}_{k,i}. \tag{3.9}$$
Denote $\tilde{P}_{k,i}=M_R\hat{P}_{k,i}$ and $\hat{t}_{k,i}=M_L^{-1}\hat{R}_{k,i}$; then we have
$$\tilde{\upsilon}_{k,i}=\frac{\langle\tilde{R}_{k,i},\tilde{R}_{k,i}\rangle}{\langle\tilde{A}_k\tilde{P}_{k,i},\tilde{P}_{k,i}\rangle}=\frac{\langle M_L^{-1}\hat{R}_{k,i},M_L^{-1}\hat{R}_{k,i}\rangle}{\langle M_L^{-1}\hat{A}_kM_R^{-1}M_R\hat{P}_{k,i},M_R\hat{P}_{k,i}\rangle}=\frac{\langle M_L^{-1}\hat{R}_{k,i},M_L^{-1}\hat{R}_{k,i}\rangle}{\langle M_L^{-1}\hat{A}_k\hat{P}_{k,i},M_R\hat{P}_{k,i}\rangle}. \tag{3.10}$$
By the definition of the matrix inner product, $\langle A,B\rangle=\mathrm{tr}(A^TB)$, where $\mathrm{tr}$ denotes the trace of a square matrix, we have
$$\langle M_L^{-1}\hat{A}_k\hat{P}_{k,i},M_R\hat{P}_{k,i}\rangle=\mathrm{tr}\big((M_L^{-1}\hat{A}_k\hat{P}_{k,i})^TM_R\hat{P}_{k,i}\big)=\mathrm{tr}\big(\hat{P}_{k,i}^T\hat{A}_k^HM_L^{-T}M_R\hat{P}_{k,i}\big)=\mathrm{tr}\big(\hat{P}_{k,i}^T\hat{A}_k^H\hat{P}_{k,i}\big)=\langle\hat{A}_k\hat{P}_{k,i},\hat{P}_{k,i}\rangle. \tag{3.11}$$
According to (3.10) and (3.11), we get
$$\tilde{\upsilon}_{k,i}=\frac{\langle\hat{t}_{k,i},\hat{t}_{k,i}\rangle}{\langle\hat{A}_k\hat{P}_{k,i},\hat{P}_{k,i}\rangle}, \tag{3.12}$$
and
$$\tilde{X}_{k,i+1}=\tilde{X}_{k,i}+\tilde{\upsilon}_{k,i}\tilde{P}_{k,i},\quad M_R\hat{X}_{k,i+1}=M_R\hat{X}_{k,i}+\tilde{\upsilon}_{k,i}M_R\hat{P}_{k,i},\quad \hat{X}_{k,i+1}=\hat{X}_{k,i}+\tilde{\upsilon}_{k,i}\hat{P}_{k,i}. \tag{3.13}$$
Then, we have
$$\tilde{R}_{k,i+1}=\tilde{R}_{k,i}-\tilde{\upsilon}_{k,i}\tilde{A}_k\tilde{P}_{k,i},\quad M_L^{-1}\hat{R}_{k,i+1}=M_L^{-1}\hat{R}_{k,i}-\tilde{\upsilon}_{k,i}M_L^{-1}\hat{A}_kM_R^{-1}M_R\hat{P}_{k,i},\quad \hat{R}_{k,i+1}=\hat{R}_{k,i}-\tilde{\upsilon}_{k,i}\hat{A}_k\hat{P}_{k,i}, \tag{3.14}$$
and
$$\tilde{\omega}_{k,i}=\frac{\langle\tilde{R}_{k,i+1},\tilde{R}_{k,i+1}\rangle}{\langle\tilde{R}_{k,i},\tilde{R}_{k,i}\rangle}=\frac{\langle M_L^{-1}\hat{R}_{k,i+1},M_L^{-1}\hat{R}_{k,i+1}\rangle}{\langle M_L^{-1}\hat{R}_{k,i},M_L^{-1}\hat{R}_{k,i}\rangle}=\frac{\langle\hat{t}_{k,i+1},\hat{t}_{k,i+1}\rangle}{\langle\hat{t}_{k,i},\hat{t}_{k,i}\rangle}. \tag{3.15}$$
Finally, we have
$$\tilde{P}_{k,i+1}=\tilde{R}_{k,i+1}+\tilde{\omega}_{k,i}\tilde{P}_{k,i},\quad M_R\hat{P}_{k,i+1}=M_L^{-1}\hat{R}_{k,i+1}+\tilde{\omega}_{k,i}M_R\hat{P}_{k,i},\quad \hat{P}_{k,i+1}=M_R^{-1}M_L^{-1}\hat{R}_{k,i+1}+\tilde{\omega}_{k,i}M_R^{-1}M_R\hat{P}_{k,i}=M_R^{-1}\hat{t}_{k,i+1}+\tilde{\omega}_{k,i}\hat{P}_{k,i}. \tag{3.16}$$
Based on (3.10)–(3.16), we obtain the preconditioned version of Algorithm 1. Algorithm 3 lists the A-tPCG-FFT method.
Algorithm 3: The A-tPCG-FFT method for solving (1.5) |
1: Input: A∈Rm×m×n, B∈Rm×m×n, δ1,...,δn, μ0, ρ, η>1 |
2: Output: Least-squares solution X of (1.5) |
3: ˆA=fft(A,[ ],3). |
4: ˆB=fft(B,[ ],3). |
5: for k=1,...,n do |
6: j=0,Xk,μ0=0. |
7: while ‖ˆAkXk,μj−ˆBk‖≥ηδk do |
8: j=j+1, μj=μ0ρj. |
9: A=ˆAHkˆAk+μjI, B=ˆAHkˆBk. |
10: Decompose A to get MR and ML. |
11: X0=0 or X0=Xk,μj−1; R0=B−AX0; t0=M−1LR0; P0=M−1Rt0. |
12: for i=0,1,...,until convergence do |
13: ˜υi=⟨ti,ti⟩/⟨APi,Pi⟩. |
14: Xi+1=Xi+˜υiPi. |
15: Ri+1=Ri−˜υiAPi. |
16: ti+1=M−1LRi+1, ˜ωi=⟨ti+1,ti+1⟩/⟨ti,ti⟩. |
17: Pi+1=M−1Rti+1+˜ωiPi. |
18: end for |
19: Xk,μj=Xi+1. |
20: end while |
21: ˆX(:,:,k)=Xk,μj. |
22: end for |
23: X=ifft(ˆX,[],3). |
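The inner loop of Algorithm 3 then reads as follows in MATLAB. This is a sketch with our own function name and stopping handling, using ML and MR from the Cholesky factorization above.

```matlab
function X = pcg_inner(Ahk, Bhk, mu, ML, MR, X, tol, maxit)
% Inner split-preconditioned CG iteration of Algorithm 3 for (3.2), with M = ML*MR.
    Aop = @(V) Ahk' * (Ahk * V) + mu * V;   % apply A = Ahat_k^H Ahat_k + mu*I
    R  = Ahk' * Bhk - Aop(X);               % unpreconditioned residual R_0
    T  = ML \ R;                            % t_0 = ML^{-1} R_0
    P  = MR \ T;                            % P_0 = MR^{-1} t_0
    tt = real(T(:)' * T(:));
    for i = 1:maxit
        Q = Aop(P);
        v = tt / real(P(:)' * Q(:));        % step length, cf. (3.12)
        X = X + v * P;
        R = R - v * Q;
        if norm(R, 'fro') < tol, break; end
        T = ML \ R;                         % t_{i+1} = ML^{-1} R_{i+1}
        ttNew = real(T(:)' * T(:));
        P  = MR \ T + (ttNew / tt) * P;     % direction update, cf. (3.16)
        tt = ttNew;
    end
end
```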
The following result gives the convergence of Algorithm 3.
Theorem 3.3. If $\hat{X}_k^*$ is the exact solution of the symmetric positive definite equation (3.2), $X_{k,\mu_j,0}$ is the initial iterate of the inner CG iteration of Algorithm 3 and $X_{k,\mu_j,i}$ is the $i$-th iterate, then
$$\|X_{k,\mu_j,i}-\hat{X}_k^*\|\le2\left(\frac{\sqrt{\kappa(M_L^{-1}AM_R^{-1})}-1}{\sqrt{\kappa(M_L^{-1}AM_R^{-1})}+1}\right)^{i}\|X_{k,\mu_j,0}-\hat{X}_k^*\|, \tag{3.17}$$
where $A=\hat{A}_k^H\hat{A}_k+\mu_j I$ and $\kappa(\cdot)$ is defined as in Theorem 3.2.
Proof. Let $A=\hat{A}_k^H\hat{A}_k+\mu_j I$. Applying the standard CG convergence result of [17] to the preconditioned matrix $M_L^{-1}AM_R^{-1}$ yields (3.17).
We can expect from Theorem 3.3 that Algorithm 3 generally converges faster than Algorithm 1, since
$$\kappa(M_L^{-1}AM_R^{-1})<\kappa(A).$$
We will illustrate this in numerical experiments in Section 4.
This section presents three examples to show the application of Algorithms 1–3 to the restoration of images and videos. All computations were performed in MATLAB R2018a on a computer with an Intel Core i7 CPU and 16GB RAM.
Let Xiter denote the iterative solution to (1.5). The quality of the approximate solution Xiter is defined by the relative error
$$E_{\mathrm{iter}}=\frac{\|X_{\mathrm{iter}}-X_{\mathrm{true}}\|_F}{\|X_{\mathrm{true}}\|_F},$$
and the signal-to-noise ratio (SNR)
$$\mathrm{SNR}(X_{\mathrm{iter}})=10\log_{10}\frac{\|X_{\mathrm{true}}-E(X_{\mathrm{true}})\|_F^2}{\|X_{\mathrm{iter}}-X_{\mathrm{true}}\|_F^2},$$
where Xtrue denotes the uncontaminated data tensor and E(Xtrue) is the average gray-level of Xtrue.
To simulate the error in the data tensor of (1.5), we generate a noise tensor $E$ with normally distributed entries of zero mean and set $B=B_{\mathrm{true}}+E$, where the variance is chosen to correspond to a prescribed noise level $\nu=\|E\|_F/\|B_{\mathrm{true}}\|_F$.
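In MATLAB, the noise model and the two quality measures can be realized as in the following sketch; the variable names are ours.

```matlab
% Noise at a prescribed level nu, so that ||E||_F / ||Btrue||_F = nu.
E = randn(size(Btrue));
E = nu * norm(Btrue(:)) / norm(E(:)) * E;   % rescale the Gaussian noise tensor
B = Btrue + E;                              % observed (blurred and noisy) data

% Quality measures of an iterate Xiter against the exact Xtrue.
relErr = norm(Xiter(:) - Xtrue(:)) / norm(Xtrue(:));
SNR    = 10 * log10(sum((Xtrue(:) - mean(Xtrue(:))).^2) / ...
                    sum((Xiter(:) - Xtrue(:)).^2));
```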
We summarize the cross-channel and within-channel blurring of the original blurring process in [3] that will be used in the following examples. The complete blurring model has the form
$$(A_{\mathrm{blur}}\otimes A^{(1)}\otimes A^{(2)})\,x_{\mathrm{true}}=b_{\mathrm{true}}, \tag{4.1}$$
where
$$A_{\mathrm{blur}}=\begin{bmatrix}a_{rr}&a_{rg}&a_{rb}\\a_{gr}&a_{gg}&a_{gb}\\a_{br}&a_{bg}&a_{bb}\end{bmatrix}$$
is a $3\times3$ matrix whose rows each sum to $1$; it describes the cross-channel blurring as in [18]. Further,
$$b_{\mathrm{true}}=\begin{bmatrix}\mathrm{vec}(B_{\mathrm{true},1})\\\mathrm{vec}(B_{\mathrm{true},2})\\\mathrm{vec}(B_{\mathrm{true},3})\end{bmatrix},\qquad x_{\mathrm{true}}=\begin{bmatrix}\mathrm{vec}(X_{\mathrm{true},1})\\\mathrm{vec}(X_{\mathrm{true},2})\\\mathrm{vec}(X_{\mathrm{true},3})\end{bmatrix},$$
and $\mathrm{vec}(\cdot)$ is the operator that transforms a matrix into a vector by stacking its columns from left to right. $A^{(1)}\in\mathbb{R}^{n\times n}$ and $A^{(2)}\in\mathbb{R}^{n\times n}$ define the within-channel blurring; they are the horizontal and vertical inner blurring matrices, respectively; see [18] for more details. A special case to consider is $a_{rr}=a_{gg}=a_{bb}=\mu$, $a_{gr}=a_{rg}=a_{gb}=a_{bg}=\beta$ and $a_{br}=a_{rb}=\gamma$. If formula (4.1) is evaluated directly in MATLAB, the data are stored in matrix form and cannot be processed efficiently. We therefore exploit the t-product structure and write (4.1) in the tensor form
$$A_\upsilon*X_{\mathrm{true}}*A_\omega=B_{\mathrm{true}}, \tag{4.2}$$
where Aυ∈Rn×n×3 and Aω∈Rn×n×3. Each blurring matrix A(i) is defined as follows:
$$a_{kl}=\begin{cases}\dfrac{1}{\sigma\sqrt{2\pi}}\exp\Big(-\dfrac{(k-l)^2}{2\sigma^2}\Big),&|k-l|\le r,\\[1mm]0,&\text{otherwise},\end{cases} \tag{4.3}$$
where $\sigma$ is a parameter that controls the amount of smoothing: the larger $\sigma$ is, the more ill-posed the problem becomes.
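The banded Gaussian matrix (4.3) is straightforward to assemble in MATLAB; a minimal sketch, with our own function name:

```matlab
function A = blur_matrix(n, sigma, r)
% Banded Gaussian blurring matrix (4.3): a_kl for |k-l| <= r, zero otherwise.
    [K, L] = meshgrid(1:n, 1:n);
    A = exp(-(K - L).^2 / (2 * sigma^2)) / (sigma * sqrt(2 * pi));
    A(abs(K - L) > r) = 0;                  % enforce the bandwidth r
end
```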
Example 4.1. (Color image) This example shows the restoration of a blurred penguin color image by Algorithms 1–3, comparing the performance of A-tCG-FFT, A-tCGLS-FFT and A-tPCG-FFT in color image deblurring; the image is contaminated by cross-channel blur and additive noise. The cross-channel blurring is determined by the matrix
$$A_{\mathrm{blur}}=\begin{bmatrix}0.7&0.15&0.15\\0.15&0.7&0.15\\0.15&0.15&0.7\end{bmatrix}.$$
For the within-channel blurring, we set $\sigma=4$ and $r=7$. Let $A_\upsilon(:,:,1)=\mu A^{(2)}$, $A_\upsilon(:,:,2)=\beta A^{(2)}$, $A_\upsilon(:,:,3)=\gamma A^{(2)}$, and $A_\omega=I$. The condition number of each frontal slice of $A$ is $\mathrm{cond}(A(:,:,k))=8.7257\times10^4$ ($k=1,2,3$). We consider the two noise levels $\nu=10^{-2}$ and $\nu=10^{-3}$. The blurred images are generated by (4.2), and $B=B_{\mathrm{true}}+E$ adds noise to the blurred image. Figure 2 shows the original image and the blurred and noisy image with $\nu=10^{-3}$.
Algorithms 1–3 all solve, slice by slice, a nested inner-outer iteration in the Fourier domain; each outer iteration uses the discrepancy principle to select a suitable regularization parameter $\mu$. We specify $\eta=1.05$ and $\rho=1/2$. The initial solution of the inner iteration is $X_0=X_{k,\mu_{j-1}}$, and the inner iteration is stopped once the residual norm is less than $\mathrm{tol}=10^{-6}$.
Table 2 lists the CPU time, SNR and relative error of the three algorithms for the restoration of the penguin image. We can see from Table 2 that A-tPCG-FFT needs the least CPU time of all the methods, while A-tCG-FFT is the most time consuming; the SNR and relative errors differ little among the methods.
Noise level | Method | Relative error | SNR | CPU (secs)
10^{-3} | A-tCG-FFT | 2.94×10^{-2} | 22.26 | 27.98
10^{-3} | A-tCGLS-FFT | 2.94×10^{-2} | 22.26 | 16.68
10^{-3} | A-tPCG-FFT | 2.93×10^{-2} | 22.27 | 7.00
10^{-2} | A-tCG-FFT | 5.01×10^{-2} | 17.61 | 12.79
10^{-2} | A-tCGLS-FFT | 5.01×10^{-2} | 17.61 | 7.84
10^{-2} | A-tPCG-FFT | 5.01×10^{-2} | 17.61 | 3.46
Figure 3 shows the relative error versus CPU time for the three algorithms applied to the third slice of the penguin image with $\nu=10^{-3}$ in Table 2, displaying the iterates of both the inner and the outer iterations. For each selected regularization parameter, the relative errors of the inner iterates decrease monotonically; likewise, after each automatic outer update of the regularization parameter, the iterates have smaller relative error than before the update. Figure 3 also shows that A-tPCG-FFT converges fastest among all methods.
Figure 4 displays the penguin images recovered by Algorithms 1–3, corresponding to the results with $\nu=10^{-3}$ in Table 2.
From Example 4.1 we see that the A-tPCG-FFT method displays the best convergence among the three methods.
Example 4.2 (Color video) We recover 10 consecutive frames (frames 41 to 50) of the blurred and noisy vipbarcode color video from MATLAB. Each frame has 240×240 pixels. We store the 10 original video frames in the tensor $X_{\mathrm{true}}\in\mathbb{R}^{240\times30\times240}$, obtained by stacking the grayscale images that constitute the three channels of each color frame.
These frames are blurred by (4.2), where $A_\upsilon$ is a third-order tensor such that $A_\upsilon(:,:,1)=A^{(1)}$ and $A_\upsilon(:,:,i)=O$ for $i=2,\ldots,30$, and $A_\omega=I$. We use $\sigma=3$ and $r=5$ to build the blurring matrices with periodic boundary conditions. The condition number of the first frontal slice of $A$ is $1.6191\times10^3$, and the condition numbers of the remaining frontal slices of $A$ are infinite. $B=B_{\mathrm{true}}+E$ adds noise to the blurred frames, and the noise levels considered are $\nu=10^{-2}$ and $\nu=10^{-3}$. We set the initial solution of the inner iterations to $X_0=X_{k,\mu_{j-1}}$, with the convergence condition that the residual norm is less than $\mathrm{tol}=10^{-6}$. The discrepancy principle is used to determine the regularization parameters, with $\eta=1.1$, $\alpha_0=1/2$ and $\rho=1/2$. Table 3 records the results of recovering these 10 frames of video data, including the relative error, SNR and CPU time. Because the same criteria for selecting parameters and stopping the iterations are used, the three algorithms differ only slightly in relative error and SNR; however, A-tPCG-FFT has a clear advantage in CPU time.
Noise level | Method | Relative error | SNR | time (secs)
10^{-3} | A-tCG-FFT | 1.55×10^{-2} | 25.91 | 115.08
10^{-3} | A-tCGLS-FFT | 1.55×10^{-2} | 25.91 | 68.25
10^{-3} | A-tPCG-FFT | 1.55×10^{-2} | 25.91 | 20.44
10^{-2} | A-tCG-FFT | 4.66×10^{-2} | 16.36 | 73.03
10^{-2} | A-tCGLS-FFT | 4.66×10^{-2} | 16.36 | 44.15
10^{-2} | A-tPCG-FFT | 4.66×10^{-2} | 16.36 | 12.53
Below we analyze the video recovery process of the 50th frame. Figure 5 shows the 50th original frame of this video file and the blurred and noisy video frame with $\nu=10^{-3}$.
Figure 6 shows the relative error versus CPU time for the third slice of the 50th video frame when the three algorithms process the case $\nu=10^{-3}$ in Table 3.
As can be seen from Figure 6, A-tPCG-FFT converges fastest among the three algorithms. Figure 6 also shows that, as the outer iteration automatically updates the regularization parameter and the inner iteration proceeds, the relative error norms of the iterates produced by all three algorithms decrease steadily.
Figure 7 shows the restorations of the blurred and noisy 50th frame by A-tCG-FFT, A-tCGLS-FFT and A-tPCG-FFT.
According to the results of Example 4.2, A-tCGLS-FFT and A-tPCG-FFT require less CPU time than A-tCG-FFT for each recovered frame, and this saving accumulates over the frames. That is, the more video frames are processed, the more pronounced the time advantage of A-tCGLS-FFT and, especially, A-tPCG-FFT.
Based on the matrix conjugate gradient method, this paper presents three types of tensor Conjugate-Gradient methods for solving large-scale linear discrete ill-posed problems in tensor form. First, we project the tensor equation into the Fourier domain and propose a strategy that automatically determines the regularization parameters of the tensor conjugate gradient method there (A-tCG-FFT). In addition, we develop the A-tCGLS-FFT method and a preconditioned version of A-tCG-FFT. The proposed methods are applied to several examples of color image and video restoration. Numerical experiments show that the tensor conjugate gradient methods are effective for solving ill-posed problems with t-product structure in the Fourier domain.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors would like to thank the referees for their helpful and constructive comments. This research was supported in part by the Sichuan Science and Technology Program (grant 2022ZYD0008).
The authors declare no conflict of interest.
[1] M. E. Kilmer, C. D. Martin, Factorization strategies for third order tensors, Linear Algebra Appl., 435 (2011), 641–658. https://doi.org/10.1016/j.laa.2010.09.020
[2] N. Hao, M. E. Kilmer, K. Braman, R. C. Hoover, Facial recognition using tensor-tensor decompositions, SIAM J. Imaging Sci., 6 (2013), 437–463. https://doi.org/10.1137/110842570
[3] M. E. Guide, A. E. Ichi, K. Jbilou, R. Sadaka, On tensor GMRES and Golub-Kahan methods via the T-product for color image processing, Electron. J. Linear Algebra, 37 (2021), 524–543. https://doi.org/10.13001/ela.2021.5471
[4] L. Reichel, U. O. Ugwu, The tensor Golub–Kahan–Tikhonov method applied to the solution of ill-posed problems with a t-product structure, Numer. Linear Algebra Appl., 29 (2021), e2412. https://doi.org/10.1002/nla.2412
[5] L. Reichel, U. O. Ugwu, Tensor Arnoldi–Tikhonov and GMRES-type methods for ill-posed problems with a t-product structure, J. Sci. Comput., 90 (2022), 59. https://doi.org/10.1007/s10915-021-01719-1
[6] J. Zhang, A. K. Saibaba, M. E. Kilmer, S. Aeron, A randomized tensor singular value decomposition based on the t-product, Numer. Linear Algebra Appl., 25 (2018), e2179. https://doi.org/10.1002/nla.2179
[7] U. Ugwu, L. Reichel, Tensor regularization by truncated iteration: A comparison of some solution methods for large-scale linear discrete ill-posed problem with a t-product, 2021, arXiv: 2110.02485. https://doi.org/10.48550/arXiv.2110.02485
[8] K. Lund, The tensor t-function: A definition for functions of third-order tensors, Numer. Linear Algebra Appl., 27 (2020), e2288. https://doi.org/10.1002/nla.2288
[9] A. Ma, D. Molitor, Randomized Kaczmarz for tensor linear systems, BIT Numer. Math., 62 (2022), 171–194. https://doi.org/10.1007/s10543-021-00877-w
[10] M. E. Kilmer, K. Braman, N. Hao, R. C. Hoover, Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging, SIAM J. Matrix Anal. Appl., 34 (2013), 148–172. https://doi.org/10.1137/110837711
[11] H. W. Engl, M. Hanke, A. Neubauer, Regularization of inverse problems, Dordrecht: Springer, 2000.
[12] S. Djennadi, N. Shawagfeh, O. A. Arqub, A fractional Tikhonov regularization method for an inverse backward and source problems in the time-space fractional diffusion equations, Chaos Soliton. Fract., 150 (2021), 111127. https://doi.org/10.1016/j.chaos.2021.111127
[13] S. Djennadi, N. Shawagfeh, M. Inc, M. S. Osman, J. F. Gómez-Aguilar, O. A. Arqub, The Tikhonov regularization method for the inverse source problem of time fractional heat equation in the view of ABC-fractional technique, Phys. Scr., 96 (2021), 094006. https://doi.org/10.1088/1402-4896/ac0867
[14] G. H. Golub, C. F. Van Loan, Matrix computations, Johns Hopkins University Press, 1996.
[15] J. Y. Yuan, Numerical methods for generalized least squares problems, J. Comput. Appl. Math., 66 (1996), 571–584. https://doi.org/10.1016/0377-0427(95)00167-0
[16] Å. Björck, T. Elfving, Z. Strakoš, Stability of conjugate gradient and Lanczos methods for linear least squares problems, SIAM J. Matrix Anal. Appl., 19 (1998), 720–736. https://doi.org/10.1137/S089547989631202X
[17] L. N. Trefethen, D. Bau, Numerical linear algebra, SIAM, 1997. https://doi.org/10.1137/1.9780898719574
[18] P. C. Hansen, J. G. Nagy, D. P. O'Leary, Deblurring images: Matrices, spectra, and filtering, SIAM, 2006.