Research article

Faster free pseudoinverse greedy block Kaczmarz method for image recovery

• The greedy block Kaczmarz (GBK) method has been successfully applied in areas such as data mining, image reconstruction, and large-scale image restoration. However, the pseudo-inverse computed at each iteration of the GBK method not only complicates the computation and slows the convergence rate, but is also ill-suited to distributed implementation. The leverage score sampling free pseudo-inverse GBK algorithm proposed in this paper demonstrates significant potential in the field of image reconstruction. By transforming the problem framework, the algorithm not only enhances the efficiency of solving systems of linear equations with multiple solution vectors but is also optimized specifically for image reconstruction applications. A methodology combining theoretical analysis and experiments validates the robustness and practicality of the algorithm, providing valuable insights for technical advancements in related disciplines.

    Citation: Wenya Shi, Xinpeng Yan, Zhan Huan. Faster free pseudoinverse greedy block Kaczmarz method for image recovery[J]. Electronic Research Archive, 2024, 32(6): 3973-3988. doi: 10.3934/era.2024178




In the modern fields of scientific research and medical diagnostics [1,2,3], there is an increasing reliance on image restoration techniques [4,5,6,7], which are particularly prominent in medical imaging. Ensuring image quality is paramount for the authenticity of data [8]. By processing X-ray projection data obtained from CT scans, we can reconstruct clear tomographic images with a resolution of n×n pixels. This technology encompasses two main approaches: mathematical theoretical analysis [9] and iterative methods [10]. The former, like the filtered back projection method [11], is widely used in fields such as CT imaging, while the latter excels at dealing with noise interference and data loss. With technological advancements, image restoration is moving toward greater precision and efficiency.

    To reconstruct images, we must solve a large-scale system of linear equations with multiple righthand sides, which is presented as follows:

    AX=B (1)

In contemporary mathematics and engineering, solving systems of equations stands as a pivotal task, particularly in scientific research and industrial applications where efficiency and precision are paramount. A variety of numerical methods have been widely adopted [12,13,14,15], among which the Kaczmarz algorithm [16] is renowned for its iterative projection approach to approximating the true solution. The algorithm's simplicity has facilitated its application across various domains, including image reconstruction [12,17], medical imaging [11,18], and signal processing [19,20]. Advances in technology have given rise to multiple enhanced versions of the Kaczmarz algorithm [21,22,23,24,25,26,27,28,29,30,31], improving performance in large-scale parallel computing and noisy data environments. Notably, with the development of pseudo-inverse-free techniques, Du and Sun [24] further extended the randomized extended average block Kaczmarz (REABK) method, proposing a class of pseudo-inverse-free stochastic block iterative methods for solving both consistent and inconsistent systems of linear equations; avoiding the pseudo-inverse accelerates convergence. Pseudo-inverse approximation is a specific case of generalized inverse techniques; for more theoretical analysis and applications, see [32]. Inspired by references [33,34], this paper introduces a faster leverage score sampling free pseudo-inverse greedy block Kaczmarz (LFGBK) method, exploring matrix sketching techniques as a key tool for accelerating matrix operations. This method employs sampling based on an approximate maximum distance criterion, excelling at selecting small, representative samples from large datasets for more efficient computation. The adaptive randomized block Kaczmarz (ARBK) [35] algorithm integrates adaptive and randomized block selection strategies; however, under certain matrix structures, ARBK may converge slowly or even fail to converge, and it incurs significant memory overhead when dealing with large-scale sparse matrices. Our improved algorithm overcomes these drawbacks, offering a more stable and efficient solution.

Additionally, the proposed method not only addresses single righthand side linear equations but also extends to multiple righthand side linear equations, solving the memory overflow issues that traditional algorithms face when dealing with image processing vectors. A comprehensive theoretical framework supports the convergence of these methods. Numerical experiments validate their effectiveness, demonstrating improved computational efficiency and laying a solid foundation for further optimization of matrix sketching techniques.

    The structure of this paper is as follows: Section 2 introduces the necessary background knowledge. Section 3 presents the faster free pseudo-inverse GBK method for solving single righthand side linear equation systems. Section 4 proposes the leverage score sampling free pseudo-inverse GBK method for solving multiple righthand side linear equation systems. Section 5 details the numerical experiments, and Section 6 concludes the paper.

In this article, we adopt the same notation as in reference [27]. For example, $A_{(i)}$, $A^T$, $A^\dagger$, $\|A\|_2$, $\|A\|_F$, and $[n]$, respectively, denote the $i$-th row of the coefficient matrix $A$, its transpose, its generalized inverse, the spectral norm, the Frobenius norm, and the set $\{1, 2, \dots, n\}$.

Recently, Niu and Zheng combined a greedy strategy with the block Kaczmarz method, proposing the GBK method [29] for solving large-scale consistent linear systems. See Algorithm 1 for the specific process.

Algorithm 1 GBK method
1: Input: $A$, $b$, $l$, $x_0 \in \mathrm{range}(A^T)$, and $\eta \in (0,1)$.
2: Output: $x_l$.
3: for $k = 0, 1, \dots, l-1$ do
4:   Compute $\varepsilon_k = \eta \max_{1 \le i \le m} \left\{ \frac{|b_i - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \right\}$.
5:   Determine the index set $T_k = \left\{ i_k : |b_{i_k} - A_{(i_k)} x_k|^2 \ge \varepsilon_k \|A_{(i_k)}\|_2^2 \right\}$.
6:   Compute $x_{k+1} = x_k + A_{T_k}^\dagger (b_{T_k} - A_{T_k} x_k)$.
7: end for
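The experiments in this paper are run in MATLAB; purely as an illustration, the GBK iteration above can be sketched in NumPy, with the explicit pseudo-inverse $A_{T_k}^\dagger$ replaced by an equivalent minimum-norm least-squares solve (the function name and parameter defaults below are ours, not from the paper):

```python
import numpy as np

def gbk(A, b, eta=0.5, max_iter=500, tol=1e-10):
    """Sketch of the greedy block Kaczmarz (GBK) iteration.

    Each step thresholds the scaled residuals to build the index set T_k,
    then applies the block pseudoinverse step, implemented here as a
    minimum-norm least-squares solve (np.linalg.lstsq)."""
    m, n = A.shape
    x = np.zeros(n)                           # x_0 in range(A^T)
    row_norms = np.sum(A * A, axis=1)         # ||A_(i)||_2^2
    for _ in range(max_iter):
        r = b - A @ x                         # residual b - A x_k
        scaled = r ** 2 / row_norms
        eps_k = eta * scaled.max()            # greedy threshold
        T = np.flatnonzero(scaled >= eps_k)   # index set T_k
        # x_{k+1} = x_k + A_{T_k}^dagger (b_{T_k} - A_{T_k} x_k)
        x = x + np.linalg.lstsq(A[T], r[T], rcond=None)[0]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```

On a consistent system the iterate converges to the minimum-norm solution, in line with Theorem 1 below.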

    Convergence analysis of the GBK method is described as follows:

Theorem 1 ([29]). If the linear system of Eq (1) is consistent, then the iterative sequence $\{x_k\}_{k=0}^{\infty}$ generated by Algorithm 1 converges to the minimum-norm solution $x_* = A^\dagger b$ of the system of equations, and satisfies, for any $k \ge 0$,

$\|x_{k+1} - x_*\|_2^2 \le \left( 1 - \gamma_k(\eta) \frac{\sigma_{\min}^2(A)}{\|A\|_F^2} \right)^{k+1} \|x_0 - x_*\|_2^2.$

The factor $\gamma_k(\eta)$ is defined as $\gamma_k(\eta) = \eta \, \frac{\|A\|_F^2}{\|A\|_F^2 - \|A_{T_{k-1}}\|_F^2} \cdot \frac{\|A_{T_k}\|_F^2}{\sigma_{\max}^2(A_{T_k})}$, with $\gamma_0(\eta) = \eta \, \frac{\|A_{T_0}\|_F^2}{\sigma_{\max}^2(A_{T_0})}$, where $\eta \in (0,1]$, and $\sigma_{\min}(A)$ and $\sigma_{\max}(A)$ denote the smallest nonzero singular value and the largest singular value of matrix $A$.

    In the matrix sketching technique, as described in leverage score sampling [36,37], we select samples based on the leverage score of each row. Specifically, we will choose each row with a probability proportional to its leverage score. Therefore, rows with higher scores (i.e., rows with greater influence in the dataset) will have a greater chance of being selected.

Algorithm 2 Leverage score sampling method
1: Input: $A \in \mathbb{R}^{m \times n}$ and sample size $d$.
2: Initialize $C$ as a $d \times n$ zero matrix.
3: Compute the thin singular value decomposition $A = USV^T$.
4: For each of the $m$ rows of $U$, compute its squared 2-norm (the leverage score of the corresponding row of $A$) and accumulate the total sum.
5: Divide each leverage score by the total sum to obtain the sampling probability of each row.
6: Draw $d$ row indices according to these probabilities.
7: Set $C = A[\text{indices}, :]$.
8: Return $C \in \mathbb{R}^{d \times n}$.
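A minimal NumPy sketch of this sampling step (the function name and seeding are our own; the paper's experiments use MATLAB): row leverage scores are the squared row norms of $U$ from the thin SVD, and rows are drawn with probability proportional to them.

```python
import numpy as np

def leverage_score_sample(A, d, seed=None):
    """Sample d rows of A with probabilities proportional to leverage scores.

    The leverage score of row i of A is the squared 2-norm of row i of U,
    where A = U S V^T is the thin SVD; the scores sum to rank(A)."""
    rng = np.random.default_rng(seed)
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    scores = np.sum(U * U, axis=1)          # leverage score of each row
    probs = scores / scores.sum()           # sampling probabilities
    indices = rng.choice(A.shape[0], size=d, replace=True, p=probs)
    return A[indices], indices
```

The returned $d \times n$ matrix plays the role of $C$ (equivalently $\tilde{A} = SA$) in Algorithms 2 and 3.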

The advantage of this sampling method lies in its ability to select a small, representative subset of samples from a large dataset, enabling more efficient computation. The resulting sampling matrix $S \in \mathbb{R}^{d \times m}$, where $d$ is the chosen number of samples, can be utilized to estimate various properties of the original matrix $A$, such as its singular value decomposition or principal components. The derivation of the LFGBK algorithm parallels that in reference [38].

    Remark: In addition to the method proposed in this paper, there are other sampling techniques such as random Gaussian matrices, subsampled randomized Hadamard transform (SRHT), and uniform sampling. In previous experiments, we also tried other methods like random sparse sampling. Through comparison, we found that the method proposed in this paper excels in extracting the main features of matrices. Other methods not only have higher computational complexity but may also extract all-zero vectors during actual image processing, rendering calculations impossible. Our method effectively avoids these issues, which is why we chose to adopt it.

This section introduces the leverage score sampling free pseudo-inverse GBK method for a single righthand side and provides a proof of the corresponding convergence theory for the algorithm.

First, each step follows the approximate maximum distance criterion

$\frac{|b_{i_k} - A_{(i_k)} x_k|^2}{\|A_{(i_k)}\|_2^2} \ge \eta \max_{1 \le i \le m} \left\{ \frac{|b_i - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \right\}.$

Second, select the index set $T_k$ defining the block matrix $A_{T_k}$; then project the current estimate onto each row of $A_{T_k}$; finally, average the projections to determine the next iterate:

$x_{k+1} = x_k + \sum_{i \in T_k} w_i \frac{b_i - A_{(i)} x_k}{\|A_{(i)}\|_2^2} A_{(i)}^T.$

    Before presenting the convergence theory for Algorithm 3, let us first introduce a lemma.

Algorithm 3 SLFGBK method
1: Input: $A$, $b$, $l$, $x_0 \in \mathrm{range}(A^T)$, parameter $\eta \in (0,1)$, step sizes $(\alpha_k)_{k \ge 0}$, and weights $(w_k)_{k \ge 0}$.
2: Output: $x_l$.
3: Initialization: Algorithm 2 generates a leverage score sampling transformation matrix $\tilde{A} = SA \in \mathbb{R}^{d \times n}$ and vector $\tilde{b} = Sb$, where $d \ll m$.
4: for $k = 0, 1, \dots, l-1$ do
5:   Compute $\tilde{\varepsilon}_k = \eta \max_{1 \le i \le d} \left\{ \frac{|\tilde{b}_i - \tilde{A}_{(i)} x_k|^2}{\|\tilde{A}_{(i)}\|_2^2} \right\}$.
6:   Determine the index set $T_k = \left\{ i_k : |\tilde{b}_{i_k} - \tilde{A}_{(i_k)} x_k|^2 \ge \tilde{\varepsilon}_k \|\tilde{A}_{(i_k)}\|_2^2 \right\}$.
7:   Compute $x_{k+1} = x_k + \sum_{i \in T_k} w_i \frac{\tilde{b}_i - \tilde{A}_{(i)} x_k}{\|\tilde{A}_{(i)}\|_2^2} \tilde{A}_{(i)}^T$.
8: end for
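As an illustration (a NumPy sketch of our own, taking uniform weights $w_i = 1/|T_k|$ as in Eq (3) and step size 1), this iteration needs no pseudo-inverse: the update is an average of single-row projections over the greedy index set, applied to the sketched system $\tilde{A}, \tilde{b}$.

```python
import numpy as np

def slfgbk(A, b, d, eta=0.5, max_iter=5000, tol=1e-10, seed=None):
    """Pseudoinverse-free greedy block Kaczmarz on a leverage-score sketch.

    Sketch of Algorithm 3: draw d rows by leverage-score sampling, then
    iterate x += (1/|T_k|) * sum_{i in T_k} (r_i / ||A_(i)||^2) A_(i)^T,
    so no block pseudoinverse is ever formed."""
    rng = np.random.default_rng(seed)
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    p = np.sum(U * U, axis=1)
    p /= p.sum()
    idx = rng.choice(A.shape[0], size=d, replace=True, p=p)
    At, bt = A[idx], b[idx]                   # sketched system A~, b~
    row_norms = np.sum(At * At, axis=1)
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = bt - At @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(bt):
            break
        scaled = r ** 2 / row_norms
        T = np.flatnonzero(scaled >= eta * scaled.max())
        # averaged single-row projections over the greedy index set T_k
        x = x + (At[T].T @ (r[T] / row_norms[T])) / len(T)
    return x
```

Each term of the sum is an ordinary Kaczmarz projection onto one sampled row, so the per-iteration cost is a few matrix-vector products rather than a pseudo-inverse.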

Lemma 1 ([27]): If any vector $u$ belongs to the range of $A^T$, then

$\|Au\|_2^2 \ge \lambda_{\min}(A^T A) \|u\|_2^2.$
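A quick numeric spot-check of Lemma 1 (our own illustration; the random $A$ here has full column rank, so $\lambda_{\min}(A^T A)$ is the ordinary smallest eigenvalue):

```python
import numpy as np

# Spot-check of Lemma 1: for u in range(A^T),
# ||A u||_2^2 >= lambda_min(A^T A) ||u||_2^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))          # full column rank w.h.p.
u = A.T @ rng.standard_normal(30)         # u lies in range(A^T)
lam_min = np.linalg.eigvalsh(A.T @ A)[0]  # smallest eigenvalue of A^T A
lhs = np.linalg.norm(A @ u) ** 2
rhs = lam_min * np.linalg.norm(u) ** 2
assert lhs >= rhs - 1e-9
```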

Theorem 2: Suppose the leverage score transformation $S$ satisfies $d = O(n^2/\delta\theta^2)$, and let $x_* = A^\dagger b$ be the minimum-norm solution of the single righthand side linear system. Then, for any $k \ge 0$,

$\|x_{k+1} - x_*\|_2^2 \le \left( 1 - \tilde{\psi}_k(\eta) \frac{\sigma_{\min}^2(\tilde{A})}{\|\tilde{A}\|_F^2} \right) \|x_k - x_*\|_2^2. \quad (2)$

Here $\tilde{\psi}_k(\eta) = \eta t \left( \frac{2}{t} - \frac{1}{t^2} \sigma_{\max}^2(\hat{A}_{T_k}^T) \right)$ with $\eta \in (0,1]$, and $\mathrm{range}(A)$, $\sigma_{\min}(A)$, and $\sigma_{\max}(A)$ denote the range, smallest nonzero singular value, and largest singular value of matrix $A$, respectively.

Proof. With Algorithm 3 and $\tilde{r}_k = \tilde{b} - \tilde{A} x_k$, we can obtain

$x_{k+1} = x_k + \sum_{i \in T_k} \frac{1}{|T_k|} \frac{\tilde{r}^k_i}{\|\tilde{A}_{(i)}\|_2^2} \tilde{A}_{(i)}^T. \quad (3)$

Expanding Eq (3) with $|T_k| = t$ and $T_k = \{j_{k_1}, \dots, j_{k_t}\}$ gives

$x_{k+1} = x_k + \sum_{i \in T_k} \frac{1}{t} \frac{\tilde{A}_{(i)}^T e_i^T \tilde{r}_k}{\|\tilde{A}_{(i)}\|_2^2} = x_k + \frac{1}{t} \left( \frac{\tilde{A}_{(j_{k_1})}^T e_{j_{k_1}}^T \tilde{r}_k}{\|\tilde{A}_{(j_{k_1})}\|_2^2} + \cdots + \frac{\tilde{A}_{(j_{k_t})}^T e_{j_{k_t}}^T \tilde{r}_k}{\|\tilde{A}_{(j_{k_t})}\|_2^2} \right) = x_k + \frac{1}{t} \hat{A}_{T_k}^T \hat{I}_{T_k} \tilde{r}_k. \quad (4)$

Here

$\hat{A}_{T_k}^T = \left[ \frac{\tilde{A}_{(j_{k_1})}^T}{\|\tilde{A}_{(j_{k_1})}\|_2}, \frac{\tilde{A}_{(j_{k_2})}^T}{\|\tilde{A}_{(j_{k_2})}\|_2}, \dots, \frac{\tilde{A}_{(j_{k_t})}^T}{\|\tilde{A}_{(j_{k_t})}\|_2} \right] \in \mathbb{R}^{n \times t}, \quad (5)$

and

$\hat{I}_{T_k} = \left[ \frac{e_{j_{k_1}}}{\|\tilde{A}_{(j_{k_1})}\|_2}, \frac{e_{j_{k_2}}}{\|\tilde{A}_{(j_{k_2})}\|_2}, \dots, \frac{e_{j_{k_t}}}{\|\tilde{A}_{(j_{k_t})}\|_2} \right]^T \in \mathbb{R}^{t \times d}. \quad (6)$

Subtracting $x_*$ from both sides of Eq (4) yields

$x_{k+1} - x_* = x_k - x_* - \frac{1}{t} \hat{A}_{T_k}^T \hat{I}_{T_k} \tilde{A} (x_k - x_*) = \left( I - \frac{1}{t} \hat{A}_{T_k}^T \hat{A}_{T_k} \right) (x_k - x_*). \quad (7)$

Taking the squared 2-norm of both sides of Eq (7), and using the fact that any positive semi-definite matrix $Q$ satisfies $Q^2 \preceq \lambda_{\max}(Q) Q$, we obtain

$\|x_{k+1} - x_*\|_2^2 = \left\| (x_k - x_*) - \frac{1}{t} \hat{A}_{T_k}^T \hat{A}_{T_k} (x_k - x_*) \right\|_2^2 = \|x_k - x_*\|_2^2 - \frac{2}{t} \|\hat{A}_{T_k} (x_k - x_*)\|_2^2 + \frac{1}{t^2} \|\hat{A}_{T_k}^T \hat{A}_{T_k} (x_k - x_*)\|_2^2 \le \|x_k - x_*\|_2^2 - \left( \frac{2}{t} - \frac{1}{t^2} \sigma_{\max}^2(\hat{A}_{T_k}^T) \right) \|\hat{A}_{T_k} (x_k - x_*)\|_2^2. \quad (8)$

Using Eq (5) and the inequality $|\tilde{b}_{j_k} - \tilde{A}_{(j_k)} x_k|^2 \ge \tilde{\varepsilon}_k \|\tilde{A}_{(j_k)}\|_2^2$, a straightforward calculation yields

$\|\hat{A}_{T_k} (x_k - x_*)\|_2^2 = \sum_{j_k \in T_k} \frac{|\tilde{r}^k_{j_k}|^2}{\|\tilde{A}_{(j_k)}\|_2^2} \quad (9)$

$\ge \sum_{j_k \in T_k} \frac{\tilde{\varepsilon}_k \|\tilde{A}_{(j_k)}\|_2^2}{\|\tilde{A}_{(j_k)}\|_2^2}. \quad (10)$

Substituting $\tilde{\varepsilon}_k = \eta \max_{1 \le i \le d} \left\{ \frac{|\tilde{b}_i - \tilde{A}_{(i)} x_k|^2}{\|\tilde{A}_{(i)}\|_2^2} \right\}$ into the above yields the following result:

$\|\hat{A}_{T_k} (x_k - x_*)\|_2^2 \ge \sum_{j_k \in T_k} \eta \max_{1 \le i \le d} \left\{ \frac{|\tilde{r}^k_i|^2}{\|\tilde{A}_{(i)}\|_2^2} \right\} \ge \eta t \sum_{i=1}^{d} \frac{\|\tilde{A}_{(i)}\|_2^2}{\|\tilde{A}\|_F^2} \cdot \frac{|\tilde{r}^k_i|^2}{\|\tilde{A}_{(i)}\|_2^2} = \eta t \frac{\|\tilde{r}_k\|_2^2}{\|\tilde{A}\|_F^2}. \quad (11)$

Given $x_0 \in \mathrm{range}(A^T)$ and $x_* = A^\dagger b$, we have $x_k - x_* \in \mathrm{range}(\tilde{A}^T)$. Therefore, by Lemma 1, we can conclude that

$\|\tilde{r}_k\|_2^2 = \|\tilde{A}(x_k - x_*)\|_2^2 \ge \lambda_{\min}(\tilde{A}^T \tilde{A}) \|x_k - x_*\|_2^2 = \sigma_{\min}^2(\tilde{A}) \|x_k - x_*\|_2^2.$

Combining this with Eq (11), we obtain

$\|\hat{A}_{T_k}(x_k - x_*)\|_2^2 \ge \eta t \frac{\sigma_{\min}^2(\tilde{A})}{\|\tilde{A}\|_F^2} \|x_k - x_*\|_2^2. \quad (12)$

From Eqs (8) and (12), we obtain Eq (2), and the iterative sequence $\{x_k\}_{k=0}^{\infty}$ converges to the minimum-norm solution $x_* = A^\dagger b$ of the system of equations. Hence, Theorem 2 is proved.

In this section, we introduce the multi-righthand-side leverage score sampling free pseudo-inverse GBK method, and we provide a proof of the corresponding convergence theory for the algorithm.

    For most image reconstruction, the system of equations is formulated as follows:

    AX=B (13)

Here, $A \in \mathbb{R}^{m \times n}$, $X = [x_1, x_2, \dots, x_d] \in \mathbb{R}^{n \times d}$, $B = [b_1, b_2, \dots, b_d] \in \mathbb{R}^{m \times d}$, with $d > 1$. This paper addresses the solution of multiple righthand side linear systems and first presents the index selection rule used at each iteration.

$\max_{1 \le i \le m} \left\{ \frac{|b_{(i)} - A_{(i)} x_k|^2}{\|A_{(i)}\|_2^2} \right\} \quad (14)$

The corresponding iteration is

$x_{k+1} = x_k + \frac{b_{(i)} - A_{(i)} x_k}{\|A_{(i)}\|_2^2} A_{(i)}^T.$

To solve systems of linear equations with multiple righthand sides, our approach works with the $j$-th righthand side vector $b_j$ by selecting the working row $T_{k_j}$ according to the criterion established in Eq (15):

$\max_{1 \le i \le m} \left\{ \frac{|B_{(i,j)} - A_{(i)} x_j^{(k)}|^2}{\|A_{(i)}\|_2^2} \right\}, \quad j = 1, 2, \dots, d. \quad (15)$

The iterative formula corresponding to the solution for the $j$-th righthand side is as follows:

$x_j^{(k+1)} = x_j^{(k)} + \frac{\tilde{B}_{(T_{k_j}, j)} - \tilde{A}_{(T_{k_j})} x_j^{(k)}}{\|\tilde{A}_{(T_{k_j})}\|_2^2} \tilde{A}_{(T_{k_j})}^T. \quad (16)$

    According to the criterion for selecting the index set in Eq (15), each term on the righthand side corresponds to the selection of a working row. For the system of linear equations with multiple righthand terms as in Eq (13), the Kaczmarz method can be extended to an iterative format that simultaneously solves for the righthand side system of linear equations.

In this context, $X^{(k+1)} = (x_1^{(k+1)}, x_2^{(k+1)}, \dots, x_d^{(k+1)})$ represents the approximate solution at iteration $k+1$:

$X^{(k+1)} = X^{(k)} + \left( \tilde{A}_{j_{k_1}}^T, \dots, \tilde{A}_{j_{k_d}}^T \right) \mathrm{diag}\left( \frac{\tilde{B}_{(j_{k_1},1)} - \tilde{A}_{j_{k_1}} x_1^{(k)}}{\|\tilde{A}_{j_{k_1}}\|_2^2}, \dots, \frac{\tilde{B}_{(j_{k_d},d)} - \tilde{A}_{j_{k_d}} x_d^{(k)}}{\|\tilde{A}_{j_{k_d}}\|_2^2} \right).$

To reduce computational effort, Algorithm 2 is employed to extract from matrix $A$ the rows corresponding to the index set $T_{k_j}$; the sub-matrix formed by these rows is denoted $\tilde{A}$, while the sub-matrix of $B$ corresponding to the index set is denoted $\tilde{B}$. The specific configurations of $\tilde{A}$ and $\tilde{B}$ are as follows:

$\tilde{B}_{T_{k_j}}^T = \left( B_{T_{k_{j_1}}}^T, B_{T_{k_{j_2}}}^T, \dots, B_{T_{k_{j_t}}}^T \right) \quad (17)$

and

$\tilde{A}_{T_{k_j}}^T = \left( A_{T_{k_{j_1}}}^T, A_{T_{k_{j_2}}}^T, \dots, A_{T_{k_{j_t}}}^T \right). \quad (18)$

The transposed block $B_{T_{k_{j_i}}}^T$ is represented as $\left( B_{(T_{k_{j_i}},1)}, B_{(T_{k_{j_i}},2)}, \dots, B_{(T_{k_{j_i}},d)} \right)$, where $T_{k_{j_i}}$ is an element of the index set $T_{k_j}$. Here, $B_{(T_{k_{j_i}},t)}$ denotes the entry in row $i$ and column $t$ of the sub-matrix $\tilde{B}$, and $|T_{k_j}| = t$ denotes the number of row indices contained in the index set $T_{k_j}$.

    The multi-righthand-side method proposed in this paper is a specialized block approach for solving large-scale righthand side linear equations using a leverage-based pseudo-inverse GBK method. We provide a detailed framework for this method, which involves simultaneous computation of multiple righthand sides, with each iteration involving a matrix BRm×t containing t righthand sides, resulting in the selection of t row indices.
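To make the column-wise update concrete, here is a simplified NumPy sketch of one sweep (our own illustration, omitting the leverage-score sketching, with the single working row per righthand side chosen by the max-residual rule of Eq (15)):

```python
import numpy as np

def multi_rhs_sweep(A, B, X):
    """One greedy sweep for AX = B: for each righthand side j, select the
    row maximizing the scaled residual (the rule of Eq (15)) and apply a
    single Kaczmarz projection; all columns are updated in one pass."""
    row_norms = np.sum(A * A, axis=1)        # ||A_(i)||_2^2
    R = B - A @ X                            # residual, one column per RHS
    scaled = R ** 2 / row_norms[:, None]
    rows = np.argmax(scaled, axis=0)         # working row for each column j
    for j, i in enumerate(rows):
        X[:, j] += (R[i, j] / row_norms[i]) * A[i]
    return X
```

Because every column is touched once per sweep, the whole block $X$ advances together, which is what allows the diagonal-matrix form of the update above.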

This article exclusively discusses the scenario where $\alpha_k = 1$. The convergence theory for solving large-scale multiple righthand side linear systems with the LFGBK method is presented below:

Theorem 3: Assume that the linear system (13) is consistent, and so are its sampled subsystems. The iterative sequence $\{X^{(k)}\}_{k=0}^{\infty}$ generated by Algorithm 4 starting from the initial value $X^{(0)}$ converges to the least squares solution $X_* = A^\dagger B$, and the expected error of the iterates satisfies

$\mathbb{E}\|X^{(k+1)} - X_*\|_F^2 \le \left( 1 - \frac{\sigma_{\min}^2(\tilde{A}_{(T_{k_j})})}{\|\tilde{A}_{(T_{k_j})}\|_F^2} \right) \mathbb{E}\|X^{(k)} - X_*\|_F^2. \quad (19)$

Algorithm 4 MLFGBK method
1: Input: $A$, $B$, $l$, $X^{(0)}$ with columns in $\mathrm{range}(A^T)$, parameter $\eta \in (0,1)$, step sizes $(\alpha_k)_{k \ge 0}$, and weights $(w_k)_{k \ge 0}$.
2: Output: $X^{(l)}$.
3: Initialization: Algorithm 2 generates a leverage score sampling transformation matrix $\tilde{A} = SA \in \mathbb{R}^{d \times n}$ and matrix $\tilde{B} = SB$, where $d \ll m$.
4: for $k = 0, 1, \dots, l-1$ do
5:   Compute $\tilde{\varepsilon}_{k_j} = \eta \max_{1 \le i \le d} \left\{ \frac{|\tilde{B}_{(i,j)} - \tilde{A}_{(i)} x_j^{(k)}|^2}{\|\tilde{A}_{(i)}\|_2^2} \right\}$ for $j = 1, 2, \dots, d$.
6:   Determine the index sets $T_{k_j} = \left\{ i_{k_j} : |\tilde{B}_{(i_{k_j}, j)} - \tilde{A}_{(i_{k_j})} x_j^{(k)}|^2 \ge \tilde{\varepsilon}_{k_j} \|\tilde{A}_{(i_{k_j})}\|_2^2 \right\}$.
7:   Compute $X^{(k+1)} = X^{(k)} + \left( \tilde{A}_{j_{k_1}}^T, \dots, \tilde{A}_{j_{k_d}}^T \right) \mathrm{diag}\left( \frac{\tilde{B}_{(j_{k_1},1)} - \tilde{A}_{j_{k_1}} x_1^{(k)}}{\|\tilde{A}_{j_{k_1}}\|_2^2}, \dots, \frac{\tilde{B}_{(j_{k_d},d)} - \tilde{A}_{j_{k_d}} x_d^{(k)}}{\|\tilde{A}_{j_{k_d}}\|_2^2} \right)$.
8: end for

The tilde-decorated matrices $\tilde{A}$ and $\tilde{B}$ represent the sub-matrices formed by the rows corresponding to the index set $T_{k_j}$ within matrices $A$ and $B$, respectively.

Proof. According to Algorithm 4 and reference [39], we have

$x_j^{(k+1)} = x_j^{(k)} + \frac{\tilde{B}_{(T_{k_j},j)} - \tilde{A}_{(T_{k_j})} x_j^{(k)}}{\|\tilde{A}_{(T_{k_j})}\|_2^2} \tilde{A}_{(T_{k_j})}^T.$

Since $X_* = A^\dagger B$ is the least squares solution, we infer that $x_j^* = A^\dagger B_{(:,j)}$ for $j = 1, 2, \dots, d$, where $B_{(:,j)}$ denotes the $j$-th column of matrix $B$.

    Further,

$\mathbb{E}\|X^{(k+1)} - X_*\|_F^2 = \mathbb{E} \sum_{j=1}^{d} \|x_j^{(k+1)} - x_j^*\|_2^2 = \sum_{j=1}^{d} \mathbb{E}\|x_j^{(k+1)} - x_j^*\|_2^2. \quad (20)$

For $j = 1, 2, \dots, d$, it can be deduced that

$\mathbb{E}\|x_j^{(k+1)} - x_j^*\|_2^2 = \|x_j^{(k)} - x_j^*\|_2^2 - \mathbb{E}\|x_j^{(k+1)} - x_j^{(k)}\|_2^2.$

    Then

$\mathbb{E}\|x_j^{(k+1)} - x_j^{(k)}\|_2^2 = \frac{|\tilde{B}_{(T_{k_j},j)} - \tilde{A}_{(T_{k_j})} x_j^{(k)}|^2}{\|\tilde{A}_{(T_{k_j})}\|_2^2} = \max_{T_{k_{j_i}} \in T_{k_j}} \left\{ \frac{|\tilde{B}_{(T_{k_{j_i}},j)} - \tilde{A}_{(T_{k_{j_i}})} x_j^{(k)}|^2}{\|\tilde{A}_{(T_{k_{j_i}})}\|_2^2} \right\}.$

    The specific details of the term E||x(k+1)jxkj||22 are elaborated in the proof of Theorem 3.2 in [39].

Since the maximum dominates any weighted average of the same scaled residuals, we have

$\max_{T_{k_{j_i}} \in T_{k_j}} \left\{ \frac{|\tilde{B}_{(T_{k_{j_i}},j)} - \tilde{A}_{(T_{k_{j_i}})} x_j^{(k)}|^2}{\|\tilde{A}_{(T_{k_{j_i}})}\|_2^2} \right\} \ge \frac{\sum_{i=1}^{t} |\tilde{B}_{(T_{k_{j_i}},j)} - \tilde{A}_{(T_{k_{j_i}})} x_j^{(k)}|^2}{\sum_{i=1}^{t} \|\tilde{A}_{(T_{k_{j_i}})}\|_2^2} = \frac{\|\tilde{A}_{(T_{k_j})} x_j^{(k)} - \tilde{B}_{(:,j)}\|_2^2}{\|\tilde{A}_{(T_{k_j})}\|_F^2} \ge \frac{\sigma_{\min}^2(\tilde{A}_{(T_{k_j})})}{\|\tilde{A}_{(T_{k_j})}\|_F^2} \|x_j^{(k)} - x_j^*\|_2^2. \quad (21)$

In this context, $\tilde{B}_{(:,j)}$ denotes the $j$-th column of the matrix $\tilde{B}$.

$\mathbb{E}\|x_j^{(k+1)} - x_j^*\|_2^2 = \|x_j^{(k)} - x_j^*\|_2^2 - \mathbb{E}\|x_j^{(k+1)} - x_j^{(k)}\|_2^2 \le \left( 1 - \frac{\sigma_{\min}^2(\tilde{A}_{(T_{k_j})})}{\|\tilde{A}_{(T_{k_j})}\|_F^2} \right) \|x_j^{(k)} - x_j^*\|_2^2. \quad (22)$

    Thus, by combining Eqs (20) and (22), we can derive inequality (19).

    The proof of Theorem 3 reveals that the convergence rate of Algorithm 4 is related to the sub-matrix ˜A sampled during each leveraged iteration.

In this section, we evaluate the effectiveness of several algorithms on large-scale image restoration equations through comparative examples: the GBK method [29], the semi-stochastic block Kaczmarz method with simple random sampling for single righthand sides (SSRBK) [40], the LFGBK method for single righthand sides (SLFGBK), the semi-stochastic block Kaczmarz method with simple random sampling for multiple righthand sides (MSRBK) [40], and the LFGBK method for multiple righthand sides (MLFGBK). All experiments are implemented in MATLAB, and we report peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), mean squared error (MSE), and computational time (CPU, in seconds). Following the approach of [27], the PSNR, SSIM, MSE, and CPU metrics are averages over 50 repeated runs. In all computations, we initialize X = zeros(n, m) and set the righthand side B = AX, where X represents images from MATLAB's Cameraman, Phantom, and Mri datasets, each of size 100×100. Matrix A is constructed with elements corresponding to the product of the number of X-ray beams (4), the range of scanning angles t (0° to 179°), and the image pixels. The stopping criterion is $\mathrm{RSE} = \frac{\|X_k - X_*\|_2^2}{\|X_*\|_2^2} \le 10^{-6}$. In practice, the condition $d = O(n^2/\delta\theta^2)$ is quite stringent, but in many real-world computations a sketch size of $d = n^2$ yields satisfactory results. We consider matrices $A \in \mathbb{R}^{(4180n, m)}$ of two types: type (a), generated by radon(), and type (b), generated by randn().
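For reference, the reported error metrics and the stopping rule can be computed as follows (a NumPy sketch of the standard definitions; the paper's experiments use MATLAB, and SSIM is omitted here for brevity):

```python
import numpy as np

def mse(X, X_ref):
    """Mean squared error between a reconstruction and its reference."""
    return float(np.mean((X - X_ref) ** 2))

def psnr(X, X_ref, peak=1.0):
    """Peak signal-to-noise ratio in dB, for pixel values with maximum `peak`."""
    return float(10.0 * np.log10(peak ** 2 / mse(X, X_ref)))

def rse(X_k, X_ref):
    """Relative squared error ||X_k - X||_F^2 / ||X||_F^2 (stopping rule)."""
    return float(np.linalg.norm(X_k - X_ref) ** 2 / np.linalg.norm(X_ref) ** 2)
```

Iterations stop once rse(X_k, X_ref) falls below 1e-6.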

Based on the numerical results in Tables 1 and 2, when A is generated using radon, we can draw the following conclusions: ⅰ) The GBK, single RHS SRBK, single RHS LFGBK, multiple RHS SRBK, and multiple RHS LFGBK methods are all effective in solving equation systems for image restoration. ⅱ) All five methods show similar restoration quality; however, the single RHS LFGBK and multiple RHS LFGBK methods significantly outperform the GBK, single RHS SRBK, and multiple RHS SRBK methods in terms of time efficiency. ⅲ) The LFGBK method exhibits stronger acceleration at η = 0.9 than at η = 0.8. ⅳ) The MLFGBK method likewise displays more pronounced acceleration at η = 0.9 than at η = 0.8.

    Figure 1.  Reconstructed images when A is generated by radon().
Table 1.  Numerical experimental results of the GBK method, SSRBK method, and SLFGBK method when A is generated by radon().

    Data       Metric     GBK               SSRBK             SLFGBK
                          η=0.8    η=0.9    η=0.8    η=0.9    η=0.8    η=0.9
    Cameraman  PSNR       19.595   17.996   19.076   17.640   17.595   17.519
               SSIM       0.999    0.998    0.999    0.998    0.999    0.999
               MSE        0.011    0.015    0.012    0.017    0.017    0.017
               CPU        234.78   335.37   55.00    94.755   11.581   7.543
    Phantom    PSNR       73.636   72.694   73.639   73.655   72.377   72.430
               SSIM       1.000    1.000    1.000    1.000    1.000    1.000
               MSE        0        0        0        0        0        0
               CPU        266.67   279.43   56.186   75.168   12.496   8.556
    Mri        PSNR       29.319   28.936   28.068   29.023   28.057   28.112
               SSIM       0.969    0.975    0.924    0.919    0.962    0.978
               MSE        0.001    0.001    0.001    0.001    0.001    0.001
               CPU        259.86   375.48   60.963   162.06   19.340   15.563

Table 2.  Numerical experimental results of the SLFGBK method, MSRBK method, and MLFGBK method when A is generated by radon().

    Data       Metric     SLFGBK            MSRBK             MLFGBK
                          η=0.8    η=0.9    η=0.8    η=0.9    η=0.8    η=0.9
    Cameraman  PSNR       17.595   17.519   17.611   17.732   17.894   17.876
               SSIM       0.999    0.999    0.999    0.999    0.999    0.999
               MSE        0.017    0.017    0.017    0.016    0.016    0.016
               CPU        11.581   7.543    18.027   14.207   3.196    2.880
    Phantom    PSNR       72.377   72.430   72.492   72.532   72.486   72.760
               SSIM       1.000    1.000    1.000    1.000    1.000    1.000
               MSE        0        0        0        0        0        0
               CPU        12.496   8.556    43.106   38.075   20.874   20.023
    Mri        PSNR       28.057   28.112   28.081   28.234   28.161   28.506
               SSIM       0.962    0.978    0.901    0.899    0.895    0.891
               MSE        0.001    0.001    0.001    0.001    0.001    0.001
               CPU        19.340   15.563   42.535   37.575   18.237   17.539


Based on the numerical results in Tables 3 and 4, when A is generated using randn, we can draw the following conclusions: ⅰ) The GBK, single RHS SRBK, single RHS LFGBK, multiple RHS SRBK, and multiple RHS LFGBK methods are all effective in solving equation systems for image restoration. ⅱ) All five methods show similar restoration quality; however, the single RHS LFGBK and multiple RHS LFGBK methods significantly outperform the GBK, single RHS SRBK, and multiple RHS SRBK methods in terms of time efficiency. ⅲ) The LFGBK method exhibits stronger acceleration at η = 0.9 than at η = 0.8. ⅳ) The MLFGBK method likewise displays more pronounced acceleration at η = 0.9 than at η = 0.8.

    Figure 2.  Reconstructed images when A is generated by randn().
Table 3.  Numerical experimental results of the GBK method, SSRBK method, and SLFGBK method when A is generated by randn().

    Data       Metric     GBK               SSRBK             SLFGBK
                          η=0.8    η=0.9    η=0.8    η=0.9    η=0.8    η=0.9
    Cameraman  PSNR       20.122   19.195   18.419   17.594   17.519   17.552
               SSIM       0.999    0.998    0.998    0.998    0.999    0.999
               MSE        0.009    0.012    0.014    0.017    0.017    0.017
               CPU        3.651    6.916    1.433    3.596    1.698    0.723
    Phantom    PSNR       73.904   72.475   74.040   73.704   72.619   72.464
               SSIM       1.000    1.000    1.000    1.000    1.000    1.000
               MSE        0        0        0        0        0        0
               CPU        68.434   129.151  19.739   35.465   7.776    6.878
    Mri        PSNR       30.615   29.072   29.289   28.899   28.080   28.169
               SSIM       0.912    0.900    0.893    0.884    0.892    0.905
               MSE        0        0.001    0.001    0.001    0.001    0.001
               CPU        63.758   122.983  28.132   31.338   9.451    8.641

Table 4.  Numerical experimental results of the SLFGBK method, MSRBK method, and MLFGBK method when A is generated by randn().

    Data       Metric     SLFGBK            MSRBK             MLFGBK
                          η=0.8    η=0.9    η=0.8    η=0.9    η=0.8    η=0.9
    Cameraman  PSNR       17.519   17.552   17.748   17.762   17.860   17.805
               SSIM       0.999    0.999    0.999    0.999    0.999    0.999
               MSE        0.017    0.017    0.016    0.016    0.016    0.016
               CPU        1.698    0.723    14.582   11.879   2.005    1.672
    Phantom    PSNR       72.619   72.464   72.398   72.629   72.695   72.353
               SSIM       1.000    1.000    1.000    1.000    1.000    1.000
               MSE        0        0        0        0        0        0
               CPU        7.776    6.878    47.513   45.431   13.824   13.757
    Mri        PSNR       28.080   28.169   28.241   28.093   28.241   28.328
               SSIM       0.892    0.905    0.904    0.904    0.902    0.897
               MSE        0.001    0.001    0.001    0.001    0.001    0.001
               CPU        9.451    8.641    54.802   38.773   11.533   10.332


    This study aims to explore a new image reconstruction algorithm, the LFGBK method, which utilizes a greedy strategy. The core of this method lies in transforming the image reconstruction problem into solving a linear system problem with multiple righthand sides. In traditional Kaczmarz algorithms, the iterative process of gradually approaching the solution is often inefficient and susceptible to the influence of initial value selection. However, the LFGBK method introduces leverage score sampling and extends from solving single righthand side linear equations to solving multiple righthand side linear equations, which is crucial for handling large-scale problems due to the time-consuming and resource-intensive nature of pseudo-inverse computation on large matrices. By avoiding pseudo-inverse calculation, the LFGBK method significantly improves the algorithm's computational efficiency. To validate the effectiveness of the proposed algorithm, this study conducts in-depth research through theoretical analysis and simulation experiments. The theoretical analysis confirms the convergence of the LFGBK method, ensuring the reliability and stability of the algorithm. The simulation experiments, compared with traditional Filtered Back Projection (FBP) methods, demonstrate the advantages of LFGBK in terms of image reconstruction quality. The experimental results show that the LFGBK method significantly preserves image details and reduces noise, proving its practicality and superiority in image reconstruction tasks.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare there is no conflict of interest.



© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)