
Image reconstruction is an important technique applied in various fields such as medicine, biology, materials science, and nondestructive testing. In this paper, we transform the problem of image reconstruction into the problem of solving a linear system with multiple right-hand sides. Based on the idea of K-means clustering, we propose the global randomized block Kaczmarz method to solve the linear system with multiple right-hand sides effectively, and we apply this method to image reconstruction. Theoretical analysis proves the convergence of the method, and simulation results demonstrate its performance in image reconstruction.
Citation: Ranran Li, Hao Liu. On global randomized block Kaczmarz method for image reconstruction[J]. Electronic Research Archive, 2022, 30(4): 1442-1453. doi: 10.3934/era.2022075
Image reconstruction plays a vital role in magnetic resonance imaging [1], X-ray computed tomography [2] and radio astronomy imaging applications [3]. Reconstruction methods [4,5,6,7] are mainly divided into analytical reconstruction and iterative reconstruction, which rely on the Radon transform [8], the inverse Radon transform, and the projection-slice theorem as their mathematical foundations. The most commonly used analytical reconstruction methods are all variants of the filtered back projection (FBP) method [9]. Iterative reconstruction methods formulate the final result as the solution of a linear system [10].
For example, image reconstruction, as a mathematical process, generates tomographic images from X-ray [11] projection data acquired at many different angles in computed tomography (CT). Suppose there is a test image with $n\times n$ pixels, $p$ is the number of X-ray beams, and $t$ is the scanning angle, which ranges from $0^{\circ}$ to $179^{\circ}$ in intervals of $1^{\circ}$. The X-ray beam $X_s$ passing through the test image is shown in the following figure.
In Figure 1, $w^{(i,j)}_{X_s^{(t)}}$ represents the length of the X-ray beam $X_s$ ($s=1,2,\cdots,p$) within the pixel $\{i,j\}$ as it travels along the line $l^{(t)}_{X_s}$ ($t=0,1,\cdots,179$), and $\mu(i,j)$ is the average attenuation coefficient of pixel $\{i,j\}$. The datum $g^{(i,j)}_{X_s^{(t)}}$ is measured when the X-ray beam $X_s^{(t)}$ crosses the pixel $\{i,j\}$ and part of its intensity is absorbed. Thus, the image reconstruction problem amounts to solving the linear system with multiple right-hand sides of the form
$$AX=B, \tag{1.1}$$
where
$$
A=\begin{bmatrix}
w^{(1,1)}_{X_1^{(0)}} & \cdots & w^{(1,n)}_{X_1^{(0)}}\\
\vdots & \ddots & \vdots\\
w^{(n,1)}_{X_1^{(0)}} & \cdots & w^{(n,n)}_{X_1^{(0)}}\\
\vdots & \ddots & \vdots\\
w^{(1,1)}_{X_p^{(179)}} & \cdots & w^{(1,n)}_{X_p^{(179)}}\\
\vdots & \ddots & \vdots\\
w^{(n,1)}_{X_p^{(179)}} & \cdots & w^{(n,n)}_{X_p^{(179)}}
\end{bmatrix},\quad
X=\begin{bmatrix}
\mu(1,1) & \cdots & \mu(1,n)\\
\vdots & \ddots & \vdots\\
\mu(n,1) & \cdots & \mu(n,n)
\end{bmatrix},\quad
B=\begin{bmatrix}
g^{(1,1)}_{X_1^{(0)}} & \cdots & g^{(1,n)}_{X_1^{(0)}}\\
\vdots & \ddots & \vdots\\
g^{(n,1)}_{X_1^{(0)}} & \cdots & g^{(n,n)}_{X_1^{(0)}}\\
\vdots & \ddots & \vdots\\
g^{(1,1)}_{X_p^{(179)}} & \cdots & g^{(1,n)}_{X_p^{(179)}}\\
\vdots & \ddots & \vdots\\
g^{(n,1)}_{X_p^{(179)}} & \cdots & g^{(n,n)}_{X_p^{(179)}}
\end{bmatrix}.
$$
The linear systems with multiple right-hand sides (1.1) can be written as
$$Ax^{(l)}=b^{(l)}, \tag{1.2}$$
where $l=1,2,\cdots,n$. The numerical methods [12,13,14,15] for solving the linear systems (1.2) have matured significantly. Among them, the Kaczmarz method [16] is a classic and effective row-action method. Owing to its simplicity, the Kaczmarz method has been a preferred solution tool in many fields, such as image reconstruction [15,17], computerized tomography [9,18] and signal processing [19,20]. Based on the Kaczmarz method, a series of row-iteration methods have been proposed [21,22,23].
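For orientation, recall the classical Kaczmarz update for a single system $Ax=b$ in (1.2), as used in [16,21]: the current iterate is orthogonally projected onto the hyperplane determined by one row of $A$,
$$x_{k+1}=x_k+\frac{b_i-a_i x_k}{\|a_i\|_2^2}\,a_i^{*},$$
where $a_i$ denotes the $i$th row of $A$ and $b_i$ the $i$th entry of $b$; in the randomized variant of [21], the row index $i$ is sampled with probability proportional to $\|a_i\|_2^2$.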
In order to reduce the computational cost, Censor proposed the block Kaczmarz method [18], Niu [24] presented the greedy block Kaczmarz method, and Chen [25] proposed the randomized double block Kaczmarz method and derived an upper bound on the error estimate in expectation. The approximate solutions $x^{(l)}_k$ ($l=1,\cdots,n$) can be obtained by applying one of the above-mentioned methods to the linear systems (1.2), thereby yielding the approximate solution $X_k=[x^{(1)}_k,\cdots,x^{(n)}_k]$ of the linear system with multiple right-hand sides. In this column-by-column approach, however, the computation and storage requirements increase with the iterations.
Instead of solving each of the $n$ linear systems independently by some iterative method, it is more efficient to solve the linear system with multiple right-hand sides globally. A great deal of research has been carried out along these lines, including the block conjugate gradient method [26], the block generalized minimal residual method [27] and the GMRES seed projection method [28]; see also [29,30,31,32,33,34] and the references therein.
For the problem of image reconstruction, we transform it into solving a linear system with multiple right-hand sides and propose the idea of global iteration. Based on the idea of K-means clustering, we put forward the global randomized block Kaczmarz method, use it to solve the linear system with multiple right-hand sides effectively, and apply it to image reconstruction. Theoretical analysis proves the convergence of the global randomized block Kaczmarz method. Simulation results show that the reconstructed image quality of this method is superior to that of the filtered back projection method.
In this paper, we adopt the following notation. Let $A^*$ be the conjugate transpose of a matrix $A$, $A^{\dagger}$ its Moore-Penrose pseudoinverse, $\lambda_{\min}(A)$ its smallest nonzero eigenvalue, and $\|A\|_2$ and $\|A\|_F$ its spectral and Frobenius norms, respectively. $B^{(i)}$ represents the $i$th column of a matrix $B$. Let $[m]:=\{1,2,\cdots,m\}$; the index set $J=\{J_1,J_2,\cdots,J_q\}\subseteq[m]$ satisfies $J_i\cap J_j=\emptyset$ ($i\neq j$) and $\cup_{i=1}^{q}J_i=[m]$. $A_{J_i}$ is the row submatrix of $A$ indexed by $J_i$. $X_\star=A^{\dagger}B$ denotes the least Euclidean-norm solution of the linear system with multiple right-hand sides (1.1). $\mathbb{E}_k$ indicates the expected value conditional on the first $k$ iterations, that is,
$$\mathbb{E}_k[\cdot]=\mathbb{E}\left[\cdot \mid J_0,J_1,\cdots,J_{k-1}\right],$$
where $J_l$ ($l=0,1,\cdots,k-1$) is the submatrix chosen at the $l$th iteration; it then follows that $\mathbb{E}\big[\mathbb{E}_k[\cdot]\big]=\mathbb{E}[\cdot]$.
K-means clustering [35] is one of the simplest and most widely used unsupervised learning methods for clustering problems. Recently, several Kaczmarz-type methods that embed clustering techniques have appeared; see, e.g., [36,37]. The complete process of the K-means method is given in Method 2.1.
Method 2.1. The K-means method
Input: a data set $X$ containing $m$ data items and the number of clusters $q$. Output: $\rho_i$, $C_i$.
1: Randomly choose $q$ items from $X$ as the initial cluster centers;
2: Repeat Steps 3 and 4 until termination;
3: Assign each data item to the cluster whose centroid is nearest in terms of $\sum_{i=1}^{q}\sum_{x_k\in C_i}\left|1-\dfrac{x_k^{\top}\rho_i}{\|x_k\|_2\|\rho_i\|_2}\right|$;
4: Update the cluster centers $\rho_i$ as the mean value of the data items in each cluster;
5: Return $\rho_i$ and $C_i$, with $\rho_i$ being the cluster center and $C_i$ the cluster, for $i=1,2,\cdots,q$.
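As a rough illustration (not the authors' code), the row partition used below could be produced with MATLAB's kmeans and the cosine distance, which closely matches the criterion in Step 3; kmeans requires the Statistics and Machine Learning Toolbox, and the function and variable names here are our own.

```matlab
% Sketch: partition the m rows of A into q clusters by k-means with the
% cosine distance (Statistics and Machine Learning Toolbox required).
% Rows of A are assumed nonzero; output J is a cell array with J{i}
% containing the row indices of block i.
function J = kmeans_row_partition(A, q)
    labels = kmeans(A, q, 'Distance', 'cosine', 'Replicates', 5);
    J = cell(q, 1);
    for i = 1:q
        J{i} = find(labels == i);   % indices J_i of the block A_{J_i}
    end
end
```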
Using this method to partition the rows of the coefficient matrix of the linear system with multiple right-hand sides (1.1), we obtain
$$A=\begin{bmatrix}A_{J_1}\\ \vdots\\ A_{J_q}\end{bmatrix},\qquad B=\begin{bmatrix}B_{J_1}\\ \vdots\\ B_{J_q}\end{bmatrix}, \tag{2.1}$$
where $A_{J_i}\in\mathbb{C}^{m_{J_i}\times n}$, $B_{J_i}\in\mathbb{C}^{m_{J_i}\times n}$ and $\sum_{i=1}^{q}m_{J_i}=m$. Starting from an initial guess $X_0$, the current iterate $X_k$ is orthogonally projected onto the solution set of $A_{J_{i_k}}X=B_{J_{i_k}}$, where the block index is chosen under the probability criterion
$$\Pr\left(\text{row}=J_{i_k}\right)=\frac{\|A_{J_{i_k}}\|_F^2}{\|A\|_F^2}.$$
The global randomized block Kaczmarz (MGRBK) method for solving linear systems with multiple right-hand sides can be formulated as
$$X_{k+1}=X_k+A_{J_{i_k}}^{\dagger}\left(B_{J_{i_k}}-A_{J_{i_k}}X_k\right), \tag{2.2}$$
where the index of the working submatrix, $J_{i_k}$, is chosen from the set $J=\{J_1,J_2,\cdots,J_q\}$ at random, with probability proportional to $\|A_{J_{i_k}}\|_F^2$. The form of $A_{J_{i_k}}^{\dagger}$ used in our simulation experiments can be found in [38]. The global randomized block Kaczmarz method is then described as follows.
Method 2.2. The global randomized block Kaczmarz method
Input: $A$, $B$, $l$, $q$ and $X_0$. Output: $X_l$.
1: Obtain the blocks $A_{J_i}$, $B_{J_i}$ ($i=1,2,\cdots,q$) by the K-means method;
2: for $k=0,1,2,\cdots,l-1$ do
3: Select $J_{i_k}\in J$ with probability $\Pr(\text{row}=J_{i_k})=1/q$;
4: Compute $X_{k+1}=X_k+A_{J_{i_k}}^{\dagger}\left(B_{J_{i_k}}-A_{J_{i_k}}X_k\right)$;
5: end for
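The following MATLAB sketch (our own illustration, not the authors' code) implements the MGRBK iteration (2.2), assuming the row blocks have already been formed, e.g., by the partition sketch after Method 2.1. The block is sampled with probability proportional to $\|A_{J_i}\|_F^2$ as in the text above; taking uniform weights instead reproduces the $1/q$ selection listed in Method 2.2. The pseudoinverse is formed explicitly with pinv for clarity, which is practical only for moderately sized blocks.

```matlab
% Sketch of the global randomized block Kaczmarz (MGRBK) iteration (2.2).
% A: m-by-n, B: m-by-s, J: cell array of row-index blocks, l: number of iterations.
function X = mgrbk(A, B, J, l)
    q = numel(J);
    X = zeros(size(A, 2), size(B, 2));              % initial guess X_0 = 0
    w = cellfun(@(Ji) norm(A(Ji, :), 'fro')^2, J);  % block weights ||A_{J_i}||_F^2
    w = w(:) / sum(w);                              % Pr(row = J_i) = ||A_{J_i}||_F^2 / ||A||_F^2
    for k = 1:l
        i  = find(rand <= cumsum(w), 1, 'first');   % sample a block index
        AJ = A(J{i}, :);
        X  = X + pinv(AJ) * (B(J{i}, :) - AJ * X);  % block Kaczmarz update (2.2)
    end
end
```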
Note that the index set $J$ is nonempty for every iteration index $k$. According to a large number of numerical experiments, we find that the number of blocks $q$ fluctuates around $m/2000$, where $m$ is the number of rows of the coefficient matrix $A$; hence, a rough estimate is $q\approx m/2000$. For the convergence of the global randomized block Kaczmarz method, we establish the following theorem.
Theorem 2.1. Let the linear system with multiple right-hand sides (1.1), with coefficient matrix $A\in\mathbb{C}^{m\times n}$ and right-hand side $B\in\mathbb{C}^{m\times n}$, be consistent. The iteration sequence $\{X_k\}_{k=0}^{\infty}$ generated by the global randomized block Kaczmarz method, starting from any initial approximation $X_0\in\mathcal{R}(A^*)$, converges to the unique least-norm solution $X_\star=A^{\dagger}B$ in expectation. Moreover, the solution error in expectation for the iteration sequence $\{X_k\}_{k=0}^{\infty}$ obeys
$$\mathbb{E}\,\|X_k-X_\star\|_F^2\le\left(1-\frac{\lambda_{\min}(A^*A)}{\|A\|_F^2}\right)^{k}\|X_0-X_\star\|_F^2. \tag{2.3}$$
Proof. From the iteration scheme of the global randomized block Kaczmarz method, we can straightforwardly obtain
$$X_{k+1}-X_k=A_{J_{i_k}}^{\dagger}\left(B_{J_{i_k}}-A_{J_{i_k}}X_k\right).$$
Since $B_{J_{i_k}}=A_{J_{i_k}}X_\star$, it holds that
$$X_k-X_{k+1}=A_{J_{i_k}}^{\dagger}A_{J_{i_k}}\left(X_k-X_\star\right), \tag{2.4}$$
$$X_{k+1}-X_\star=\left(I_n-A_{J_{i_k}}^{\dagger}A_{J_{i_k}}\right)\left(X_k-X_\star\right). \tag{2.5}$$
From the properties of the Moore-Penrose pseudoinverse, we have
$$\left(X_k-X_\star\right)^*\left(A_{J_{i_k}}^{\dagger}A_{J_{i_k}}\right)^*\left(I_n-A_{J_{i_k}}^{\dagger}A_{J_{i_k}}\right)\left(X_k-X_\star\right)=\left(X_k-X_\star\right)^*\left(A_{J_{i_k}}^{\dagger}A_{J_{i_k}}-A_{J_{i_k}}^{\dagger}A_{J_{i_k}}\right)\left(X_k-X_\star\right)=0.$$
Therefore, the matrix $X_k-X_{k+1}$ is orthogonal to $X_{k+1}-X_\star$ for any $k\ge 0$; in other words, $\left(X_k-X_{k+1}\right)^*\left(X_{k+1}-X_\star\right)=0$. It follows that
$$\|X_k-X_\star\|_F^2=\|X_k-X_{k+1}\|_F^2+\|X_{k+1}-X_\star\|_F^2. \tag{2.6}$$
By Eq (2.4), Eq (2.6) can be rewritten as
$$\|X_{k+1}-X_\star\|_F^2=\|X_k-X_\star\|_F^2-\left\|A_{J_{i_k}}^{\dagger}A_{J_{i_k}}\left(X_k-X_\star\right)\right\|_F^2. \tag{2.7}$$
With this, we obtain
$$\begin{aligned}
\mathbb{E}_k\|X_{k+1}-X_\star\|_F^2
&=\|X_k-X_\star\|_F^2-\mathbb{E}_k\left\|A_{J_{i_k}}^{\dagger}A_{J_{i_k}}\left(X_k-X_\star\right)\right\|_F^2\\
&=\|X_k-X_\star\|_F^2-\sum_{i=1}^{q}\frac{\|A_{J_i}\|_F^2}{\|A\|_F^2}\left\|A_{J_i}^{\dagger}A_{J_i}\left(X_k-X_\star\right)\right\|_F^2\\
&=\|X_k-X_\star\|_F^2-\frac{1}{\|A\|_F^2}\sum_{i=1}^{q}\left\|A_{J_i}\left(X_k-X_\star\right)\right\|_F^2\\
&=\|X_k-X_\star\|_F^2-\frac{1}{\|A\|_F^2}\left\|A\left(X_k-X_\star\right)\right\|_F^2\\
&\le\|X_k-X_\star\|_F^2-\frac{\lambda_{\min}(A^*A)}{\|A\|_F^2}\|X_k-X_\star\|_F^2.
\end{aligned}$$
Here the last inequality is achieved with the use of the estimate
$$\|AC\|_F^2=\operatorname{tr}\left(C^*A^*AC\right)\ge\lambda_{\min}(A^*A)\operatorname{tr}\left(C^*C\right)=\lambda_{\min}(A^*A)\|C\|_F^2,$$
where $C\in\mathbb{C}^{n\times c}$ has its columns in the range of $A^*$; this holds for $C=X_k-X_\star$, since $X_0,X_\star\in\mathcal{R}(A^*)$ and every update in (2.2) lies in $\mathcal{R}(A^*)$. Thus, we can get the estimate
$$\mathbb{E}_k\|X_{k+1}-X_\star\|_F^2\le\left(1-\frac{\lambda_{\min}(A^*A)}{\|A\|_F^2}\right)\|X_k-X_\star\|_F^2,\qquad k=0,1,2,\cdots. \tag{2.8}$$
Finally, by taking the full expectation on both sides of (2.8), we obtain
$$\mathbb{E}\,\|X_{k+1}-X_\star\|_F^2\le\left(1-\frac{\lambda_{\min}(A^*A)}{\|A\|_F^2}\right)\mathbb{E}\,\|X_k-X_\star\|_F^2,\qquad k=0,1,2,\cdots.$$
By induction on the iteration index k, we have
$$\mathbb{E}\,\|X_k-X_\star\|_F^2\le\left(1-\frac{\lambda_{\min}(A^*A)}{\|A\|_F^2}\right)^{k}\|X_0-X_\star\|_F^2.$$
Consequently, the convergence property of the global randomized block Kaczmarz method is proved.
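As a quick numerical sanity check (our own illustration, not part of the original experiments), the contraction factor $1-\lambda_{\min}(A^*A)/\|A\|_F^2$ appearing in (2.3) and (2.8) can be evaluated directly; here $\lambda_{\min}(A^*A)$ is taken as the square of the smallest nonzero singular value of $A$, and the matrix is a small random test problem.

```matlab
% Evaluate the per-iteration contraction factor from Theorem 2.1
% for a small random test matrix (illustrative only).
A   = randn(500, 100);                    % small full-column-rank test matrix
s   = svd(A);                             % singular values of A
lam = min(s(s > 1e-12 * max(s)))^2;       % smallest nonzero eigenvalue of A^*A
rho = 1 - lam / norm(A, 'fro')^2;         % factor in the bounds (2.3) and (2.8)
fprintf('contraction factor rho = %.6f\n', rho);
```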
In this section, we investigate the performance of the global randomized block Kaczmarz (MGRBK) method in image reconstruction through practical experiments. We also compare the image reconstruction results of the MGRBK method and the filtered back projection (FBP) method, where the FBP method is implemented with the MATLAB function iradon. All computations start from the initial matrix $X_0=0$, and the row vectors of the coefficient matrix $A$ are divided into $q$ blocks.
Example 3.1. The MGRBK method is used to solve the linear system with multiple right-hand sides to reconstruct the test image. The test image is obtained with the Image Processing Toolbox function phantom in MATLAB. The Shepp-Logan phantom is shown in the following figure.
For the Shepp-Logan phantom with $100\times 100$ pixels and $p=4$, the corresponding linear system with multiple right-hand sides is $AX=B$, where the coefficient matrix $A$ has dimension $72000\times 100$. We solve this system with the MGRBK method, setting $q=60$ and using 30 iterations. The computational process of image reconstruction is shown in the following figure.
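Assuming $A$ and $B$ have been assembled as described in Section 1 (the construction of the projection matrix is not shown here), this experiment could be reproduced roughly along the following lines; kmeans_row_partition and mgrbk refer to the sketches after Methods 2.1 and 2.2 and are our own illustrative names.

```matlab
% Illustrative driver for Example 3.1; A (72000-by-100) and B (72000-by-100)
% are assumed to be assembled as in Section 1 for the 100x100 phantom with p = 4.
X_true = phantom(100);                                % Image Processing Toolbox
q      = 60;                                          % number of row blocks
iters  = 30;                                          % number of MGRBK iterations
J  = kmeans_row_partition(A, q);                      % Method 2.1 (k-means partition)
Xk = mgrbk(A, B, J, iters);                           % Method 2.2 (MGRBK iterations)
Rres   = norm(B - A*Xk, 'fro')^2 / norm(B, 'fro')^2;  % relative residual (Section 3)
relerr = norm(Xk - X_true, 'fro') / norm(X_true, 'fro');  % error against the phantom
figure; imshow(Xk, []); title('MGRBK reconstruction');
```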
In Figure 3, we can see that the test image can be reconstructed by the global randomized block Kaczmarz method at the 18th iteration. The reconstructed image retains the details of the original image.
In Figure 4, image reconstruction is performed with the global randomized block Kaczmarz method and the filtered back projection method, respectively, and the reconstruction results are compared. From visual inspection of the reconstructed images, the MGRBK method outperforms the FBP method.
Example 3.2. We select the test image from the photo gallery in MATLAB and then solve the linear system with multiple right-hand sides with the MGRBK method to reconstruct it. The test image is shown in the following figure.
For the cameraman image with $200\times 200$ pixels and $p=5$, the corresponding linear system with multiple right-hand sides is $AX=B$, where the coefficient matrix $A$ has dimension $180000\times 200$. We solve this system with the MGRBK method, setting $q=70$ and using 40 iterations. The computational process of image reconstruction is shown in the following figure.
In Figure 6, we can see that the test image can be reconstructed by the global randomized block Kaczmarz method by the 38th iteration. The reconstructed image retains the details of the original image.
In Figure 7, the reconstruction results of the MGRBK method and the FBP method are compared. From the visual inspection of the reconstructed image, the former method is superior to the latter method.
Example 3.3. The test image is obtained with the Image Processing Toolbox function phantom in MATLAB, the same as in Example 3.1. We compare the reconstructed image of the global randomized block Kaczmarz (MGRBK) method with that of a typical iterative method, namely the Kaczmarz method. The reconstructed images of these two iterative methods are shown in the following figure.
From the visual inspection, the reconstructed image obtained by using the MGRBK method has higher resolution. In the MGRBK method, the relative residual (Rres) is defined by
$$\mathrm{Rres}=\frac{\|B-AX_k\|_F^2}{\|B\|_F^2},$$
and the Rres of the MGRBK method is 1.6049e-04, whereas the relative residual of the Kaczmarz method reported in [39] is 0.0015. This indicates that the approximate solution of the MGRBK method is closer to the exact solution; hence, the reconstructed image quality of the MGRBK method is much better.
Iterative reconstruction refers to iterative methods used to reconstruct 2D and 3D images in certain imaging techniques. A variety of reconstruction methods have been investigated to improve the quality of reconstructed images. In this paper, we transform the problem of image reconstruction into solving a linear system with multiple right-hand sides. To solve this system effectively, we propose the global randomized block Kaczmarz method based on the idea of K-means clustering and apply it to image reconstruction. Theoretical analysis proves the convergence of this method. Simulation results show that the reconstructed image quality of this method is better than that of the FBP method.
This research was supported in part by NSFC under grants 11401305 and 11571171.
The authors declare there are no conflicts of interest.
[1] J. A. Fessler, B. P. Sutton, Nonuniform fast Fourier transforms using min-max interpolation, IEEE Trans. Signal Process., 51 (2003), 560–574. https://doi.org/10.1109/TSP.2002.807005
[2] A. C. Kak, M. Slaney, G. Wang, Principles of computerized tomographic imaging, Med. Phys., 29 (2002), 107–107. https://doi.org/10.1118/1.1455742
[3] S. F. Gull, G. J. Daniell, Image reconstruction from incomplete and noisy data, Nature, 272 (1978), 686–690. https://doi.org/10.1038/272686a0
[4] G. L. Zeng, Image reconstruction: a tutorial, Comput. Med. Imaging Graphics, 25 (2001), 97–103. https://doi.org/10.1016/S0895-6111(00)00059-8
[5] F. Natterer, F. Wübbeling, G. Wang, Mathematical methods in image reconstruction, Med. Phys., 29 (2002), 107–108. https://doi.org/10.1118/1.1455744
[6] S. C. Park, M. K. Park, M. G. Kang, Super-resolution image reconstruction: a technical overview, IEEE Signal Process. Mag., 20 (2003), 21–36. https://doi.org/10.1109/MSP.2003.1203207
[7] X. L. Zhao, T. Z. Huang, X. M. Gu, L. J. Deng, Vector extrapolation based Landweber method for discrete ill-posed problems, Math. Probl. Eng., 2017 (2017), 1375716. https://doi.org/10.1155/2017/1375716
[8] S. Helgason, The Radon Transform (2nd ed.), Progress in Mathematics, Springer, Boston, MA, 2007. https://doi.org/10.1007/978-1-4757-1463-0
[9] G. T. Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections, Springer, Dordrecht, 2009.
[10] A. Rahman, System of Linear Equations in Computed Tomography (CT), MSc Thesis, BRAC University, Dhaka, Bangladesh, 2018.
[11] T. G. Feeman, X-rays, Springer, New York, 2010. https://doi.org/10.1007/978-0-387-92712-1_1
[12] O. Axelsson, Iterative Solution Methods, Cambridge University Press, Cambridge, 1994.
[13] Z. Z. Bai, C. H. Jin, Column-decomposed relaxation methods for the overdetermined systems of linear equations, Int. J. Appl. Math. Comput. Sci., 13 (2003), 71–82.
[14] C. Brezinski, Projection Methods for Systems of Equations, Elsevier Science B.V., Amsterdam, 1997.
[15] Y. Censor, G. T. Herman, J. Ming, A note on the behavior of the randomized Kaczmarz algorithm of Strohmer and Vershynin, J. Fourier Anal. Appl., 15 (2009), 431–436. https://doi.org/10.1007/s00041-009-9077-x
[16] S. Kaczmarz, Angenäherte Auflösung von Systemen linearer Gleichungen, Bull. Int. Acad. Pol. Sci. Lett. Ser. A, 35 (1937), 355–357.
[17] M. A. Brooks, A Survey of Algebraic Algorithms in Computerized Tomography, MSc Thesis, University of Ontario Institute of Technology, Oshawa, Ontario, 2010.
[18] Y. Censor, Parallel application of block-iterative methods in medical imaging and radiation therapy, Math. Program., 42 (1988), 307–325. https://doi.org/10.1007/BF01589408
[19] D. A. Lorenz, S. Wenger, F. Schöpfer, A sparse Kaczmarz solver and a linearized Bregman method for online compressed sensing, in 2014 IEEE International Conference on Image Processing (ICIP), (2014), 1347–1351. https://doi.org/10.1109/ICIP.2014.7025269
[20] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20 (2004), 103–120. https://doi.org/10.1088/0266-5611/20/1/006
[21] T. Strohmer, R. Vershynin, A randomized Kaczmarz algorithm with exponential convergence, J. Fourier Anal. Appl., 15 (2009), 262–278. https://doi.org/10.1007/s00041-008-9030-4
[22] Z. Z. Bai, W. T. Wu, On greedy randomized Kaczmarz method for solving large sparse linear systems, SIAM J. Sci. Comput., 40 (2018), A592–A606. https://doi.org/10.1137/17M1137747
[23] Z. Z. Bai, W. T. Wu, On relaxed greedy randomized Kaczmarz methods for solving large sparse linear systems, Appl. Math. Lett., 83 (2018), 21–26. https://doi.org/10.1016/j.aml.2018.03.008
[24] Y. Q. Niu, B. Zheng, A greedy block Kaczmarz algorithm for solving large-scale linear systems, Appl. Math. Lett., 104 (2020), 106294. https://doi.org/10.1016/j.aml.2020.106294
[25] J. Q. Chen, Z. D. Huang, On the error estimate of the randomized double block Kaczmarz method, Appl. Math. Comput., 370 (2020), 124907. https://doi.org/10.1016/j.amc.2019.124907
[26] D. P. O'Leary, The block conjugate gradient algorithm and related methods, Linear Algebra Appl., 29 (1980), 293–322. https://doi.org/10.1016/0024-3795(80)90247-5
[27] V. Simoncini, E. Gallopoulos, Convergence properties of block GMRES and matrix polynomials, Linear Algebra Appl., 247 (1996), 97–119. https://doi.org/10.1016/0024-3795(95)00093-3
[28] V. Simoncini, E. Gallopoulos, An iterative method for nonsymmetric systems with multiple right-hand sides, SIAM J. Sci. Comput., 16 (2006), 917–933. https://doi.org/10.1137/0916053
[29] V. Simoncini, A stabilized QMR version of block BiCG, SIAM J. Matrix Anal. Appl., 18 (2006), 419–434. https://doi.org/10.1137/S0895479894264673
[30] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, PA, 2003.
[31] R. W. Freund, M. Malhotra, A block QMR algorithm for non-Hermitian linear systems with multiple right-hand sides, Linear Algebra Appl., 254 (1997), 119–157. https://doi.org/10.1016/S0024-3795(96)00529-0
[32] A. El Guennouni, K. Jbilou, H. Sadok, A block version of BiCGSTAB for linear systems with multiple right-hand sides, Electron. Trans. Numer. Anal., 16 (2003), 129–142.
[33] H. Dai, Block bidiagonalization methods for multiple nonsymmetric linear systems, Numer. Math. J. Chin. Univ., 02 (2001), 209–225.
[34] G. D. Gu, A seed method for solving nonsymmetric linear systems with multiple right-hand sides, Int. J. Comput. Math., 79 (2002), 307–326. https://doi.org/10.1080/00207160211931
[35] A. K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognit. Lett., 31 (2009), 651–666. https://doi.org/10.1016/j.patrec.2009.09.011
[36] Y. J. Li, K. C. Mo, H. S. Ye, Accelerating random Kaczmarz algorithm based on clustering information, in Proceedings of the AAAI Conference on Artificial Intelligence, 30 (2016), 1823–1829.
[37] X. L. Jiang, K. Zhang, J. F. Yin, Randomized block Kaczmarz methods with k-means clustering for solving large linear systems, J. Comput. Appl. Math., 403 (2022), 113828. https://doi.org/10.1016/j.cam.2021.113828
[38] J. R. Sendra, J. Sendra, Computation of Moore-Penrose generalized inverses of matrices with meromorphic function entries, Appl. Math. Comput., 313 (2017), 355–366. https://doi.org/10.1016/j.amc.2017.06.007
[39] G. Vijayalakshmi, P. Vindhya, Comparison of algebraic reconstruction methods in computed tomography, Int. J. Comput. Sci. Inf. Technol., 5 (2014), 6007–6009.