Research article

On randomized multiple row-action methods for linear feasibility problems

  • In this paper, we propose two randomized methods for solving linear feasibility problems: a multiple row-action method (RMR) based on partial rows of the residual vector, and a generalized variant (GRMR) that uses history information when forming the current update. By combining the previous and current updates through a relaxation parameter ξ, the GRMR method unifies various RMR-type algorithms. A thorough convergence analysis for the proposed methods is provided. The theoretical results show that the convergence rate of the GRMR method with 0 ≤ ξ ≤ 1 is never better than that of the RMR method. Therefore, a global linear rate for the GRMR method is established for −1 < ξ ≤ 0. Finally, numerical experiments on both randomly generated and real-world data show that our algorithms outperform the original methods in terms of computing time and iteration counts. In particular, with appropriately chosen parameters, the GRMR method is a competitive row-action method for solving linear feasibility problems.

    Citation: Hui Song, Wendi Bao, Lili Xing, Weiguo Li. On randomized multiple row-action methods for linear feasibility problems[J]. Networks and Heterogeneous Media, 2024, 19(4): 1448-1469. doi: 10.3934/nhm.2024062




    Consider solving the large-scale linear inequalities

    $$Ax \le b, \tag{1.1}$$

    where $A\in\mathbb{R}^{m\times n}$ is an m-by-n ($m>n$) real coefficient matrix, $b\in\mathbb{R}^m$ is an m-dimensional right-hand side, and $x\in\mathbb{R}^n$ is an unknown n-dimensional vector. We confine the scope of this work to the regime $m\gg n$, where iterative methods are typically employed. We denote the feasible region of Eq (1.1) by $S=\{x\in\mathbb{R}^n \mid Ax\le b\}$. Throughout this paper, we assume that the coefficient matrix $A$ has no zero rows and $S\neq\emptyset$.
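    Throughout, the amount by which a point $x$ violates (1.1) is measured componentwise by the positive part of the residual, $(Ax-b)_+$; this quantity drives all of the methods discussed below. A minimal MATLAB sketch of this feasibility measure (the function name is illustrative, not from the paper):

        % Infeasibility of a point x for Ax <= b, measured by ||(Ax - b)_+||_2.
        % This value is zero if and only if x lies in S = {x : Ax <= b}.
        function r = feasibility_residual(A, b, x)
            r = norm(max(A*x - b, 0));
        end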

    The Kaczmarz method [1], also known as the algebraic reconstruction technique (ART) [2], is one of the most popular solvers for linear systems of equations. Originally proposed by the Polish mathematician Stefan Kaczmarz in 1937, it has found a wide range of applications in fields such as image reconstruction, medical scanners, computerized tomography, and digital signal processing. At each iteration, the Kaczmarz method uses a cyclic rule to select a row of the matrix and projects the current iterate onto the corresponding hyperplane. Let $A_{i,:}$ stand for the $i$-th row of the coefficient matrix $A$ and $b=(b_1,b_2,\ldots,b_m)^T$; the Kaczmarz (ART) iteration then reads

    $$x_k = x_{k-1} + \frac{b_i - \langle (A_{i,:})^T,\, x_{k-1}\rangle}{\|A_{i,:}\|_2^2}\,(A_{i,:})^T, \qquad k=1,2,\ldots, \tag{1.2}$$

    where $i=(k \bmod m)+1$, $\langle\cdot,\cdot\rangle$ is the Euclidean inner product, and $\|\cdot\|_2$ is the corresponding norm in $\mathbb{R}^n$. Many applications have demonstrated that selecting the working row of $A$ at random, rather than cyclically, can significantly enhance the convergence rate. Strohmer and Vershynin [3] proposed the randomized Kaczmarz (RK) method, which selects the working row with probability proportional to its squared Euclidean norm, and proved its expected linear convergence rate in 2009. Bai and Wu [4] then discussed an improved estimate of the convergence rate of the randomized Kaczmarz method. Subsequently, many iterative methods based on the RK method have been proposed to further improve its convergence rate and computational efficiency; variants with different selection rules for the working rows and various generalizations of the RK method are explored in [5, 6, 7, 8]. In particular, the study of block Kaczmarz algorithms has deepened. The block Kaczmarz method was first proposed by Elfving [9]. Unlike the single-projection Kaczmarz method, the block Kaczmarz method solves multiple equations at each iteration. Needell and Tropp [10] proposed the randomized block Kaczmarz (RBK) method. The randomized (block) Kaczmarz methods must traverse all rows of the coefficient matrix, which, for large matrices, leads to substantial computation and storage. To avoid computing pseudoinverses, Gower and Richtárik [11] proposed the Gaussian Kaczmarz (GK) method, defined by

    $$x_{k+1} = x_k - \frac{\eta^T(Ax_k-b)}{\|A^T\eta\|_2^2}\,A^T\eta, \tag{1.3}$$

    which can be regarded as another kind of block Kaczmarz method: it writes the increment directly as a linear combination of the columns of $A^T$ at each iteration, where $\eta$ is a Gaussian vector with mean $0\in\mathbb{R}^m$ and covariance matrix $I\in\mathbb{R}^{m\times m}$, i.e., $\eta\sim N(0,I)$. Here $I$ denotes the identity matrix. In Eq (1.3), all columns of $A^T$ are used. The expected linear convergence rate was analyzed in [11] in the case that $A$ has full column rank. Recently, Chen and Huang [12] proposed a fast deterministic block Kaczmarz (FDBK) method, in which an index set $U_k$ is first computed according to the greedy index selection strategy of [13] and then the vector $\eta_k$ is constructed by

    $$\eta_k = \sum_{i\in U_k}(b_i - A_{i,:}x_k)\,e_i. \tag{1.4}$$

    Here $e_i$ denotes the $i$-th column of the identity matrix of order $m$. A relaxed version of this method was given in [14]. In the methods of [11, 12, 13, 14], no pseudoinverse computations are needed; however, to compute $U_k$, one has to scan the whole residual vector at each iteration. In fact, Eq (1.3) can be viewed as a special case of the following prototype projection iteration (for more details, see Section 5.1.2 in [15]),

    $$x_{k+1} = x_k + V\,(W^TAV)^{+}\,W^T(b-Ax_k), \tag{1.5}$$

    for $k=0,1,2,\ldots$, where $V$ and $W$ are two parameter matrices; this prototype was proposed by Saad based on Petrov-Galerkin (PG) conditions. It is easy to see that different choices of $V$ and $W$ yield different popular iterations as special cases, including multiple row-action schemes. For example, letting $W=\eta$ and $V=A^TW$ with $\eta\in\mathbb{R}^m$ any nonzero vector, the iteration (1.5) reduces to the form (1.3).
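    To make the last claim concrete, the substitution can be written out as a short check (it uses only Eq (1.5) and the stated choices of $V$ and $W$):

    $$W=\eta,\quad V=A^T\eta\ \Longrightarrow\ x_{k+1}=x_k+A^T\eta\,\bigl(\eta^TAA^T\eta\bigr)^{+}\,\eta^T(b-Ax_k)=x_k+\frac{\eta^T(b-Ax_k)}{\|A^T\eta\|_2^2}\,A^T\eta,$$

    which coincides with Eq (1.3), since $\eta^TAA^T\eta=\|A^T\eta\|_2^2$.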

    For solving the linear feasibility problem (1.1), Leventhal and Lewis [16] extended the randomized Kaczmarz method. At each iteration $k$, if the inequality is already satisfied for the selected row $i$, then set $x_k=x_{k-1}$; if it is not satisfied, the previous iterate is projected onto the hyperplane $\{x \mid \langle (A_{i,:})^T, x\rangle = b_i\}$. The update rule for this algorithm is thus

    $$x_k = x_{k-1} - \frac{\bigl(\langle (A_{i,:})^T,\, x_{k-1}\rangle - b_i\bigr)_+}{\|A_{i,:}\|_2^2}\,(A_{i,:})^T, \qquad k=1,2,\ldots. \tag{1.6}$$

    One can see that $x_k$ in Eq (1.6) is exactly the projection of $x_{k-1}$ onto the set $\{x \mid \langle (A_{i,:})^T, x\rangle \le b_i\}$. Leventhal and Lewis [16] (Theorem 4.3) proved that this randomized projection (RP) method converges linearly in expectation to a feasible solution. Recently, by combining the ideas of the Kaczmarz and Motzkin methods [17, 18], De Loera et al. proposed the sampling Kaczmarz-Motzkin (SKM) method for solving the linear feasibility problem (1.1) in [19]. Later, Morshed et al. [20] developed a generalized framework, namely the generalized sampling Kaczmarz-Motzkin (GSKM) method, that extends the SKM algorithm and proves the existence of a family of SKM-type methods. In addition, they proposed a Nesterov-type acceleration of the SKM method called probably accelerated sampling Kaczmarz-Motzkin (PASKM), which provides a bridge between Nesterov-type acceleration in machine learning and sampling Kaczmarz methods for solving linear feasibility problems.
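    As a reference point for the methods proposed below, a minimal MATLAB sketch of the randomized projection update (1.6) might look as follows; rows are sampled with probability proportional to their squared norms, as in [16], and the function name, stopping rule, and tolerance are illustrative rather than taken from the paper:

        % Randomized projection (RP) sketch for Ax <= b, following Eq (1.6).
        function x = rp_feasibility(A, b, x0, maxit, tol)
            p = sum(A.^2, 2);                      % squared row norms
            cp = cumsum(p / sum(p));  cp(end) = 1; % sampling distribution (guard rounding)
            x = x0;
            for k = 1:maxit
                i = find(rand <= cp, 1, 'first');  % pick a row index at random
                viol = max(A(i,:)*x - b(i), 0);    % (<A_i, x> - b_i)_+
                if viol > 0
                    x = x - viol / norm(A(i,:))^2 * A(i,:)';
                end
                if norm(max(A*x - b, 0)) <= tol    % feasibility residual
                    return;
                end
            end
        end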

    In this paper, inspired by [13, 20], we develop a randomized multiple row-action (RMR) method for the linear feasibility problem (1.1). Partial rows indexed by $U_k$ are used, instead of the greedy selection of [13], to reduce the computation on the residual vector in the RMR method. Moreover, by using history information in updating the current iterate, we establish a generalized randomized multiple row-action (GRMR) method. This general framework provides an ideal platform for researchers to experiment with a wide range of iterative projection methods and to design efficient algorithms for optimization problems in areas such as artificial intelligence and machine learning. We emphasize that our algorithms are pseudoinverse-free and therefore differ from projection-based block algorithms. We prove the linear convergence of our algorithms in the mean-square sense. Numerical results are presented to illustrate the efficiency of our algorithms.

    The rest of the paper is organized as follows. In Section 2, the notations and preliminaries are provided. In Section 3, we introduce the RMR method and analyze its convergence properties. Experimental results on both randomly generated and real-world data are reported and discussed in Section 4. Finally, we present some conclusions in Section 5.

    Throughout the paper, for any random variables $\xi$ and $\zeta$, we use $\mathbb{E}[\xi]$ and $\mathbb{E}[\xi\,|\,\zeta]$ to denote the expectation of $\xi$ and the conditional expectation of $\xi$ given $\zeta$. For an integer $m\ge1$, let $[m]:=\{1,\ldots,m\}$. For any real matrix $A$, we use $a_i$, $a^j$, $a_{i,j}$, $A^T$, $A^{\dagger}$, $\|A\|_2$, $\|A\|_F$, and $\operatorname{Range}(A)$ to denote the $i$-th row, the $j$-th column, the $(i,j)$-th entry, the transpose, the Moore-Penrose pseudoinverse, the spectral norm, the Frobenius norm, and the column space of $A$, respectively. The nonzero singular values of a matrix $A$ are $\sigma_{\max}(A)=\sigma_1(A)\ge\sigma_2(A)\ge\cdots\ge\sigma_r(A):=\sigma_{\min}(A)>0$, where $r$ is the rank of $A$, and we use $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ to denote the largest and smallest nonzero singular values of $A$. We have $\|A\|_2=\sigma_1(A)$ and $\|A\|_F=\sqrt{\sum_{i=1}^{r}\sigma_i^2(A)}$. A matrix $A$ with $m$ rows and $n$ columns belongs to $\mathbb{R}^{m\times n}$; the corresponding compact singular value decomposition of $A\in\mathbb{R}^{m\times n}$ is denoted by $A=UDV^T$, where $U$ and $V$ have orthonormal columns of appropriate size and $D$ is the nonsingular diagonal matrix with the singular values on the diagonal. Let $P_S(x)$ be the projection of $x$ onto the nonempty closed convex set $S$; that is, $P_S(x)$ is the vector $y$ that solves $\min_{z\in S}\|x-z\|_2$. Additionally, define the distance from $x$ to a set $S$ by $d(x,S)=\min_{z\in S}\|x-z\|_2=\|x-P_S(x)\|_2$, as in [21]. For any $c\in\mathbb{R}$ and $u\in\mathbb{R}^n$, we define $c_+=\max\{0,c\}$ and $u_+=((u_1)_+,\ldots,(u_n)_+)^T$. For an index set $\tau$, we use $A(\tau,:)$, $A(:,\tau)$, $v(\tau)$, and $|\tau|$ to denote the row and column submatrices of $A$ indexed by $\tau$, the subvector of $v$ with component indices listed in $\tau$, and the cardinality of the set $\tau$, respectively. We use $I_n$ to denote the $n\times n$ identity matrix, and simply $I$ when its size is clear from the context. We use $e_i=I(:,i)$ to denote the $i$-th column of the identity matrix.

    Lemma 1 (Hoffman [22]). Let $x\in\mathbb{R}^n$ and let $S$ be the feasible region of the linear feasibility problem (1.1). There exists a constant $L>0$ such that the following inequality holds:

    $$\|x-P_S(x)\|_2^2 \le L^2\,\|(Ax-b)_+\|_2^2. \tag{2.1}$$

    Lemma 1 is a famous result from Hoffman's work on systems of linear inequalities. The constant $L$ is called the Hoffman constant. For a consistent system of equations (i.e., there exists a unique $x$ such that $Ax=b$), $L$ can be expressed in terms of the smallest singular value of the matrix $A$, i.e.,

    $$L^2 = \|A^{-1}\|_2^2 = \frac{1}{\sigma_{\min}^2(A)}.$$

    Lemma 2 ([20]). For any $x\in\mathbb{R}^n$ and $\bar{x}\in S$, the following relation holds:

    $$d(x,S)^2 = \|x-P_S(x)\|_2^2 \le \|x-\bar{x}\|_2^2. \tag{2.2}$$

    In this paper, for any $\phi_1,\phi_2\ge0$, the following parameters are defined:

    $$\phi=\frac{-\phi_1+\sqrt{\phi_1^2+4\phi_2}}{2},\quad \rho=\phi+\phi_1,\quad R_1=\frac{1+\phi}{\phi+\rho},\quad R_2=\frac{1-\rho}{\phi+\rho},\quad R_3=\frac{\phi_2+\rho}{\phi+\rho},\quad R_4=\frac{\phi-\phi_2}{\phi+\rho}. \tag{2.3}$$

    Lemma 3 ([20]). Let $\{G_k\}$ be a non-negative real sequence satisfying the following relation:

    $$G_{k+1}\le\phi_1 G_k+\phi_2 G_{k-1},\qquad k\ge1,\qquad G_1=G_0\ge0.$$

    If $\phi_1,\phi_2\ge0$ and $\phi_1+\phi_2<1$, then the following bounds hold:

    1. Let $\phi$ be the largest root of $\phi^2+\phi_1\phi-\phi_2=0$; then

    $$G_{k+1}\le(1+\phi)\,(\phi+\phi_1)^{k}\,G_0,\qquad k\ge1.$$

    2. Define $\rho=\phi+\phi_1$; then we have the following:

    $$G_{k+1}+G_k\le\begin{cases}\bigl[R_1\rho^{k+1}+R_2\phi^{k+1}+R_1\rho^{k}-R_2\phi^{k}\bigr]G_0, & k\ \text{even},\\[2pt]\bigl[R_3\rho^{k}-R_4\phi^{k}+R_3\rho^{k-1}+R_4\phi^{k-1}\bigr]G_0, & k\ \text{odd},\end{cases}$$

    where $0\le\phi<1$ and $0<\rho<1$.

    Lemma 4 ([20]). Let the real sequences $H_k\ge0$ and $F_k\ge0$ satisfy the following recurrence relation:

    $$\begin{bmatrix}H_{k+1}\\F_{k+1}\end{bmatrix}\le\begin{bmatrix}\Pi_1 & \Pi_2\\ \Pi_3 & \Pi_4\end{bmatrix}\begin{bmatrix}H_k\\F_k\end{bmatrix}, \tag{2.4}$$

    where $\Pi_1,\Pi_2,\Pi_3,\Pi_4\ge0$ are such that the following relations

    $$\Pi_1\Pi_4-\Pi_2\Pi_3\ge0,\qquad \Pi_1+\Pi_4<1+\min\{1,\ \Pi_1\Pi_4-\Pi_2\Pi_3\} \tag{2.5}$$

    hold. Then the sequences {Hk} and {Fk} converge and the following result holds:

    $$\begin{bmatrix}H_{k+1}\\F_{k+1}\end{bmatrix}\le\begin{bmatrix}\Pi_1 & \Pi_2\\ \Pi_3 & \Pi_4\end{bmatrix}^{k}\begin{bmatrix}H_1\\F_1\end{bmatrix}=\begin{bmatrix}\Gamma_3(\Gamma_1\rho_2^{k}-\Gamma_2\rho_1^{k}) & \Gamma_1\Gamma_2\Gamma_3(\rho_1^{k}-\rho_2^{k})\\ \Gamma_3(\rho_2^{k}-\rho_1^{k}) & \Gamma_3(\Gamma_1\rho_1^{k}-\Gamma_2\rho_2^{k})\end{bmatrix}\begin{bmatrix}H_1\\F_1\end{bmatrix},$$

    where

    $$\Gamma_1=\frac{\Pi_1-\Pi_4+\sqrt{(\Pi_1-\Pi_4)^2+4\Pi_2\Pi_3}}{2\Pi_3},\qquad \Gamma_2=\frac{\Pi_1-\Pi_4-\sqrt{(\Pi_1-\Pi_4)^2+4\Pi_2\Pi_3}}{2\Pi_3},\qquad \Gamma_3=\frac{\Pi_3}{\sqrt{(\Pi_1-\Pi_4)^2+4\Pi_2\Pi_3}},$$
    $$\rho_1=\frac{1}{2}\Bigl[\Pi_1+\Pi_4-\sqrt{(\Pi_1-\Pi_4)^2+4\Pi_2\Pi_3}\Bigr],\qquad \rho_2=\frac{1}{2}\Bigl[\Pi_1+\Pi_4+\sqrt{(\Pi_1-\Pi_4)^2+4\Pi_2\Pi_3}\Bigr], \tag{2.6}$$

    and $\Gamma_1,\Gamma_3\ge0$ and $0\le\rho_1\le\rho_2<1$.

    In this paper, for solving the linear feasibility problem (1.1), by combining the iteration schemes (1.3) and (1.6) with the parameter choice $W=\eta_k$, where $\eta_k$ is constructed as in Eq (1.4), we propose a randomized multiple row-action (RMR) method and its generalized variant (GRMR), described in Algorithms 1 and 2, respectively.

    Algorithm 1: The RMR method for $Ax\le b$
    1: Input: partition $\{U_i\}_{i=1}^{s}$, initial vector $x_0\in\mathbb{R}^n$, stepsize $0<\alpha<2$, and maximum iteration number $l$.
    2: for $k = 1, 2, \ldots, l-1$
    3: Pick $i_k\in[s]$ with probability $\|A_{U_{i_k},:}\|_F^2/\|A\|_F^2$;
    4: Compute $x_{k+1}=x_k-\alpha\dfrac{\eta_k^T(Ax_k-b)_+}{\|A^T\eta_k\|_2^2}A^T\eta_k$, where $\eta_k=\sum_{i\in U_{i_k}}(A_{i,:}x_k-b_i)_+\,e_i$;
    5: End for
    6: Output $x_l$.
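    A minimal MATLAB sketch of Algorithm 1 is given below. Here U is a cell array {U_1, ..., U_s} holding the row partition, the stopping test uses the RSE criterion from Section 4, and the function and variable names are illustrative, not part of the paper:

        % RMR sketch for Ax <= b (Algorithm 1).
        function x = rmr(A, b, U, x0, alpha, maxit, tol)
            s = numel(U);
            p = zeros(s, 1);
            for j = 1:s
                p(j) = norm(A(U{j},:), 'fro')^2;   % ||A_{U_j,:}||_F^2
            end
            cp = cumsum(p / sum(p));  cp(end) = 1; % block sampling distribution
            x = x0;
            for k = 1:maxit
                j  = find(rand <= cp, 1, 'first'); % pick block with prob ||A_{U_j,:}||_F^2 / ||A||_F^2
                Uj = U{j};
                r  = max(A(Uj,:)*x - b(Uj), 0);    % (A_{U_j,:} x - b_{U_j})_+
                g  = A(Uj,:)' * r;                 % A^T eta_k, since eta_k = sum_i (A_i x - b_i)_+ e_i
                ng = norm(g)^2;
                if ng > 0
                    x = x - alpha * (r' * r) / ng * g;   % eta_k^T (A x - b)_+ = ||r||_2^2
                end
                if norm(max(A*x - b, 0)) <= tol    % RSE stopping criterion
                    return;
                end
            end
        end

    Note that no pseudoinverse is formed: each step only needs one product with the block $A_{U_{i_k},:}$ and one with its transpose.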

    Before we delve into the main theorems, we give an important lemma as follows:

    Lemma 5. Assume that $\eta_k=\sum_{i\in U_{i_k}}(A_{i,:}x-b_i)_+\,e_i$ with the partition $\{U_{i_k}\}_{i_k=1}^{s}$, $\beta_{\max}^U:=\max_{j\in[s]}\{\sigma_{\max}^2(A_{U_j,:})/\|A_{U_j,:}\|_F^2\}$, and $\beta_{\min}^U:=\min_{j\in[s]}\{\sigma_{\min}^2(A_{U_j,:})/\|A_{U_j,:}\|_F^2\}$. Then, for any $x\in\mathbb{R}^n$, the following relation holds:

    $$\mu_1\,\|x-P_S(x)\|_2^2\ \le\ \mathbb{E}_k\!\left[\frac{|\eta_k^T(Ax-b)_+|^2}{\|A^T\eta_k\|_2^2}\right]\ \le\ \mu_2\,\|x-P_S(x)\|_2^2, \tag{3.1}$$

    where $0<\mu_1=\dfrac{1}{\beta_{\max}^U\,\|A\|_F^2\,L^2}\le\mu_2=\min\Bigl\{1,\ \dfrac{\sigma_{\max}^2(A)}{\beta_{\min}^U\,\|A\|_F^2}\Bigr\}\le1$.

    Proof. From the definition of $\eta_k$ and the relation $A_{U_{i_k},:}x-b_{U_{i_k}}\le A_{U_{i_k},:}(x-P_S(x))$ (which follows from $AP_S(x)\le b$), we obtain

    $$\begin{aligned}
    \frac{|\eta_k^T(Ax-b)_+|^2}{\|A^T\eta_k\|_2^2}
    &=\frac{\bigl|\bigl(\sum_{i\in U_{i_k}}(A_{i,:}x-b_i)_+e_i\bigr)^T(Ax-b)_+\bigr|^2}{\|A^T\eta_k\|_2^2}
    =\frac{\bigl|\bigl(I_{:,U_{i_k}}(A_{U_{i_k},:}x-b_{U_{i_k}})_+\bigr)^T(Ax-b)_+\bigr|^2}{\|A^T\eta_k\|_2^2}\\
    &=\frac{\bigl|\bigl((A_{U_{i_k},:}x-b_{U_{i_k}})_+\bigr)^T(A_{U_{i_k},:}x-b_{U_{i_k}})_+\bigr|^2}{\|A^T\eta_k\|_2^2}
    =\frac{\bigl|\bigl((A_{U_{i_k},:}x-b_{U_{i_k}})_+\bigr)^T(A_{U_{i_k},:}x-b_{U_{i_k}})\bigr|^2}{\|A^T\eta_k\|_2^2}\\
    &\le\frac{\bigl|\bigl((A_{U_{i_k},:}x-b_{U_{i_k}})_+\bigr)^T A_{U_{i_k},:}(x-P_S(x))\bigr|^2}{\|A^T\eta_k\|_2^2}
    =\frac{\bigl|\bigl(A^T I_{:,U_{i_k}}(A_{U_{i_k},:}x-b_{U_{i_k}})_+\bigr)^T(x-P_S(x))\bigr|^2}{\|A^T\eta_k\|_2^2}\\
    &\le\frac{\|A^T\eta_k\|_2^2\,\|x-P_S(x)\|_2^2}{\|A^T\eta_k\|_2^2}
    =\|x-P_S(x)\|_2^2.
    \end{aligned} \tag{3.2}$$

    Since $(A_{U_{i_k},:}x-b_{U_{i_k}})_+\in\operatorname{Range}(A_{U_{i_k},:})$, we also have $\|A_{U_{i_k},:}^T(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2\ge\sigma_{\min}^2(A_{U_{i_k},:})\,\|(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2$, which yields the inequality in the following relation:

    $$\frac{|\eta_k^T(Ax-b)_+|^2}{\|A^T\eta_k\|_2^2}
    =\frac{\bigl|\bigl(\sum_{i\in U_{i_k}}(A_{i,:}x-b_i)_+e_i\bigr)^T(Ax-b)_+\bigr|^2}{\|A_{U_{i_k},:}^T(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}
    =\frac{\|(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^4}{\|A_{U_{i_k},:}^T(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}
    \le\frac{\|(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}{\sigma_{\min}^2(A_{U_{i_k},:})}. \tag{3.3}$$

    Now, taking the expectation conditional on the first k iterations on both sides of Eq (3.3), we have

    $$\begin{aligned}
    \mathbb{E}_k\!\left[\frac{|\eta_k^T(Ax-b)_+|^2}{\|A^T\eta_k\|_2^2}\right]
    &=\mathbb{E}_k\!\left[\|A_{U_{i_k},:}\|_F^2\,\frac{|\eta_k^T(Ax-b)_+|^2}{\|A^T\eta_k\|_2^2}\,\frac{1}{\|A_{U_{i_k},:}\|_F^2}\right]
    \le\mathbb{E}_k\!\left[\|A_{U_{i_k},:}\|_F^2\,\frac{\|(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}{\sigma_{\min}^2(A_{U_{i_k},:})}\,\frac{1}{\|A_{U_{i_k},:}\|_F^2}\right]\\
    &\le\frac{1}{\beta_{\min}^U}\,\mathbb{E}_k\!\left[\frac{\|(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}{\|A_{U_{i_k},:}\|_F^2}\right]
    =\frac{1}{\beta_{\min}^U}\sum_{i_k=1}^{s}\frac{\|A_{U_{i_k},:}\|_F^2}{\|A\|_F^2}\,\frac{\|(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}{\|A_{U_{i_k},:}\|_F^2}\\
    &=\frac{\|(Ax-b)_+\|_2^2}{\beta_{\min}^U\,\|A\|_F^2}
    \le\frac{\|(Ax-AP_S(x))_+\|_2^2}{\beta_{\min}^U\,\|A\|_F^2}
    \le\frac{\|Ax-AP_S(x)\|_2^2}{\beta_{\min}^U\,\|A\|_F^2}
    \le\frac{\sigma_{\max}^2(A)}{\beta_{\min}^U\,\|A\|_F^2}\,\|x-P_S(x)\|_2^2,
    \end{aligned} \tag{3.4}$$

    where the third inequality comes from $AP_S(x)\le b$ and the last inequality follows from the fact that $\|Av\|_2^2\le\sigma_{\max}^2(A)\,\|v\|_2^2$ for any $v\in\mathbb{R}^n$.

    Meanwhile, from Eq (3.2), it is easy to see that the following relation exists,

    $$\mathbb{E}_k\!\left[\frac{|\eta_k^T(Ax-b)_+|^2}{\|A^T\eta_k\|_2^2}\right]\le\|x-P_S(x)\|_2^2. \tag{3.5}$$

    Therefore, with the use of Eqs (3.4) and (3.5), there holds that

    $$\mathbb{E}_k\!\left[\frac{|\eta_k^T(Ax-b)_+|^2}{\|A^T\eta_k\|_2^2}\right]\le\mu_2\,\|x-P_S(x)\|_2^2. \tag{3.6}$$

    Similarly, we have

    $$\frac{|\eta_k^T(Ax-b)_+|^2}{\|A^T\eta_k\|_2^2}
    =\frac{\bigl|\bigl(\sum_{i\in U_{i_k}}(A_{i,:}x-b_i)_+e_i\bigr)^T(Ax-b)_+\bigr|^2}{\|A_{U_{i_k},:}^T(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}
    \ge\frac{\|(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}{\sigma_{\max}^2(A_{U_{i_k},:})}.$$

    Then, taking the conditional expectation on the above inequalities, we obtain

    $$\begin{aligned}
    \mathbb{E}_k\!\left[\frac{|\eta_k^T(Ax-b)_+|^2}{\|A^T\eta_k\|_2^2}\right]
    &\ge\mathbb{E}_k\!\left[\frac{\|(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}{\sigma_{\max}^2(A_{U_{i_k},:})}\right]
    =\mathbb{E}_k\!\left[\|A_{U_{i_k},:}\|_F^2\,\frac{\|(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}{\sigma_{\max}^2(A_{U_{i_k},:})}\,\frac{1}{\|A_{U_{i_k},:}\|_F^2}\right]\\
    &\ge\frac{1}{\beta_{\max}^U}\sum_{i_k=1}^{s}\frac{\|A_{U_{i_k},:}\|_F^2}{\|A\|_F^2}\,\frac{\|(A_{U_{i_k},:}x-b_{U_{i_k}})_+\|_2^2}{\|A_{U_{i_k},:}\|_F^2}
    =\frac{\|(Ax-b)_+\|_2^2}{\beta_{\max}^U\,\|A\|_F^2}
    \ge\frac{\|x-P_S(x)\|_2^2}{\beta_{\max}^U\,\|A\|_F^2\,L^2}
    =\mu_1\,\|x-P_S(x)\|_2^2,
    \end{aligned} \tag{3.7}$$

    where the last inequality is obtained by Lemma 1.

    Hence, from Eqs (3.6) and (3.7), the conclusion is obtained.

    Using the above lemma, we have the following theorem that provides the convergence of the RMR method.

    Theorem 1. Assume that the linear feasibility problem (1.1) is consistent and that the stepsize satisfies $0<\alpha<2$. Then the iteration sequence $\{x_k\}$ generated by the RMR method for arbitrary $x_0\in\mathbb{R}^n$ satisfies

    $$\mathbb{E}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2^2\bigr]\le\bigl(h(\alpha)\bigr)^{k}\,\|x_0-P_S(x_0)\|_2^2,$$

    where $h(\alpha):=1-\dfrac{2\alpha-\alpha^2}{L^2\,\beta_{\max}^U\,\|A\|_F^2}<1$ with $\beta_{\max}^U:=\max_{j\in[s]}\{\sigma_{\max}^2(A_{U_j,:})/\|A_{U_j,:}\|_F^2\}$.

    Proof. From direct calculation results, there results in

    $$\langle A^T\eta_k,\ P_S(x_k)-x_k\rangle=\eta_k^T\bigl(AP_S(x_k)-Ax_k\bigr)\le\eta_k^T(b-Ax_k)=-\bigl((A_{U_{i_k},:}x_k-b_{U_{i_k}})_+\bigr)^T\bigl((A_{U_{i_k},:}x_k-b_{U_{i_k}})_+\bigr)=-\eta_k^T(Ax_k-b)_+. \tag{3.8}$$

    Then, there follows that

    $$\begin{aligned}
    \|x_{k+1}-P_S(x_{k+1})\|_2^2&\le\|x_{k+1}-P_S(x_k)\|_2^2
    =\Bigl\|x_k-P_S(x_k)-\alpha\frac{\eta_k^T(Ax_k-b)_+}{\|A^T\eta_k\|_2^2}A^T\eta_k\Bigr\|_2^2\\
    &=\|x_k-P_S(x_k)\|_2^2+\alpha^2\frac{|\eta_k^T(Ax_k-b)_+|^2}{\|A^T\eta_k\|_2^2}-2\alpha\frac{\eta_k^T(Ax_k-b)_+}{\|A^T\eta_k\|_2^2}\,\langle A^T\eta_k,\ x_k-P_S(x_k)\rangle\\
    &\overset{(3.8)}{\le}\|x_k-P_S(x_k)\|_2^2-(2\alpha-\alpha^2)\,\frac{|\eta_k^T(Ax_k-b)_+|^2}{\|A^T\eta_k\|_2^2}.
    \end{aligned} \tag{3.9}$$

    Taking the conditional expectation on the first k iterations and using Lemma 5, we have

    $$\mathbb{E}_k\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2^2\bigr]\ \overset{\text{Lemma 5}}{\le}\ \Bigl(1-\frac{2\alpha-\alpha^2}{L^2\,\beta_{\max}^U\,\|A\|_F^2}\Bigr)\|x_k-P_S(x_k)\|_2^2. \tag{3.10}$$

    By the law of total expectation, there holds that

    $$\mathbb{E}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2^2\bigr]\le\Bigl(1-\frac{2\alpha-\alpha^2}{L^2\,\beta_{\max}^U\,\|A\|_F^2}\Bigr)\mathbb{E}\bigl[\|x_k-P_S(x_k)\|_2^2\bigr].$$

    Finally, unrolling the recurrence gives the desired result, i.e.

    $$\mathbb{E}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2^2\bigr]\le\Bigl(1-\frac{2\alpha-\alpha^2}{L^2\,\beta_{\max}^U\,\|A\|_F^2}\Bigr)^{k}\|x_0-P_S(x_0)\|_2^2=\bigl(h(\alpha)\bigr)^{k}\,\|x_0-P_S(x_0)\|_2^2.$$

    Remark 1. It can be seen from Theorem 1 that the rate of the RMR algorithm is governed by $h(\alpha)=1-\dfrac{2\alpha-\alpha^2}{L^2\,\beta_{\max}^U\,\|A\|_F^2}$. Since $2\alpha-\alpha^2=1-(1-\alpha)^2\le1$ with equality if and only if $\alpha=1$, the rate attains its minimum value $h(1)=1-\dfrac{1}{L^2\,\beta_{\max}^U\,\|A\|_F^2}$ at $\alpha=1$.

    Next, to improve the RMR method, we propose a generalized version of the RMR method (GRMR), in which history information is used. We take two successive RMR-type updates, $z_{k-1}$ and $z_k$, and form the next iterate $x_{k+1}$ as an affine combination of them; that is, starting with $x_1=x_0$, $z_1=z_0\in\mathbb{R}^n$,

    $$x_{k+1}=(1-\xi)z_k+\xi z_{k-1},\qquad \text{for } k\ge1,$$

    where $z_k=x_k-\alpha\dfrac{\eta_k^T(Ax_k-b)_+}{\|A^T\eta_k\|_2^2}A^T\eta_k$ is the $k$-th update of the GRMR algorithm, which is described in Algorithm 2. It is easy to see that when $\xi=0$, the GRMR algorithm reduces to the original RMR algorithm.

    Algorithm 2: The GRMR method for $Ax\le b$
    1: Input: partition $\{U_i\}_{i=1}^{s}$, initial vectors $x_1=x_0$, $z_1=z_0\in\mathbb{R}^n$, stepsize $0<\alpha<2$, relaxation parameter $\xi\in Q$, and maximum iteration number $l$.
    2: for $k=1, 2, \ldots, l-1$
    3: Pick $i_k\in[s]$ with probability $\|A_{U_{i_k},:}\|_F^2/\|A\|_F^2$;
    4: Compute $z_k=x_k-\alpha\dfrac{\eta_k^T(Ax_k-b)_+}{\|A^T\eta_k\|_2^2}A^T\eta_k$, where $\eta_k=\sum_{i\in U_{i_k}}(A_{i,:}x_k-b_i)_+\,e_i$;
    5: Set $x_{k+1}=(1-\xi)z_k+\xi z_{k-1}$;
    6: End for
    7: Output $x_l$.
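    A minimal MATLAB sketch of Algorithm 2, under the same assumptions and naming conventions as the RMR sketch above (here z_0 is simply initialized to x_0):

        % GRMR sketch for Ax <= b (Algorithm 2): x_{k+1} = (1 - xi)*z_k + xi*z_{k-1}.
        function x = grmr(A, b, U, x0, alpha, xi, maxit, tol)
            s = numel(U);
            p = zeros(s, 1);
            for j = 1:s
                p(j) = norm(A(U{j},:), 'fro')^2;
            end
            cp = cumsum(p / sum(p));  cp(end) = 1;
            x = x0;  zold = x0;                    % z_0 taken as x_0 in this sketch
            for k = 1:maxit
                j  = find(rand <= cp, 1, 'first');
                Uj = U{j};
                r  = max(A(Uj,:)*x - b(Uj), 0);
                g  = A(Uj,:)' * r;
                ng = norm(g)^2;
                z  = x;
                if ng > 0
                    z = x - alpha * (r' * r) / ng * g;   % z_k: one RMR-type update from x_k
                end
                x = (1 - xi) * z + xi * zold;            % affine combination with the previous update
                zold = z;
                if norm(max(A*x - b, 0)) <= tol
                    return;
                end
            end
        end

    With xi = 0 this reduces to the RMR sketch above; xi in Q_2 (i.e., -1 < xi <= 0 satisfying (3.11)) corresponds to the regime analyzed in Theorem 3 below.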


    Before exploring the convergence of the proposed GRMR method, we first define the following sets. For any $\xi\in\mathbb{R}$, denote the sets $Q$, $Q_1$, and $Q_2$ by

    $$Q_1=\{\xi \mid 0\le\xi\le1\},\qquad Q_2=\Bigl\{-1<\xi\le0 \ \Big|\ (1+\xi)\sqrt{h(\alpha)}-\xi\bigl(1+\alpha\sqrt{\mu_2}\bigr)<1\Bigr\},\qquad Q=Q_1\cup Q_2. \tag{3.11}$$

    Theorem 2. Suppose that the linear feasibility problem (1.1) is consistent. For arbitrary $x_1=x_0\in\mathbb{R}^n$, the sequence of iterates $\{x_k\}$ generated by the GRMR method converges for $0<\alpha<2$ and $0\le\xi\le1$ (i.e., $\xi\in Q_1$). The following results hold:

    1. Take $\rho=\phi_1+\phi$ and $\phi=\dfrac{-\phi_1+\sqrt{\phi_1^2+4\phi_2}}{2}$; then

    $$\mathbb{E}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2^2\bigr]\le\rho^{k}\,(1+\phi)\,\|x_0-P_S(x_0)\|_2^2.$$

    2. Take $R_1,R_2,R_3$, and $R_4$ as in Eq (2.3); then

    $$\mathbb{E}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2^2+\|x_k-P_S(x_k)\|_2^2\bigr]\le\begin{cases}\bigl[R_1\rho^{k+1}+R_2\phi^{k+1}+R_1\rho^{k}-R_2\phi^{k}\bigr]\|x_0-P_S(x_0)\|_2^2, & k\ \text{even},\\[2pt]\bigl[R_3\rho^{k}-R_4\phi^{k}+R_3\rho^{k-1}+R_4\phi^{k-1}\bigr]\|x_0-P_S(x_0)\|_2^2, & k\ \text{odd},\end{cases}$$

    where $\phi_1=(1-\xi)h(\alpha)$, $\phi_2=\xi h(\alpha)$, $0\le\phi,\phi_1,\phi_2<1$, $0<\rho=\phi_1+\phi<1$, and $h(\alpha):=1-\dfrac{2\alpha-\alpha^2}{L^2\,\beta_{\max}^U\,\|A\|_F^2}<1$ with $\beta_{\max}^U:=\max_{j\in[s]}\{\sigma_{\max}^2(A_{U_j,:})/\|A_{U_j,:}\|_F^2\}$.

    Proof. Since, for any $\xi\in Q_1$, $(1-\xi)P_S(x_k)+\xi P_S(x_{k-1})\in S$ by the convexity of $S$, a straightforward calculation gives

    $$\begin{aligned}
    \|x_{k+1}-P_S(x_{k+1})\|_2^2&\overset{\text{Lemma 2}}{\le}\|x_{k+1}-(1-\xi)P_S(x_k)-\xi P_S(x_{k-1})\|_2^2\\
    &=\|(1-\xi)z_k+\xi z_{k-1}-(1-\xi)P_S(x_k)-\xi P_S(x_{k-1})\|_2^2\\
    &=\Bigl\|(1-\xi)\Bigl[x_k-P_S(x_k)-\alpha\frac{\eta_k^T(Ax_k-b)_+}{\|A^T\eta_k\|_2^2}A^T\eta_k\Bigr]+\xi\Bigl[x_{k-1}-P_S(x_{k-1})-\alpha\frac{\eta_{k-1}^T(Ax_{k-1}-b)_+}{\|A^T\eta_{k-1}\|_2^2}A^T\eta_{k-1}\Bigr]\Bigr\|_2^2\\
    &\le(1-\xi)\Bigl\|x_k-P_S(x_k)-\alpha\frac{\eta_k^T(Ax_k-b)_+}{\|A^T\eta_k\|_2^2}A^T\eta_k\Bigr\|_2^2+\xi\Bigl\|x_{k-1}-P_S(x_{k-1})-\alpha\frac{\eta_{k-1}^T(Ax_{k-1}-b)_+}{\|A^T\eta_{k-1}\|_2^2}A^T\eta_{k-1}\Bigr\|_2^2,
    \end{aligned} \tag{3.12}$$

    where the last step uses the convexity of $\|\cdot\|_2^2$.

    By taking the conditional expectation in Eq (3.12) with respect to the indices $k$ and $k-1$, we have

    $$\mathbb{E}_{k,k-1}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2^2\bigr]\le(1-\xi)\,\mathbb{E}_k\Bigl[\Bigl\|x_k-P_S(x_k)-\alpha\frac{\eta_k^T(Ax_k-b)_+}{\|A^T\eta_k\|_2^2}A^T\eta_k\Bigr\|_2^2\Bigr]+\xi\,\mathbb{E}_{k-1}\Bigl[\Bigl\|x_{k-1}-P_S(x_{k-1})-\alpha\frac{\eta_{k-1}^T(Ax_{k-1}-b)_+}{\|A^T\eta_{k-1}\|_2^2}A^T\eta_{k-1}\Bigr\|_2^2\Bigr]. \tag{3.13}$$

    Meanwhile, with the use of Eqs (3.9) and (3.10), there holds that

    $$\mathbb{E}_k\Bigl[\Bigl\|x_k-P_S(x_k)-\alpha\frac{\eta_k^T(Ax_k-b)_+}{\|A^T\eta_k\|_2^2}A^T\eta_k\Bigr\|_2^2\Bigr]\le\Bigl(1-\frac{2\alpha-\alpha^2}{L^2\,\beta_{\max}^U\,\|A\|_F^2}\Bigr)\|x_k-P_S(x_k)\|_2^2=h(\alpha)\,\|x_k-P_S(x_k)\|_2^2, \tag{3.14}$$

    and

    $$\mathbb{E}_{k-1}\Bigl[\Bigl\|x_{k-1}-P_S(x_{k-1})-\alpha\frac{\eta_{k-1}^T(Ax_{k-1}-b)_+}{\|A^T\eta_{k-1}\|_2^2}A^T\eta_{k-1}\Bigr\|_2^2\Bigr]\le\Bigl(1-\frac{2\alpha-\alpha^2}{L^2\,\beta_{\max}^U\,\|A\|_F^2}\Bigr)\|x_{k-1}-P_S(x_{k-1})\|_2^2=h(\alpha)\,\|x_{k-1}-P_S(x_{k-1})\|_2^2. \tag{3.15}$$

    Taking the total expectation on Eq (3.13) and combining Eqs (3.14) and (3.15), we can get

    $$\mathbb{E}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2^2\bigr]\le(1-\xi)h(\alpha)\,\mathbb{E}\bigl[\|x_k-P_S(x_k)\|_2^2\bigr]+\xi h(\alpha)\,\mathbb{E}\bigl[\|x_{k-1}-P_S(x_{k-1})\|_2^2\bigr]=\phi_1\,\mathbb{E}\bigl[\|x_k-P_S(x_k)\|_2^2\bigr]+\phi_2\,\mathbb{E}\bigl[\|x_{k-1}-P_S(x_{k-1})\|_2^2\bigr], \tag{3.16}$$

    which satisfies the condition of Lemma 3 with ϕ1=(1ξ)h(α) and ϕ2=ξh(α). Therefore, using the first part of Lemma 3, we have

    $$\mathbb{E}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2^2\bigr]\le(\phi+\phi_1)^{k}(1+\phi)\,\|x_0-P_S(x_0)\|_2^2=\rho^{k}(1+\phi)\,\|x_0-P_S(x_0)\|_2^2.$$

    Furthermore, using the second part of Lemma 3 and Eq (3.16), we can get the second part of Theorem 2.

    Remark 2. When $0\le\xi\le1$, we obtain a global linear rate for the GRMR method:

    $$\rho=\phi+\phi_1=\frac{(1-\xi)h(\alpha)+\sqrt{(1-\xi)^2h^2(\alpha)+4\xi h(\alpha)}}{2}.$$

    Since $0<h(\alpha)<1$ and $0\le\xi\le1$, we obtain that $\rho$ is greater than or equal to $h(\alpha)$. Therefore, the theoretical convergence rate of the GRMR method with $0\le\xi\le1$ is never better than that of the RMR method.
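    The comparison can be verified directly; the following short check only uses $0<h(\alpha)<1$ and $0\le\xi\le1$:

    $$\rho\ge h(\alpha)\ \Longleftrightarrow\ \sqrt{(1-\xi)^2h^2(\alpha)+4\xi h(\alpha)}\ \ge\ (1+\xi)h(\alpha)\ \Longleftrightarrow\ 4\xi h(\alpha)\ \ge\ 4\xi h^2(\alpha),$$

    and the last inequality holds because $\xi\ge0$ and $h(\alpha)<1$; equality occurs only for $\xi=0$, i.e., for the RMR method itself.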

    In the next theorem, we will investigate a global linear rate for the GRMR method with ξQ2.

    Theorem 3. Assume that the linear feasibility problem (1.1) is consistent. Let $\{x_k\}$ and $\{z_k\}$ be the sequences of random iterates generated by the GRMR method with $0<\alpha<2$ and $\xi\in Q_2$. Define

    $$\Pi_1=\sqrt{h(\alpha)},\qquad \Pi_2=|\xi|,\qquad \Pi_3=\alpha\sqrt{\mu_2\,h(\alpha)},\qquad \Pi_4=|\xi|\bigl(1+\alpha\sqrt{\mu_2}\bigr), \tag{3.17}$$

    and take $\Gamma_1,\Gamma_2,\Gamma_3,\rho_1,\rho_2$ as in Eq (2.6) with the parameter choice of Eq (3.17). Then, the following result holds:

    $$\mathbb{E}\begin{bmatrix}\|x_{k+1}-P_S(x_{k+1})\|_2\\[2pt]\|z_{k+1}-z_k\|_2\end{bmatrix}\le\begin{bmatrix}\Gamma_1\Gamma_3\,\rho_2^{k}-\Gamma_2\Gamma_3\,\rho_1^{k}\\[2pt]\Gamma_3\,\rho_2^{k}-\Gamma_3\,\rho_1^{k}\end{bmatrix}\|x_0-P_S(x_0)\|_2,$$

    where $\Gamma_1,\Gamma_3\ge0$ and $0\le\rho_1\le\rho_2<1$.

    Proof. From Step 5 of the GRMR method and Eq (3.14), it follows that

    $$\begin{aligned}
    \mathbb{E}_{k+1,k}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2\bigr]&\le\mathbb{E}_k\bigl[\|x_{k+1}-P_S(x_k)\|_2\bigr]
    \overset{\text{Step 5}}{=}\mathbb{E}_k\bigl[\|z_k-P_S(x_k)-\xi(z_k-z_{k-1})\|_2\bigr]\\
    &\le\mathbb{E}_k\bigl[\|z_k-P_S(x_k)\|_2\bigr]+|\xi|\,\|z_k-z_{k-1}\|_2
    \le\bigl\{\mathbb{E}_k\bigl[\|z_k-P_S(x_k)\|_2^2\bigr]\bigr\}^{1/2}+|\xi|\,\|z_k-z_{k-1}\|_2\\
    &\overset{(3.14)}{\le}\sqrt{h(\alpha)}\,\|x_k-P_S(x_k)\|_2+|\xi|\,\|z_k-z_{k-1}\|_2.
    \end{aligned} \tag{3.18}$$

    Taking the total expectation on Eq (3.18), we have

    $$\mathbb{E}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2\bigr]\le\sqrt{h(\alpha)}\,\mathbb{E}\bigl[\|x_k-P_S(x_k)\|_2\bigr]+|\xi|\,\mathbb{E}\bigl[\|z_k-z_{k-1}\|_2\bigr]. \tag{3.19}$$

    Then using the update formula for zk+1 in Step 5 of the GRMR method and Lemma 5, we have

    $$\begin{aligned}
    \mathbb{E}_{k+1,k}\bigl[\|z_{k+1}-z_k\|_2\bigr]&=\mathbb{E}_{k+1,k}\Bigl[\Bigl\|x_{k+1}-\alpha\frac{\eta_{k+1}^T(Ax_{k+1}-b)_+}{\|A^T\eta_{k+1}\|_2^2}A^T\eta_{k+1}-z_k\Bigr\|_2\Bigr]
    \overset{\text{Step 5}}{=}\mathbb{E}_{k+1,k}\Bigl[\Bigl\|-\xi(z_k-z_{k-1})-\alpha\frac{\eta_{k+1}^T(Ax_{k+1}-b)_+}{\|A^T\eta_{k+1}\|_2^2}A^T\eta_{k+1}\Bigr\|_2\Bigr]\\
    &\le|\xi|\,\|z_k-z_{k-1}\|_2+\alpha\,\mathbb{E}_{k+1,k}\Bigl[\frac{|\eta_{k+1}^T(Ax_{k+1}-b)_+|}{\|A^T\eta_{k+1}\|_2}\Bigr]
    \overset{\text{Lemma 5}}{\le}|\xi|\,\|z_k-z_{k-1}\|_2+\alpha\sqrt{\mu_2}\,\mathbb{E}_k\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2\bigr].
    \end{aligned} \tag{3.20}$$

    Taking the total expectation on Eq (3.20) and using Eq (3.19), we have

    $$\mathbb{E}\bigl[\|z_{k+1}-z_k\|_2\bigr]\overset{(3.20)}{\le}|\xi|\,\mathbb{E}\bigl[\|z_k-z_{k-1}\|_2\bigr]+\alpha\sqrt{\mu_2}\,\mathbb{E}\bigl[\|x_{k+1}-P_S(x_{k+1})\|_2\bigr]\overset{(3.19)}{\le}|\xi|\bigl(1+\alpha\sqrt{\mu_2}\bigr)\,\mathbb{E}\bigl[\|z_k-z_{k-1}\|_2\bigr]+\alpha\sqrt{\mu_2\,h(\alpha)}\,\mathbb{E}\bigl[\|x_k-P_S(x_k)\|_2\bigr]. \tag{3.21}$$

    Combining Eqs (3.19) and (3.21), we can deduce the following inequality:

    $$\mathbb{E}\begin{bmatrix}\|x_{k+1}-P_S(x_{k+1})\|_2\\[2pt]\|z_{k+1}-z_k\|_2\end{bmatrix}\le\begin{bmatrix}\sqrt{h(\alpha)} & |\xi|\\[2pt]\alpha\sqrt{\mu_2\,h(\alpha)} & |\xi|\bigl(1+\alpha\sqrt{\mu_2}\bigr)\end{bmatrix}\mathbb{E}\begin{bmatrix}\|x_k-P_S(x_k)\|_2\\[2pt]\|z_k-z_{k-1}\|_2\end{bmatrix}. \tag{3.22}$$

    From the definitions, we check that $\Pi_1,\Pi_2,\Pi_3,\Pi_4\ge0$. Moreover,

    $$\Pi_2\Pi_3-\Pi_1\Pi_4=|\xi|\alpha\sqrt{\mu_2\,h(\alpha)}-|\xi|\sqrt{h(\alpha)}-|\xi|\alpha\sqrt{\mu_2\,h(\alpha)}=-|\xi|\sqrt{h(\alpha)}\le0. \tag{3.23}$$

    Since $\xi\in Q_2$, we also have

    $$\Pi_1+\Pi_4-\bigl(\Pi_1\Pi_4-\Pi_2\Pi_3\bigr)=\sqrt{h(\alpha)}+|\xi|\bigl(1+\alpha\sqrt{\mu_2}\bigr)-|\xi|\sqrt{h(\alpha)}\overset{(3.11)}{<}1. \tag{3.24}$$

    From formula (3.24), it holds that $\Pi_1+\Pi_4<1+|\xi|\sqrt{h(\alpha)}=1+\min\{1,\,|\xi|\sqrt{h(\alpha)}\}=1+\min\{1,\,\Pi_1\Pi_4-\Pi_2\Pi_3\}$. With the use of Eq (3.23), we obtain $\Pi_1\Pi_4-\Pi_2\Pi_3\ge0$. These are precisely the conditions required in Eq (2.5).

    Let the sequences $F_k=\mathbb{E}\bigl[\|z_k-z_{k-1}\|_2\bigr]$ and $H_k=\mathbb{E}\bigl[\|x_k-P_S(x_k)\|_2\bigr]$. Then, by using Lemma 4, we have

    $$\begin{bmatrix}H_{k+1}\\F_{k+1}\end{bmatrix}\le\begin{bmatrix}\Gamma_3(\Gamma_1\rho_2^{k}-\Gamma_2\rho_1^{k}) & \Gamma_1\Gamma_2\Gamma_3(\rho_1^{k}-\rho_2^{k})\\ \Gamma_3(\rho_2^{k}-\rho_1^{k}) & \Gamma_3(\Gamma_1\rho_1^{k}-\Gamma_2\rho_2^{k})\end{bmatrix}\begin{bmatrix}H_1\\F_1\end{bmatrix}, \tag{3.25}$$

    where Γ1,Γ2,Γ3,ρ1,ρ2 can be derived from Eq (2.6) using the parameter choice of Eq (3.17).

    Since $x_1=x_0$ and $z_1=z_0$, it follows that $F_1=\mathbb{E}\bigl[\|z_1-z_0\|_2\bigr]=0$ and $H_1=\mathbb{E}\bigl[\|x_1-P_S(x_1)\|_2\bigr]=\|x_0-P_S(x_0)\|_2=H_0$. Thus, formula (3.25) becomes

    $$\begin{bmatrix}H_{k+1}\\F_{k+1}\end{bmatrix}=\mathbb{E}\begin{bmatrix}\|x_{k+1}-P_S(x_{k+1})\|_2\\[2pt]\|z_{k+1}-z_k\|_2\end{bmatrix}\le\begin{bmatrix}\Gamma_1\Gamma_3\,\rho_2^{k}-\Gamma_2\Gamma_3\,\rho_1^{k}\\[2pt]\Gamma_3\,\rho_2^{k}-\Gamma_3\,\rho_1^{k}\end{bmatrix}\|x_0-P_S(x_0)\|_2. \tag{3.26}$$

    From Lemma 4, we have Γ1,Γ30 and 0ρ1ρ2<1, which proves the conclusion.

    In this section, we discuss the numerical experiments performed to show the computational efficiency of the proposed algorithms (Algorithms 1 and 2). As mentioned before, we limit our focus to the over-determined regime (i.e., $m\gg n$), where iterative methods are in general competitive. We present numerical examples on both synthetic and real-world data to demonstrate the convergence of the RMR and GRMR methods.

    We suppose that the subsets $\{U_i\}_{i=1}^{s}$ are computed by

    $$U_i=\begin{cases}\{(i-1)\tau+1,\ (i-1)\tau+2,\ \ldots,\ i\tau\}, & i\in[s-1],\\ \{(s-1)\tau+1,\ (s-1)\tau+2,\ \ldots,\ m\}, & i=s,\end{cases}$$

    where $\tau=20$ is the size of $U_i$. All experiments are carried out using MATLAB (version R2021b) on a laptop with a 2.50-GHz Intel Core i9-12900H processor, 16 GB memory, and the Windows 11 operating system.
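    A minimal MATLAB sketch of this partition construction (τ = 20; s is taken as ⌈m/τ⌉, and the last block absorbs the leftover rows):

        % Build the row partition {U_1, ..., U_s} with block size tau.
        function U = build_partition(m, tau)
            s = ceil(m / tau);
            U = cell(s, 1);
            for i = 1:s-1
                U{i} = (i-1)*tau + 1 : i*tau;
            end
            U{s} = (s-1)*tau + 1 : m;    % last block takes the remaining rows
        end

    For instance, U = build_partition(size(A,1), 20) produces the partition that is then passed to the RMR and GRMR sketches given earlier.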

    In testing synthetic data and the SuiteSparse Matrix Collection, the stopping criterion is

    $$\mathrm{RSE}=\|(Ax_k-b)_+\|_2\le10^{-6},$$

    or the maximum of 300,000 iteration steps being reached. Besides, we use the symbol "-" to indicate the case that either the corresponding iteration method cannot reach the stopping criterion within the maximum number of iteration steps or the computing time exceeds 1800 seconds.

    In testing the sparse Netlib LP data, the stopping criterion is set in terms of a prescribed tolerance gap (see Example 4).

    In this section, IT and CPU denote the number of iteration steps and the computing time (in seconds), respectively; both are reported as the medians over repeated runs of the corresponding method. The SKM, GSKM, and PASKM algorithms also involve the selection of several parameters, and we selected parameter settings with good performance based on the literature [20]. To ensure that the system (1.1) is consistent, the right-hand side is constructed from randomly generated vectors produced by MATLAB's random number generators.

    Example 1. For the coefficient matrix , we mainly consider two types, namely dense and sparse matrices, respectively. We randomly generate the dense matrix by the MATLAB function . The sparse matrix is generated randomly by the MATLAB function with a density of for the nonzero elements. We compared RMR and GRMR with SKM, GSKM, PASKM1, and PASKM2 with the initial vector ( generated randomly by the MATLAB function ).

    In Tables 1–4, we list the IT and CPU of the SKM, GSKM, PASKM1, PASKM2, RMR, and GRMR methods for the consistent linear feasibility problem in Example 1. The performance of these algorithms is tested on both dense and sparse coefficient matrices in two sets of experiments, presented in Tables 1 and 2, with a constant number of rows and an increasing number of columns. Tables 3 and 4 show the performance of the algorithms for coefficient matrices of different orders. The convergence behavior of the different methods for consistent systems is depicted in Figure 1. From Tables 1–4, it can be observed that the RMR and GRMR methods outperform the SKM, GSKM, PASKM1, and PASKM2 methods for consistent systems. Furthermore, Figure 1 demonstrates that, compared with the other methods, the RMR and GRMR approaches achieve higher accuracy with fewer iterations (IT) and less computing time (CPU).

    Table 1.  IT and CPU of six methods for dense matrices with and different in Example 1.
    SKM IT 1067 2982 5614 9727 15220 21832
    CPU 0.9855 4.7603 9.1254 17.5679 33.0315 71.7751
    GSKM IT 851 2252 4294 7268 11276 16374
    CPU 0.7875 3.7005 7.3854 13.6847 23.5921 58.1655
    PASKM1 IT 750 1819 3340 5707 8850 12717
    CPU 0.6957 3.0986 5.5658 10.3246 16.8315 35.4983
    PASKM2 IT 700 1474 2310 3362 4.5721e+03 6.2811e+03
    CPU 0.6484 2.5027 3.7529 5.7056 8.7151 17.7476
    RMR IT 412 868 1297 2132 2871 3505
    CPU 0.0523 0.1780 0.3607 0.7130 1.2153 2.3454
    GRMR IT 399 849 1234 1777 2482 3275
    CPU 0.0503 0.1680 0.3227 0.6910 1.1115 2.1774

    Table 2.  IT and CPU of six methods for sparse matrices with and different in Example 1.
    SKM IT 1303 3138 5724 10207 14553 21886
    CPU 1.0057 6.6166 8.2686 16.9807 26.5132 43.8631
    GSKM IT 1012 2410 4373 7606 10984 16502
    CPU 0.7751 5.3301 6.7144 12.2361 21.1343 33.7102
    PASKM1 IT 877 1993 3478 5967 8693 13133
    CPU 0.6877 4.2751 5.1627 9.5969 14.9724 23.7929
    PASKM2 IT 791 1567 2398 3450 4729 6550
    CPU 0.6343 3.5159 3.9884 5.5175 8.1405 12.4132
    RMR IT 459 1034 1563 2419 3129 4684
    CPU 0.0672 0.3124 0.7153 1.1694 1.6621 2.6481
    GRMR IT 426 1018 1550 2394 3054 4287
    CPU 0.0625 0.3106 0.7091 1.1563 1.6199 2.4405

    Table 3.  IT and CPU of six methods for dense matrices with different and in Example 1.
    SKM IT 2847 6058 8876 12224 15171 18685
    CPU 1.3973 7.3557 14.5893 21.0069 32.0768 81.6442
    GSKM IT 2198 4602 6675 9347 11173 13946
    CPU 1.1239 5.6573 11.6139 16.4939 23.1984 63.4526
    PASKM1 IT 1775 3616 5143 7256 9076 10995
    CPU 0.8970 4.4592 9.0139 12.7491 16.9114 48.8534
    PASKM2 IT 885 1887 2682 3729 4684 5569
    CPU 0.4631 2.3404 4.6370 6.6436 8.7244 23.4332
    RMR IT 667 1251 1978 2523 3217 3933
    CPU 0.0268 0.0608 0.1731 0.5311 1.2986 1.7975
    GRMR IT 638 1226 1860 2462 3150 3740
    CPU 0.0255 0.0585 0.1621 0.5126 1.2877 1.7047

    Table 4.  IT and CPU of six methods for sparse matrices with different and in Example 1.
    SKM IT 3143 5961 8667 11404 14620 17525
    CPU 1.1965 2.0794 14.9667 22.3923 27.1353 47.9605
    GSKM IT 2396 4452 6510 8512 10734 13104
    CPU 0.9519 1.5554 11.9794 16.7697 21.0046 37.0169
    PASKM1 IT 1974 3472 5158 6765 8734 10302
    CPU 0.8684 1.2462 9.8658 12.9433 15.4741 28.1906
    PASKM2 IT 1119 1959 2870 3694 4700 5546
    CPU 0.4947 0.7069 5.5380 7.1027 8.0391 14.3387
    RMR IT 675 1275 2005 2725 3129 3853
    CPU 0.0506 0.1899 0.4785 1.0729 1.6562 2.4124
    GRMR IT 668 1219 1919 2575 3054 3792
    CPU 0.0488 0.1808 0.4536 1.0211 1.6199 2.3684

    Figure 1.  The convergence behaviors of RSE versus IT and CPU given by six methods with sparse coefficient matrix , for Example 1.

    In Table 5, we list the IT and CPU of the RMR method for dense matrices with different stepsizes α for Example 1. From Table 5, we can see that the RMR algorithm performs better when α is chosen near 1. As shown in Figure 2, the choice of the parameter α affects the IT and CPU required by the RMR method to achieve the desired accuracy. When α is appropriately selected, the IT and CPU needed by the RMR method are significantly reduced.

    Table 5.  IT and CPU of the RMR method for dense matrices with different α in Example 1.
    α 0.10 0.40 0.55 0.80 1.05 1.30 1.55 1.85
    IT 5808.9 910.0 536.7 436.6 413.3 456.8 578.2 1500.0
    CPU 0.5981 0.1007 0.0591 0.0486 0.0469 0.0514 0.0650 0.1616
    IT 13620.0 2093.9 1133.8 851.2 806.5 879.4 1161.7 3005.1
    CPU 3.0197 0.5453 0.3023 0.2278 0.2141 0.2355 0.3115 0.7872
    IT 24926.4 3778.3 1962.1 1423.8 1267.4 1407.1 1837.5 4595.9
    CPU 13.8342 2.1100 1.1329 0.7885 0.7346 0.8189 1.0251 2.5884
    IT 39569.2 5934.5 2969.8 2013.3 1726.1 1865.3 2432.2 6223.0
    CPU 14.6740 2.4406 1.2494 0.8401 0.7170 0.7846 1.0214 2.6008
    IT 61697.4 9158.3 4554.6 2998.8 2423.2 2441.9 3094.0 7574.4
    CPU 19.6330 3.0827 1.5505 1.0319 0.8332 0.8344 1.0518 2.6854
    IT 83018.8 12278.2 6063.6 3931.3 2958.2 3066.8 3725.5 9557.4
    CPU 48.1672 7.3571 3.6987 2.4025 1.8190 1.9194 2.2922 5.8362

    Figure 2.  The convergence behaviors of RSE versus IT and CPU given by RMR method with dense coefficient matrix for Example 1.

    Example 2. For given and , we construct a matrix by , where , and . These matrices are generated by , , and .

    This example gives us many flexibilities to adjust the input parameters , and . We consider two types of rank-deficient cases by setting , , and in Table 6 and , , and in Table 7. We compared RMR and GRMR with SKM, GSKM, PASKM1, and PASKM2 with the initial vector ( generated randomly by the MATLAB function ).

    Table 6.  IT and CPU of six methods for matrices with and different in Example 2.
    SKM IT 1.2447e+03 7.9725e+03 2.1648e+04 5.5140e+04 1.2848e+05
    CPU 0.2798 6.0742 30.2712 72.6378 177.7376
    GSKM IT 9.7291e+02 6.3474e+03 1.7195e+04 4.3673e+04 1.0183e+05
    CPU 0.2223 4.9473 25.7609 59.6574 142.0572
    PASKM1 IT 8.0253e+02 5.3334e+03 1.4417e+04 3.6519e+04 8.4791e+04
    CPU 0.1833 4.2592 21.6780 48.1641 118.7923
    PASKM2 IT 5.1557e+02 3.4486e+03 9.1972e+03 2.3007e+04 5.2762e+04
    CPU 0.1181 2.8350 13.2988 27.2750 73.9614
    RMR IT 2.3945e+02 1.4493e+03 7.0107e+03 1.9027e+04 3.4009e+04
    CPU 0.0124 0.0844 0.9885 3.4734 9.3129
    GRMR IT 2.0725e+02 1.2183e+03 5.7479e+03 1.5770e+04 2.8827e+04
    CPU 0.0109 0.0784 0.8225 2.8958 8.2372

    Table 7.  IT and CPU of six methods for matrices with and different in Example 2.
    SKM IT 7.5260e+02 4.9593e+03 2.5683e+04 5.8852e+04 1.06256e+05
    CPU 0.6725 7.4758 29.9510 86.5563 134.1015
    GSKM IT 6.1130e+02 3.9435e+03 2.0453e+04 4.6742e+04 9.5641e+04
    CPU 0.5448 5.9391 21.6146 65.3934 102.7754
    PASKM1 IT 5.3000e+02 3.3225e+03 1.7098e+04 3.9103e+04 8.2928e+04
    CPU 0.4740 5.2256 19.0484 49.6410 87.6790
    PASKM2 IT 3.6930e+02 2.1724e+03 1.0772e+04 2.4912e+04 5.8286e+04
    CPU 0.3334 3.4637 12.0785 32.7437 61.1034
    RMR IT 3.4125e+02 2.4008e+03 5.8137e+03 1.2747e+04 3.5337e+04
    CPU 0.0405 0.2881 1.0362 5.9427 11.1385
    GRMR IT 2.905e+02 2.0233e+03 4.7020e+03 1.0390e+04 2.8997e+04
    CPU 0.0345 0.2446 0.8953 4.9289 9.3507


    In Tables 6 and 7, we report the numerical results of the SKM, GSKM, PASKM1, PASKM2, RMR, and GRMR algorithms with , , , for two rank-deficient consistent linear systems. We can observe the following phenomena: the relative solution error of GRMR decays faster than those of SKM, GSKM, PASKM1, PASKM2, and RMR when the number of iteration steps and computing time increase.

    In Table 8, we list the IT and CPU of the GRMR method for dense matrices with different values of ξ for Example 2. From Table 8 and Figure 3, we find that ξ = −0.4 is the best choice among the tested values for the GRMR method.

    Table 8.  IT and CPU of the GRMR method for dense matrices with different ξ in Example 2.
    n -0.6 -0.4 -0.2 0 0.6 0.9
    50 IT 3.0910e+02 2.1429e+02 2.7607e+02 3.3015e+02 4.8100e+02 5.9819e+02
    CPU 0.0490 0.0344 0.0441 0.0511 0.0747 0.0933
    100 IT 1.5200e+03 1.1347e+03 1.4166e+03 1.7213e+03 2.5678e+03 3.1229e+03
    CPU 0.9983 0.9612 0.9436 0.9793 0.9883 0.9909
    150 IT 9.9682e+03 4.6247e+03 5.9130e+03 7.2372e+03 1.0993e+04 1.2839e+04
    CPU 2.9349 1.3518 1.8325 2.3050 3.4949 4.0744
    200 IT - 6.4790e+03 8.3760e+03 1.0158e+04 1.5754e+04 1.8845e+04
    CPU - 2.2413 2.9027 3.5326 5.8995 7.1385
    250 IT - 2.4789e+04 3.1179e+04 3.8087e+04 5.8929e+04 6.9713e+04
    CPU - 3.8122 4.8770 6.5623 9.9870 12.1248
    300 IT - 2.6105e+04 3.3267e+04 4.0252e+04 6.2172e+04 7.4024e+04
    CPU - 5.6167 7.1635 8.6136 13.1339 15.7273

    Figure 3.  The convergence behaviors of RSE versus IT and CPU given by the GRMR method with dense coefficient matrix , , , , and different for Example 2.

    We consider the following two types of real-world test data: the SuiteSparse Matrix Collection and the sparse Netlib LP instances.

    Example 3. The SuiteSparse Matrix Collection [23]. In Table 9, the coefficient matrices are chosen from the SuiteSparse Matrix Collection. We compare RMR and GRMR with SKM, GSKM, PASKM1, and PASKM2. For details, we list their sizes, densities, condition numbers (i.e., cond(A)), and squared Euclidean norms in Table 10, where the density of a matrix is defined as the number of its nonzero entries divided by the total number of entries (reported as a percentage).

    Table 9.  IT and CPU of six methods for Example 3 with .
    name ash219 ash958 ch7-8-b1 well1033 illc1850 ch6-6-b5
    SKM IT 555 24383 63036 4559 6114 17113
    CPU 0.0237 0.5705 12.3610 1.0811 3.0682 39.7281
    GSKM IT 381 1697 48929 3418 4552 12993
    CPU 0.0169 0.3626 10.4723 0.8127 2.2707 28.0804
    PASKM1 IT 258 1142 39865 2650 3497 10292
    CPU 0.0111 0.2512 8.6279 0.6352 1.7700 46.4743
    PASKM2 IT 39 166 20123 1549 1993 6603
    CPU 0.0018 0.0381 4.4998 0.3785 0.9553 13.3613
    RMR IT 113 379 24167 1859 2221 619
    CPU 0.0012 0.0182 0.6021 0.0455 0.1575 0.1430
    GRMR IT 28 109 15527 551 697 104
    CPU 0.0006 0.0112 0.3188 0.0155 0.0592 0.0392

    Table 10.  The properties of different sparse matrices in Example 3.
    name ash219 ash958 ch7-8-b1 well1033 illc1850 ch6-6-b5
    density 2.35 0.68 3.57 1.43 0.66 0.14
    cond(A) 3.0249 3.2014 3.5819e+15 166.1333 1.404e+03 1
    ‖A‖₂² 12.1422 17.9630 49.0000 3.2635 4.5086 6.0000


    In Table 9, we list the IT and CPU of the SKM, GSKM, PASKM1, PASKM2, RMR, and GRMR methods for the linear feasibility problem in Example 3. We observe from Table 9 that the GRMR method outperforms the SKM, GSKM, PASKM, and RMR methods in terms of both the iteration counts and the CPU time.

    Example 4. Netlib LP instances. We follow the standard framework used by De Loera et al. [19] and Morshed et al. [20] in their work on linear feasibility problems. The problems are transformed from standard LP problems (i.e., $\min\ c^Tx$ subject to $Ax=b$, $x\ge0$, with optimum value $p^*$) into an equivalent linear feasibility formulation of the form (1.1).

    In Table 11, we list the CPU time of the SKM, GSKM, PASKM2, RMR, and GRMR methods for the linear feasibility problem in Example 4. From Table 11, we see that the GRMR method takes less computing time than the other algorithms.

    Table 11.  CPU time comparisons of five methods for different matrices in Example 4.
    name Dimensions SKM GSKM PASKM2 RMR GRMR
    lp_sc50b -0.1 0.9 0.1746 0.1886 0.3636 0.1059 0.0874
    lp_adlittle 0.01 0.85 0.0222 0.0258 0.1173 0.0149 0.0076
    lp_recipe -0.2 1.05 2.6012 2.1534 3.1206 2.2878 1.9061
    degen2 -0.1 1.05 0.0107 0.0184 0.0140 0.0073 0.0068


    In this paper, based on partial rows of the residual vector, the RMR method and its general framework (GRMR) are proposed for solving linear feasibility problems. The GRMR method unifies various RMR-type algorithms through the relaxation parameter ξ. The convergence results are proved. Numerical examples, including synthetic data and real-world applications, demonstrate that the two methods often outperform the original methods. In particular, the GRMR method with a suitable choice of ξ takes less computing time. This implies that GRMR is a competitive row-action variant for solving linear feasibility problems. Meanwhile, from the numerical results, it can be seen that an appropriate choice of the parameters leads to more effective methods for different types of problems. In future work, we intend to identify the optimal choices of these parameters.

    Hui Song: Writing the original draft and deriving the convergence. Wendi Bao: Conceptualization, Methodology. Lili Xing: Visualization, Software. Weiguo Li: Review, Conceptualization.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the National Natural Science Foundation of China [grant numbers 61931025 and 42374156].

    The authors declare there is no conflict of interest.



    [1] S. Karczmarz, Angenäherte Auflösung von Systemen linearer Gleichungen, Bull. Int. Acad. Polon. Sci. Lett., 35 (1937), 355–357.
    [2] R. Gordon, R. Bender, G. T. Herman, Algebraic Reconstruction Techniques (ART) for three-dimensional electron microscopy and X-ray photography, J. Theor. Biol., 29 (1970), 471–481. https://doi.org/10.1016/0022-5193(70)90109-8
    [3] T. Strohmer, R. Vershynin, A randomized Kaczmarz algorithm with exponential convergence, J. Fourier Anal. Appl., 15 (2009), 262–278. https://doi.org/10.1007/s00041-008-9030-4 doi: 10.1007/s00041-008-9030-4
    [4] Z. Z. Bai, W. T. Wu, On convergence rate of the randomized Kaczmarz method, Linear Algebra Appl., 553 (2018), 252–269. https://doi.org/10.1016/j.laa.2018.05.009 doi: 10.1016/j.laa.2018.05.009
    [5] Y. C. Eldar, D. Needell, Acceleration of randomized Kaczmarz method via the Johnson-Lindenstrauss lemma, Numer. Algorithms, 58 (2011), 163–177. https://doi.org/10.1007/s11075-011-9451-z doi: 10.1007/s11075-011-9451-z
    [6] J. H. Guo, W. G. Li, The randomized Kaczmarz method with a new random selection rule, Numer. Math. J. Chin. Univ., 40 (2018), 65–75.
    [7] J. J. Zhang, A new greedy Kaczmarz algorithm for the solution of very large linear systems, Appl. Math. Lett., 91 (2019), 207–212. https://doi.org/10.1016/j.aml.2018.12.022 doi: 10.1016/j.aml.2018.12.022
    [8] Y. Liu, C. Q. Gu, Variant of greedy randomized Kaczmarz for ridge regression, Appl. Numer. Math., 143 (2019), 223–246. https://doi.org/10.1016/j.apnum.2019.04.008 doi: 10.1016/j.apnum.2019.04.008
    [9] T. Elfving, Block-iterative methods for consistent and inconsistent linear equations, Numer. Math., 35 (1980), 1–12. https://doi.org/10.1007/BF01396365 doi: 10.1007/BF01396365
    [10] D. Needell, J. A. Tropp, Paved with good intentions: Analysis of a randomized block Kaczmarz method, Linear Algebra Appl., 441 (2014), 199–221. https://doi.org/10.1016/j.laa.2012.12.022 doi: 10.1016/j.laa.2012.12.022
    [11] R. M. Gower, P. Richtárik, Randomized iterative methods for linear systems, SIAM J. Matrix Anal. Appl., 36 (2015), 1660–1690. https://doi.org/10.1137/15M1025487 doi: 10.1137/15M1025487
    [12] J. Q. Chen, Z. D. Huang, On a fast deterministic block Kaczmarz method for solving large-scale linear systems, Numer. Algorithms, 89 (2022), 1007–1029. https://doi.org/10.1007/s11075-021-01143-4 doi: 10.1007/s11075-021-01143-4
    [13] Z. Z. Bai, W. T. Wu, On greedy randomized Kaczmarz method for solving large sparse linear systems, SIAM J. Sci. Comput., 40 (2018), A592–A606. https://doi.org/10.1137/17M1137747 doi: 10.1137/17M1137747
    [14] N. C. Wu, L. X. Cui, Q. Zuo, On the relaxed greedy deterministic row and column iterative methods, Appl. Math. Comput., 432 (2022), 127339. https://doi.org/10.1016/j.amc.2022.127339 doi: 10.1016/j.amc.2022.127339
    [15] Y. Saad, Iterative Methods for Sparse Linear Systems, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2003.
    [16] D. Leventhal, A. S. Lewis, Randomized methods for linear constraints: Convergence rates and conditioning, Math. Oper. Res., 35 (2010), 641–654. https://doi.org/10.1287/moor.1100.0456 doi: 10.1287/moor.1100.0456
    [17] S. Agmon, The relaxation method for linear inequalities, Can. J. Math., 6 (1954), 382–392. https://doi.org/10.4153/CJM-1954-037-2 doi: 10.4153/CJM-1954-037-2
    [18] T. S. Motzkin, I. J. Schoenberg, The relaxation method for linear inequalities, Can. J. Math., 6 (1954), 393–404. https://doi.org/10.4153/CJM-1954-038-x doi: 10.4153/CJM-1954-038-x
    [19] J. A. De Loera, J. Haddock, D. Needell, A sampling Kaczmarz-Motzkin algorithm for linear feasibility, SIAM J. Sci. Comput., 39 (2017), S66–S87. https://doi.org/10.1137/16M1073807 doi: 10.1137/16M1073807
    [20] M. S. Morshed, M. S. Islam, M. Noor-E-Alam, Sampling Kaczmarz-Motzkin method for linear feasibility problems: Generalization and acceleration, Math. Program., 194 (2022), 719–779. https://doi.org/10.1007/s10107-021-01649-8 doi: 10.1007/s10107-021-01649-8
    [21] D. Leventhal, A. S. Lewis, Randomized methods for linear constraints: Convergence rates and conditioning, Math. Oper. Res., 35 (2010), 641–654. https://doi.org/10.1287/moor.1100.0456 doi: 10.1287/moor.1100.0456
    [22] A. J. Hoffman, On approximate solutions of systems of linear inequalities, J. Res. Nat. Bur. Stand., 49 (1952), 263–265.
    [23] S. P. Kolodziej, M. Aznaveh, M. Bull, T. A. Davis, M. Henderson, et al., The SuiteSparse matrix collection website interface, J. Open Source Software, 4 (2019), 1244. https://doi.org/10.21105/joss.01244
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)