Research article

Accelerated preconditioning Krasnosel'skiĭ-Mann method for efficiently solving monotone inclusion problems

  • Received: 22 August 2023; Revised: 22 September 2023; Accepted: 11 October 2023; Published: 18 October 2023
  • MSC : 47H04, 47H10, 65K05, 90C25

  • In this article, we propose a strongly convergent preconditioning method for finding a zero of the sum of two monotone operators. The proposed method combines a preconditioning approach with the robustness of the Krasnosel'skiĭ-Mann method. We prove strong convergence of the sequence generated by the proposed method to a solution of the monotone inclusion problem. Finally, we provide numerical experiments on a convex minimization problem.

    Citation: Natthaphon Artsawang. Accelerated preconditioning Krasnosel'skiĭ-Mann method for efficiently solving monotone inclusion problems[J]. AIMS Mathematics, 2023, 8(12): 28398-28412. doi: 10.3934/math.20231453




1. Introduction

    Let $H$ be a real Hilbert space equipped with an inner product $\langle \cdot, \cdot \rangle$, and let $\|\cdot\|$ denote the norm induced by this inner product.

    The monotone inclusion problem (MIP) is to find a point $x \in H$ such that

    $$0 \in Ax + Bx, \tag{1.1}$$

    where $A: H \to 2^H$ is a set-valued operator and $B: H \to H$ is a single-valued operator.

    This problem arises in a wide range of applications, including optimization, convex minimization, equilibrium problems, variational inequality problems, signal and image processing, machine learning, mechanics and partial differential equations (see, for example, [1,2,3,4,5,6,7,8,9]). To tackle the monotone inclusion problem, various techniques have been developed, including the method of alternating projections and the proximal point algorithm [10]. The forward-backward method is a well-known algorithm for solving the monotone inclusion problem involving two operators. The method, also known as the proximal gradient method or the iterative soft-thresholding algorithm, was introduced by Lions and Mercier in [11]. We recall the definitions of maximally monotone and cocoercive operators. The operator $A: H \to 2^H$ is called maximally monotone when no proper monotone extension of the graph of $A$ exists. For $L > 0$, the operator $B: H \to H$ is said to be $L$-cocoercive if it satisfies $L\|Bx - By\|^2 \le \langle x - y, Bx - By \rangle$ for all $x, y \in H$. The forward-backward method is defined as follows:

    $$x_{k+1} = (I + \lambda_k A)^{-1}(I - \lambda_k B)x_k, \quad k \in \mathbb{N}, \tag{1.2}$$

    where $A: H \to 2^H$ is a maximally monotone operator and $B: H \to H$ is $1/L$-cocoercive. Weak convergence of this algorithm was established under the assumption that $\lambda_k \in (0, 2/L)$.
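    To make the scheme concrete, the following MATLAB sketch implements iteration (1.2). The function handles resolventA (computing $(I + \lambda A)^{-1}$) and gradB (computing $B$) are illustrative assumptions supplied by the caller; they are not part of the original method.

      % Forward-backward iteration (1.2); a minimal sketch.
      % resolventA(y, lam) should return (I + lam*A)^{-1}(y); gradB(x) returns B(x).
      function x = forward_backward(resolventA, gradB, x1, lam, maxit)
          x = x1;
          for k = 1:maxit
              x = resolventA(x - lam * gradB(x), lam);  % forward (explicit) step, then backward (implicit) step
          end
      end

    For many choices of $A$ the resolvent is available in closed form, e.g., componentwise soft-thresholding when $A = \partial \|\cdot\|_1$.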

    In 1964, Polyak [12] introduced several innovative ideas aimed at improving the convergence speed of iterative algorithms. These approaches modify traditional iterative procedures, for instance by incorporating variable relaxation parameters or by adding an inertial extrapolation term $\theta_k(x_k - x_{k-1})$, where $\theta_k$ is a sequence satisfying suitable assumptions. Inertial extrapolation has since gained substantial attention and has been thoroughly investigated by numerous researchers; for more details, we refer to [13,14,15,16,17,18,19].

    Afterward, Moudafi and Oliny [20] combined the inertial technique with the forward-backward method to introduce a new algorithm for solving the MIP:

    $$\begin{cases} z_k = x_k + \theta_k(x_k - x_{k-1}), \\ x_{k+1} = (I + \lambda_k A)^{-1}(z_k - \lambda_k B(x_k)), \end{cases} \quad k \in \mathbb{N}, \tag{1.3}$$

    where $A: H \to 2^H$ is a maximally monotone operator, $(\theta_k)_{k \ge 0} \subset [0, 1)$ is a sequence of inertial parameters and $B: H \to H$ is $1/L$-cocoercive. It has been shown that the generated sequence $(x_k)_{k \ge 0}$ converges weakly to a solution of the MIP when $\sum_{k \ge 1} \theta_k \|x_k - x_{k-1}\|^2 < +\infty$ and $\lambda_k \in (0, 2/L)$.
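    A sketch of the inertial scheme (1.3), using the same illustrative handles as above, differs only in the extrapolation step; note that $B$ is evaluated at $x_k$, not at the extrapolated point $z_k$, exactly as in (1.3).

      % Inertial forward-backward iteration (1.3); a minimal sketch.
      function x = inertial_forward_backward(resolventA, gradB, x1, theta, lam, maxit)
          xprev = x1;  x = x1;              % taking x_0 = x_1 makes the first inertial term vanish
          for k = 1:maxit
              z = x + theta * (x - xprev);  % inertial extrapolation
              xnew = resolventA(z - lam * gradB(x), lam);  % B evaluated at x_k, as in (1.3)
              xprev = x;  x = xnew;
          end
      end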

    To accelerate algorithms for solving the MIP, preconditioners are frequently employed. One such approach is the preconditioned forward-backward algorithm, defined as follows:

    $$(\text{LP15}) \quad \begin{cases} z_k = x_k + \theta_k(x_k - x_{k-1}), \\ x_{k+1} = (I + \lambda_k M^{-1} A)^{-1}(z_k - \lambda_k M^{-1} B(x_k)), \end{cases} \quad k \in \mathbb{N}, \tag{1.4}$$

    where $A: H \to 2^H$ is a maximally monotone operator, $B: H \to H$ is $M$-cocoercive, and the preconditioner $M$ is a bounded linear operator. This method was introduced by Lorenz and Pock [21]; under appropriate assumptions on the parameters, it converges weakly to a solution of the MIP.
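    For the special case of a scalar preconditioner $M = cI$ with $c > 0$ (an assumption made here purely for illustration; a general $M$ requires solving the resolvent equation for $(I + \lambda M^{-1}A)^{-1}$), one LP15 step reduces to an ordinary resolvent with the rescaled step $\lambda/c$:

      % One LP15 step (1.4) for M = c*I, c > 0; a sketch. Here
      % (I + lam*M^{-1}*A)^{-1} = (I + (lam/c)*A)^{-1}, so an ordinary
      % resolvent handle can be reused with step lam/c.
      function [x, xprev] = lp15_step(resolventA, gradB, x, xprev, theta, lam, c)
          z = x + theta * (x - xprev);                          % inertial extrapolation
          xnew = resolventA(z - (lam / c) * gradB(x), lam / c);
          xprev = x;  x = xnew;
      end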

    In 2022, Altiparmak and Karahan [22] presented a new preconditioned forward-backward algorithm for solving the MIP, incorporating the viscosity technique. The algorithm is defined as follows:

    $$(\text{AK22}) \quad \begin{cases} z_k = x_k + \theta_k(x_k - x_{k-1}), \\ w_k = T(\alpha_k z_k + (1 - \alpha_k) T(z_k)), \\ x_{k+1} = \beta_k f(w_k) + (1 - \beta_k) T(w_k), \end{cases} \quad k \in \mathbb{N}, \tag{1.5}$$

    where $T := (I + \lambda M^{-1} A)^{-1}(I - \lambda M^{-1} B)$, $\lambda \in (0, 1]$, $f$ is a contraction, $(\theta_k)_{k \ge 0} \subset [0, \theta]$ with $\theta \in [0, 1)$, and $(\alpha_k)_{k \ge 0}$ and $(\beta_k)_{k \ge 0}$ are sequences in $(0, 1]$. They proved strong convergence of this method.
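    One AK22 iteration can be sketched as follows, with T a handle for the preconditioned forward-backward operator above and f a handle for the contraction; both handles and the scalar parameters are supplied by the caller.

      % One AK22 iteration (1.5); a minimal sketch.
      function [x, xprev] = ak22_step(T, f, x, xprev, theta, alpha, beta)
          z = x + theta * (x - xprev);             % inertial extrapolation
          w = T(alpha * z + (1 - alpha) * T(z));   % averaged double application of T
          xnew = beta * f(w) + (1 - beta) * T(w);  % viscosity step
          xprev = x;  x = xnew;
      end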

    Alternatively, for a nonexpansive operator $T: H \to H$ with $\operatorname{Fix} T \neq \emptyset$, where $\operatorname{Fix} T := \{x \in H : Tx = x\}$, the celebrated Krasnosel'skiĭ-Mann method [23, Theorem 2.2] for finding a point in $\operatorname{Fix} T$ has the following form:

    $$x_{k+1} := x_k + \lambda_k(T(x_k) - x_k),$$

    where $x_1 \in H$ is arbitrarily chosen and $(\lambda_k)_{k \ge 1} \subset (0, 1)$ is a real sequence. It is well known that the sequence generated by the Krasnosel'skiĭ-Mann method converges weakly to a point in $\operatorname{Fix} T$. To obtain a strongly convergent Krasnosel'skiĭ-Mann-type method, Boţ, Csetnek and Meier [24] proposed a modified Krasnosel'skiĭ-Mann method [24, Scheme (2)] of the following form:

    $$(\text{BCM19}) \quad x_{k+1} = \alpha_k \delta_k x_k + (1 - \alpha_k) T(\delta_k x_k), \quad k \ge 1, \tag{1.6}$$

    where $x_1 \in H$ and $(\alpha_k)_{k \ge 1}$, $(\delta_k)_{k \ge 1}$ are sequences in $(0, 1]$. They proved that the generated sequence converges strongly to a point $x^* \in \operatorname{Fix} T$. It is worth noting that $x^*$ has a special feature: it attains the minimal norm among all fixed points of $T$. The modified Krasnosel'skiĭ-Mann method (1.6) has been studied and generalized extensively in several directions, see for example [14,25,26,27]. Recently, many researchers have proposed iterative methods to solve fixed point problems; see, e.g., [28,29,30] and the references therein.
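    Scheme (1.6) can be sketched in MATLAB as follows, with T a nonexpansive operator handle and alphas, deltas vectors of length maxit holding the control sequences (names are illustrative):

      % Modified Krasnosel'skii-Mann scheme (1.6); a minimal sketch.
      function x = bcm19(T, x1, alphas, deltas, maxit)
          x = x1;
          for k = 1:maxit
              y = deltas(k) * x;                           % relaxation toward the origin
              x = alphas(k) * y + (1 - alphas(k)) * T(y);
          end
      end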

    The main contribution of this work is an iterative method designed to find a zero of the sum of two monotone operators, as formulated in problem (1.1). The method is built upon the preconditioning approach and the modified Krasnosel'skiĭ-Mann method (1.6). Under suitable conditions on the control sequences, we establish strong convergence of the proposed algorithm to a solution of problem (1.1). To illustrate the effectiveness of our approach, we present a series of numerical experiments on a convex minimization problem.

2. Preliminaries

    In this section, we collect results on real Hilbert spaces that are pertinent to this study, particularly for the convergence analysis.

    Let $C$ be a nonempty closed convex subset of $H$. For any $z \in H$, there exists a unique $x^* \in C$ satisfying $\|z - x^*\| = \inf_{x \in C} \|z - x\|$. If we define $\operatorname{proj}_C: H \to C$ by $\operatorname{proj}_C(z) = x^*$ for all $z \in H$, we call $\operatorname{proj}_C$ the metric projection (or nearest point projection) from $H$ onto $C$.

    Let $A: H \to 2^H$ be a set-valued operator. We denote the graph of $A$ by $\operatorname{gra}(A) := \{(x, w) \in H \times H : w \in Ax\}$. The operator $A$ is said to be monotone if $\langle x - y, w - z \rangle \ge 0$ for all $(x, w), (y, z) \in \operatorname{gra}(A)$, and it is called maximally monotone when no proper monotone extension of the graph of $A$ exists.

    For a function $h: H \to (-\infty, +\infty]$, we say that $h$ is proper if there exists at least one $x \in H$ such that $h(x) < +\infty$. The subdifferential of $h$ at $x \in H$, where $h(x) \in \mathbb{R}$, is defined as follows:

    $$\partial h(x) = \{w \in H : h(y) - h(x) \ge \langle w, y - x \rangle \ \forall y \in H\}.$$

    We say that $h$ is subdifferentiable at $x \in H$ if $\partial h(x) \neq \emptyset$. The elements of $\partial h(x)$ are referred to as the subgradients of $h$ at $x$. It is a well-established fact that the subdifferential of a proper convex lower semicontinuous function is a maximally monotone operator.
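    As a concrete illustration of these notions (used again in Section 4), take $h(x) = \|x\|_1$ on $\mathbb{R}^s$: the resolvent $(I + t\,\partial h)^{-1}$ of the maximally monotone operator $\partial h$ is the componentwise soft-thresholding map. The following MATLAB sketch records this standard fact:

      % Resolvent of the subdifferential of the l1-norm:
      % (I + t*dh)^{-1}(y) = sign(y) .* max(|y| - t, 0), componentwise.
      function x = soft_threshold(y, t)
          x = sign(y) .* max(abs(y) - t, 0);
      end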

    Let $M: H \to H$ be a bounded linear operator. $M$ is called self-adjoint if $M = M^*$, where $M^*$ denotes the adjoint of $M$. A self-adjoint operator is called positive definite if $\langle M(x), x \rangle > 0$ for every $x \in H$ with $x \neq 0$ [31].

    Given a self-adjoint, positive definite and bounded linear operator $M$, we define the $M$-inner product by

    $$\langle x, y \rangle_M = \langle x, M(y) \rangle, \quad x, y \in H,$$

    and the induced $M$-norm by

    $$\|x\|_M^2 = \langle x, M(x) \rangle, \quad x \in H.$$

    Definition 2.1. [32] Let $C$ be a nonempty subset of $H$, and let $M: H \to H$ be a positive definite operator. Then an operator $T: C \to H$ is said to be:

    (i) nonexpansive with respect to the $M$-norm if $\|Tx - Ty\|_M \le \|x - y\|_M$ for all $x, y \in C$;

    (ii) $M$-cocoercive if $\|Tx - Ty\|_{M^{-1}}^2 \le \langle x - y, Tx - Ty \rangle$ for all $x, y \in C$.

    Lemma 2.1. [32] Let $A: H \to 2^H$ be a maximally monotone operator, and let $B: H \to H$ be an $M$-cocoercive operator, where $M: H \to H$ is a bounded linear self-adjoint operator. Assume that $M$ is positive definite and $\lambda \in (0, 1]$. Then $(I + \lambda M^{-1} A)^{-1}(I - \lambda M^{-1} B)$ is nonexpansive with respect to the $M$-norm.

    Lemma 2.2. [32] Let $A: H \to 2^H$ be a maximally monotone operator, and let $B: H \to H$ be an $M$-cocoercive operator, where $M: H \to H$ is a bounded linear self-adjoint operator. Assume that $M$ is positive definite and $\lambda \in (0, \infty)$. Then $x \in H$ is a solution of the monotone inclusion problem (1.1) if and only if $x$ is a fixed point of $(I + \lambda M^{-1} A)^{-1}(I - \lambda M^{-1} B)$.

    The following lemma on sequences of real numbers is an essential tool in the convergence analysis.

    Lemma 2.3. [33,34] Let $(s_k)_{k \ge 1}$, $(\mu_k)_{k \ge 1}$ and $(\varepsilon_k)_{k \ge 1}$ be sequences of nonnegative real numbers satisfying the inequality

    $$s_{k+1} \le (1 - \delta_k) s_k + \mu_k + \varepsilon_k, \quad \forall k \ge 1,$$

    where $0 \le \delta_k \le 1$ for all $k \ge 1$. Assume that $\sum_{k \ge 1} \varepsilon_k < +\infty$. Then the following statements hold:

    (i) If $\mu_k \le c \delta_k$ for some $c \ge 0$, then $(s_k)_{k \ge 1}$ is bounded.

    (ii) If $\sum_{k \ge 1} \delta_k = \infty$ and $\limsup_{k \to +\infty} \mu_k / \delta_k \le 0$, then $(s_k)_{k \ge 1}$ converges to 0.

    Lemma 2.4. [1] Let $T$ be a nonexpansive operator from $H$ into itself. Let $(x_k)_{k \ge 1}$ be a sequence in $H$ and $x \in H$ such that $x_k \rightharpoonup x$ as $k \to +\infty$ (i.e., $(x_k)_{k \ge 1}$ converges weakly to $x$) and $x_k - Tx_k \to 0$ as $k \to +\infty$ (i.e., $(x_k - Tx_k)_{k \ge 1}$ converges strongly to 0). Then $x \in \operatorname{Fix}(T)$.

3. Main results

    This section presents the proposed algorithm and its convergence analysis.

    Algorithm 1
      Initialization: Given real sequences $(\alpha_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$ in $(0, 1]$ and $\lambda \in (0, 1]$, choose an arbitrary initial point $x_1 \in H$.
      Iterative Steps: For an iterate $x_k \in H$, define $x_{k+1} \in H$ by
        $$x_{k+1} := (I + \lambda M^{-1} A)^{-1}(I - \lambda M^{-1} B)\big(\alpha_k (\delta_k x_k) + (1 - \alpha_k)(I + \lambda M^{-1} A)^{-1}(I - \lambda M^{-1} B)(\delta_k x_k)\big).$$
      Update $k := k + 1$.
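    In MATLAB, Algorithm 1 can be sketched as follows. The handle Gamma, standing for the preconditioned forward-backward operator $(I + \lambda M^{-1}A)^{-1}(I - \lambda M^{-1}B)$, is an assumption here; how it is realized depends on $A$, $B$ and $M$ (see Corollary 3.1 below for a concrete instance).

      % Algorithm 1; a minimal sketch. Gamma is a handle for
      % (I + lam*M^{-1}*A)^{-1}(I - lam*M^{-1}*B); alphas, deltas are
      % vectors holding sequences that satisfy Assumption 3.1.
      function x = algorithm1(Gamma, x1, alphas, deltas, maxit)
          x = x1;
          for k = 1:maxit
              y = deltas(k) * x;
              x = Gamma(alphas(k) * y + (1 - alphas(k)) * Gamma(y));
          end
      end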

    To prove the convergence of Algorithm 1, we impose the following assumption throughout this work.

    Assumption 3.1. Let $(\alpha_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$ be sequences in $(0, 1]$ satisfying the following conditions:

    (1) $\liminf_{k \to +\infty} \alpha_k > 0$ and $\sum_{k \ge 1} |\alpha_k - \alpha_{k-1}| < +\infty$;

    (2) $\lim_{k \to +\infty} \delta_k = 1$, $\sum_{k \ge 1} (1 - \delta_k) = +\infty$ and $\sum_{k \ge 1} |\delta_k - \delta_{k-1}| < +\infty$.

    The following remark exhibits sequences satisfying Assumption 3.1.

    Remark 3.1. Set $\delta_k = 1 - \frac{1}{k+1}$ and $\alpha_k = \frac{1}{2} - \frac{1}{(k+1)^2}$. It is easy to see that Assumption 3.1 is satisfied.

    Theorem 3.1. Let $M$ be a bounded linear, self-adjoint and positive definite operator on $H$, let $A: H \to 2^H$ be maximally monotone and let $B: H \to H$ be an $M$-cocoercive operator such that $(A + B)^{-1}(0) \neq \emptyset$. Let $(x_k)_{k \ge 1}$ be generated by Algorithm 1, and assume that $(\alpha_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$ satisfy Assumption 3.1. Then the sequence $(x_k)_{k \ge 1}$ converges strongly to $\operatorname{proj}_{(A+B)^{-1}(0)}(0)$, where the projection is taken with respect to the $M$-norm.

    Proof. Let $x^* \in (A + B)^{-1}(0)$. For ease of reference, define $\Gamma := (I + \lambda M^{-1} A)^{-1}(I - \lambda M^{-1} B)$. Since $x^* \in \operatorname{Fix}(\Gamma)$ by Lemma 2.2 and $\Gamma$ is nonexpansive with respect to the $M$-norm by Lemma 2.1, we obtain from Algorithm 1 that

    $$\begin{aligned} \|x_{k+1} - x^*\|_M &= \|\Gamma(\alpha_k (\delta_k x_k) + (1 - \alpha_k)\Gamma(\delta_k x_k)) - \Gamma(x^*)\|_M \\ &\le \|\alpha_k (\delta_k x_k - x^*) + (1 - \alpha_k)(\Gamma(\delta_k x_k) - \Gamma(x^*))\|_M \\ &\le \|\delta_k x_k - x^*\|_M. \end{aligned} \tag{3.1}$$

    We now estimate the term $\|\delta_k x_k - x^*\|_M$ in inequality (3.1):

    $$\|\delta_k x_k - x^*\|_M = \|\delta_k (x_k - x^*) + (\delta_k - 1) x^*\|_M \le \delta_k \|x_k - x^*\|_M + (1 - \delta_k)\|x^*\|_M. \tag{3.2}$$

    Combining (3.1) and (3.2), we have

    $$\|x_{k+1} - x^*\|_M \le \delta_k \|x_k - x^*\|_M + (1 - \delta_k)\|x^*\|_M \le \max\{\|x_k - x^*\|_M, \|x^*\|_M\} \le \cdots \le \max\{\|x_1 - x^*\|_M, \|x^*\|_M\}, \tag{3.3}$$

    for all $k \ge 1$; hence $(x_k)_{k \ge 1}$ is bounded.

    Next, we claim that $\|x_{k+1} - x_k\|_M \to 0$ as $k \to +\infty$. Observe that

    $$\begin{aligned} \|x_{k+1} - x_k\|_M &= \|\Gamma(\alpha_k (\delta_k x_k) + (1 - \alpha_k)\Gamma(\delta_k x_k)) - \Gamma(\alpha_{k-1} (\delta_{k-1} x_{k-1}) + (1 - \alpha_{k-1})\Gamma(\delta_{k-1} x_{k-1}))\|_M \\ &\le \|\alpha_k (\delta_k x_k - \delta_{k-1} x_{k-1}) + (\alpha_k - \alpha_{k-1}) \delta_{k-1} x_{k-1} + (1 - \alpha_k)(\Gamma(\delta_k x_k) - \Gamma(\delta_{k-1} x_{k-1})) - (\alpha_k - \alpha_{k-1})\Gamma(\delta_{k-1} x_{k-1})\|_M \\ &\le \|\delta_k x_k - \delta_{k-1} x_{k-1}\|_M + |\alpha_k - \alpha_{k-1}| \left( \|\delta_{k-1} x_{k-1}\|_M + \|\Gamma(\delta_{k-1} x_{k-1})\|_M \right). \end{aligned} \tag{3.4}$$

    By the boundedness of $(x_k)_{k \ge 1}$ and the definition of $\Gamma$, there exists $C_1 > 0$ such that

    $$\|\delta_{k-1} x_{k-1}\|_M + \|\Gamma(\delta_{k-1} x_{k-1})\|_M \le C_1, \quad \forall k \ge 1.$$

    It follows that

    $$\|x_{k+1} - x_k\|_M \le \|\delta_k x_k - \delta_{k-1} x_{k-1}\|_M + |\alpha_k - \alpha_{k-1}| C_1, \quad \forall k \ge 1. \tag{3.5}$$

    Next, we estimate the term $\|\delta_k x_k - \delta_{k-1} x_{k-1}\|_M$ in inequality (3.5):

    $$\|\delta_k x_k - \delta_{k-1} x_{k-1}\|_M = \|\delta_k (x_k - x_{k-1}) + (\delta_k - \delta_{k-1}) x_{k-1}\|_M \le \delta_k \|x_k - x_{k-1}\|_M + |\delta_k - \delta_{k-1}| \|x_{k-1}\|_M, \quad \forall k \ge 1. \tag{3.6}$$

    By the boundedness of $(x_k)_{k \ge 1}$, there exists $C_2 > 0$ such that

    $$\|x_{k-1}\|_M \le C_2, \quad \forall k \ge 1.$$

    From inequality (3.6), we have

    $$\|\delta_k x_k - \delta_{k-1} x_{k-1}\|_M \le \delta_k \|x_k - x_{k-1}\|_M + |\delta_k - \delta_{k-1}| C_2, \quad \forall k \ge 1. \tag{3.7}$$

    Combining (3.5) and (3.7), we get that

    $$\|x_{k+1} - x_k\|_M \le \delta_k \|x_k - x_{k-1}\|_M + |\delta_k - \delta_{k-1}| C_2 + |\alpha_k - \alpha_{k-1}| C_1. \tag{3.8}$$

    By applying Lemma 2.3 (with the roles of $\delta_k$, $\mu_k$ and $\varepsilon_k$ played by $1 - \delta_k$, $0$ and $|\delta_k - \delta_{k-1}| C_2 + |\alpha_k - \alpha_{k-1}| C_1$, respectively) together with Assumption 3.1, we obtain that $\|x_{k+1} - x_k\|_M \to 0$ as $k \to +\infty$.

    Now, we prove that $\|\Gamma(\delta_k x_k) - \delta_k x_k\|_M \to 0$ as $k \to +\infty$. We observe that

    $$\begin{aligned} \|\Gamma(\delta_k x_k) - \delta_k x_k\|_M &\le \|\Gamma(\delta_k x_k) - x_{k+1}\|_M + \|x_{k+1} - \delta_k x_k\|_M \\ &\le \|\delta_k x_k - \alpha_k (\delta_k x_k) - (1 - \alpha_k)\Gamma(\delta_k x_k)\|_M + \|(1 - \delta_k) x_{k+1} + \delta_k x_{k+1} - \delta_k x_k\|_M \\ &\le (1 - \alpha_k)\|\Gamma(\delta_k x_k) - \delta_k x_k\|_M + (1 - \delta_k)\|x_{k+1}\|_M + \delta_k \|x_{k+1} - x_k\|_M. \end{aligned} \tag{3.9}$$

    It follows that

    $$\|\Gamma(\delta_k x_k) - \delta_k x_k\|_M \le \frac{1}{\alpha_k}\left((1 - \delta_k)\|x_{k+1}\|_M + \delta_k \|x_{k+1} - x_k\|_M\right). \tag{3.10}$$

    Since $\lim_{k \to +\infty} \|x_{k+1} - x_k\|_M = 0$, $\delta_k \to 1$, $\liminf_{k \to +\infty} \alpha_k > 0$ and $(x_k)_{k \ge 1}$ is bounded, we deduce that

    $$\lim_{k \to +\infty} \|\Gamma(\delta_k x_k) - \delta_k x_k\|_M = 0.$$

    Next, we show that $(x_k)_{k \ge 1}$ converges strongly to $\bar{x} := \operatorname{proj}_{(A+B)^{-1}(0)}(0)$, the projection being taken with respect to the $M$-norm. From inequality (3.1), we have

    $$\begin{aligned} \|x_{k+1} - \bar{x}\|_M^2 &\le \|\delta_k x_k - \bar{x}\|_M^2 = \|\delta_k (x_k - \bar{x}) + (\delta_k - 1)\bar{x}\|_M^2 \\ &= \delta_k^2 \|x_k - \bar{x}\|_M^2 + 2\delta_k (1 - \delta_k)\langle \bar{x}, \bar{x} - x_k \rangle_M + (1 - \delta_k)^2 \|\bar{x}\|_M^2 \\ &\le \delta_k \|x_k - \bar{x}\|_M^2 + (1 - \delta_k)\left(2\delta_k \langle \bar{x}, \bar{x} - x_k \rangle_M + (1 - \delta_k)\|\bar{x}\|_M^2\right), \end{aligned} \tag{3.11}$$

    for all $k \ge 1$.

    To show that the sequence $(x_k)_{k \ge 1}$ converges strongly to $\bar{x}$, it suffices to prove that

    $$\limsup_{k \to +\infty} \langle \bar{x}, \bar{x} - x_k \rangle_M \le 0. \tag{3.12}$$

    Suppose, to the contrary, that inequality (3.12) does not hold. Then there exist a real number $l > 0$ and a subsequence $(x_{k_i})_{i \ge 1}$ such that

    $$\langle \bar{x}, \bar{x} - x_{k_i} \rangle_M \ge l > 0, \quad \forall i \ge 1.$$

    Since $(x_k)_{k \ge 1}$ is bounded in the Hilbert space $H$, there exists a subsequence of $(x_{k_i})_{i \ge 1}$ that converges weakly to a point $z \in H$. Without loss of generality, we may assume that $x_{k_i} \rightharpoonup z$ as $i \to +\infty$. Therefore,

    $$0 < l \le \lim_{i \to +\infty} \langle \bar{x}, \bar{x} - x_{k_i} \rangle_M = \langle \bar{x}, \bar{x} - z \rangle_M. \tag{3.13}$$

    Since $\lim_{k \to +\infty} \delta_k = 1$, we get $\delta_{k_i} x_{k_i} \rightharpoonup z$ as $i \to +\infty$. Applying Lemma 2.4 (in $H$ endowed with the $M$-inner product), we obtain $z \in \operatorname{Fix}(\Gamma) = (A + B)^{-1}(0)$. The characterization of the metric projection then yields $\langle 0 - \bar{x}, z - \bar{x} \rangle_M \le 0$, that is, $\langle \bar{x}, \bar{x} - z \rangle_M \le 0$, which contradicts (3.13). Therefore, inequality (3.12) is verified. It follows that

    $$\limsup_{k \to +\infty} \left(2\delta_k \langle \bar{x}, \bar{x} - x_k \rangle_M + (1 - \delta_k)\|\bar{x}\|_M^2\right) \le 0.$$

    Using Lemma 2.3 and (3.11), we conclude that $\lim_{k \to +\infty} \|x_k - \bar{x}\|_M = 0$, that is, $x_k \to \bar{x}$. This completes the proof. $\Box$

    Now, let us consider the following convex minimization problem (CMP):

    $$\text{minimize } f(x) + g(x), \quad \text{subject to } x \in H, \tag{3.14}$$

    where $f: H \to \mathbb{R}$ is a proper convex lower semicontinuous function and $g: H \to \mathbb{R}$ is a differentiable convex function whose gradient $\nabla g$ is Lipschitz continuous with constant $L_g$. Since $g$ is differentiable and convex, the Baillon-Haddad theorem (see [1]) implies that $\nabla g$ is $1/L_g$-cocoercive. Furthermore, since $f$ is proper, convex and lower semicontinuous, $\partial f$ is maximally monotone. It is well known that a point $x^*$ solves the convex minimization problem (3.14) if and only if $0 \in \partial f(x^*) + \nabla g(x^*)$. In Theorem 3.1, set $A = \partial f$, $B = \nabla g$ and $M(x) = L_g x$. As a result, we deduce the following corollary.

    Corollary 3.1. Let $f: H \to \mathbb{R}$ be a proper convex lower semicontinuous function, and let $g: H \to \mathbb{R}$ be a differentiable convex function whose gradient is Lipschitz continuous with constant $L_g$. Assume that the solution set of the convex minimization problem (3.14) is nonempty and that the parameters $(\alpha_k)_{k \ge 1}$ and $(\delta_k)_{k \ge 1}$ satisfy Assumption 3.1. Let $(x_k)_{k \ge 1}$ be the sequence generated by:

    $$\begin{cases} x_1 \in H, \\ x_{k+1} := (I + \lambda L_g^{-1} \partial f)^{-1}(I - \lambda L_g^{-1} \nabla g)\big(\alpha_k (\delta_k x_k) + (1 - \alpha_k)(I + \lambda L_g^{-1} \partial f)^{-1}(I - \lambda L_g^{-1} \nabla g)(\delta_k x_k)\big). \end{cases} \tag{3.15}$$

    Then the sequence $(x_k)_{k \ge 1}$ converges strongly to a solution $x^*$ of the convex minimization problem (3.14).
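    For the $\ell_1$-regularized least-squares instance (4.1) treated in the next section, scheme (3.15) specializes as in the sketch below, since $(I + \lambda L_g^{-1}\partial f)^{-1}$ is soft-thresholding with threshold $\lambda/L_g$. The choice $L_g = 2\|K\|^2$ (a valid Lipschitz constant of $\nabla g$ for $g(x) = \|Kx - b\|_2^2$) and the simple stopping rule are assumptions of this illustration, not prescriptions of the paper.

      % Scheme (3.15) for problem (4.1): f(x) = ||x||_1, g(x) = ||K*x - b||_2^2.
      function x = corollary31_l1ls(K, b, x1, lam, alphas, deltas, tol, maxit)
          Lg = 2 * norm(K)^2;                                 % Lipschitz constant of grad g (assumed)
          gradg = @(x) 2 * K' * (K * x - b);                  % gradient of the least-squares term
          prox  = @(y) sign(y) .* max(abs(y) - lam / Lg, 0);  % (I + lam*Lg^{-1}*df)^{-1}: soft-thresholding
          FB    = @(x) prox(x - (lam / Lg) * gradg(x));       % forward-backward operator of (3.15)
          x = x1;
          for k = 1:maxit
              y = deltas(k) * x;
              xnew = FB(alphas(k) * y + (1 - alphas(k)) * FB(y));
              if norm(xnew - x) <= tol, x = xnew; break; end  % simple stopping rule (assumed)
              x = xnew;
          end
      end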

4. Numerical experiments

    In this section, we present numerical results comparing the performance of Algorithm 1, AK22 [22], BCM19 [24] and LP15 [21] in solving the convex minimization problem.

    We demonstrate the effectiveness of the proposed iterative method on a numerical example in the context of convex minimization, and we compare its convergence behavior with existing methods from the literature. All experiments were conducted in MATLAB 9.19 (R2022b) on a MacBook Pro (14-inch, 2021) with an Apple M1 Pro processor and 16 GB of memory.

    Let $f: \mathbb{R}^s \to \mathbb{R}$ be defined by $f(x) = \|x\|_1$ for all $x \in \mathbb{R}^s$, and let $g: \mathbb{R}^s \to \mathbb{R}$ be defined by $g(x) = \|Kx - b\|_2^2$, where $K: \mathbb{R}^s \to \mathbb{R}^l$ is a nonzero linear transformation and $b \in \mathbb{R}^l$. We consider the following minimization problem:

    $$\text{minimize } \|x\|_1 + \|Kx - b\|_2^2, \quad \text{subject to } x \in \mathbb{R}^s. \tag{4.1}$$

    The problem (4.1) can be written in the form of the monotone inclusion problem (1.1) as:

    $$\text{find } x^* \in \mathbb{R}^s \text{ such that } 0 \in \partial f(x^*) + \nabla g(x^*), \tag{4.2}$$

    where $A = \partial f(\cdot)$ and $B = \nabla g(\cdot)$.

    We generate the vectors $x_0 = x_1 \in \mathbb{R}^s$ and $b \in \mathbb{R}^l$ with entries drawn randomly from $(-1, 1)$; the matrix $K \in \mathbb{R}^{l \times s}$ is generated in the same way.

    In the numerical experiments, we terminate the algorithms with the stopping criterion

    $$\max\left\{\|x_k - x_{k-1}\|, \frac{\|x_k - x_{k-1}\|}{\|x_{k+1}\|}\right\} \le t.$$

    All computational times are reported in seconds (sec.). For Algorithm 1 (cf. Theorem 3.1), we set $M(x) = \|K\|^2 x$ and $\lambda = 1$. For AK22, we set $M(x) = \|K\|^2 x$, $\lambda = 1$, $f(x) = 0.99x$ for all $x \in \mathbb{R}^s$, and

    $$\theta_k = \begin{cases} \min\left\{1, \dfrac{1}{(k+1)^2 \|x_k - x_{k-1}\|}\right\}, & \text{if } x_k \neq x_{k-1}, \\ 1, & \text{otherwise}. \end{cases} \tag{4.3}$$

    For BCM19, we set $\lambda = \frac{1}{\|K\|^2}$. For LP15, we set $\lambda_k = 1$, $M(x) = \|K\|^2 x$ and $\theta_k$ as in (4.3).
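    The inertial rule (4.3) can be computed as in the following sketch (the function name is illustrative):

      % Inertial parameter theta_k from rule (4.3).
      function t = theta_rule(k, x, xprev)
          d = norm(x - xprev);
          if d > 0
              t = min(1, 1 / ((k + 1)^2 * d));
          else
              t = 1;
          end
      end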

    The best-performing parameter combinations for each method are as follows:

    Algorithm 1: $\alpha_k = 0.1 + \frac{1}{k+1}$ and $\delta_k = 1 - \frac{0.0005}{k+1}$.

    AK22: $\alpha_k = 0.2 + \frac{1}{k+1}$ and $\beta_k = \frac{1}{8k}$.

    BCM19: $\alpha_k = 0.1 + \frac{1}{k+1}$ and $\delta_k = 1 - \frac{0.0005}{k+1}$.

    For detailed parameter combinations, please refer to Appendices 1, 2 and 3.

    Next, we examine the behavior of Algorithm 1, AK22, BCM19 and LP15 in terms of average computational running time. We ran all methods for different sizes of $(l, s)$. The results are plotted in Figures 1-3.

    Figure 1.  Behaviors of Algorithm 1, AK22, BCM19 and LP15 for fixed dimension l=100.
    Figure 2.  Behaviors of Algorithm 1, AK22, BCM19 and LP15 for fixed dimension l=200.
    Figure 3.  Behaviors of Algorithm 1, AK22, BCM19 and LP15 for fixed dimension l=500.

5. Conclusions

    The objective of this study was to address a monotone inclusion problem governed by a maximally monotone operator and a cocoercive operator. We combined the preconditioning approach with the Krasnosel'skiĭ-Mann method and proved strong convergence of the generated sequence of iterates to a solution of the considered problem. Numerical experiments reveal that, under suitable parameters, the proposed method exhibits superior convergence behavior compared with existing algorithms.

    The author declares that no Artificial Intelligence (AI) tools were used in the creation of this article.

    N. Artsawang has received funding support from Naresuan University [grant number R2566C026].

    The author declares no conflict of interest.

    Appendix 1. Parameter combinations of Algorithm 1

    We start by investigating several parameter combinations for Algorithm 1 on the minimization problem (4.1), running the algorithm until the stopping criterion is met with $t = 10^{-7}$. Table 1 reports the average computational running time for various choices of the parameters $\alpha_k$ and $\delta_k$ when the dimension of $K$ is $(100, 400)$.

    Table 1. The average computational running time for several choices of the parameters $\alpha_k = \alpha + \frac{1}{k+1}$ and $\delta_k = 1 - \frac{\delta}{k+1}$.
    δ \ α       0.1     0.2     0.3     0.4     0.5
    δ = 0.0001  2.1098  2.4669  2.5797  2.4906  2.6156
    δ = 0.0005  2.0055  2.4642  2.7755  2.5717  2.6156
    δ = 0.001   2.2431  2.7309  2.3649  2.5245  2.4612
    δ = 0.005   2.1603  2.5758  2.5024  2.7668  2.7267
    δ = 0.01    2.7637  3.0821  2.7610  2.9218  2.9517
    δ = 0.05    5.5459  6.0621  5.6462  5.8979  6.2985
    δ = 0.1     8.3909  8.2648  8.3734  7.8792  8.7616


    Table 1 shows that the combination $\alpha_k = 0.1 + \frac{1}{k+1}$ with relaxation parameter $\delta_k = 1 - \frac{0.0005}{k+1}$ resulted in the shortest computational running time (2.0055 seconds).

    Appendix 2. Parameter combinations of AK22

    In this section, we present some parameter combinations of AK22. All experimental settings are the same as mentioned above.

    Table 2 presents the average computational running time for different choices of the parameters $\alpha_k$ and $\beta_k$. The combination $\alpha_k = 0.2 + \frac{1}{k+1}$ and $\beta_k = \frac{1}{8k}$ resulted in the shortest computational running time (2.0045 seconds).

    Table 2. The average computational running time for several choices of the parameters $\alpha_k = \alpha + \frac{1}{k+1}$ and $\beta_k = \frac{1}{\sigma k}$.
    σ \ α   0.1     0.2     0.3     0.4     0.5
    σ = 2   2.6500  2.4602  2.5416  2.6181  2.7257
    σ = 4   2.3598  2.1852  2.3602  2.4580  2.4626
    σ = 6   2.4725  2.1835  2.1933  2.7005  2.3785
    σ = 8   2.0482  2.0045  2.2490  2.6526  2.5000
    σ = 10  2.1977  2.2389  2.3762  2.3268  2.6241


    Appendix 3. Parameter combinations of BCM19

    In this section, we present some parameter combinations of BCM19.

    Table 3 shows that the combination $\alpha_k = 0.1 + \frac{1}{k+1}$ with relaxation parameter $\delta_k = 1 - \frac{0.0005}{k+1}$ resulted in the shortest computational running time (2.1517 seconds).

    Table 3. The average computational running time for several choices of the parameters $\alpha_k = \alpha + \frac{1}{k+1}$ and $\delta_k = 1 - \frac{\delta}{k+1}$.
    δ \ α       0.1     0.2     0.3     0.4     0.5
    δ = 0.0001  2.1876  2.4516  2.7168  2.8837  11.3226
    δ = 0.0005  2.1517  2.3143  2.4915  2.9669  11.999
    δ = 0.001   2.2124  2.7103  2.7338  3.2380  12.1952
    δ = 0.005   2.2525  2.3032  2.8382  3.2330  12.6409
    δ = 0.01    2.4413  3.1847  2.8669  3.3555  13.8754
    δ = 0.05    4.1909  4.8588  5.0916  5.8662  20.3472
    δ = 0.1     6.6171  6.7856  6.7514  6.9288  30.8285



References

    [1] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, New York: Springer Cham, 2011. https://doi.org/10.1007/978-3-319-48311-5
    [2] B. Engquist, Encyclopedia of applied and computational mathematics, Berlin: Springer, 2015. https://doi.org/10.1007/978-3-540-70529-1
    [3] J. Eckstein, D. P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators, Math. Program., 55 (1992), 293–318. https://doi.org/10.1007/BF01581204 doi: 10.1007/BF01581204
    [4] N. Artsawang, K. Ungchittrakool, A new forward-backward penalty scheme and its convergence for solving monotone inclusion problems, Carpathian. J. Math., 35 (2019), 349–363.
    [5] N. Artsawang, K. Ungchittrakool, A new splitting forward-backward algorithm and convergence for solving constrained convex optimization problem in Hilbert spaces, J. Nonlinear Convex Anal., 22 (2021), 1003–1023.
    [6] D. Kitkuan, P. Kumam, J. Martínez-Moreno, Generalized Halpern-type forward-backward splitting methods for convex minimization problems with application to image restoration problems, Optimization, 69 (2020), 1557–1581. https://doi.org/10.1080/02331934.2019.1646742 doi: 10.1080/02331934.2019.1646742
    [7] V. Dadashi, M. Postolache, Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators, Arab. J. Math., 9 (2020), 89–99. https://doi.org/10.1007/s40065-018-0236-2 doi: 10.1007/s40065-018-0236-2
    [8] J. S. Jung, A general iterative algorithm for split variational inclusion problems and fixed point problems of a pseudocontractive mapping, J. Nonlinear Funct. Anal., 2022 (2022), 13. https://doi.org/10.23952/jnfa.2022.13 doi: 10.23952/jnfa.2022.13
    [9] V. A. Uzor, T. O. Alakoya, O. T. Mewomo, Modified forward-backward splitting method for split equilibrium, variational inclusion, and fixed point problems, Appl. Set-Valued Anal. Optim., 5 (2023), 95–119.
    [10] M. R. Hestenes, Multiplier and gradient methods, J. Optim. Theory Appl., 4 (1969), 303–320. https://doi.org/10.1007/BF00927673 doi: 10.1007/BF00927673
    [11] P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. https://doi.org/10.1137/0716071 doi: 10.1137/0716071
    [12] B. T. Polyak, Some methods of speeding up the convergence of iterative methods, USSR Comput. Math. Math. Phys., 4 (1964), 1–17. https://doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [13] L. Liu, S. Y. Cho, J. C. Yao, Convergence analysis of an inertial Tseng's extragradient algorithm for solving pseudomonotone variational inequalities and applications, J. Nonlinear Var. Anal., 5 (2021), 627–644.
    [14] N. Artsawang, K. Ungchittrakool, Inertial Mann-type algorithm for a nonexpansive mapping to solve monotone inclusion and image restoration problems, Symmetry, 12 (2020), 750. https://doi.org/10.3390/sym12050750 doi: 10.3390/sym12050750
    [15] Y. E. Nesterov, A method for solving a convex programming problem with convergence rate O(1/k2), Dokl. Akad. Nauk SSSR, 269 (1983), 543–547.
    [16] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), 3–11.
    [17] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. https://doi.org/10.1016/S0377-0427(02)00906-8 doi: 10.1016/S0377-0427(02)00906-8
    [18] F. Alvarez, Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space, SIAM J. Optim., 14 (2004), 773–782. https://doi.org/10.1137/S1052623403427859 doi: 10.1137/S1052623403427859
    [19] H. Attouch, J. Bolte, B. F. Svaiter, Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods, Math. Program., 137 (2013), 91–129. https://doi.org/10.1007/s10107-011-0484-9 doi: 10.1007/s10107-011-0484-9
    [20] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. https://doi.org/10.1016/S0377-0427(02)00906-8 doi: 10.1016/S0377-0427(02)00906-8
    [21] D. A. Lorenz, T. Pock, An inertial forward-backward algorithm for monotone inclusions, J. Math. Imaging Vis., 51 (2015), 311–325. https://doi.org/10.1007/s10851-014-0523-2 doi: 10.1007/s10851-014-0523-2
    [22] E. Altiparmak, I. Karahan, A new preconditioning algorithm for finding a zero of the sum of two monotone operators and its application to image restoration problems, Int. J. Comput. Math., 99 (2022), 2482–2498. https://doi.org/10.1080/00207160.2022.2068146 doi: 10.1080/00207160.2022.2068146
    [23] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20 (2004), 103–120. https://doi.org/10.1088/0266-5611/20/1/006 doi: 10.1088/0266-5611/20/1/006
    [24] R. I. Boţ, E. R. Csetnek, D. Meier, Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces, Optim. Method. Softw., 34 (2019), 489–514. https://doi.org/10.1080/10556788.2018.1457151 doi: 10.1080/10556788.2018.1457151
    [25] K. Ungchittrakool, S. Plubtieng, N. Artsawang, P. Thammasiri, Modified Mann-type algorithm for two countable families of nonexpansive mappings and application to monotone inclusion and image restoration problems, Mathematics, 11 (2023), 2927. https://doi.org/10.3390/math11132927 doi: 10.3390/math11132927
    [26] R. I. Boţ, D. Meier, A strongly convergent Krasnosel'skiǐ-Mann-type algorithm for finding a common fixed point of a countably infinite family of nonexpansive operators in Hilbert spaces, J. Comput. Appl. Math., 395 (2021), 113589. https://doi.org/10.1016/j.cam.2021.113589 doi: 10.1016/j.cam.2021.113589
    [27] B. Tan, S. Y. Cho, An inertial Mann-like algorithm for fixed points of nonexpansive mappings in Hilbert spaces, J. Appl. Numer. Optim., 2 (2020), 335–351. https://doi.org/10.23952/jano.2.2020.3.05 doi: 10.23952/jano.2.2020.3.05
    [28] B. Tan, S. Li, Strong convergence of inertial Mann algorithms for solving hierarchical fixed point problems, J. Nonlinear Var. Anal., 4 (2020), 337–355. https://doi.org/10.23952/jnva.4.2020.3.02 doi: 10.23952/jnva.4.2020.3.02
    [29] B. Tan, S. Y. Cho, J. C. Yao, Accelerated inertial subgradient extragradient algorithms with non-monotonic step sizes for equilibrium problems and fixed point problems, J. Nonlinear Var. Anal., 6 (2022), 89–122. https://doi.org/10.23952/jnva.6.2022.1.06 doi: 10.23952/jnva.6.2022.1.06
    [30] B. Tan, S. Xu, S. Li, Modified inertial Hybrid and shrinking projection algorithms for solving fixed point problems, Mathematics, 8 (2020), 236. https://doi.org/10.3390/math8020236 doi: 10.3390/math8020236
    [31] B. V. Limaye, Functional analysis, New Age International, 1996.
    [32] A. Dixit, D. R. Sahu, P. Gautam, T. Som, J. C. Yao, An accelerated forward backward splitting algorithm for solving inclusion problems with applications to regression and link prediction problems, J. Nonlinear Var. Anal., 5 (2021), 79–101. https://doi.org/10.23952/jnva.5.2021.1.06 doi: 10.23952/jnva.5.2021.1.06
    [33] H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66 (2002), 240–256. https://doi.org/10.1112/S0024610702003332 doi: 10.1112/S0024610702003332
    [34] P. E. Mainge, Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces, J. Math. Anal. Appl., 325 (2007), 469–479. https://doi.org/10.1016/j.jmaa.2005.12.066 doi: 10.1016/j.jmaa.2005.12.066
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)