
In this article, we propose a strongly convergent preconditioning method for finding a zero of the sum of two monotone operators. The proposed method combines a preconditioning approach with the robustness of the Krasnosel'skiĭ-Mann method. We show the strong convergence result of the sequence generated by the proposed method to a solution of the monotone inclusion problem. Finally, we provide numerical experiments on the convex minimization problem.
Citation: Natthaphon Artsawang. Accelerated preconditioning Krasnosel'skiĭ-Mann method for efficiently solving monotone inclusion problems[J]. AIMS Mathematics, 2023, 8(12): 28398-28412. doi: 10.3934/math.20231453
Let H be a real Hilbert space equipped with an inner product denoted by ⟨⋅,⋅⟩, and let ‖⋅‖ denote the norm induced by this inner product.
The monotone inclusion problem (MIP) is to find a point x∗∈H such that
0∈Ax∗+Bx∗,   (1.1)
where A:H→2H is a set-valued operator and B:H→H is a single-valued operator.
This problem arises in a wide range of applications, including optimization, convex minimization, equilibrium problems, variational inequality problems, signal and image processing, machine learning, mechanics and partial differential equations (see, for example, [1,2,3,4,5,6,7,8,9]). To tackle the monotone inclusion problem, various techniques have been developed, including the method of alternating projections and the proximal point algorithm [10]. The forward-backward method, also known as the proximal gradient method or the iterative soft-thresholding algorithm, is a well-known algorithm for solving the monotone inclusion problem involving two operators; it was introduced by Lions and Mercier [11]. Let us recall two definitions. The operator A:H→2H is called maximally monotone when no proper monotone extension of the graph of A exists. For L>0, the operator B:H→H is said to be L-cocoercive if L‖Bx−By‖2≤⟨x−y,Bx−By⟩ for all x,y∈H. The forward-backward method is defined as follows:
xk+1=(I+λkA)−1(I−λkB)xk,  ∀ k∈N,   (1.2)
where A:H→2H is a maximally monotone operator and B:H→H is 1/L-cocoercive. They proved weak convergence of the generated sequence under the assumption that λk∈(0,2/L).
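For concreteness, iteration (1.2) can be sketched in MATLAB as follows; the handle names resolvA (evaluating the resolvent (I+λA)−1) and B are ours, not part of the original method.

```matlab
% Minimal sketch of the forward-backward iteration (1.2).
% resolvA(y, lam) evaluates the resolvent (I + lam*A)^{-1}(y);
% B(x) evaluates the single-valued operator. Both handles are assumptions.
function x = forward_backward(resolvA, B, x1, lam, maxit)
    x = x1;
    for k = 1:maxit
        x = resolvA(x - lam * B(x), lam);  % forward step on B, backward step on A
    end
end
```

For example, when A=∂‖⋅‖1, the resolvent is the soft-thresholding map, resolvA = @(y,t) sign(y).*max(abs(y)-t,0).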
In 1964, Polyak [12] introduced several innovative ideas aimed at improving the convergence speed of iterative algorithms. These approaches modify traditional iterative procedures, for instance by incorporating variable relaxation parameters or an inertial extrapolation term θk(xk−xk−1), where (θk) is a sequence satisfying specific assumptions. Subsequently, inertial extrapolation has gained substantial attention and has been thoroughly investigated by numerous researchers; for more details, please refer to [13,14,15,16,17,18,19].
Afterward, Moudafi and Oliny [20] applied the concept of the inertial method and the forward-backward method to introduce a new algorithm for solving MIP. The method is as follows:
zk=xk+θk(xk−xk−1),
xk+1=(I+λkA)−1(zk−λkB(xk)),  ∀ k∈N,   (1.3)
where A:H→2H is a maximally monotone operator, (θk)k≥0⊆[0,1) is a sequence of inertial parameters and B:H→H is 1/L-cocoercive. It has been shown that the generated sequence (xk)k≥0 converges weakly to a solution of MIP when ∑k≥1θk‖xk−xk−1‖2<+∞ and λk∈(0,2/L).
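One pass of scheme (1.3) can be sketched as follows (variable names are illustrative); note that, unlike in (1.2), the forward step evaluates B at xk while the resolvent acts on the extrapolated point zk.

```matlab
% One step of the inertial forward-backward scheme (1.3).
zk   = xk + theta_k * (xk - xkm1);            % inertial extrapolation
xkp1 = resolvA(zk - lam_k * B(xk), lam_k);    % B is evaluated at xk, not zk
xkm1 = xk;  xk = xkp1;                        % shift the history
```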
To accelerate the algorithm for solving MIP, preconditioners are frequently employed. One such approach is the preconditioning forward-backward algorithm for solving MIP, defined as follows:
(LP15)  zk=xk+θk(xk−xk−1),
xk+1=(I+λkM−1A)−1(zk−λkM−1B(xk)),  ∀ k∈N,   (1.4)
where A:H→2H is a maximally monotone operator and B:H→H is M-cocoercive, the preconditioner M being a bounded linear operator. This method was introduced by Lorenz and Pock [21]. Under appropriate assumptions on the parameters, it converges weakly to a solution of MIP.
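For the common special case of a scalar preconditioner M = cI with c>0 (an assumption we make here for simplicity; a general M requires evaluating (I+λM−1A)−1, whose cost depends on the structure of A and M), one step of (1.4) can be sketched as:

```matlab
% One step of (1.4) assuming M = c*I with c > 0, so that
% (I + lam*M^{-1}*A)^{-1} reduces to the resolvent of (lam/c)*A.
zk   = xk + theta_k * (xk - xkm1);                 % inertial extrapolation
xkp1 = resolvA(zk - (lam_k/c) * B(xk), lam_k/c);   % preconditioned forward-backward step
xkm1 = xk;  xk = xkp1;
```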
In 2022, Altiparmak and Karahan [22] presented a new preconditioning forward-backward algorithm for solving MIP, incorporating the concept of viscosity. The algorithm is defined as follows:
(AK22)  zk=xk+θk(xk−xk−1),
wk=T(αkzk+(1−αk)T(zk)),
xk+1=βkf(wk)+(1−βk)T(wk),  ∀ k∈N,   (1.5)
where T:=(I+λM−1A)−1(I−λM−1B), λ∈(0,1], f:H→H is a contraction (the viscosity term), (θk)k≥0⊆[0,θ] with θ∈[0,1), and (αk)k≥0 and (βk)k≥0 are sequences in (0,1]. They proved strong convergence of this method.
Alternatively, for a certain nonexpansive operator T:H→H with Fix T≠∅, where Fix T:={x∈H:Tx=x}, the celebrated Krasnosel'skiĭ-Mann method [23, Theorem 2.2] for finding a point in Fix T has the following form:
xk+1:=xk+λk(T(xk)−xk),
where x1∈H is arbitrarily chosen and (λk)k≥1⊂(0,1) is a real sequence. It is well known that the sequence generated by the Krasnosel'skiĭ-Mann method converges weakly to a point in Fix T. To obtain strong convergence with a Krasnosel'skiĭ-Mann type method, Boţ, Csetnek and Meier [24] proposed the following modification [24, Scheme (2)]:
(BCM19)  xk+1=αkδkxk+(1−αk)T(δkxk),  ∀ k≥1,   (1.6)
where x1∈H and (αk)k≥1, (δk)k≥1 are sequences in (0,1]. They proved that the generated sequence converges strongly to a point x∗∈Fix T. It is worth noting that such a point x∗ is distinguished: it is the minimal-norm element of Fix T. The modified Krasnosel'skiĭ-Mann method (1.6) has been studied and generalized extensively in several directions; see, for example, [14,25,26,27]. Recently, many researchers have proposed iterative methods to solve fixed point problems; see, e.g., [28,29,30] and the references therein.
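As a sketch, one step of scheme (1.6) reads as follows in MATLAB, with T a function handle for the nonexpansive operator:

```matlab
% One step of the modified Krasnosel'skii-Mann scheme (1.6). Compared with
% the classical step x = x + lam_k*(T(x) - x), the tilted argument delta_k*x
% is what drives the iterates toward the minimal-norm fixed point.
x = alpha_k * (delta_k * x) + (1 - alpha_k) * T(delta_k * x);
```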
The main contribution of this work is the introduction of an iterative method designed to find a zero of the sum of two monotone operators, as formulated in problem (1.1). This method is built upon the principles of preconditioning and the modified Krasnosel'skiĭ-Mann method (1.6). Under certain conditions on the control sequences, we establish the strong convergence of the proposed algorithm to a solution of problem (1.1). To illustrate the effectiveness of our approach, we present a series of numerical experiments on the convex minimization problem.
In this section, we present results from real Hilbert spaces that are pertinent to this study, particularly in the context of convergence analysis.
Consider C as a nonempty closed convex subset of H. For any z∈H, there exists a unique x∗∈C satisfying ‖z−x∗‖=infx∈C‖z−x‖. Furthermore, if we define projC:H→C by projC(z)=x∗ for all z∈H, we refer to projC as the metric projection (or nearest point projection) from H onto C.
Let A:H→2H be a set-valued operator. We denote the graph of A by gra(A):={(x,w)∈H×H:w∈Ax}. The operator A is said to be monotone if ⟨x−y,w−z⟩≥0 for all (x,w),(y,z)∈gra(A), and it is called maximally monotone when no proper monotone extension of the graph of A exists.
For a function h:H→(−∞,+∞], we say that h is proper if there exists at least one x∈H such that h(x)<+∞. The subdifferential of h at x∈H, where h(x)∈R, is defined as follows:
∂h(x)={w∈H:h(y)−h(x)≥⟨w,y−x⟩, ∀y∈H}.
We say that h is subdifferentiable at x∈H if ∂h(x)≠∅. The elements of ∂h(x) are referred to as the subgradients of h at x. For instance, for h=|⋅| on H=R, ∂h(0)=[−1,1] and ∂h(x)={sign(x)} for x≠0. It is a well-established fact that the subdifferential of a proper convex lower semicontinuous function is a maximally monotone operator.
Let M:H→H be a bounded linear operator. M is called self-adjoint if M∗=M, where M∗ denotes the adjoint of M. A self-adjoint operator is positive definite if ⟨M(x),x⟩>0 for every 0≠x∈H [31]. Given a self-adjoint, positive definite and bounded linear operator M, we define the M-inner product and the induced M-norm by
⟨x,y⟩M=⟨x,M(y)⟩  and  ‖x‖2M=⟨x,M(x)⟩,  ∀ x,y∈H.
Definition 2.1. [32] Let C be a nonempty subset of H, and let M:H→H be a positive definite operator. Then an operator T:C→H is said to be:
(i) nonexpansive with respect to the M-norm if ‖Tx−Ty‖M≤‖x−y‖M for all x,y∈C,
(ii) M-cocoercive if ‖Tx−Ty‖2M−1≤⟨x−y,Tx−Ty⟩ for all x,y∈C.
Lemma 2.1. [32] Let A:H→2H be a maximal monotone operator, and let B:H→H be a M-cocoercive operator, where M:H→H is a bounded linear self-adjoint operator. Assume that M is a positive definite operator and λ∈(0,1]. Then (I+λM−1A)−1(I−λM−1B) is nonexpansive with respect to M-norm.
Lemma 2.2. [32] Let A:H→2H be a maximal monotone operator, and let B:H→H be an M-cocoercive operator, where M:H→H is a bounded linear self-adjoint operator. Assume that M is a positive definite operator and λ∈(0,∞). Then x∈H is a solution of the monotone inclusion problem (1.1) if and only if x is a fixed point of (I+λM−1A)−1(I−λM−1B).
The subsequent lemma provides a standard tool for establishing the convergence of real sequences.
Lemma 2.3. [33,34] Let (sk)k≥1 and (μk)k≥1 be sequences of nonnegative real numbers and satisfy the inequality
sk+1≤(1−δk)sk+μk+εk,  ∀k≥1,
where 0≤δk≤1 for all k≥1. Assume that ∑k≥1εk<+∞. Then the following statements hold:
(i) If μk≤cδk (where c≥0), then (sk)k≥1 is bounded.
(ii) If ∑k≥1δk=∞ and lim supk→+∞ μk/δk≤0, then the sequence (sk)k≥1 converges to 0.
Lemma 2.4. [1] Let T be a nonexpansive operator from H into itself. Let (xk)k≥1 be a sequence in H and x∈H such that xk⇀x as k→+∞ (i.e., (xk)k≥1 converges weakly to x) and xk−Txk→0 as k→+∞ (i.e., (xk−Txk)k≥1 converges strongly to 0). Then x∈Fix(T).
This section discusses the convergence analysis of the proposed algorithm.
Algorithm 1
Initialization: Given real sequences (αk)k≥1 and (δk)k≥1 in (0,1] and λ∈(0,1], choose an arbitrary initial point x1∈H.
Iterative Steps: Given the current iterate xk∈H, compute
xk+1:=(I+λM−1A)−1(I−λM−1B)(αk(δkxk)+(1−αk)(I+λM−1A)−1(I−λM−1B)(δkxk)),
and update k:=k+1.
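A minimal MATLAB sketch of Algorithm 1, under the simplifying assumption M = cI with c>0 (which covers the experiments of Section 4, where c=‖K‖2), is given below; the handle names and the displayed parameter choices are ours, not part of the algorithm statement.

```matlab
% Sketch of Algorithm 1 assuming M = c*I, so that (I + lam*M^{-1}*A)^{-1}
% is the resolvent of (lam/c)*A. resolvA(y, t) evaluates (I + t*A)^{-1}(y);
% B(x) evaluates the cocoercive operator.
function x = algorithm1(resolvA, B, x1, lam, c, maxit)
    x = x1;
    Gamma = @(y) resolvA(y - (lam/c) * B(y), lam/c);  % (I+lam*M^{-1}A)^{-1}(I-lam*M^{-1}B)
    for k = 1:maxit
        alphak = 0.1 + 1/(k+1);        % one admissible choice (cf. Section 4)
        deltak = 1 - 0.0005/(k+1);
        y = deltak * x;
        x = Gamma(alphak * y + (1 - alphak) * Gamma(y));
    end
end
```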
To prove the convergence of Algorithm 1, we make the following standing assumption.
Assumption 3.1. Let (αk)k≥1 and (δk)k≥1 be sequences in (0,1] satisfying the following conditions:
(1) lim infk→+∞αk>0 and ∑k≥1|αk−αk−1|<+∞,
(2) limk→+∞δk=1, ∑k≥1(1−δk)=+∞ and ∑k≥1|δk−δk−1|<+∞.
The following remark exhibits parameter sequences satisfying Assumption 3.1.
Remark 3.1. Set δk=1−1/(k+1) and αk=1/2−1/(k+1)2. Then Assumption 3.1 is satisfied: lim infk→+∞αk=1/2>0, δk→1, ∑k≥1(1−δk)=∑k≥1 1/(k+1) diverges, and both |αk−αk−1| and |δk−δk−1| are of order at most 1/k2, hence summable.
Theorem 3.1. Let M be a bounded linear, self-adjoint and positive definite operator on H, let A:H→2H be maximally monotone, and let B:H→H be M-cocoercive such that (A+B)−1(0)≠∅. Let (xk)k≥1 be generated by Algorithm 1, where (αk)k≥1 and (δk)k≥1 satisfy Assumption 3.1. Then the sequence (xk)k≥1 converges strongly to proj(A+B)−1(0)(0), the minimal-norm solution of (1.1).
Proof. Let x∗∈(A+B)−1(0). For brevity, set Γ:=(I+λM−1A)−1(I−λM−1B). By Lemma 2.1, Γ is nonexpansive with respect to the M-norm, and by Lemma 2.2, x∗∈Fix(Γ). From Algorithm 1, we obtain that
‖xk+1−x∗‖M=‖Γ(αk(δkxk)+(1−αk)Γ(δkxk))−x∗‖M
≤‖αk(δkxk)+(1−αk)Γ(δkxk)−x∗‖M
≤‖δkxk−x∗‖M.   (3.1)
Let us consider ‖δkxk−x∗‖M in the inequality (3.1),
‖δkxk−x∗‖M=‖δk(xk−x∗)+(δk−1)x∗‖M≤δk‖xk−x∗‖M+(1−δk)‖x∗‖M.   (3.2)
Combining (3.1) and (3.2), we have
‖xk+1−x∗‖M≤δk‖xk−x∗‖M+(1−δk)‖x∗‖M
≤max{‖xk−x∗‖M,‖x∗‖M}
⋮
≤max{‖x1−x∗‖M,‖x∗‖M},   (3.3)
for all k≥1; hence (xk)k≥1 is bounded.
Next, we claim that ‖xk+1−xk‖M→0 as k→+∞. Let us consider
‖xk+1−xk‖M=‖Γ(αk(δkxk)+(1−αk)Γ(δkxk))−Γ(αk−1(δk−1xk−1)+(1−αk−1)Γ(δk−1xk−1))‖M
≤‖αk(δkxk−δk−1xk−1)+(αk−αk−1)δk−1xk−1‖M+‖(1−αk)(Γ(δkxk)−Γ(δk−1xk−1))−(αk−αk−1)Γ(δk−1xk−1)‖M
≤‖δkxk−δk−1xk−1‖M+|αk−αk−1|(‖δk−1xk−1‖M+‖Γ(δk−1xk−1)‖M).   (3.4)
By the boundedness of the sequence (xk)k≥1 and the nonexpansiveness of Γ, there exists C1>0 such that
‖δk−1xk−1‖M+‖Γ(δk−1xk−1)‖M≤C1,  ∀k≥1.
It follows that
‖xk+1−xk‖M≤‖δkxk−δk−1xk−1‖M+|αk−αk−1|C1,  ∀k≥1.   (3.5)
Next, we estimate the term ‖δkxk−δk−1xk−1‖M appearing in (3.5). Observe that
‖δkxk−δk−1xk−1‖M=‖δk(xk−xk−1)+(δk−δk−1)xk−1‖M≤δk‖xk−xk−1‖M+|δk−δk−1|‖xk−1‖M,  ∀k≥1.   (3.6)
By the boundedness of a sequence (xk)k≥1, there exists C2>0 such that
‖xk−1‖M≤C2,  ∀k≥1.
From inequality (3.6), we have
‖δkxk−δk−1xk−1‖M≤δk‖xk−xk−1‖M+|δk−δk−1|C2,  ∀k≥1.   (3.7)
Combining (3.5) and (3.7), we get that
‖xk+1−xk‖M≤δk‖xk−xk−1‖M+|δk−δk−1|C2+|αk−αk−1|C1,  ∀k≥1.   (3.8)
By applying Lemma 2.3 and Assumption 3.1, we obtain that ‖xk+1−xk‖M→0 as k→+∞.
Now, we prove that ‖Γ(δkxk)−δkxk‖M→0 as k→+∞. We observe that
‖Γ(δkxk)−δkxk‖M=‖Γ(δkxk)−xk+1+xk+1−δkxk‖M
≤‖Γ(δkxk)−xk+1‖M+‖xk+1−δkxk‖M
≤‖δkxk−αk(δkxk)−(1−αk)Γ(δkxk)‖M+‖(1−δk)xk+1+δk(xk+1−xk)‖M
≤(1−αk)‖Γ(δkxk)−δkxk‖M+(1−δk)‖xk+1‖M+δk‖xk+1−xk‖M.   (3.9)
It follows that
‖Γ(δkxk)−δkxk‖M≤(1/αk)((1−δk)‖xk+1‖M+δk‖xk+1−xk‖M).   (3.10)
Since limk→+∞‖xk+1−xk‖M=0, limk→+∞δk=1, (xk)k≥1 is bounded and lim infk→+∞αk>0, inequality (3.10) yields
limk→+∞‖Γ(δkxk)−δkxk‖M=0.
Next, we show that (xk)k≥1 converges strongly to ¯x:=proj(A+B)−1(0)(0). From inequality (3.1), we obtain
‖xk+1−¯x‖2M≤‖δkxk−¯x‖2M=‖δk(xk−¯x)+(δk−1)¯x‖2M
=δ2k‖xk−¯x‖2M+2δk(1−δk)⟨−¯x,xk−¯x⟩M+(1−δk)2‖¯x‖2M
≤δk‖xk−¯x‖2M+(1−δk)(2δk⟨−¯x,xk−¯x⟩M+(1−δk)‖¯x‖2M),   (3.11)
for all k≥1.
In order to show that the sequence (xk)k≥1 strongly converges to ¯x, it is sufficient to prove that
lim supk→+∞⟨−¯x,xk−¯x⟩M≤0.   (3.12)
On the other hand, assume that the inequality (3.12) does not hold. In this case, there exists a real number l>0 and a subsequence (xki)i≥1 such that
⟨−¯x,xki−¯x⟩M≥l>0,  ∀i≥1.
Since (xk)k≥1 is bounded in the Hilbert space H, the subsequence (xki)i≥1 admits a further subsequence that converges weakly to some z∈H. Without loss of generality, we may assume that xki⇀z as i→+∞. Therefore,
0<l≤limi→+∞⟨−¯x,xki−¯x⟩M=⟨−¯x,z−¯x⟩M.   (3.13)
Since limk→+∞δk=1, we get δkixki⇀z as i→+∞. Moreover, ‖Γ(δkxk)−δkxk‖M→0, so Lemma 2.4 gives z∈Fix(Γ), and hence z∈(A+B)−1(0) by Lemma 2.2. By the variational characterization of the metric projection (in the M-inner product), ⟨−¯x,z−¯x⟩M≤0, which contradicts (3.13). Therefore, inequality (3.12) is verified. It follows that
lim supk→+∞(2δk⟨−¯x,xk−¯x⟩M+(1−δk)‖¯x‖2M)≤0.
Applying Lemma 2.3 to (3.11), we conclude that limk→+∞xk=¯x. This completes the proof.
Now, let us consider the following convex minimization problem (CMP):
minimize f(x)+g(x)  subject to x∈H,   (3.14)
where f:H→R is a proper convex lower semicontinuous function and g:H→R is a convex differentiable function whose gradient is Lipschitz continuous with constant Lg. Since g is convex and differentiable, the Baillon-Haddad theorem (see [1]) shows that ∇g is 1/Lg-cocoercive. Furthermore, since f is a proper convex lower semicontinuous function, ∂f is maximally monotone. It is well known that a point x∗ solves the convex minimization problem (3.14) if and only if 0∈∂f(x∗)+∇g(x∗). In Theorem 3.1, set A=∂f, B=∇g and M=LgI (that is, M(x)=Lgx). As a result, we deduce the following corollary.
Corollary 3.1. Let f:H→R be a proper convex lower semicontinuous function, and let g:H→R be a convex differentiable function whose gradient is Lipschitz continuous with constant Lg. Assume that the solution set of the convex minimization problem (3.14) is nonempty, and that the parameters (αk)k≥1 and (δk)k≥1 satisfy Assumption 3.1. Let (xk)k≥1 be the sequence generated by
x1∈H,
xk+1:=(I+(λ/Lg)∂f)−1(I−(λ/Lg)∇g)(αk(δkxk)+(1−αk)(I+(λ/Lg)∂f)−1(I−(λ/Lg)∇g)(δkxk)).   (3.15)
Then, the sequence (xk)k≥1 strongly converges to a solution x∗ of the convex minimization problem.
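For f=‖⋅‖1, iteration (3.15) admits the following sketch; soft is the proximal map of the ℓ1-norm, while gradg, Lg, the parameter arrays alpha and delta, and maxit are assumed to be supplied by the caller.

```matlab
% Sketch of iteration (3.15) with f = ||.||_1, so that the resolvent of
% (lam/Lg)*subdiff(f) is componentwise soft thresholding.
soft  = @(y, t) sign(y) .* max(abs(y) - t, 0);         % prox of t*||.||_1
Gamma = @(y) soft(y - (lam/Lg) * gradg(y), lam/Lg);    % forward-backward map
x = x1;
for k = 1:maxit
    y = delta(k) * x;
    x = Gamma(alpha(k) * y + (1 - alpha(k)) * Gamma(y));
end
```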
In this section, we present numerical results comparing the performance of Algorithm 1, AK22 [22], BCM19 [24] and LP15 [21] in solving the convex minimization problem.
We demonstrate the effectiveness of the proposed iterative method with a numerical example in the context of convex minimization and compare its convergence performance with existing methods from the literature. All experiments were conducted in MATLAB 9.13 (R2022b) on a MacBook Pro (14-inch, 2021) with an Apple M1 Pro processor and 16 GB of memory.
Let f:Rs→R be defined by f(x)=‖x‖1 and g:Rs→R by g(x)=‖Kx−b‖22, where K∈Rl×s is a nonzero matrix and b∈Rl. We consider the following minimization problem:
minimize ‖x‖1+‖Kx−b‖22  subject to x∈Rs.   (4.1)
The problem (4.1) can be written in the form of the monotone inclusion problem (1.1) as:
find x∈Rs such that 0∈∂f(x)+∇g(x),   (4.2)
where A=∂f(⋅) and B=∇g(⋅).
We generate the vectors x0=x1∈Rs and b∈Rl with entries drawn uniformly at random from (−1,1); the matrix K∈Rl×s is generated in the same way.
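The data generation can be reproduced, for instance, as follows (a sketch; the sizes shown are one of the tested configurations, and gradg is the closed-form gradient of g):

```matlab
% Sketch of the random test data: entries uniform in (-1, 1).
l = 100;  s = 400;                     % one of the tested sizes (l, s)
K  = 2 * rand(l, s) - 1;               % l-by-s matrix
b  = 2 * rand(l, 1) - 1;
x1 = 2 * rand(s, 1) - 1;  x0 = x1;     % starting points with x0 = x1
gradg = @(x) 2 * K' * (K * x - b);     % gradient of g(x) = ||Kx - b||_2^2
```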
In this numerical experiment, we terminate the algorithms with the stopping criterion
max{‖xk−xk−1‖, ‖xk−xk−1‖/‖xk+1‖}≤t.
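In code, the stopping test can be sketched as below; the second term is our reading of the criterion above as the relative change ‖xk−xk−1‖/‖xk+1‖.

```matlab
% Sketch of the stopping test with tolerance t.
change = norm(xk - xkm1);
if max(change, change / norm(xkp1)) <= t
    break   % terminate the iteration loop
end
```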
All computational times are given in seconds (sec.). For Algorithm 1 (Theorem 3.1), we set M(x)=‖K‖2x and λ=1. For AK22, we set M(x)=‖K‖2x, λ=1, f(x)=0.99x for all x∈Rs, and
θk = min{1, 1/((k+1)2‖xk−xk−1‖)} if xk≠xk−1, and θk=1 otherwise.   (4.3)
For BCM19, we set λ=1/‖K‖2. For LP15, we set λk=1, M(x)=‖K‖2x and θk as defined in (4.3).
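The inertial rule (4.3) used by AK22 and LP15 can be sketched as:

```matlab
% Sketch of the inertial parameter rule (4.3).
d = norm(xk - xkm1);
if d > 0
    thetak = min(1, 1 / ((k+1)^2 * d));  % vanishing inertial weight
else
    thetak = 1;
end
```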
The optimal parameter combinations for each method are as follows:
● Algorithm 1: αk=0.1+1/(k+1) and δk=1−0.0005/(k+1).
● AK22: αk=0.2+1/(k+1) and βk=1/(8k).
● BCM19: αk=0.1+1/(k+1) and δk=1−0.0005/(k+1).
For detailed parameter combinations, please refer to Appendices 1, 2 and 3.
Next, we present the behavior of Algorithm 1, AK22, BCM19 and LP15 in terms of average computational running time. We ran all methods for different sizes of (l,s). The results are plotted in Figures 1–3.
The objective of this study was to address a monotone inclusion problem governed by a maximally monotone operator and a cocoercive operator. We combined the preconditioning approach with the Krasnosel'skiĭ-Mann method and proved the strong convergence of the generated sequence of iterates to a solution of the considered problem. Numerical experiments reveal that, under suitable parameters, the proposed method exhibits superior convergence behavior compared to existing algorithms.
The author declares that no Artificial Intelligence (AI) tools were used in the creation of this article.
N. Artsawang has received funding support from Naresuan University [grant number R2566C026].
The author declares no conflict of interest.
Parameter combinations of Algorithm 1
We start with an investigation of several parameter combinations for Algorithm 1 applied to the minimization problem (4.1), running the algorithm until the stopping criterion with t=10−7 is met. Table 1 reports the average computational running time for various choices of the parameters αk=α+1/(k+1) and δk=1−δ/(k+1) when the dimension of K is (l,s)=(100,400).
α | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
δ = 0.0001 | 2.1098 | 2.4669 | 2.5797 | 2.4906 | 2.6156 |
δ = 0.0005 | 2.0055 | 2.4642 | 2.7755 | 2.5717 | 2.6156 |
δ = 0.001 | 2.2431 | 2.7309 | 2.3649 | 2.5245 | 2.4612 |
δ = 0.005 | 2.1603 | 2.5758 | 2.5024 | 2.7668 | 2.7267 |
δ = 0.01 | 2.7637 | 3.0821 | 2.7610 | 2.9218 | 2.9517 |
δ = 0.05 | 5.5459 | 6.0621 | 5.6462 | 5.8979 | 6.2985 |
δ = 0.1 | 8.3909 | 8.2648 | 8.3734 | 7.8792 | 8.7616 |
Table 1 shows that the combination αk=0.1+1/(k+1) with the relaxation parameter δk=1−0.0005/(k+1) resulted in the shortest computational running time (2.0055 seconds).
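The scan behind Table 1 can be reproduced with a loop of the following shape; run_algorithm1 is a hypothetical driver that runs Algorithm 1 with αk=α+1/(k+1) and δk=1−δ/(k+1) until the stopping criterion with t=10−7 is met.

```matlab
% Sketch of the (alpha, delta) grid scan behind Table 1.
alphas = [0.1 0.2 0.3 0.4 0.5];
deltas = [1e-4 5e-4 1e-3 5e-3 1e-2 5e-2 1e-1];
times  = zeros(numel(deltas), numel(alphas));
for i = 1:numel(deltas)
    for j = 1:numel(alphas)
        tic;
        run_algorithm1(alphas(j), deltas(i));  % hypothetical driver
        times(i, j) = toc;                     % average over repetitions in practice
    end
end
```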
Parameter combinations of AK22
In this section, we present some parameter combinations of AK22. All experimental settings are the same as mentioned above.
Table 2 presents the average computational running time for different choices of αk=α+1/(k+1) and βk=1/(σk). The combination αk=0.2+1/(k+1) and βk=1/(8k) resulted in the shortest computational running time (2.0045 seconds).
α | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
σ=2 | 2.6500 | 2.4602 | 2.5416 | 2.6181 | 2.7257 |
σ=4 | 2.3598 | 2.1852 | 2.3602 | 2.4580 | 2.4626 |
σ=6 | 2.4725 | 2.1835 | 2.1933 | 2.7005 | 2.3785 |
σ=8 | 2.0482 | 2.0045 | 2.2490 | 2.6526 | 2.5000 |
σ=10 | 2.1977 | 2.2389 | 2.3762 | 2.3268 | 2.6241 |
Parameter combinations of BCM19
In this section, we present some parameter combinations of BCM19.
Table 3 shows that the combination αk=0.1+1/(k+1) with the relaxation parameter δk=1−0.0005/(k+1) resulted in the shortest computational running time (2.1517 seconds).
α | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
δ=0.0001 | 2.1876 | 2.4516 | 2.7168 | 2.8837 | 11.3226 |
δ=0.0005 | 2.1517 | 2.3143 | 2.4915 | 2.9669 | 11.999 |
δ=0.001 | 2.2124 | 2.7103 | 2.7338 | 3.2380 | 12.1952 |
δ=0.005 | 2.2525 | 2.3032 | 2.8382 | 3.2330 | 12.6409 |
δ=0.01 | 2.4413 | 3.1847 | 2.8669 | 3.3555 | 13.8754 |
δ=0.05 | 4.1909 | 4.8588 | 5.0916 | 5.8662 | 20.3472 |
δ=0.1 | 6.6171 | 6.7856 | 6.7514 | 6.9288 | 30.8285 |
[1] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, New York: Springer, 2011. https://doi.org/10.1007/978-3-319-48311-5
[2] B. Engquist, Encyclopedia of applied and computational mathematics, Berlin: Springer, 2015. https://doi.org/10.1007/978-3-540-70529-1
[3] J. Eckstein, D. P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators, Math. Program., 55 (1992), 293–318. https://doi.org/10.1007/BF01581204
[4] N. Artsawang, K. Ungchittrakool, A new forward-backward penalty scheme and its convergence for solving monotone inclusion problems, Carpathian J. Math., 35 (2019), 349–363.
[5] N. Artsawang, K. Ungchittrakool, A new splitting forward-backward algorithm and convergence for solving constrained convex optimization problem in Hilbert spaces, J. Nonlinear Convex Anal., 22 (2021), 1003–1023.
[6] D. Kitkuan, P. Kumam, J. Martínez-Moreno, Generalized Halpern-type forward-backward splitting methods for convex minimization problems with application to image restoration problems, Optimization, 69 (2020), 1557–1581. https://doi.org/10.1080/02331934.2019.1646742
[7] V. Dadashi, M. Postolache, Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators, Arab. J. Math., 9 (2020), 89–99. https://doi.org/10.1007/s40065-018-0236-2
[8] J. S. Jung, A general iterative algorithm for split variational inclusion problems and fixed point problems of a pseudocontractive mapping, J. Nonlinear Funct. Anal., 2022 (2022), 13. https://doi.org/10.23952/jnfa.2022.13
[9] V. A. Uzor, T. O. Alakoya, O. T. Mewomo, Modified forward-backward splitting method for split equilibrium, variational inclusion, and fixed point problems, Appl. Set-Valued Anal. Optim., 5 (2023), 95–119.
[10] M. R. Hestenes, Multiplier and gradient methods, J. Optim. Theory Appl., 4 (1969), 303–320. https://doi.org/10.1007/BF00927673
[11] P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. https://doi.org/10.1137/0716071
[12] B. T. Polyak, Some methods of speeding up the convergence of iterative methods, USSR Comput. Math. Math. Phys., 4 (1964), 1–17. https://doi.org/10.1016/0041-5553(64)90137-5
[13] L. Liu, S. Y. Cho, J. C. Yao, Convergence analysis of an inertial Tseng's extragradient algorithm for solving pseudomonotone variational inequalities and applications, J. Nonlinear Var. Anal., 5 (2021), 627–644.
[14] N. Artsawang, K. Ungchittrakool, Inertial Mann-type algorithm for a nonexpansive mapping to solve monotone inclusion and image restoration problems, Symmetry, 12 (2020), 750. https://doi.org/10.3390/sym12050750
[15] Y. E. Nesterov, A method for solving a convex programming problem with convergence rate O(1/k2), Dokl. Akad. Nauk SSSR, 269 (1983), 543–547.
[16] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), 3–11.
[17] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. https://doi.org/10.1016/S0377-0427(02)00906-8
[18] F. Alvarez, Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space, SIAM J. Optim., 14 (2004), 773–782. https://doi.org/10.1137/S1052623403427859
[19] H. Attouch, J. Bolte, B. F. Svaiter, Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods, Math. Program., 137 (2013), 91–129. https://doi.org/10.1007/s10107-011-0484-9
[20] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. https://doi.org/10.1016/S0377-0427(02)00906-8
[21] D. A. Lorenz, T. Pock, An inertial forward-backward algorithm for monotone inclusions, J. Math. Imaging Vis., 51 (2015), 311–325. https://doi.org/10.1007/s10851-014-0523-2
[22] E. Altiparmak, I. Karahan, A new preconditioning algorithm for finding a zero of the sum of two monotone operators and its application to image restoration problems, Int. J. Comput. Math., 99 (2022), 2482–2498. https://doi.org/10.1080/00207160.2022.2068146
[23] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20 (2004), 103–120. https://doi.org/10.1088/0266-5611/20/1/006
[24] R. I. Boţ, E. R. Csetnek, D. Meier, Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces, Optim. Method. Softw., 34 (2019), 489–514. https://doi.org/10.1080/10556788.2018.1457151
[25] K. Ungchittrakool, S. Plubtieng, N. Artsawang, P. Thammasiri, Modified Mann-type algorithm for two countable families of nonexpansive mappings and application to monotone inclusion and image restoration problems, Mathematics, 11 (2023), 2927. https://doi.org/10.3390/math11132927
[26] R. I. Boţ, D. Meier, A strongly convergent Krasnosel'skiǐ-Mann-type algorithm for finding a common fixed point of a countably infinite family of nonexpansive operators in Hilbert spaces, J. Comput. Appl. Math., 395 (2021), 113589. https://doi.org/10.1016/j.cam.2021.113589
[27] B. Tan, S. Y. Cho, An inertial Mann-like algorithm for fixed points of nonexpansive mappings in Hilbert spaces, J. Appl. Numer. Optim., 2 (2020), 335–351. https://doi.org/10.23952/jano.2.2020.3.05
[28] B. Tan, S. Li, Strong convergence of inertial Mann algorithms for solving hierarchical fixed point problems, J. Nonlinear Var. Anal., 4 (2020), 337–355. https://doi.org/10.23952/jnva.4.2020.3.02
[29] B. Tan, S. Y. Cho, J. C. Yao, Accelerated inertial subgradient extragradient algorithms with non-monotonic step sizes for equilibrium problems and fixed point problems, J. Nonlinear Var. Anal., 6 (2022), 89–122. https://doi.org/10.23952/jnva.6.2022.1.06
[30] B. Tan, S. Xu, S. Li, Modified inertial hybrid and shrinking projection algorithms for solving fixed point problems, Mathematics, 8 (2020), 236. https://doi.org/10.3390/math8020236
[31] B. V. Limaye, Functional analysis, New Age International, 1996.
[32] A. Dixit, D. R. Sahu, P. Gautam, T. Som, J. C. Yao, An accelerated forward-backward splitting algorithm for solving inclusion problems with applications to regression and link prediction problems, J. Nonlinear Var. Anal., 5 (2021), 79–101. https://doi.org/10.23952/jnva.5.2021.1.06
[33] H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66 (2002), 240–256. https://doi.org/10.1112/S0024610702003332
[34] P. E. Maingé, Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces, J. Math. Anal. Appl., 325 (2007), 469–479. https://doi.org/10.1016/j.jmaa.2005.12.066