1. Introduction
Let Rk be a Euclidean space with inner product ⟨⋅,⋅⟩ and its induced norm ‖⋅‖, and let m be a fixed natural number. In this work, we focus on the convex optimization problem of the following form:
minimize ϕ(x):=f(x)+h(x) subject to x∈X:=X0∩⋂mi=1Lev(gi,0),   (1.1)
where f:Rk→R is a real-valued differentiable convex function, h:Rk→R is a real-valued (possibly) non-differentiable convex function, and the constrained set X is the intersection of a simple closed convex set X0⊂Rk and a finite number of level sets Lev(gi,0):={x∈Rk:gi(x)≤0} of real-valued convex functions gi:Rk→R, i=1,…,m. Throughout this work, we denote by X∗ and ϕ∗ the set of all minimizers and the optimal value of the problem (1.1), respectively. Problems of the form (1.1) arise in practical situations such as image processing [1,2,3], signal recovery [4,5], and statistics [6,7,8], to name but a few.
As the function f is a differentiable convex function and the function h is a convex function, the objective function f+h in the problem (1.1) is, of course, a (possibly) non-differentiable convex function. Therefore, one might attempt to solve problem (1.1) by using existing methods for non-differentiable convex optimization problems, for instance, subgradient methods or proximal methods. It has been observed that the proximal algorithm is generally preferable to the subgradient algorithm since it can converge without additional assumptions on the step-size sequence and can achieve a convergence rate of order O(1/N) for the objective function values. Nevertheless, computing the proximal operator of a sum of functions can be challenging. In this situation, methods for solving problems with an additive objective structure, like the problem (1.1), often exploit the specific structure of each function f and h when constructing the solution methods; see [9] for more information. Among iterative methods for dealing with an objective function that is the sum of two convex functions, the best known is the so-called proximal gradient method, which constructs a sequence {xn}∞n=0 as follows:
xn+1:=argminy∈Rk{h(y)+⟨∇f(xn),y−xn⟩+(1/(2αn))‖y−xn‖2},   (1.2)
where αn is a positive step size, and ∇f(xn) is the gradient of f at xn.
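To make the update (1.2) concrete, the following is a minimal sketch (not taken from the original article) for the special case h=λ‖⋅‖1, whose proximal operator is the componentwise soft-thresholding map; the function names and the toy quadratic f are hypothetical choices for illustration.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t*||.||_1, applied componentwise
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(grad_f, lam, x0, alpha, num_iters=500):
    # iteration (1.2) with h = lam*||.||_1:
    #   x_{n+1} = prox_{alpha*h}(x_n - alpha*grad_f(x_n))
    x = x0.copy()
    for _ in range(num_iters):
        x = soft_threshold(x - alpha * grad_f(x), alpha * lam)
    return x

# usage on a toy least-squares objective f(x) = 0.5*||A x - b||^2
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
x_sol = proximal_gradient(lambda x: A.T @ (A @ x - b), lam=0.1,
                          x0=np.zeros(2), alpha=0.2)
```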
Let us notice that the proximal gradient method given in (1.2) may not be appropriate for the problem (1.1), since the constraint set X here is the intersection of a finite number of closed convex sets. This is because, in updating the iterate xn+1 at every iteration n≥0, one would be required to solve a constrained optimization subproblem over the intersection of finitely many closed convex sets. To tackle this, Nedić and Necoara [10] proposed a subgradient-type method [10, Methods (2a)–(2c)] for solving the problem (1.1) in the case when the objective function consists only of a function h satisfying a strong convexity assumption and the constraint set is described by a (possibly infinite) number of functional constraints. The strategy is to separate the iteration into two parts: a step for minimizing the objective function ϕ over the simple set X0, and a parallel computation toward the feasible intersection ⋂mi=1Lev(gi,0) via the classical subgradient scheme applied to each constraint function gi, i=1,…,m. They analyzed its convergence and showed that the method has a sublinear convergence rate. Note that this strategy reduces the difficulty of dealing with the whole constraint set by minimizing the function over a simple set and then reducing the feasibility violation through parallel computations on each component of the functional constraints.
Since the calculation of a subgradient of each function gi is needed in the feasibility update, this step may be time-consuming when the functions gi, i=1,…,m, have complicated structures. To overcome this drawback, the concept of an approximate subgradient of the functions gi has been utilized. Apart from this issue, the concept of an approximate subgradient also arises in duality theory [11] and network optimization [12]. Moreover, the notion of an approximate subgradient has been widely studied in various settings for solving optimization problems, such as ϵ-subgradient methods [13,14,15], projection ϵ-subgradient methods [16,17], and their variants [18,19,20,21]. Although the main contribution of approximate subgradient-type methods is to reduce the complexity of the subgradient computation, it is worth noting that, within some acceptable error tolerance ϵ, a method with approximate subgradients can be more efficient than its exact (zero-tolerance) counterpart (see Table 1 below).
Motivated by the above discussion, we present in this work a distributed approximate subgradient method based on the ideas of the proximal gradient method and the approximate subgradient method. The main difference between the proposed method and the method of Nedić and Necoara [10] is that we use a proximal gradient step for dealing with the objective function and an approximate subgradient step for the feasibility constraints. The remainder of this work is organized as follows: In Section 2, we recall the notation and auxiliary results that are needed for our convergence analysis. In Section 3, we present an approximate subgradient method for solving the considered problem. Subsequently, after building all the needed tools, we investigate the convergence results and the convergence rate in Section 4. In Section 5, we present numerical experiments. Finally, in Section 6, we give conclusions.
2. Preliminaries
In this section, we recall some basic definitions and useful facts that will be used in the following sections; readers may consult the books [22,23,24].
Let f:Rk→R be a real-valued function. We call f a convex function if
f(αx+(1−α)y)≤αf(x)+(1−α)f(y)
for all x,y∈Rk and α∈(0,1). For each α∈R, an α-level set (in short, level set) of f at the level α is defined by
Lev(f,α):={x∈Rk:f(x)≤α}.
Note that, if f is continuous, the α-level set Lev(f,α) is a closed set for all α∈R. Moreover, if f is a convex function, then its α-level set Lev(f,α) is a convex set for all α∈R.
For a given x∈Rk and ϵ≥0, we call a vector sf(x)∈Rk an ϵ-subgradient of f at x if
f(y)≥f(x)+⟨sf(x),y−x⟩−ϵ
holds for all y∈Rk. The set of all ϵ-subgradients of f at x is denoted by ∂ϵf(x) and is called the ϵ-subdifferential of f at x. In the case ϵ=0, we obtain a (usual) subgradient of f at x and denote the subdifferential by ∂f(x):=∂0f(x). For a convex function f:Rk→R and x∈Rk, we note that
∂ϵ1f(x)⊂∂ϵ2f(x)
for all ϵ1,ϵ2≥0 with ϵ1<ϵ2.
We note that convexity is a sufficient condition for approximate subdifferentiability. Namely, for a convex function f:Rk→R, the ϵ-subdifferential set ∂ϵf(x) is a nonempty set for all x∈Rk and ϵ≥0, see [24, Theorem 2.4.9] for more details. Additionally, if X0⊂Rk is a nonempty bounded set, then the set ⋃x∈X0∂ϵf(x) is a bounded set for all ϵ≥0, see [24, Theorem 2.4.13].
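To illustrate the definition, consider the convex function f:R→R given by f(x)=x2 (a standard example, not taken from the original text). A number s is an ϵ-subgradient of f at x exactly when y2≥x2+s(y−x)−ϵ for all y∈R. Minimizing the difference of the two sides over y (the minimum is attained at y=s/2) shows that this holds if and only if (s/2−x)2≤ϵ, that is, ∂ϵf(x)=[2x−2√ϵ,2x+2√ϵ]. In particular, ∂0f(x)={2x}={f′(x)}, and the ϵ-subdifferential enlarges as ϵ grows, in accordance with the inclusion above.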
Let X0⊂Rk be a nonempty closed convex set, and x∈Rk. The normal cone to X0 at x is given by
NX0(x):={v∈Rk:⟨v,y−x⟩≤0 for all y∈X0}.
The indicator function of X0, δX0:Rk→(−∞,∞], is the function defined by
δX0(x):=0 if x∈X0, and δX0(x):=+∞ otherwise.
If the function f:Rk→R is convex and the set X0⊂Rk is nonempty closed convex, then for every x∈X0, we have
∂(f+δX0)(x)=∂f(x)+NX0(x).
Let f:Rk→R be a function, and X0⊂Rk be a nonempty closed convex set. The set of all minimizers of f over X0 is denoted by
argminx∈X0f(x):={x∗∈X0:f(x∗)≤f(x) for all x∈X0}.
If the function f:Rk→R is convex and the set X0⊂Rk is a nonempty closed convex set, then x∗∈argminx∈X0f(x) if and only if
0∈∂f(x∗)+NX0(x∗),
that is, there exists sf(x∗)∈∂f(x∗) for which
⟨sf(x∗),x−x∗⟩≥0
for any x∈X0.
Let X0⊂Rk be a nonempty closed convex set, and x∈Rk. We call a point y∈X0 the projection of x onto X0 if
‖y−x‖≤‖z−x‖
for all z∈X0, and we denote it by y=:PX0(x). It is well known that the projection onto X0 is uniquely determined. Moreover, we denote the distance from x to X0 by dist(x,X0):=‖PX0(x)−x‖.
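For example (a standard fact, recalled here for later use), if X0=[a,b]k is a box, then the projection is computed coordinatewise as (PX0(x))j=min{max{xj,a},b} for j=1,…,k; this closed form is used for X0=[0,1.5]k in the numerical experiments of Section 5.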
3. Algorithm and assumptions
In this section, we start our investigation by proposing a distributed approximate subgradient method for solving the considered constrained convex optimization problem (1.1). We subsequently discuss the important assumptions for analyzing the convergence behaviors of the proposed method.
Some important comments relating to Algorithm 1 are in order.
Remark 3.1. (i) Since the objective function h(⋅)+(1/(2αn−1))‖⋅−xn−1‖2+⟨∇f(xn−1),⋅−xn−1⟩ is a strongly convex function and the constrained set X0⊂Rk is a nonempty closed convex set, we can ensure the existence and uniqueness of its minimizer, namely the iterate vn, for all n≥1. This means that the iterate vn is well-defined for all n≥1.
(ii) Since the function gi is a real-valued convex function, the function g+i:=max{gi,0} is also convex. This implies that the ϵn,i-subdifferential set ∂ϵn,ig+i(vn) is nonempty for all n≥1. Moreover, if g+i(vn)>0, then 0∉∂g+i(vn). Indeed, since Yi≠∅ and Yi={x∈Rk:g+i(x)≤0}, there exists a point x∈Rk such that g+i(x)≤0, and hence minx∈Rkg+i(x)≤0<g+i(vn), which implies that vn is not a minimizer of the function g+i, so 0∉∂g+i(vn). Also, it follows from the properties of the ϵn,i-subdifferential set that ∂g+i(vn)⊂∂ϵn,ig+i(vn), which guarantees that a nonzero vector dn,i can always be chosen.
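For readers who prefer pseudocode, the following Python sketch records one plausible reading of the scheme described in Remark 3.1 and used in the lemmas below; it is a reconstruction, not a verbatim restatement of Algorithm 1. In particular, the Polyak-type form of the feasibility step zn,i and the simplification h=0 (under which the vn-update reduces to a projected gradient step over X0) are assumptions made for illustration.

```python
import numpy as np

def algorithm1_sketch(grad_f, g_plus, subgrad_g_plus, proj_X0,
                      x0, alpha, eps, num_iters):
    """One plausible reading of Algorithm 1, assuming h = 0.

    grad_f         : gradient of f
    g_plus         : list of the functions g_i^+ = max(g_i, 0)
    subgrad_g_plus : list of maps (x, eps) -> some d in the
                     eps-subdifferential of g_i^+ at x
    proj_X0        : projection onto the simple set X0
    alpha, eps     : callables n -> step size, (n, i) -> tolerance
    """
    x = np.asarray(x0, dtype=float).copy()
    m = len(g_plus)
    for n in range(1, num_iters + 1):
        # objective step over X0 (with h = 0 this is a projected gradient step)
        v = proj_X0(x - alpha(n - 1) * grad_f(x))
        # parallel feasibility step: approximate subgradient projection
        # toward each level set Lev(g_i, 0)
        z = np.empty((m, x.size))
        for i in range(m):
            gi = g_plus[i](v)
            if gi > 0.0:
                d = subgrad_g_plus[i](v, eps(n, i))
                z[i] = v - (gi / np.dot(d, d)) * d
            else:
                z[i] = v
        # average the feasibility updates and project back onto X0
        x = proj_X0(z.mean(axis=0))
    return x
```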
The following assumption will play an important role throughout the convergence results of this work:
Assumption 3.2. The constrained set X0 is bounded.
As a consequence of Assumption 3.2, we state the boundedness properties of some related sequences and subdifferential sets in the following proposition.
Proposition 3.3. The following statements hold true:
(i) There exists a positive constant M such that for all i=1,…,m, we have
g+i(x)≤M
for all x∈X0.
(ii) There exists a positive constant B such that
‖∇f(x)‖≤B, ‖sh(x)‖≤B and ‖sϕ(x)‖≤B for all sh(x)∈∂h(x) and sϕ(x)∈∂ϕ(x),
for all x∈X0.
(iii) There exists a positive constant D such that for all n≥1 and for all i=1,…,m, we have
‖dn,i‖≤D.
(iv) The sequences {xn}∞n=0, {vn}∞n=1, {ˉzn}∞n=1, {dn,i}∞n=1 and {zn,i}∞n=1, i=1,…,m, are bounded.
Proof. (i) For each i=1,2,…,m, since the function g+i is continuous and the set X0 is compact, the image of g+i over X0 is bounded. Hence, such an M>0 exists.
(ii) Since the ϵ-subdifferential sets are bounded on the bounded set X0 for each ϵ≥0, the vectors ∇f(x), sh(x), and sϕ(x) are bounded for all x∈X0, which implies (ii).
(iii) By the same reasoning as in (ii), together with the definitions of vn and dn,i, we obtain that dn,i is bounded for all n≥1 and all i=1,2,…,m.
(iv) The boundedness of X0 implies that the sequences {xn}∞n=0 and {vn}∞n=1 are bounded, while the boundedness of dn,i and vn and the definitions of ˉzn and zn,i imply that {ˉzn}∞n=1 and {zn,i}∞n=1, i=1,2,…,m, are bounded. □
The following conditions on parameters are needed to guarantee the convergence result of Algorithm 1.
Assumption 3.4. The sequences {αn}∞n=0 and {ϵn,i}∞n=1, i=1,…,m, satisfy the following properties:
(i) ∑∞n=0αn=∞ and ∑∞n=0α2n<∞.
(ii) ∑∞n=1ϵn,i<∞ for all i=1,2,…,m.
Remark 3.5. For particular examples of sequences {αn}∞n=0 and {ϵn,i}∞n=1, i=1,2,…,m, satisfying Assumption 3.4, one may choose αn:=α/(n+1)a and ϵn,i:=ϵ/(n+1)b, where α,ϵ>0, a∈(1/2,1] and b>1.
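Indeed, these choices satisfy Assumption 3.4 by the standard p-series test: ∑∞n=0α/(n+1)a=∞ since a≤1, ∑∞n=0α2/(n+1)2a<∞ since 2a>1, and ∑∞n=1ϵ/(n+1)b<∞ since b>1.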
The following assumption forms a key tool in proving the convergence result.
Assumption 3.6. There exists a real number c>0 such that
dist2(x,X)≤c2∑mi=1(g+i(x))2 for all x∈X0.
Remark 3.7. Assumption 3.6 can be seen as a deterministic version of the assumptions proposed in [25, Assumption 2] and [10, Assumption 2]. The condition given in Assumption 3.6 is also related to the notion of linear regularity of a finite collection of sets; see [25, pages 231–232] for further details. A simple example of the constraint set X that satisfies Assumption 3.6 is, for instance, X:=X0∩Y1∩Y2 where X0:=[0,1]×[0,1], Y1={x:=(x1,x2)∈R2:g1(x):=x1−x2≤0} and Y2={x:=(x1,x2)∈R2:g2(x):=−2x1+x2≤0}. It can be seen that for all c≥1, we have dist2(x,X)≤c2((g+1(x))2+(g+2(x))2) for all x∈X0.
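As a quick numerical sanity check of this example, one may sample points of X0 and compare dist2(x,X) with (g+1(x))2+(g+2(x))2. The following hypothetical script (not part of the original article) computes the projection onto X with SciPy's general-purpose solver:

```python
import numpy as np
from scipy.optimize import minimize

def proj_X(x):
    # projection onto X = [0,1]^2 intersected with {x1 <= x2} and {x2 <= 2*x1}
    cons = [{'type': 'ineq', 'fun': lambda y: y[1] - y[0]},
            {'type': 'ineq', 'fun': lambda y: 2.0 * y[0] - y[1]}]
    res = minimize(lambda y: np.sum((y - x) ** 2), x0=np.array([0.5, 0.5]),
                   bounds=[(0.0, 1.0), (0.0, 1.0)], constraints=cons)
    return res.x

rng = np.random.default_rng(0)
worst_ratio = 0.0
for _ in range(1000):
    x = rng.random(2)
    g1 = max(x[0] - x[1], 0.0)
    g2 = max(-2.0 * x[0] + x[1], 0.0)
    if g1 == 0.0 and g2 == 0.0:
        continue  # feasible points satisfy the bound trivially
    dist2 = np.sum((proj_X(x) - x) ** 2)
    worst_ratio = max(worst_ratio, dist2 / (g1 ** 2 + g2 ** 2))
print(worst_ratio)  # a value not exceeding 1 is consistent with any c >= 1
```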
4. Convergence results
In this section, we consider the convergence properties of the generated sequences. We divide this section into three parts. We start in the first subsection by providing some useful technical relations for the generated sequences. We subsequently prove the convergence of the generated sequences to an optimal solution in the second subsection. We close this section by deriving the rate of convergence of the function values of the iterates to the optimal value of the considered problem.
4.1. Technical lemmas
The following lemma provides an essential relation between the iterates vn+1 and xn, which is used to derive some subsequent relations for the generated iterates.
Lemma 4.1. Let {xn}∞n=0 and {vn}∞n=1 be the sequences generated by Algorithm 1. Then, for every n≥0, η>0 and x∈X0, we have
Proof. Let n≥0, η>0, and x∈X0 be given. We first note that
Now, it follows from the definition of vn+1 and the optimality condition for constrained optimization that
which is the same as
This, along with the facts that x∈X0 and vn+1∈X0, yield
or, equivalently, that
Thus, we employ the relation (4.2) in (4.1) and obtain the following:
We note from the first-order characterization of the convex function that
Now, for the upper bound of the term 2αn⟨∇f(xn),xn−vn+1⟩, we note from the well known Young's inequality that
Moreover, for a given subgradient sh(xn)∈∂h(xn), we note that
By using the obtained relations (4.4)–(4.6) in the inequality (4.3), we derive that
which is nothing else than the required inequality. □
Lemma 4.2. Let {xn}∞n=0 and {vn}∞n=1 be the sequences generated by Algorithm 1. Then, for every n≥0, η>0 and x∗∈X∗, we have
Proof. Let n≥0, η>0, and x∗∈X∗ be given. For a given sϕ(x∗)∈∂ϕ(x∗), we note from the definition of subgradient that
where the second inequality holds true by the necessary and sufficient optimality conditions for convex constrained optimization. Similarly, for a given sϕ(PX(xn))∈∂ϕ(PX(xn)), we have
By adding these two obtained relations (4.7) and (4.8) and subsequently multiplying by 12, we obtain
Applying this together with the inequality in Lemma 4.1, we obtain that
as desired. □
We now derive a relation between the iterates vn and zn,i.
Lemma 4.3. Let {vn}∞n=1 and {zn,i}∞n=1, i=1,…,m, be the sequences generated by Algorithm 1. Then, for every n≥1, i=1,2,…,m and x∈X, we have
Proof. Let n≥1, i=1,2,…,m and x∈X be given. We note from the definition of zn,i that
If g+i(vn)=0, then it is clear that
We now consider the case g+i(vn)>0 as follows: Since dn,i∈∂ϵn,ig+i(vn)∖{0}, we note that
which is
We also note that
and
Applying these obtained relations in (4.9), we get
as desired. □
In order to derive the relation between dist2(xn,X) and dist2(vn,X), we need the following fact:
Proposition 4.4. Let z1,z2,…,zm∈Rk and ˉz=(1/m)∑mi=1zi be given. Then, for every x∈Rk
Proof. It is straightforward based on the properties of the inner product and norm. □
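For completeness, we record the computation behind the standard averaging identity of this type: expanding ‖zi−x‖2=‖zi−ˉz‖2+2⟨zi−ˉz,ˉz−x⟩+‖ˉz−x‖2 and averaging over i=1,…,m, the cross term vanishes because ∑mi=1(zi−ˉz)=0, and therefore (1/m)∑mi=1‖zi−x‖2=(1/m)∑mi=1‖zi−ˉz‖2+‖ˉz−x‖2; in particular, ‖ˉz−x‖2≤(1/m)∑mi=1‖zi−x‖2.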
We are now considering the relation between dist2(xn,X) and dist2(vn,X) in the following lemma.
Lemma 4.5. Let {xn}∞n=0 and {vn}∞n=1 be the sequences generated by Algorithm 1. Then, for every n≥1, we have
where ¯D=max{D,1}.
Proof. Let n≥1 be given. Since xn=PX0(ˉzn), it is noted, for all x∈X⊂X0, that
Since ˉzn=(1/m)∑mi=1zn,i, we note from Proposition 4.4 that for all x∈X,
which implies that the inequality (4.10) becomes
for all x∈X. By summing up the inequality obtained in Lemma 4.3 for i=1 to m, and, subsequently, dividing by m for both sides of the inequality, we have
This, together with inequality (4.11), means that for all x∈X,
Invoking the bounds given in Proposition 3.3, we have for every x∈X that
Putting x=PX(vn)∈X in the inequality (4.12), we obtain
and hence
□
As a consequence of the preceding lemmas, we obtain the following relation that is used to prove the convergence of the sequence {‖vn−x∗‖2}∞n=1 for all x∗∈X∗.
Lemma 4.6. Let {xn}∞n=0 and {vn}∞n=1 be the sequences generated by Algorithm 1. Then, for every n≥0, η>0 and x∗∈X∗, we have
where ¯D=max{D,1}.
Proof. Let n≥1, η>0, and x∗∈X∗ be given. By using the inequality (4.12) and replacing x∈X by x∗, which also belongs to X, we note that
Furthermore, by applying Young's inequality, we note that
Invoking these two relations in the inequality obtained in Lemma 4.2, we get that
which is the required inequality. □
4.2. Convergence of iterates
In order to obtain the existence of the limit of the sequence {‖vn−x∗‖}∞n=1, we need the following proposition:
Proposition 4.7. [26] Let {an}∞n=1, {bn}∞n=1 and {cn}∞n=1 be sequences of nonnegative real numbers. If it holds that an+1≤an+bn−cn for all n≥1, and ∑∞n=1bn<∞, then limn→∞an exists and ∑∞n=1cn<∞.
Now, we are in a position to prove the main convergence theorem. The theorem guarantees that both generated sequences {vn}∞n=1 and {xn}∞n=0 converge to a point in the solution set X∗.
Theorem 4.8. The sequences {vn}∞n=1 and {xn}∞n=0 generated by Algorithm 1 converge to an optimal solution in X∗.
Proof. Let n≥1 be given. Since vn∈X0, it follows from Assumption 3.6 that there exists a constant c>0 such that
Now, putting ¯c>0 such that
we have
This, together with (4.13) and Lemma 4.5, imply that
and then
By applying this obtained relation together with the inequality in Lemma 4.6, we have for all η>0
Now, by putting η:=q(1−q)>0 in the above inequality, we can neglect the non-negative term (q(1−q)−η)dist2(xn,X) so that the above inequality can be written as
which implies that
Invoking Assumption 3.4 (ii) together with Proposition 4.7 in (4.17), we conclude that the limit
and, in addition,
On the other hand, by applying the inequality (4.12) in the relation obtained in Lemma 4.1 by replacing x by x∗, we note that
and then
Now, let us fix a positive integer N≥1. Summing up the above relation for n=1 to N, we have
By approaching N→∞, we obtain that
Next, we show that lim infn→∞(ϕ(xn)−ϕ∗)≤0. Suppose, to the contrary, that there exist N′∈N and β>0 such that ϕ(xn)−ϕ∗≥β for all n≥N′. Since
we also have lim infn→∞(ϕ(xn)−ϕ∗)≤0, that is, lim infn→∞ϕ(xn)≤ϕ∗.
Now, since the sequence {xn}∞n=1 is bounded, there exists a subsequence {xnl}∞l=1 of {xn}∞n=1 such that liml→∞ϕ(xnl)=lim infn→∞ϕ(xn)≤ϕ∗. Moreover, since {xnl}∞l=1 is also a bounded sequence, there exist a subsequence {xnlj}∞j=1 of {xnl}∞l=1 and a point ˆx∈Rk such that xnlj→ˆx. Thus, by using (4.18), we have vnlj→ˆx.
On the other hand, by utilizing the relation in (4.19), we note that for all j≥1
Now, by using the fact that the sequence {|ϕ(xn)−ϕ∗|}∞n=1 is bounded, the continuity of each g+i, i=1,2,…,m and letting j→∞, we obtain
which implies that ˆx∈⋂mi=1Lev(gi,0). Since the sequence {xnlj}∞j=1⊂X0, the closedness of X0 yields that ˆx∈X0, and hence ˆx∈X:=X0∩⋂mi=1Lev(gi,0).
Finally, by using the continuity of ϕ, we obtain that
which implies that ˆx∈X∗. Therefore, we conclude that the sequence {vn}∞n=1 converges to the point ˆx∈X∗. This yields that the sequence {xn}∞n=1 also converges to the point ˆx∈X∗. This completes the proof. □
4.3. Rate analysis
In this subsection, we consider the rate of convergence of the objective values {ϕ(xn)}∞n=0 to the optimal value ϕ∗.
Theorem 4.9. Let {xn}∞n=0 and {vn}∞n=1 be the sequences generated by Algorithm 1. Then, for a positive integer N≥1, it holds that
where ¯c>max{c,1/¯D2} and ¯D=max{D,1}.
Proof. We note from the inequality (4.20) that for all positive integers N≥1
which implies that
By using the property of ¯c given in (4.14) that ¯c>max{c,1/¯D2}, the definition of q in (4.15) that 0<q:=1/(¯c¯D2)<1, and the definition of η that η:=q(1−q)>0, we note that
This yields that
and hence
as required. □
To obtain the convergence rate of the objective function, we need the following proposition:
Proposition 4.10. [22, Lemma 8.26] Let f:[a−1,b+1]→R be a continuous, nonincreasing real-valued function over [a−1,b+1], where a and b are integers such that a≤b. Then
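For a nonincreasing function f, the standard integral test gives f(n)≤∫[n−1,n]f(t)dt and f(n)≥∫[n,n+1]f(t)dt for every integer n with a≤n≤b; summing over n=a,…,b yields ∫[a,b+1]f(t)dt≤∑bn=af(n)≤∫[a−1,b]f(t)dt, and a two-sided estimate of this type is what is applied in the proof of Corollary 4.11 below.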
We close this subsection by considering a particular stepsize sequence {αn}∞n=0 in Theorem 4.9 to obtain the O(1/N1−a) rate of convergence of the function values of the iterates to the optimal value of the considered problem, where a∈(0.5,1).
Corollary 4.11. Let {xn}∞n=0 and {vn}∞n=1 be the sequences generated by Algorithm 1. If the sequence {αn}∞n=0 is given by
αn:=α/(n+1)a
for all n≥0, where α>0 and a∈(0.5,1), then for a positive integer N≥1, it holds that
Proof. Let us note from Proposition 4.10 that
and
which implies that
Furthermore, since ((N+2)/(N+3))1−a≥(3/4)1−a and (2/(N+3))1−a≤(1/2)1−a, we have
Hence, by putting M1:=(2M/m)∑mi=1∑∞n=1ϵn,i and applying the inequality derived in Theorem 4.9, we obtain that
and the proof is completed. □
5. Numerical example
In this section, we consider the numerical behavior of the proposed method (Algorithm 1) for finding the minimum-norm solution to the intersection of a finite number of closed balls and a box constraint, as follows. Let ci∈Rk, i=1,…,m, be given vectors, and a,b∈R. The problem is to find a vector x∈Rk that solves the problem
minimize 0.5‖x‖2 subject to ‖x−ci‖≤1 for all i=1,…,m, and x∈[a,b]k.   (5.1)
This problem can be written in another form as
minimize f(x)+h(x) subject to x∈X0∩⋂mi=1Lev(gi,0),
which is clearly a particular case of the problem (1.1) with f(x)=0.5‖x‖2, h(x)=0, gi(x)=‖x−ci‖−1 for all i=1,…,m, and X0=[0,1.5]k (that is, a=0 and b=1.5).
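For concreteness, the problem-specific ingredients of (5.1) can be supplied to the algorithm1_sketch function given after Remark 3.1 as follows. This is a hypothetical setup for illustration, not the authors' code; it uses exact subgradients of g+i (i.e., ϵn,i=0) and the closed-form projection onto the box X0=[0,1.5]k.

```python
import numpy as np

k, m = 5, 1000
rng = np.random.default_rng(0)
centers = rng.uniform(1.0, 1.5, size=(m, k))      # the centers c_i

grad_f = lambda x: x                               # f(x) = 0.5*||x||^2, h = 0
proj_X0 = lambda x: np.clip(x, 0.0, 1.5)           # projection onto [0, 1.5]^k

g_plus = [lambda x, ci=ci: max(np.linalg.norm(x - ci) - 1.0, 0.0)
          for ci in centers]
# a subgradient of g_i^+ at x when g_i^+(x) > 0 (the tolerance argument is ignored)
subgrad_g_plus = [lambda x, tol, ci=ci: (x - ci) / np.linalg.norm(x - ci)
                  for ci in centers]

x_sol = algorithm1_sketch(grad_f, g_plus, subgrad_g_plus, proj_X0,
                          x0=0.1 * np.ones(k),
                          alpha=lambda n: 0.1 / (n + 1),
                          eps=lambda n, i: 0.0,
                          num_iters=200)
```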
In the first experiment, we examine the influence of the step size αn:=α/(n+1). We perform Algorithm 1 for solving the problem (5.1) when the number of target sets is m=1000. We set ϵn,i=0 for all n≥0 and i=1,…,m. We choose the vectors ci by randomly choosing their coordinates in the interval (1,1.5), and the initial vector x0 is the vector whose coordinates are all 0.1. We consider the parameter α∈[0.1,0.9] in the step size αn and the dimensions k=2,5 and 10. We perform 10 independent random tests for each choice of α and k and terminate Algorithm 1 when the relative error ‖vn+1−vn‖/(‖vn‖+1) falls below the error tolerance of 10−5. The averaged numbers of iterations and computational runtimes in seconds for each choice of α and k are plotted in Figure 1. As we can see from Figure 1, for each dimension k, the parameter α=0.1 gives the smallest number of iterations and the shortest runtimes.
In the following experiment, we examine the influence of the error tolerances {ϵn,i}∞n=1, i=1,2,…,m. We consider the parameter ϵ=0,0.1,0.3,0.5,0.7 and 0.9 in the error tolerance ϵn,i=ϵ/(n+1)2 with various dimensions k and numbers of target sets m. It is noted that, for this tested problem, Algorithm 1 with ϵ=0 is a particular case of the method proposed by Nedić and Necoara [10], where the batch size is m. We set the step size αn:=0.1/(n+1) and performed the experiment as above. The averaged computational runtimes in seconds are given in Table 1.
One can see from the results presented in Table 1 that the averaged runtime increases as k and m increase. It can also be seen that the proposed method with a nonzero error-tolerance parameter ϵ≠0 requires less averaged runtime than the case ϵ=0 for almost all numbers of target sets m. Even though we cannot point out which choice of the error-tolerance parameter ϵ≠0 yields the best performance for all k and m, the results show that the averaged runtime can be improved by suitable choices of error tolerances. This underlines the benefit of the approximate subgradient-type method proposed in this work.
6. Conclusions
We concentrated on the convex minimization problem over the intersection of a finite number of convex level sets. Our approach centered on introducing a distributed approximate subgradient method tailored to this particular problem. To guarantee the convergence of the proposed method, we provided a rigorous proof demonstrating that the sequences generated by the method converge to an optimal solution. Furthermore, we derived the O(1/N1−a) rate of convergence of the function values of the iterates to the optimal value of the considered problem, where a∈(0.5,1). Additionally, we illustrated our findings through numerical examples aimed at examining the impact of error tolerances. While identifying the optimal error tolerance remains a significant consideration, our experimental results indicate that the average runtime can be improved by selecting suitable nonzero error tolerances rather than omitting them altogether. This observation suggests an intriguing avenue for future research.
Author contributions
Jedsadapong Pioon: Conceptualization, Methodology, Validation, Convergence analysis, Investigation, Writing-original draft preparation, Writing-review and editing; Narin Petrot: Conceptualization, Methodology, Software, Validation, Convergence analysis, Investigation, Writing-review and editing, Visualization; Nimit Nimana: Conceptualization, Methodology, Software, Validation, Convergence analysis, Investigation, Writing-original draft preparation, Writing-review and editing, Visualization, Supervision, Project administration, Funding acquisition. All authors have read and approved the final version of the manuscript for publication.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
The authors would like to thank three anonymous referees for their useful suggestions and comments.
This work has received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F650018].
Conflict of interest
The authors declare that there is no conflict of interest regarding the publication of this article.