1. Introduction
This article concentrates on the absolute value equation (AVE), which involves a square matrix A ∈ R^{n×n}, vectors y and b in R^n, and |·| denoting the componentwise absolute value. The equation reads

Ay − |y| = b. (1.1)
The AVE in Eq (1.1) is a special case of a more general formulation, which can be written as

Ay + B|y| = b, (1.2)
where A and B are n×n matrices. The general form (1.2) reduces to (1.1) when B = −I, where I is the identity matrix. The AVE was first introduced in [1] and has since been extensively explored in the literature; see [2,3,4] and the references therein. AVEs are important nonlinear, nondifferentiable systems that arise frequently in optimization. They appear in a variety of domains, such as contact problems, bimatrix games, quadratic programming, network pricing, and linear programming; see [1,5,6,7,8,9] and the associated literature for further details. Hence, the development of robust numerical procedures and theoretical frameworks for AVEs is of considerable scientific interest and has wide-ranging practical and economic relevance.
Numerical techniques for AVEs span a wide range of structural considerations, mathematical frameworks, algebraic configurations, and implementations of high-quality preconditioners together with efficient numerical strategies. In recent years there has been a considerable surge in the study of numerical methods for AVEs, with numerous works proposing diverse methodologies. For example, Yilmaz and Sahiner [10] introduced a non-Lipschitz generalization of the AVE and solved it using two smoothing strategies. Ali [11] presented two fixed-point iterative schemes for AVE (1.1) and established several convergence theorems. Zhou et al. [12] offered a Newton-based matrix splitting method that attains linear convergence when A is an H-matrix or a positive definite matrix. Prokopyev [4] shed light on the unique solvability of the AVE, as well as its close ties with mixed integer programming and linear complementarity problems (LCP). Meanwhile, Hu and Huang [6] recast the AVE in the standard LCP format, providing insights into solving AVE (1.1). Salkuyeh [13] introduced the Picard-HSS iterative technique for the solution of AVEs and examined its convergence conditions. Khan et al. [14] proposed a method for solving AVEs based on Simpson's rule and the generalized Newton method. Noor et al. [15] presented minimization procedures for Eq (1.1) and examined their convergence under appropriate conditions. Ke and Ma [16] provided an SOR (successive overrelaxation)-like method for Eq (1.1) and studied its applicability conditions, while Dong et al. [17] conducted an in-depth investigation of an SOR-like method for AVEs under convergence conditions that differ from those in [16]. Tang [18] offered an inexact Newton-type method designed for large-scale AVEs and gave a detailed analysis of its convergence. Rohn et al. [19] presented another method for AVE (1.1) that reduces to the well-known Picard iteration. The method reads

y^{i+1} = A^{-1}(|y^i| + b), i = 0, 1, 2, ...,
where y^0 = A^{-1}b is the initial vector. Tang and Zhou [20] developed a quadratically convergent descent method for the AVE (1.2) and discussed several of its convergence properties. Mangasarian and Meyer [7] provided an important and widely used theoretical result on the solvability of AVE (1.1): if ‖A^{-1}‖ < 1 (equivalently, σ_min(A) > 1, where σ_min(A) denotes the smallest singular value of A), then the AVE (1.1) has a unique solution y∗ for any b ∈ R^n. Furthermore, Mangasarian [3] offered a generalized Newton approach for the AVE in Eq (1.2); this approach converges linearly from any starting point to the unique solution of AVE (1.2) under the condition ‖A^{-1}‖ < 1/4. Caccetta et al. [21] explored a smoothing Newton procedure for the AVE (1.2) and showed that it is globally and quadratically convergent under the weaker condition ‖A^{-1}‖ < 1. In a related study, Zhang and Wei [22] developed a generalized Newton method for the same AVE (1.2) and proved that it achieves both global and finite convergence when [A−I, A+I] is regular, a scenario that includes the case ‖A^{-1}‖ < 1. Wang et al. [23], Lian et al. [24], Cao et al. [25], Zhou et al. [26], Lv and Miao [27,28], Zhang and Miao [29], and Wu and Li [30] also investigated various approaches for AVEs and presented interesting convergence results.
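For concreteness, the Picard scheme recalled above admits a very short implementation. The following Python/NumPy sketch is illustrative only; the function name, the stopping tolerance, the iteration cap, and the use of a dense solver are our own choices rather than details taken from [19].

```python
import numpy as np

def picard_ave(A, b, tol=1e-8, max_iter=1000):
    """Picard iteration for the AVE  Ay - |y| = b.

    Each step solves A y_{i+1} = |y_i| + b, starting from y_0 = A^{-1} b.
    The tolerance and iteration cap are illustrative choices only.
    """
    y = np.linalg.solve(A, b)                      # y_0 = A^{-1} b
    for i in range(1, max_iter + 1):
        y_new = np.linalg.solve(A, np.abs(y) + b)  # y_i = A^{-1}(|y_{i-1}| + b)
        if np.linalg.norm(y_new - y, np.inf) < tol:
            return y_new, i
        y = y_new
    return y, max_iter
```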
The aim of this paper is to provide new iterative techniques for the solution of AVEs, supported by an extensive theoretical analysis. Our contributions are as follows: First, we enhance the generalized Gauss-Seidel method [31] by splitting the coefficient matrix A into three parts, as shown in Eq (2.2), and by introducing an extra parameter ψ to accelerate convergence. Subsequently, we conduct a thorough examination of the convergence properties of the resulting methods under specific conditions.
The structure of this paper is as follows: In Section 2, we present the proposed methods and their convergence analysis for AVE (1.1). Section 3 reports our numerical findings, and Section 4 contains concluding remarks.
We will use the following notation throughout this article. Let A = (a_ij) ∈ R^{n×n} be a matrix. We use |A| = (|a_ij|) to denote its absolute value and ‖A‖_∞ for its infinity norm. A matrix A ∈ R^{n×n} is called a Z-matrix if a_ij ≤ 0 for i ≠ j, and it is called an M-matrix if it is a nonsingular Z-matrix satisfying A^{-1} ≥ 0.
Lemma 1.1. [32] Let y and u be two vectors in R^n. Then ‖|y| − |u|‖_∞ ≤ ‖y − u‖_∞.
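As a quick numerical sanity check of Lemma 1.1 (not part of the original text), one can verify the inequality on random vector pairs:

```python
import numpy as np

# Check ||y| - |u||_inf <= ||y - u||_inf on a few random pairs.
rng = np.random.default_rng(0)
for _ in range(5):
    y, u = rng.standard_normal(8), rng.standard_normal(8)
    lhs = np.linalg.norm(np.abs(y) - np.abs(u), np.inf)
    rhs = np.linalg.norm(y - u, np.inf)
    assert lhs <= rhs + 1e-14, (lhs, rhs)
```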
2. Suggested methods
Here, we describe the proposed methods. Both strategies are extensions of the generalized Gauss-Seidel method (GGSM); we refer to them as Extended GGSM I (EGGSM I) and Extended GGSM II (EGGSM II). This section has two parts: Section 2.1 presents EGGSM I and its convergence analysis, while Section 2.2 presents EGGSM II and its convergence analysis.
2.1. EGGSM I for AVE
By rearranging the AVE (1.1), we obtain Ay = |y| + b. Multiplying both sides by ψ, we obtain

ψAy = ψ|y| + ψb. (2.1)
Let

A = A_D − A_L − A_U, (2.2)

where A_D, A_L, and A_U correspond to the diagonal, strictly lower triangular, and strictly upper triangular parts of the matrix A, respectively. By employing Eqs (2.1) and (2.2), we propose the EGGSM I as follows:
In iterative form, the above equations can be rewritten as
where i = 0, 1, 2, ..., and ψ ∈ (0, 1.5]. In addition, the steps of the EGGSM I algorithm are presented below, followed by an illustrative implementation sketch:
(1) Choose a parameter ψ, pick an initial vector y^0 ∈ R^n, and set i = 0.
(2) Calculate r_i = y^{i+1} ≈ A^{-1}(|y^i| + b).
(3) Calculate
(4) If y^{i+1} = y^i, then stop. Otherwise, set i = i + 1 and return to Step 2.
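Since the displayed formulas (2.3) and (2.4) are not reproduced above, the following Python sketch shows only one plausible reading of EGGSM I, reconstructed from the scaled equation (2.1), the splitting (2.2), and the matrix A_D − ψA_L that appears in the convergence analysis; the actual update used in the paper may differ, so treat this as a hedged illustration rather than the authors' method.

```python
import numpy as np

def eggsm1_sketch(A, b, psi=1.2, tol=1e-8, max_iter=1000):
    """Illustrative EGGSM I-type sweep for  Ay - |y| = b.

    ASSUMPTION: the update below is our reconstruction, not necessarily
    Eq (2.4) of the paper:
        (A_D - psi*A_L) y_{i+1} = ((1 - psi)*A_D + psi*A_U) y_i
                                  + psi*(|y_i| + b),
    using the splitting A = A_D - A_L - A_U.
    """
    AD = np.diag(np.diag(A))
    AL = -np.tril(A, -1)        # chosen so that A = AD - AL - AU
    AU = -np.triu(A, 1)
    M = AD - psi * AL           # lower triangular; a triangular solve could exploit this
    y = np.zeros_like(b, dtype=float)
    for i in range(1, max_iter + 1):
        rhs = ((1.0 - psi) * AD + psi * AU) @ y + psi * (np.abs(y) + b)
        y_new = np.linalg.solve(M, rhs)
        if np.linalg.norm(y_new - y, np.inf) < tol:
            return y_new, i
        y = y_new
    return y, max_iter
```

Any fixed point of this sweep satisfies ψ(Ay − |y| − b) = 0, i.e., it solves the AVE, which is why such a splitting-based update is a natural extension of the GGSM with the extra parameter ψ.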
The following theorem establishes the convergence of EGGSM I.
Theorem 2.1. Assume that AVE (1.1) is solvable. Suppose that the diagonal entries of the matrix A exceed one and satisfy the following condition:
then the sequence {y^i} generated by EGGSM I converges to the unique solution y⋆ of AVE (1.1).
Proof. First, we show that ‖(A_D − ψA_L)^{-1}‖_∞ < 1. Clearly, if A_L = 0, then ‖(A_D − ψA_L)^{-1}‖_∞ = ‖A_D^{-1}‖_∞ < 1, since the diagonal entries of A exceed one. Now suppose that A_L ≠ 0 and proceed as follows:
If we consider
Using the inverse of the matrix A_D on both sides, we obtain
where ℜ = ψA_D^{-1}A_L and w = (1, 1, ..., 1)^T. Also, we have
So, by utilizing Eqs (2.6) and (2.7), we derive the following:
This implies
To verify the uniqueness of the solution, suppose that AVE (1.1) has two distinct solutions y⋆ and ϑ⋆. Employing Eq (2.4), we derive
From (2.8) and (2.9), we get
Applying Lemma 1.1 in conjunction with Eq (2.5), the above expression can be rewritten as
This implies a contradiction, so we conclude that y⋆=ϑ⋆.
To establish convergence, let y⋆ denote the unique solution of AVE (1.1). Then, by considering Eq (2.8) and
we deduce
By applying the infinity norm along with Lemma 1.1, we obtain
and since ‖(A_D − ψA_L)^{-1}‖_∞ < 1, it follows that
This inequality shows that EGGSM I converges when condition (2.5) is fulfilled. □
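In practice, the quantity ‖(A_D − ψA_L)^{-1}‖_∞ that drives the proof can be evaluated directly before running the iteration. The helper below is an illustrative check under the same splitting assumption used in the sketch above; it is not taken from the paper.

```python
import numpy as np

def contraction_factor(A, psi):
    """Evaluate ||(A_D - psi*A_L)^{-1}||_inf, the quantity that the proof of
    Theorem 2.1 bounds below one under its hypotheses.

    Splitting assumption (ours): A = A_D - A_L - A_U, with A_L built from
    the strictly lower triangular part of A.
    """
    AD = np.diag(np.diag(A))
    AL = -np.tril(A, -1)
    return np.linalg.norm(np.linalg.inv(AD - psi * AL), np.inf)

# Hypothetical example: a matrix with diagonal entries greater than one
# and weak lower coupling yields a value below one.
# A = np.array([[3.0, 0.0], [-1.0, 4.0]]); print(contraction_factor(A, 1.2))  # ~0.35
```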
2.2. EGGSM II for AVE
In this section, we introduce the EGGSM II. Utilizing Eqs (2.1) and (2.2), we formulate the EGGSM II for AVE (1.1) as follows:
Alternatively, this can be written as follows:
Below are the procedural steps of the EGGSM II algorithm:
(1) Choose a parameter ψ, pick an initial vector y^0 ∈ R^n, and set i = 0.
(2) Calculate r_i = y^{i+1} ≈ A^{-1}(|y^i| + b).
(3) Calculate
(4) If y^{i+1} = y^i, then stop. Otherwise, set i = i + 1 and return to Step 2.
Now, to establish the convergence of the EGGSM II, we examine the following theorem.
Theorem 2.2. Suppose that the diagonal entries of the matrix A exceed one and satisfy the following condition:
then the sequence {y^i} generated by EGGSM II converges to the unique solution y⋆ of AVE (1.1).
Proof. The uniqueness of the solution follows directly from Theorem 2.1. To establish convergence, let y⋆ denote the unique solution of AVE (1.1). We consider Eq (2.10)
expressed as
where y^{i+1} ≈ A^{-1}(|y^i| + b). Since y⋆ is the unique solution of AVE (1.1), we obtain
Subtracting Eq (2.13) from Eq (2.12), we deduce
By applying the infinity norm along with Lemma 1.1, we obtain
and since ‖(A_D − ψA_L)^{-1}‖_∞ < 1, we have
The above inequality indicates that EGGSM II converges when condition (2.11) is satisfied. □
3. Numerical experiments
The objective of this section is to present several numerical tests that illustrate the effectiveness of the new methods from three standpoints:
● Iteration steps (ITs).
● Processing time in seconds (CPU).
● Norm of absolute residual vectors (RES).
where RES is defined as

RES = ‖Ay^i − |y^i| − b‖.
The computations were performed on an Intel Core i5-3337U processor with 4 GB of RAM using MATLAB R2018a. In addition, we set ψ = 1.2 for all examples.
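The residual quantity RES can be computed directly from the AVE. A minimal helper is sketched below; the default 2-norm is our assumption, since the norm used in the experiments is not restated above.

```python
import numpy as np

def ave_residual(A, y, b, norm_ord=2):
    """RES: norm of the AVE residual vector  A y - |y| - b.
    The 2-norm default is an assumption; swap norm_ord if another norm is intended."""
    return np.linalg.norm(A @ y - np.abs(y) - b, norm_ord)
```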
Example 3.1. [33] Let v denote a given positive integer and set n = v^2. Consider the AVE (1.1), in which A ∈ R^{n×n} is given by A = M + I, where
is a block-tridiagonal matrix, and
and
In Examples 3.1 and 3.2, we take the initial estimate y^0 = (1, 0, 1, 0, ..., 1, 0)^T ∈ R^n and compare EGGSM I and EGGSM II with several iterative techniques: the AOR iteration method [34], the mixed-type splitting (MS) method [33], the fixed-point method (FM) [35], and the GGS method (GGSM) [31]. The numerical results are presented in Table 1.
Example 3.2. [33] Given a positive integer v, let n = v^2. Consider the AVE (1.1), where A is a real n×n matrix given by A = M + 4I, where
is a block-tridiagonal matrix, and
and b = Ay∗ − |y∗| with y∗ = ((−1)^h, h = 1, 2, ..., n)^T ∈ R^n. Table 2 lists the numerical results.
Tables 1 and 2 provide a thorough comparison of the numerical results obtained with the AOR, MS, FM, and GGSM methods and with the proposed techniques, across a range of values of n. The data show that the newly proposed EGGSM I and EGGSM II outperform all the other methods, both in the number of iterations (ITs) required for convergence and in the computational time (CPU) needed to reach the stopping criterion.
Example 3.3. [15] Let A be an n×n matrix with entries a_ii = 4n, a_{i,i+1} = a_{i+1,i} = n, and a_ij = 0.5 otherwise. Consider b = (A − I)e, where I is the identity matrix of order n and e is the n×1 vector of all ones, so that y = (1, 1, ..., 1)^T is the exact solution. The initial vector is y^0 = (y_1, y_2, ..., y_n)^T with y_i = 0.001·i. In this scenario, we evaluate the proposed methods against the minimization technique introduced in [15] (MT), the modified search direction iteration method [36] (MDM), the fixed-point method (FM) [35], and the GGSM [31]. The results are summarized in Table 3.
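Because Example 3.3 is fully specified, it makes a convenient usage example. The sketch below assembles A and b under the reading "a_ij = 0.5 otherwise" and can be combined with, for instance, the Picard sketch from the Introduction; it is an illustration, not the authors' MATLAB code.

```python
import numpy as np

def build_example_3_3(n):
    """Assemble A and b of Example 3.3 as read above:
    a_ii = 4n, a_{i,i+1} = a_{i+1,i} = n, all other entries 0.5,
    and b = (A - I)e, so the exact solution is the all-ones vector."""
    A = np.full((n, n), 0.5)
    np.fill_diagonal(A, 4.0 * n)
    idx = np.arange(n - 1)
    A[idx, idx + 1] = float(n)
    A[idx + 1, idx] = float(n)
    b = (A - np.eye(n)) @ np.ones(n)
    return A, b

# Illustrative use with the Picard sketch from the Introduction:
# A, b = build_example_3_3(100)
# y, its = picard_ave(A, b)   # y should be close to the all-ones vector
```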
Table 3 reports the numerical results obtained with the MT, MDM, FM, and GGSM methods and with the proposed approaches for various values of n. The data show that the newly introduced EGGSM I and EGGSM II are more efficient than the alternative methods, outperforming them in both ITs and CPU. We therefore conclude that the proposed approaches are highly effective and practical.
4. Conclusions
In this study, we have explored two extended versions of the generalized Gauss-Seidel method, called EGGSM I and EGGSM II, for solving AVEs. We investigated the conditions required for the convergence of these new iterative techniques and presented several numerical results demonstrating their effectiveness. The computational results confirm the relevance of the proposed methodologies, particularly for large, sparse AVEs, and their clear advantage over existing approaches. Both the theoretical and the empirical analyses show that the proposed algorithms solve AVEs efficiently.
Author contributions
Rashid Ali: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Resources, Supervision and Writing-original draft; Fuad A. Awwad: Software, Visualization, Funding acquisition, Writing-review and editing; Emad A. A. Ismail: Formal Analysis, Project administration, Writing-original draft and Writing-review and editing.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This work was supported by Researchers Supporting Project number (RSPD2024R1060), King Saud University, Riyadh, Saudi Arabia. Furthermore, this study was also supported by the Zhejiang Normal University Research Fund under Grant YS304123950, China.
Conflict of interest
The authors declare that they do not have any conflicts of interest.