Research article

Error bounds for generalized vector inverse quasi-variational inequality problems with point-to-set mappings

  • The goal of this paper is to further study a class of generalized vector inverse quasi-variational inequality problems and to obtain error bounds in terms of the residual gap function, the regularized gap function, and the global gap function by utilizing relaxed monotonicity and Hausdorff Lipschitz continuity. These error bounds provide effective estimates of the distance between an arbitrary feasible point and the solution set of generalized vector inverse quasi-variational inequality problems.

    Citation: S. S. Chang, Salahuddin, M. Liu, X. R. Wang, J. F. Tang. Error bounds for generalized vector inverse quasi-variational inequality problems with point-to-set mappings[J]. AIMS Mathematics, 2021, 6(2): 1800-1815. doi: 10.3934/math.2021108



    In 2014, Li et al. [1] introduced a new class of inverse mixed variational inequalities in Hilbert spaces, which has applications to traffic network equilibrium control, market equilibrium problems in economics, and telecommunication network problems. The concept of a gap function plays an important role in the development of iterative algorithms, in the evaluation of their convergence properties, and in providing useful stopping rules for iterative algorithms; see [2,3,4,5]. Error bounds are very important and useful because they provide a measure of the distance between a solution set and an arbitrary feasible point. Solodov [6] developed some merit functions associated with a generalized mixed variational inequality and used those functions to obtain error bounds for mixed variational inequalities. Aussel et al. [7] introduced a new inverse quasi-variational inequality (IQVI), obtained local (global) error bounds for IQVI in terms of certain gap functions to demonstrate the applicability of IQVI, and provided an example of road pricing problems; see also [8,9]. Sun and Chai [10] introduced regularized gap functions for generalized vector variational inequalities (GVVI) and obtained error bounds for GVVI in terms of regularized gap functions. Wu and Huang [11] employed the generalized f-projection operator to deal with mixed variational inequalities. Using the generalized f-projection operator, Li and Li [12] investigated a constrained mixed set-valued variational inequality in Hilbert spaces, proposed four merit functions for this problem, and obtained error bounds through these functions.

    Our goal in this paper is to study a class of generalized vector inverse quasi-variational inequality problems. We propose three gap functions, namely the residual gap function, the regularized gap function, and the global gap function, and obtain error bounds for the generalized vector inverse quasi-variational inequality problem in terms of these gap functions and the generalized f-projection operator, under monotonicity and Lipschitz continuity assumptions on the underlying mappings.

    Throughout this article, \mathbf{R}_{+} denotes the set of non-negative real numbers, 0 denotes the origins of all finite dimensional spaces, and \|\cdot\| and \langle \cdot, \cdot\rangle denote the norms and the inner products in finite dimensional spaces, respectively. Let \Omega, \mathbb{F}, \mathbb{P}:\mathbf{R}^n\to \mathbf{R}^n be set-valued mappings with nonempty closed convex values, \mathbb{N}_i:\mathbf{R}^n\times \mathbf{R}^n\to \mathbf{R}^n\; (i = 1, 2, \cdots, m) be bi-mappings, \mathbb{B}:\mathbf{R}^n\to \mathbf{R}^n be a single-valued mapping, and f_i:\mathbf{R}^n\to \mathbf{R}\; (i = 1, 2, \cdots, m) be real-valued convex functions. We put

    f = (f_1, f_2, \cdots, f_m), \; \; \mathbb{N}(\cdot, \cdot) = (\mathbb{N}_1(\cdot, \cdot), \mathbb{N}_2(\cdot, \cdot), \cdots, \mathbb{N}_m(\cdot, \cdot)),

    and for any x, w \in \mathbf{R}^n,

    \langle \mathbb{N}(x, x), w\rangle = (\langle \mathbb{N}_1(x, x), w\rangle, \langle \mathbb{N}_2(x, x), w\rangle, \cdots, \langle \mathbb{N}_m(x, x), w\rangle).

    In this paper, we consider the following generalized vector inverse quasi-variational inequality for finding \bar{x} \in \Omega(\bar{x}), \; \bar{u}\in \mathbb{F}(\bar{x}) and \bar{v}\in \mathbb{P}(\bar{x}) such that

    \begin{equation} \langle \mathbb{N}(\bar{u}, \bar{v}), y - \mathbb{B}(\bar{x})\rangle + f(y) - f(\mathbb{B}(\bar{x})) \not\in -{\it int}R^m_{+}, \; \forall y \in \Omega(\bar{x}), \end{equation} (2.1)

    and the solution set is denoted by \mho .

    Special cases:

    (i) If \mathbb{P} is a zero mapping and \mathbb{N}(\cdot, \cdot) = \mathbb{N}(\cdot), then (2.1) reduces to the following problem for finding \bar{x} \in \Omega(\bar{x}) and \bar{u}\in \mathbb{F}(\bar{x}) such that

    \begin{equation} \langle \mathbb{N}(\bar{u}), y - \mathbb{B}(\bar{x})\rangle + f(y) - f(\mathbb{B}(\bar{x})) \not\in -{\it int}R^m_{+}, \; \forall y \in \Omega(\bar{x}), \end{equation} (2.2)

    studied in [13], and the solution set is denoted by \mho_1 .

    (ii) If \mathbb{F} is a single-valued mapping, then (2.2) reduces to the following vector inverse mixed quasi-variational inequality for finding \bar{x} \in \Omega(\bar{x}) such that

    \begin{equation} \langle \mathbb{N}(\bar{x}), y - \mathbb{B}(\bar{x})\rangle + f(y) - f(\mathbb{B}(\bar{x})) \not\in -{\it int}R^m_{+}, \; \forall y \in \Omega(\bar{x}), \end{equation} (2.3)

    studied in [14], and the solution set is denoted by \mho_2 .

    (iii) If C \subseteq \mathbf{R}^n is a nonempty closed and convex subset, \mathbb{B}(x) = x and \Omega(x) = C for all x \in \mathbf{R}^n, then (2.3) collapses to the following generalized vector variational inequality for finding \bar{x} \in C such that

    \begin{equation} \langle \mathbb{N}(\bar{x}), y - \bar{x}\rangle + f(y) - f(\bar{x}) \not\in -{\it int}R^m_{+}, \; \forall y \in C, \end{equation} (2.4)

    which is considered in [10].

    (iv) If f(x) = 0 for all x \in \mathbf{R}^n, then (2.4) reduces to the vector variational inequality for finding \bar{x} \in C such that

    \begin{equation} \langle \mathbb{N}(\bar{x}), y - \bar{x}\rangle \not\in -{\it int}R^m_{+}, \; \forall y \in C, \end{equation} (2.5)

    studied in [15].

    (v) If R^m_{+} = \mathbf{R}_{+}, then (2.5) reduces to the variational inequality for finding \bar{x} \in C such that

    \begin{equation} \langle \mathbb{N}(\bar{x}), y - \bar{x}\rangle \geq 0, \; \forall y \in C, \end{equation} (2.6)

    studied in [16].

    Definition 2.1 [7] Let G:\mathbf{R}^n \to \mathbf{R}^n and g:\mathbf{R}^n \to \mathbf{R}^n be two maps.

    (i) (G, g) is said to be strongly monotone if there exists a constant \mu_g > 0 such that

    \langle G(y) - G(x), g(y) - g(x)\rangle \geq \mu_g\|y - x\|^2, \; \forall x, y \in \mathbf{R}^n;

    (ii) g is said to be L_g-Lipschitz continuous if there exists a constant L_g > 0 such that

    \|g(x) - g(y)\| \leq L_g\|x - y\|, \; \forall x, y \in \mathbf{R}^n.

    For any fixed \gamma > 0, let G:\mathbf{R}^n\times \tilde{\Omega} \to (-\infty, +\infty] be a function defined as follows:

    \begin{equation} G(\varphi, x) = \|x\|^2 - 2\langle \varphi, x\rangle + \|\varphi\|^2 + 2\gamma f(x), \; \forall \varphi \in \mathbf{R}^n, \; x \in \tilde{\Omega}, \end{equation} (2.7)

    where \tilde{\Omega}\subseteq \mathbf{R}^n is a nonempty closed and convex subset, and f:\mathbf{R}^n\to \mathbf{R} is convex.

    Definition 2.2 [11] We say that \gimel^{f}_{\tilde{\Omega}}:\mathbf{R}^n \to 2^{\tilde{\Omega}} is a generalized f-projection operator if

    \gimel^{f}_{\tilde{\Omega}}\varphi = \big\{w \in \tilde{\Omega}: G(\varphi, w) = \inf\limits_{y\in \tilde{\Omega}}G(\varphi, y)\big\}, \; \forall \varphi \in \mathbf{R}^n.

    Remark 2.3 If f(x) = 0 for all x \in \tilde{\Omega}, then the generalized f-projection operator \gimel^{f}_{\tilde{\Omega}} reduces to the following metric projection operator:

    P_{\tilde{\Omega}}(\varphi) = \big\{w \in \tilde{\Omega}: \|w - \varphi\| = \inf\limits_{y\in \tilde{\Omega}}\|y - \varphi\|\big\}, \; \forall \varphi \in \mathbf{R}^n.
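    To make the operator concrete, the following small numerical sketch (our own illustration, not part of the paper) evaluates \gimel^{f}_{\tilde{\Omega}}\varphi by minimizing G(\varphi, \cdot) from (2.7) over a box \tilde{\Omega}, and checks that it coincides with the metric projection of Remark 2.3 when f \equiv 0; the box, the choices of f and \gamma, and the use of scipy are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): evaluating the generalized
# f-projection of Definition 2.2 on a box by numerically minimizing
# G(phi, .) = ||x - phi||^2 + 2*gamma*f(x) from (2.7).  The box, f and gamma
# are assumptions made for this example.
import numpy as np
from scipy.optimize import minimize

def generalized_f_projection(phi, lower, upper, f, gamma=1.0):
    """Approximate argmin over the box [lower, upper] of G(phi, x)."""
    G = lambda x: np.sum((x - phi) ** 2) + 2.0 * gamma * f(x)
    x0 = np.clip(phi, lower, upper)               # feasible starting point
    res = minimize(G, x0, method="L-BFGS-B", bounds=list(zip(lower, upper)))
    return res.x

phi = np.array([2.0, -1.5])
lower, upper = np.array([0.0, 0.0]), np.array([1.0, 1.0])

# With f = 0 the operator coincides with the metric projection (Remark 2.3).
print(generalized_f_projection(phi, lower, upper, f=lambda x: 0.0))   # ~[1.0, 0.0]
print(np.clip(phi, lower, upper))                                     #  [1.0, 0.0]

# With a convex f, e.g. f(x) = ||x||^2, the projected point is pulled inwards.
print(generalized_f_projection(phi, lower, upper, f=lambda x: np.sum(x ** 2)))
# ~[0.667, 0.0]
```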

    Lemma 2.4 [1,11] The following statements hold:

    (i) For any given \varphi \in \mathbf{R}^n, \; \gimel^{f}_{\tilde{\Omega}}\varphi is nonempty and single-valued;

    (ii) For any given \varphi \in \mathbf{R}^n, \; x = \gimel^{f}_{\tilde{\Omega}}\varphi if and only if

    \langle x - \varphi, y - x\rangle + \gamma f(y) - \gamma f(x) \geq 0, \; \forall y \in \tilde{\Omega};

    (iii) \gimel^{f}_{\tilde{\Omega}}:\mathbf{R}^n \to \tilde{\Omega} is nonexpansive, that is,

    \|\gimel^{f}_{\tilde{\Omega}}x - \gimel^{f}_{\tilde{\Omega}}y\| \leq \|x - y\|, \; \forall x, y \in \mathbf{R}^n.

    Lemma 2.5 [17] Let m be a positive number and B \subseteq \mathbf{R}^n be a nonempty subset such that

    \|d\| \leq m \quad \text{for all} \; d \in B.

    Let \Omega:\mathbf{R}^n \to \mathbf{R}^n be a set-valued mapping such that, for each x \in \mathbf{R}^n, \Omega(x) is a closed convex set, and let f:\mathbf{R}^n\to \mathbf{R} be a convex function on \mathbf{R}^n. Assume that

    (i) there exists a constant \tau > 0 such that

    \mathcal{D}(\Omega(x), \Omega(y)) \leq \tau\|x - y\|, \; \forall x, y \in \mathbf{R}^n,

    where \mathcal{D}(\cdot, \cdot) is the Hausdorff metric defined on \mathbf{R}^n;

    (ii) 0 \in \bigcap\limits_{w\in \mathbf{R}^n}\Omega(w);

    (iii) f is Lipschitz continuous on \mathbf{R}^n. Then there exists a constant \kappa = 6\tau(m + \gamma) such that

    \|\gimel^{f}_{\Omega(x)}z - \gimel^{f}_{\Omega(y)}z\| \leq \kappa\|x - y\|, \; \forall x, y \in \mathbf{R}^n, \; z \in B.

    Lemma 2.6 A function r:\mathbf{R}^n \to \mathbf{R} is said to be a gap function for the generalized vector inverse quasi-variational inequality on a set \tilde{S}\subseteq \mathbf{R}^n if it satisfies the following properties:

    (i) r(x) \geq 0 for any x \in \tilde{S}; (ii) r(\bar{x}) = 0, \; \bar{x}\in \tilde{S}, if and only if \bar{x} is a solution of (2.1).

    Definition 2.7 Let \mathbb{B}:\mathbf{R}^n\to \mathbf{R}^n be a single-valued mapping and \mathbb{N}:\mathbf{R}^n\times \mathbf{R}^n\to \mathbf{R}^n be a bi-mapping.

    (i) (\mathbb{N}, \mathbb{B}) is said to be strongly monotone with respect to the first argument of \mathbb{N} and \mathbb{B} if there exists a constant \mu^\mathbb{B} > 0 such that

    \langle \mathbb{N}(y, \cdot) - \mathbb{N}(x, \cdot), \mathbb{B}(y) - \mathbb{B}(x)\rangle \geq \mu^\mathbb{B}\|y - x\|^2, \; \forall x, y \in \mathbf{R}^n;

    (ii) (\mathbb{N}, \mathbb{B}) is said to be relaxed monotone with respect to the second argument of \mathbb{N} and \mathbb{B} if there exists a constant \zeta^\mathbb{B} > 0 such that

    \langle \mathbb{N}(\cdot, y) - \mathbb{N}(\cdot, x), \mathbb{B}(y) - \mathbb{B}(x)\rangle \geq -\zeta^\mathbb{B}\|y - x\|^2, \; \forall x, y \in \mathbf{R}^n;

    (iii) \mathbb{N} is said to be \sigma-Lipschitz continuous with respect to the first argument with constant \sigma > 0 and \wp-Lipschitz continuous with respect to the second argument with constant \wp > 0 if

    \|\mathbb{N}(x, \bar{x}) - \mathbb{N}(y, \bar{y})\| \leq \sigma\|x - y\| + \wp\|\bar{x} - \bar{y}\|, \; \forall x, \bar{x}, y, \bar{y} \in \mathbf{R}^n.

    (iv) \mathbb{B} is said to be \ell-Lipschitz continuous if there exists a constant \ell > 0 such that

    \|\mathbb{B}(x) - \mathbb{B}(y)\| \leq \ell\|x - y\|, \; \forall x, y \in \mathbf{R}^n.

    Example 2.8 The variational inequality (2.6) can be solved by transforming it into an equivalent optimization problem for the so-called merit function r(\cdot\, ; \tau):X = \mathbf{R}^n \to \mathbf{R}\cup\{+\infty\} defined by

    r(x;\tau) = \sup\big\{\langle \mathbb{N}(x), x - y\rangle_X - \tau\|y - x\|^2_X \;\big\vert\; y\in C\big\} \; \text{for} \; x \in C,

    where \tau is a nonnegative parameter. If X is finite dimensional, the function r(\cdot\, ;0) is usually called the gap function, and the function r(\cdot\, ;\tau) for \tau > 0 is called the regularized gap function.
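    As a concrete illustration (ours, not from the paper), note that for \tau > 0 the supremum defining r(x;\tau) is a concave quadratic in y, so it is attained at the projection of x - \mathbb{N}(x)/(2\tau) onto C. The sketch below evaluates the regularized gap function in this way for a hypothetical one-dimensional instance; the operator \mathbb{N} and the set C are assumptions made for the example.

```python
# Sketch under illustrative assumptions: the regularized gap function of
# Example 2.8 for a hypothetical one-dimensional VI with C = [0, 1] and
# N(x) = x - 0.3.  For tau > 0 the supremum over y in C is attained at the
# projection of x - N(x)/(2*tau) onto C, since the objective is a concave
# quadratic in y.
import numpy as np

C_LO, C_HI = 0.0, 1.0
N = lambda x: x - 0.3          # assumed operator; x = 0.3 solves the VI

def regularized_gap(x, tau):
    y_star = np.clip(x - N(x) / (2.0 * tau), C_LO, C_HI)   # maximizer over C
    return N(x) * (x - y_star) - tau * (x - y_star) ** 2

for x in [0.0, 0.3, 0.8]:
    print(x, regularized_gap(x, tau=0.5))
# Nonnegative on C and zero exactly at the solution x = 0.3.
```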

    Example 2.9 Assume that \mathbb{N}:\mathbf{R}^n\to \mathbf{R}^n is a given mapping and C is a closed convex set in \mathbf{R}^n. Let \curlyvee and \curlywedge be given scalars satisfying \curlyvee > \curlywedge > 0 . Then (2.6) has a D-gap function given by

    \mathbb{N}_{\curlyvee\curlywedge}(x) = \mathbb{N}_{\curlyvee}(x)-\mathbb{N}_{\curlywedge}(x), \forall x\in \mathbf{R}^n

    where D stands for difference.

    In this section, we discuss the residual gap function for the generalized vector inverse quasi-variational inequality problem by using strong monotonicity, relaxed monotonicity and Hausdorff Lipschitz continuity, and we prove error bounds related to the residual gap function. We define the residual gap function for (2.1) as follows:

    \begin{equation} r_\gamma(x) = \min\limits_{1\leq i\leq m}\{\|\mathbb{B}(x) -\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)- \gamma \mathbb{N}_i(u, v)]\|\}, \; \; x \in \mathbf{R}^n, \; u\in \mathbb{F}(x), v\in \mathbb{P}(x), \; \gamma \gt 0. \end{equation} (3.1)
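    For intuition, here is a minimal numerical sketch (our own toy example, not from the paper) of the residual gap function (3.1), under the simplifying assumptions f_i \equiv 0 (so the generalized f-projection reduces to clipping onto an interval), \Omega(x) \equiv [0, 1], \mathbb{B}(x) = x, single-valued \mathbb{F}(x) = \mathbb{P}(x) = \{x\}, and m = 2.

```python
# Toy sketch (illustrative assumptions: f_i = 0, so the generalized f-projection
# is plain clipping onto [0, 1]; Omega(x) = [0, 1]; B(x) = x; single-valued
# F(x) = P(x) = {x}; m = 2): evaluating the residual gap function (3.1).
import numpy as np

OM_LO, OM_HI = 0.0, 1.0
B = lambda x: x
N = [lambda u, v: u - 0.25,            # N_1
     lambda u, v: 2.0 * u + v - 0.9]   # N_2

def residual_gap(x, gamma=1.0):
    u, v = x, x                        # F(x) = P(x) = {x}
    projections = [np.clip(B(x) - gamma * Ni(u, v), OM_LO, OM_HI) for Ni in N]
    return min(abs(B(x) - p) for p in projections)

for x in [0.0, 0.25, 0.3, 0.7]:
    print(x, residual_gap(x))
# r_gamma vanishes exactly where one of the component problems is solved
# (x = 0.25 for N_1, x = 0.3 for N_2), in line with Theorem 3.1 below.
```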

    Theorem 3.1 Suppose that \mathbb{F}, \mathbb{P}:\mathbf{R}^n\to \mathbf{R}^n are set-valued mappings and \mathbb{N}_i:\mathbf{R}^n\times \mathbf{R}^n\to \mathbf{R}^n (i = 1, 2, \cdots, m) are bi-mappings. Assume that \mathbb{B}:\mathbf{R}^n \to \mathbf{R}^n is a single-valued mapping. Then, for any \gamma > 0, r_\gamma(x) is a gap function for (2.1) on \mathbf{R}^n.

    Proof. For any x \in \mathbf{R}^n ,

    r_\gamma(x) \geq 0.

    On the other hand, if

    r_\gamma(\bar{x}) = 0,

    then there exists 1 \leq i_0 \leq m such that

    \mathbb{B}(\bar{x}) = \gimel^{f_{i_0}}_{\Omega(\bar{x})} [\mathbb{B}(\bar{x}) - \gamma \mathbb{N}_{i_0} (\bar{u}, \bar{v})], \; \forall \bar{u}\in \mathbb{F}(\bar{x}), \bar{v}\in \mathbb{P}(\bar{x}).

    From Lemma 2.4, we have

    \langle \mathbb{B}(\bar{x}) - [\mathbb{B}(\bar{x}) - \gamma \mathbb{N}_{i_0} (\bar{u}, \bar{v})], y - \mathbb{B}(\bar{x})\rangle + \gamma f(y) - \gamma f(\mathbb{B}(\bar{x})) \leq 0, \forall y \in \Omega(\bar{x}), \bar{u}\in \mathbb{F}(\bar{x}), \bar{v}\in \mathbb{P}(\bar{x})

    and

    \langle \mathbb{N}_{i_0} (\bar{u}, \bar{v}), y - \mathbb{B}(\bar{x})\rangle + f(y) - f(\mathbb{B}(\bar{x})) \leq 0, \; \forall y \in \Omega(\bar{x}), \; \bar{u}\in \mathbb{F}(\bar{x}), \bar{v}\in \mathbb{P}(\bar{x}).

    This implies that

    \langle \mathbb{N}(\bar{u}, \bar{v}), y - \mathbb{B}(\bar{x})\rangle + f(y) - f(\mathbb{B}(\bar{x})) \not\in -{\it int}R^m_{+}, \; \forall y \in \Omega(\bar{x}), \; \bar{u}\in \mathbb{F}(\bar{x}), \bar{v}\in \mathbb{P}(\bar{x}).

    Thus, \bar{x} is a solution of (2.1).

    Conversely, if \bar{x} is a solution of (2.1), there exists 1 \leq i_0 \leq m such that

    \langle \mathbb{N}_{i_0} (\bar{u}, \bar{v}), y - \mathbb{B}(\bar{x})\rangle + f_{i_0}(y) - f_{i_0} (\mathbb{B}(\bar{x})) \geq 0, \; \forall y \in \Omega(\bar{x}), \bar{u}\in \mathbb{F}(\bar{x}), \bar{v}\in \mathbb{P}(\bar{x}).

    By using Lemma 2.4, we have

    \mathbb{B}(\bar{x}) = \gimel^{f_{i_0}}_{\Omega(\bar{x})}[\mathbb{B}(\bar{x}) - \gamma \mathbb{N}_{i_0}(\bar{u}, \bar{v})], \; \forall \bar{u}\in \mathbb{F}(\bar{x}), \bar{v}\in \mathbb{P}(\bar{x}).

    This means that

    r_\gamma(\bar{x}) = \min\limits_{1\leq i\leq m}\{\|\mathbb{B}(\bar{x}) -\gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(\bar{x}) - \gamma \mathbb{N}_i(\bar{u}, \bar{v})]\|\} = 0.

    The proof is completed.

    Next, we establish error bounds for (2.1) in terms of the residual gap function r_\gamma .

    Theorem 3.2 Let \mathbb{F}, \mathbb{P}:\mathbf{R}^n \to \mathbf{R}^n be \mathcal{D} - \vartheta^{\mathbb{F}} -Lipschitz continuous and \mathcal{D} - \varrho^{\mathbb{P}} -Lipschitz continuous mappings, respectively. Let \mathbb{N}_i:\mathbf{R}^n\times \mathbf{R}^n \to \mathbf{R}^n (i = 1, 2, \cdots, m) be \sigma_i -Lipschitz continuous with respect to the first argument and \wp_i -Lipschitz continuous with respect to the second argument, let \mathbb{B}:\mathbf{R}^n \to \mathbf{R}^n be \ell -Lipschitz continuous, and let (\mathbb{N}_i, \mathbb{B}) be strongly monotone with respect to the first argument of \mathbb{N}_i and \mathbb{B} with positive constant \mu^\mathbb{B}_i, and relaxed monotone with respect to the second argument of \mathbb{N}_i and \mathbb{B} with positive constant \zeta^\mathbb{B}_i . Let

    \bigcap\limits^{m}_{i = 1} (\mho^i) \neq \emptyset.

    Assume that there exists \kappa_i \in \left(0, \dfrac{\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}}{\sigma_i\vartheta^{\mathbb{F}}+\varrho^{\mathbb{P}}\wp_i}\right) such that

    \begin{align} \|\gimel^{f_i}_{\Omega(x)}z - \gimel^{f_i}_{\Omega(y)}z\|\leq \kappa_i\|x - y\|, \; \; & \forall x, y \in \mathbf{R}^n, \; u\in \mathbb{F}(x), \; v\in \mathbb{P}(x), \\ & z \in \{w\mid w = \mathbb{B}(x)-\gamma \mathbb{N}_i(u, v)\}. \end{align} (3.2)

    Then, for any x \in \mathbf{R}^n and \mu_i^\mathbb{B} > \zeta_i^\mathbb{B}+\kappa_i(\sigma_i\vartheta^{\mathbb{F}}+\wp_i\varrho^{\mathbb{P}}),

    \gamma \gt \dfrac{\kappa_i\ell}{\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}-\kappa_i(\sigma_i\vartheta^{\mathbb{F}}+\wp_i\varrho^{\mathbb{P}})},
    d(x, \mho) \leq \dfrac{\gamma (\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P}) + \ell}{\gamma(\mu_i^\mathbb{B}-\zeta_i^\mathbb{B} - \kappa_i(\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P})) - \kappa_i\ell}\; r_\gamma(x),

    where

    d(x, \mho) = \inf\limits_{\bar{x}\in \mho} \|x - \bar{x}\|

    denotes the distance between the point x and the solution set \mho.

    Proof. Since

    \bigcap\limits^m_{i = 1} (\mho^i) \neq \emptyset,

    we can let \bar{x} \in \Omega(\bar{x}) be a solution of (2.1). Thus, for any i \in \{1, \cdots, m\}, we have

    \begin{equation} \langle \mathbb{N}_i(\bar{u}, \bar{v}), y - \mathbb{B}(\bar{x})\rangle + f_i(y) - f_i(\mathbb{B}(\bar{x})) \geq 0, \; \forall y \in \Omega(\bar{x}), \bar{u}\in \mathbb{F}(\bar{x}), \bar{v}\in \mathbb{P}(\bar{x}). \end{equation} (3.3)

    From the definition of \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)], and Lemma 2.4, we have

    \begin{align} \langle \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x)& - \gamma \mathbb{N}_i(u, v)] - (\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)), \; y - \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\rangle\\ &+ \gamma f_i(y) - \gamma f_i(\gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]) \geq 0, \; \forall y \in \Omega(\bar{x}), \; u\in \mathbb{F}(x), v\in \mathbb{P}(x). \end{align} (3.4)

    Since

    \bar{x} \in \bigcap\limits^m_{i = 1}(\mho^i) \; \; \; \text{and}\; \; \; \mathbb{B}(\bar{x}) \in \Omega(\bar{x}),

    replacing y by \mathbb{B}(\bar{x}) in (3.4), we get

    \begin{align} \langle \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x)& - \gamma \mathbb{N}_i(u, v)]-(\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)), \; \mathbb{B}(\bar{x}) - \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\rangle\\ &+ \gamma f_i(\mathbb{B}(\bar{x})) - \gamma f_i(\gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]) \geq 0, \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x). \end{align} (3.5)

    Since

    \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)] \in \Omega(\bar{x}),

    from (3.3), it follows that

    \begin{equation} \langle \gamma \mathbb{N}_i(\bar{u}, \bar{v}), \; \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)] - \mathbb{B}(\bar{x})\rangle + \gamma f_i(\gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]) - \gamma f_i(\mathbb{B}(\bar{x})) \geq 0. \end{equation} (3.6)

    Utilizing (3.5) and (3.6), we have

    \langle \gamma \mathbb{N}_i(\bar{u}, \bar{v}) - \gamma \mathbb{N}_i(u, v) -\gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)] + \mathbb{B}(x), \; \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]- \mathbb{B}(\bar{x})\rangle \geq 0,

    which implies that

    \langle \gamma \mathbb{N}_i(\bar{u}, \bar{v}) - \gamma \mathbb{N}_i(u, v), \; \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)] - \mathbb{B}(x)\rangle -\langle \gamma \mathbb{N}_i(\bar{u}, \bar{v}) - \gamma \mathbb{N}_i(u, v), \mathbb{B}(\bar{x}) - \mathbb{B}(x)\rangle
    + \langle \mathbb{B}(x) - \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)], \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)] - \mathbb{B}(x)\rangle
    + \langle \mathbb{B}(x) - \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)], \mathbb{B}(x) - \mathbb{B}(\bar{x})\rangle \geq 0.

    Since \mathbb{F} is \mathcal{D} - \vartheta^\mathbb{F} -Lipschitz continuous, \mathbb{P} is \mathcal{D} - \varrho^\mathbb{P} -Lipschits continuous and \mathbb{N}_i is \sigma_i -Lipschitz continuous with respect to the first argument and \wp_i -Lipschitz continuous with respect to the second argument, we have

    \begin{align} \|\bar{u}-u\|\leq \mathcal{D}(\mathbb{F}(\bar{x}), \mathbb{F}(x))&\leq\vartheta^\mathbb{F}\|\bar{x}-x\|;\\ \|\bar{v}-v\|\leq \mathcal{D}(\mathbb{P}(\bar{x}), \mathbb{P}(x))&\leq\varrho^\mathbb{P}\|\bar{x}-x\|;\\ \|\mathbb{N}_i(\bar{u}, \bar{v})-\mathbb{N}_i(u, v)\|&\leq \sigma_i\|\bar{u}-u\|+\wp_i\|\bar{v}-v\|. \end{align} (3.7)

    Again, since for each i = 1, 2, \cdots, m , (\mathbb{N}_i, \mathbb{B}) is strongly monotone with respect to the first argument of \mathbb{N}_i and \mathbb{B} with a positive constant \mu^\mathbb{B}_i, and relaxed monotone with respect to the second argument of \mathbb{N}_i and \mathbb{B} with a positive constant \zeta^\mathbb{B}_i , we have

    \langle \gamma \mathbb{N}_i(\bar{u}, \bar{v}) -\gamma \mathbb{N}_i(u, v), \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)] - \mathbb{B}(x)\rangle-\|\mathbb{B}(x) -\gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\|^2
    + \langle \mathbb{B}(x) - \gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)], \mathbb{B}(x) - \mathbb{B}(\bar{x})\rangle \geq \gamma\mu^\mathbb{B}_i\|x - \bar{x}\|^2-\gamma\zeta^\mathbb{B}_i\|x - \bar{x}\|^2.

    By adding and subtracting \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)] and using the Cauchy-Schwarz inequality along with the triangle inequality, we have

    \|\gamma \mathbb{N}_i(\bar{u}, \bar{v}) - \gamma \mathbb{N}_i(u, v)\|\big\{\|\gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)] - \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\|
    +\|\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)] - \mathbb{B}(x)\|\big\}
    +\|\mathbb{B}(x) - \mathbb{B}(\bar{x})\|\big\{\|\mathbb{B}(x) -\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\|+\|\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]
    -\gimel^{f_i}_{\Omega(\bar{x})}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\|\big\} \geq \gamma \mu_i^\mathbb{B}\|x - \bar{x}\|^2-\gamma \zeta_i^\mathbb{B}\|x - \bar{x}\|^2.

    Using (3.7) and condition (3.2), we have

    (\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P})\gamma\|\bar{x} - x\| \big\{\kappa_i\|\bar{x} - x\| +\|\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)] - \mathbb{B}(x)\|\big\}
    + \ell \|x - \bar{x}\| \big\{\|\mathbb{B}(x)-\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)- \gamma \mathbb{N}_i(u, v)]\|+ \kappa_i\|x - \bar{x}\|\big\}\geq \gamma(\mu_i^\mathbb{B}-\zeta_i^\mathbb{B})\|x - \bar{x}\|^2.

    Hence, for any x \in \mathbf{R}^n and i \in \{1, 2, \cdots, m\} , \mu_i^\mathbb{B} > \zeta_i^\mathbb{B}+\kappa_i(\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P}),

    \gamma \gt \dfrac{\kappa_i\ell}{\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}-\kappa_i(\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P})},

    we have

    \|x - \bar{x}\|\leq \dfrac{\gamma(\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P}) + \ell}{\gamma(\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}-\kappa_i(\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P})) - \kappa_i\ell}\|\mathbb{B}(x) -\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\|, \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x).

    This implies

    \|x - \bar{x}\|\leq\dfrac{\gamma(\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P}) + \ell}{\gamma(\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}-\kappa_i(\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P})) - \kappa_i\ell}\; \; \min\limits_{1\leq i\leq m}\left\{\|\mathbb{B}(x) -\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\|\right\}

    which means that

    d(x, \mho) \leq \|x - \bar{x}\|\leq \dfrac{\gamma(\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P}) + \ell}{\gamma(\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}-\kappa_i(\sigma_i\vartheta^\mathbb{F}+\wp_i\varrho^\mathbb{P})) - \kappa_i\ell}\; r_\gamma(x).

    The proof is completed.
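    To see the bound at work, consider the following illustrative sketch (our own toy instance, not from the paper) with m = 1, \Omega(x) \equiv [0, 1], \mathbb{B}(x) = x, \mathbb{F}(x) = \{x\}, \mathbb{P} the zero mapping, \mathbb{N}_1(u, v) = u and f_1 \equiv 0, whose solution set is \mho = \{0\}. One may take \sigma_1 = \vartheta^{\mathbb{F}} = \ell = \mu_1^{\mathbb{B}} = 1 and the remaining constants arbitrarily small, so that the bound of Theorem 3.2 reduces, up to the vanishing terms, to d(x, \mho) \leq \frac{\gamma + \ell}{\gamma}\, r_\gamma(x).

```python
# Illustrative sketch (our own toy instance): m = 1, Omega(x) = [0, 1],
# B(x) = x, F(x) = {x}, P = 0, N_1(u, v) = u, f_1 = 0.  The solution set of
# (2.1) is {0}; with sigma_1 = theta_F = ell = mu_1 = 1 and the remaining
# constants taken arbitrarily small, the bound of Theorem 3.2 reduces (up to
# the vanishing terms) to  d(x, sol) <= (gamma + 1)/gamma * r_gamma(x).
import numpy as np

def residual_gap(x, gamma):
    # f = 0, so the generalized f-projection onto [0, 1] is plain clipping.
    return abs(x - np.clip(x - gamma * x, 0.0, 1.0))

gamma = 0.5
factor = (gamma + 1.0) / gamma            # (gamma + ell)/gamma with ell = 1
for x in [-1.0, 0.2, 0.8, 2.0]:
    d = abs(x)                            # distance to the solution set {0}
    bound = factor * residual_gap(x, gamma)
    print(x, d, bound, d <= bound + 1e-12)   # the bound holds at every test point
```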

    The regularized gap function for (2.1) is defined for all x \in \mathbf{R}^n as follows:

    \phi_\gamma(x) = \min\limits_{1\leq i\leq m}\sup\limits_{\substack{y\in \Omega(x), \\u\in \mathbb{F}(x), v\in \mathbb{P}(x)}}\big\{\langle \mathbb{N}_i(u, v), \mathbb{B}(x) - y\rangle + f_i(\mathbb{B}(x)) - f_i(y) - \dfrac{1}{2\gamma}\|\mathbb{B}(x) - y\|^2\big\}

    where \gamma > 0 is a parameter.

    Lemma 4.1 We have

    \begin{equation} \phi_\gamma(x) = \min\limits_{1\leq i\leq m}\big\{\langle \mathbb{N}_i(u, v), \mathbf{R}^i_\gamma(x)\rangle + f_i(\mathbb{B}(x)) - f_i(\mathbb{B}(x) - \mathbf{R}^i_\gamma(x))- \dfrac{1}{2\gamma}\|\mathbf{R}^i_\gamma(x)\|^2\big\}, \end{equation} (4.1)

    where

    \mathbf{R}^i_\gamma(x) = \mathbb{B}(x)-\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\gamma \mathbb{N}_i(u, v)], \forall x\in \mathbf{R}^n, u\in \mathbb{F}(x), v\in \mathbb{P}(x)

    and if

    x \in \mathbb{B}^{-1}(\Omega)

    where

    \mathbb{B}^{-1}(\Omega) = \big\{\xi \in \mathbf{R}^n\big\vert \mathbb{B}(\xi)\in \Omega(\xi)\big\},

    then

    \begin{equation} \phi_\gamma(x)\geq \dfrac{1}{2\gamma} r_\gamma(x)^2. \end{equation} (4.2)

    Proof. For given x \in \mathbf{R}^n, u\in \mathbb{F}(x), v\in \mathbb{P}(x) and i\in\{1, 2, \cdots, m\} , set

    \psi_i(x, y) = \langle \mathbb{N}_i(u, v), \mathbb{B}(x) - y\rangle + f_i(\mathbb{B}(x)) - f_i(y) - \dfrac{1}{2\gamma}\|\mathbb{B}(x)-y\|^2, \; y \in \mathbf{R}^n.

    Consider the following problem:

    g_i(x) = \max\limits_{y\in \Omega(x)}\psi_i(x, y).

    Since \psi_i(x, \cdot) is a strongly concave function and \Omega(x) is nonempty, closed and convex, the above optimization problem has a unique solution z \in \Omega(x). Invoking the optimality condition at z , we get

    0 \in \mathbb{N}_i(u, v) + \partial f_i(z) + \dfrac{1}{\gamma}(z - \mathbb{B}(x)) + \mathcal{N}_{\Omega(x)}(z),

    where \mathcal{N}_{\Omega(x)}(z) is the normal cone at z to \Omega(x) and \partial f_i(z) denotes the subdifferential of f_i at z. Therefore,

    \langle z - (\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)), y - z\rangle + \gamma f_i(y) - \gamma f_i(z) \geq 0, \; \forall y \in \Omega(x), u\in \mathbb{F}(x), v\in \mathbb{P}(x)

    and so

    z = \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)], \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x).

    Hence g_i(x) can be rewritten as

    g_i(x) = \langle \mathbb{N}_i(u, v), \mathbb{B}(x) -\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\rangle + f_i(\mathbb{B}(x)) - f_i(\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)])
    -\dfrac{1}{2\gamma}\|\mathbb{B}(x) -\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\|^2, \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x).

    Letting

    \mathbf{R}^i_\gamma(x) = \mathbb{B}(x) - \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)], \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x),

    we get

    \begin{align} g_i(x) = \langle \mathbb{N}_i(u, v), \mathbf{R}^i_\gamma(x)\rangle + f_i(\mathbb{B}(x)) &- f_i(\mathbb{B}(x) - \mathbf{R}^i_\gamma(x))\\ & - \dfrac{1}{2\gamma}\|\mathbf{R}^i_\gamma(x)\|^2, \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x), \end{align} (4.3)

    and so

    \phi_\gamma(x) = \min\limits_{1\leq i\leq m}\big\{\langle \mathbb{N}_i(u, v), \mathbf{R}^i_\gamma(x)\rangle + f_i(\mathbb{B}(x)) - f_i(\mathbb{B}(x) - \mathbf{R}^i_{\gamma}(x)) - \dfrac{1}{2\gamma}\|\mathbf{R}^i_\gamma(x)\|^2\big\}.

    From the definition of projection \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)], we have

    \begin{align} \langle \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)& - \gamma \mathbb{N}_i(u, v)]- \mathbb{B}(x) + \gamma \mathbb{N}_i(u, v), y -\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)]\rangle\\ &+ \gamma f_i(y) - \gamma f_i(\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \gamma \mathbb{N}_i(u, v)])\geq 0, \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x). \end{align} (4.5)

    For any x\in \mathbb{B}^{-1}(\Omega), we have

    \mathbb{B}(x) \in \Omega(x).

    Therefore, putting y = \mathbb{B}(x) in (4.5), we get

    \langle \gamma \mathbb{N}_i(u, v) - \mathbf{R}^i_\gamma(x), \mathbf{R}^i_\gamma(x)\rangle + \gamma f_i(\mathbb{B}(x))- \gamma f_i(\mathbb{B}(x) - \mathbf{R}^i_\gamma(x)) \geq 0, \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x),

    that is,

    \begin{align} \langle \mathbb{N}_i(u, v), \mathbf{R}^i_\gamma(x)\rangle + f_i(\mathbb{B}(x))- f_i(\mathbb{B}(x)-\mathbf{R}^i_\gamma(x)) &\geq \dfrac{1}{\gamma}\langle \mathbf{R}^i_\gamma(x), \mathbf{R}^i_\gamma(x)\rangle\\ & = \dfrac{1}{\gamma}\|\mathbf{R}^i_\gamma(x)\|^2. \end{align} (4.6)

    From the definition of r_\gamma(x) and (4.1), we get

    \phi_\gamma(x) \geq \dfrac{1}{2\gamma} r_\gamma(x)^2.

    The proof is completed.
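    As a quick sanity check (our own illustration, not from the paper), the sketch below verifies inequality (4.2) numerically on the same kind of toy instance as before (f_i \equiv 0, \Omega(x) \equiv [0, 1], \mathbb{B}(x) = x, single-valued \mathbb{F}(x) = \mathbb{P}(x) = \{x\}, m = 2), using the representation (4.1) of \phi_\gamma .

```python
# Sketch (same kind of toy instance: f_i = 0, Omega(x) = [0, 1], B(x) = x,
# F(x) = P(x) = {x}, m = 2): numerical check of inequality (4.2),
# phi_gamma(x) >= r_gamma(x)^2 / (2*gamma), on B^{-1}(Omega) = [0, 1],
# using representation (4.1) for phi_gamma.
import numpy as np

OM_LO, OM_HI = 0.0, 1.0
N = [lambda u, v: u - 0.25, lambda u, v: 2.0 * u + v - 0.9]

def R(x, i, gamma):
    return x - np.clip(x - gamma * N[i](x, x), OM_LO, OM_HI)

def r_gamma(x, gamma):
    return min(abs(R(x, i, gamma)) for i in range(len(N)))

def phi_gamma(x, gamma):
    # formula (4.1) with f_i = 0
    return min(N[i](x, x) * R(x, i, gamma) - R(x, i, gamma) ** 2 / (2.0 * gamma)
               for i in range(len(N)))

gamma = 0.8
for x in np.linspace(0.0, 1.0, 6):
    ok = phi_gamma(x, gamma) >= r_gamma(x, gamma) ** 2 / (2.0 * gamma) - 1e-12
    print(round(float(x), 2), ok)       # True at every sampled point
```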

    Theorem 4.2 For \gamma > 0, \; \phi_\gamma is a gap function for (2.1) on the set

    \mathbb{B}^{-1}(\Omega) = \{\xi \in \mathbf{R}^n\vert \mathbb{B}(\xi)\in \Omega(\xi)\}.

    Proof. From the definition of \phi_\gamma , we have

    \begin{align} \phi_\gamma(x) \geq \min\limits_{1\leq i\leq m}\big\{\langle \mathbb{N}_i(u, v), \mathbb{B}(x) - y\rangle &+ f_i(\mathbb{B}(x)) - f_i(y) -\dfrac{1}{2\gamma}\|\mathbb{B}(x)-y\|^2\big\}, \\ &\text{for all}\; \; y\in \Omega(x), u\in \mathbb{F}(x), v\in \mathbb{P}(x). \end{align} (4.7)

    Therefore, for any x\in \mathbb{B}^{-1}(\Omega), putting y = \mathbb{B}(x) in (4.7), we have

    \phi_\gamma(x) \geq 0.

    Suppose that \bar{x}\in \mathbb{B}^{-1}(\Omega) with \phi_\gamma(\bar{x}) = 0. From (4.2), it follows that

    r_\gamma(\bar{x}) = 0,

    which implies that \bar{x} is the solution of (2.1).

    Conversely, if \bar{x} is a solution of (2.1), there exists 1 \leq i_0 \leq m such that

    \langle \mathbb{N}_{i_0}(\bar{u}, \bar{v}), \mathbb{B}(\bar{x})- y\rangle + f_{i_0}(\mathbb{B}(\bar{x}))- f_{i_0}(y) \leq 0, \; \forall y \in \Omega(\bar{x}), \bar{u}\in \mathbb{F}(\bar{x}), \bar{v}\in \mathbb{P}(\bar{x}),

    which means that

    \min\limits_{1\leq i \leq m}\big\{\sup\limits_{\substack{y\in \Omega(\bar{x}), \\\bar{u}\in \mathbb{F}(\bar{x}), \bar{v}\in \mathbb{P}(\bar{x})}}\big\{\langle \mathbb{N}_i(\bar{u}, \bar{v}), \mathbb{B}(\bar{x})-y\rangle + f_i(\mathbb{B}(\bar{x})) - f_i(y)-\dfrac{1}{2\gamma}\|\mathbb{B}(\bar{x})- y\|^2\big\}\big\} \leq 0.

    Thus,

    \phi_\gamma(\bar{x})\leq 0.

    On the other hand, the first part of the proof gives

    \phi_\gamma(\bar{x}) \geq 0

    and therefore

    \phi_\gamma(\bar{x}) = 0.

    The proof is completed.

    Since \phi_\gamma is a gap function for (2.1) by Theorem 4.2, it is interesting to investigate the error bound properties that can be obtained with \phi_\gamma . The following corollary is obtained directly from Theorem 3.2 and (4.2).

    Corollary 4.3 Let \mathbb{F}, \mathbb{P}:\mathbf{R}^n \to \mathbf{R}^n be \mathcal{D} - \vartheta^\mathbb{F} -Lipschitz continuous and \mathcal{D} - \varrho^\mathbb{P} -Lipschitz continuous mappings, respectively. Let \mathbb{N}_i:\mathbf{R}^n\times \mathbf{R}^n\to \mathbf{R}^n\; (i = 1, 2, \cdots, m) be \sigma_i -Lipschitz continuous with respect to the first argument and \wp_i -Lipschitz continuous with respect to the second argument, \mathbb{B}:\mathbf{R}^n\to \mathbf{R}^n be \ell -Lipschitz continuous, and (\mathbb{N}_i, \mathbb{B}) be strongly monotone with respect to the first argument of \mathbb{N}_i and \mathbb{B} with constant \mu_i^\mathbb{B} > 0 , and relaxed monotone with respect to the second argument of \mathbb{N}_i and \mathbb{B} with constant \zeta_i^\mathbb{B} > 0. Let

    \bigcap\limits^m_{i = 1} (\mho^i)\neq \emptyset.

    Assume that there exists \kappa_i\in\left(0, \dfrac{\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}}{\vartheta^\mathbb{F}\sigma_i+\wp_i\varrho^\mathbb{P}}\right) such that

    \|\gimel^{f_i}_{\Omega(x)}z -\gimel^{f_i}_{\Omega(y)}z\|\leq \kappa_i\|x-y\|, \; \forall x, y\in \mathbf{R}^n, u\in \mathbb{F}(x), v\in \mathbb{P}(x)\; \forall z\in \{w\mid w = \mathbb{B}(x)-\gamma \mathbb{N}_i(u, v)\}.

    Then, for any x\in \mathbb{B}^{-1}(\Omega) and any

    \gamma \gt \dfrac{\kappa_i\ell}{\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}-\kappa_i(\vartheta^\mathbb{F}\sigma_i+\wp_i\varrho^\mathbb{P})},
    d(x, \mho) \leq \dfrac{\gamma(\vartheta^\mathbb{F}\sigma_i+\wp_i\varrho^\mathbb{P}) + \ell}{\gamma(\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}-\kappa_i(\vartheta^\mathbb{F}\sigma_i+\wp_i\varrho^\mathbb{P})) - \kappa_i\ell}\; \sqrt{2\gamma \phi_\gamma(x)}.

    The regularized gap function \phi_\gamma does not provide global error bounds for (2.1) on \mathbf{R}^n. In this section, we therefore discuss the D-gap function (see [6]) for (2.1), which yields a global error bound for (2.1) on \mathbf{R}^n.

    For (2.1) with \curlywedge > \curlyvee > 0, the D -gap function is defined as follows:

    \begin{align*} G_{\curlywedge\curlyvee}(x) & = \min\limits_{1\leq i\leq m}\big\{\sup\limits_{\substack{y\in \Omega(x), \\ u\in \mathbb{F}(x), v\in \mathbb{P}(x)}}\big\{\langle \mathbb{N}_i(u, v), \mathbb{B}(x)-y\rangle + f_i(\mathbb{B}(x))-f_i(y)- \dfrac{1}{2\curlywedge}\|\mathbb{B}(x)-y\|^2\big\}\\ &\quad -\sup\limits_{\substack{y\in \Omega(x)\\u\in \mathbb{F}(x), v\in \mathbb{P}(x)}}\big\{\langle \mathbb{N}_i(u, v), \mathbb{B}(x)-y\rangle + f_i(\mathbb{B}(x))-f_i(y)-\dfrac{1}{2\curlyvee}\|\mathbb{B}(x)-y\|^2\big\}\big\}. \end{align*}

    From (4.1), we know G_{\curlywedge\curlyvee} can be rewritten as

    \begin{align*} G_{\curlywedge\curlyvee}(x) & = \min\limits_{1\leq i\leq m}\big\{\langle \mathbb{N}_i(u, v), \mathbf{R}^i_\curlywedge(x)\rangle + f_i(\mathbb{B}(x)) - f_i(\mathbb{B}(x)- \mathbf{R}^i_{\curlywedge} (x))-\dfrac{1}{2\curlywedge}\|\mathbf{R}^i_{\curlywedge}(x)\|^2\\ &\quad-\big(\langle \mathbb{N}_i(u, v), \mathbf{R}^i_{\curlyvee}(x)\rangle + f_i(\mathbb{B}(x))-f_i(\mathbb{B}(x)-\mathbf{R}^i_{\curlyvee}(x)) -\dfrac{1}{2\curlyvee}\|\mathbf{R}^i_{\curlyvee}(x)\|^2\big)\big\}, \end{align*}

    where

    \mathbf{R}^i_\curlywedge(x) = \mathbb{B}(x)-\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlywedge \mathbb{N}_i(u, v)]

    and

    \mathbf{R}^i_{\curlyvee}(x) = \mathbb{B}(x)-\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlyvee \mathbb{N}_i(u, v)], \; \forall x \in \mathbf{ R}^n, \; u\in \mathbb{F}(x), v\in \mathbb{P}(x).

    Theorem 5.1 For any x\in \mathbf{R}^n, \; \curlywedge > \curlyvee > 0, we have

    \begin{equation} \dfrac{1}{2}\left(\dfrac{1}{\curlyvee}-\dfrac{1}{\curlywedge}\right)r^2_\curlyvee(x) \leq G_{\curlywedge\curlyvee}(x) \leq \dfrac{1}{2}\left(\dfrac{1}{\curlyvee} - \dfrac{1}{\curlywedge}\right) r^2_\curlywedge(x). \end{equation} (5.1)

    Proof. From the definition of G_{\curlywedge\curlyvee}(x), it follows that

    \begin{align*} G_{\curlywedge\curlyvee}(x) & = \min\limits_{1\leq i\leq m}\big\{\langle \mathbb{N}_i(u, v), \mathbf{R}^i_{\curlywedge}(x)-\mathbf{R}^i_\curlyvee(x)\rangle - f_i(\mathbb{B}(x) - \mathbf{R}^i_\curlywedge(x))\\ &-\dfrac{1}{2\curlywedge}\|\mathbf{R}^i_\curlywedge(x)\|^2 + f_i(\mathbb{B}(x) - \mathbf{R}^i_\curlyvee(x)) + \dfrac{1}{2\curlyvee}\|\mathbf{R}^i_\curlyvee(x)\|^2 \big\}, \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x). \end{align*}

    For any given i \in \{1, 2, \cdots, m\}, we set

    \begin{align} g^i_{\curlywedge\curlyvee}(x) & = \langle \mathbb{N}_i(u, v), \mathbf{R}^i_\curlywedge(x) - \mathbf{R}^i_\curlyvee(x)\rangle - f_i(\mathbb{B}(x) - \mathbf{R}^i_\curlywedge(x)) - \dfrac{1}{2\curlywedge}\|\mathbf{R}^i_\curlywedge(x)\|^2\\ &+ f_i(\mathbb{B}(x)-\mathbf{R}^i_\curlyvee(x)) + \dfrac{1}{2\curlyvee}\|\mathbf{R}^i_\curlyvee(x)\|^2, \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x). \end{align} (5.2)

    From \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \curlyvee \mathbb{N}_i(u, v)] \in \Omega(x), by Lemma 2.4, we know

    \langle\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlywedge \mathbb{N}_i(u, v)] - (\mathbb{B}(x) - \curlywedge \mathbb{N}_i(u, v)), \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \curlyvee \mathbb{N}_i(u, v)]-\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlywedge \mathbb{N}_i(u, v)]\rangle
    + \curlywedge f_i(\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlyvee \mathbb{N}_i(u, v)])- \curlywedge f_i(\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x) - \curlywedge \mathbb{N}_i(u, v)])\geq 0, \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x)

    which means that

    \begin{equation} \langle \curlywedge \mathbb{N}_i(u, v) - \mathbf{R}^i_\curlywedge(x), \mathbf{R}^i_\curlywedge(x) - \mathbf{R}^i_{\curlyvee}(x)\rangle + \curlywedge f_i(\mathbb{B}(x)-\mathbf{R}^i_\curlyvee(x)) - \curlywedge f_i(\mathbb{B}(x) - \mathbf{R}^i_\curlywedge(x)) \geq 0. \end{equation} (5.3)

    Combining (5.2) and (5.3), we get

    \begin{align} g^i_{\curlywedge\curlyvee}(x) &\geq \dfrac{1}{\curlywedge}\langle \mathbf{R}^i_\curlywedge(x), \mathbf{R}^i_{\curlywedge}(x) - \mathbf{R}^i_\curlyvee(x)\rangle - \dfrac{1}{2\curlywedge} \|\mathbf{R}^i_\curlywedge(x)\|^2 + \dfrac{1}{2\curlyvee}\|\mathbf{R}^i_{\curlyvee}(x)\|^2\\ & = \dfrac{1}{2\curlywedge}\|\mathbf{R}^i_\curlywedge(x) - \mathbf{R}^i_\curlyvee(x)\|^2 + \dfrac{1}{2}\left( \dfrac{1}{\curlyvee}-\dfrac{1}{\curlywedge} \right)\|\mathbf{R}^i_\curlyvee(x)\|^2. \end{align} (5.4)

    Since

    \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlywedge \mathbb{N}_i(u, v)] \in \Omega(x),

    from Lemma 2.4, we have

    \langle \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlyvee \mathbb{N}_i(u, v)]-(\mathbb{B}(x) - \curlyvee \mathbb{N}_i(u, v)), \gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlywedge \mathbb{N}_i(u, v)] -\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlyvee \mathbb{N}_i(u, v)]\rangle
    + \curlyvee f_i(\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlywedge \mathbb{N}_i(u, v)]) - \curlyvee f_i(\gimel^{f_i}_{\Omega(x)}[\mathbb{B}(x)-\curlyvee \mathbb{N}_i(u, v)])\geq0, \; \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x).

    Hence

    \langle \curlyvee \mathbb{N}_i(u, v)-\mathbf{R}^i_\curlyvee(x), \mathbf{R}^i_{\curlyvee}(x) -\mathbf{R}^i_{\curlywedge}(x)\rangle + \curlyvee f_i(\mathbb{B}(x) - \mathbf{R}^i_\curlywedge(x))
    - \curlyvee f_i(\mathbb{B}(x) - \mathbf{R}^i_\curlyvee(x)) \geq 0, \forall u\in \mathbb{F}(x), v\in \mathbb{P}(x)

    and so

    \begin{align*} \dfrac{1}{\curlyvee}\langle \mathbf{R}^i_\curlyvee(x), \mathbf{R}^i_\curlywedge(x) - \mathbf{R}^i_\curlyvee(x)\rangle &\geq \langle \mathbb{N}_i(u, v), \mathbf{R}^i_\curlywedge(x) - \mathbf{R}^i_\curlyvee(x)\rangle\\ &\quad- f_i(\mathbb{B}(x) - \mathbf{R}^i_\curlywedge(x)) + f_i(\mathbb{B}(x) - \mathbf{R}^i_\curlyvee(x)). \end{align*}

    This, together with (5.2), implies

    \begin{align} g^i_{\curlywedge\curlyvee}(x) &\leq \dfrac{1}{\curlyvee}\langle \mathbf{R}^i_\curlyvee(x), \mathbf{R}^i_\curlywedge(x) - \mathbf{R}^i_\curlyvee(x)\rangle - \dfrac{1}{2\curlywedge} \|\mathbf{R}^i_\curlywedge(x)\|^2 + \dfrac{1}{2\curlyvee}\|\mathbf{R}^i_\curlyvee(x)\|^2\\ & = - \dfrac{1}{2\curlyvee}\|\mathbf{R}^i_\curlywedge(x) - \mathbf{R}^i_\curlyvee(x)\|^2+\dfrac{1}{2}\left(\dfrac{1}{\curlyvee} - \dfrac{1}{\curlywedge}\right)\|\mathbf{R}^i_\curlywedge(x)\|^2. \end{align} (5.5)

    From (5.4) and (5.5), for any i \in \{1, 2, \cdots, m\}, we get

    \dfrac{1}{2}\left(\dfrac{1}{\curlyvee} -\dfrac{1}{\curlywedge}\right)\|\mathbf{R}^i_\curlyvee(x)\|^2 \leq g^i_{\curlywedge\curlyvee}(x) \leq \dfrac{1}{2}\left(\dfrac{1}{\curlyvee}- \dfrac{1}{\curlywedge}\right)\|\mathbf{R}^i_\curlywedge(x)\|^2.

    Hence

    \dfrac{1}{2}\left(\dfrac{1}{\curlyvee} -\dfrac{1}{\curlywedge}\right)\min\limits_{1\leq i\leq m}\big\{\|\mathbf{R}^i_\curlyvee(x)\|^2\big\} \leq \min\limits_{1\leq i\leq m}\big\{g^i_{\curlywedge\curlyvee}(x)\big\} \leq \dfrac{1}{2}\left(\dfrac{1}{\curlyvee}- \dfrac{1}{\curlywedge}\right)\min\limits_{1\leq i\leq m}\left\{\|\mathbf{R}^i_\curlywedge(x)\|^2\right\},

    and so

    \dfrac{1}{2}\left(\dfrac{1}{\curlyvee} - \dfrac{1}{\curlywedge}\right)r^2_\curlyvee(x) \leq G_{\curlywedge\curlyvee}(x) \leq \dfrac{1}{2}\left(\dfrac{1}{\curlyvee} - \dfrac{1}{\curlywedge}\right)r^2_\curlywedge(x).

    The proof is completed.
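    The sandwich inequality (5.1) can also be checked numerically. The sketch below does so for a hypothetical instance (f_i \equiv 0, \Omega(x) \equiv [0, 1], \mathbb{B}(x) = x, single-valued \mathbb{F}(x) = \mathbb{P}(x) = \{x\}, m = 2), using the rewritten form of G_{\curlywedge\curlyvee} given above; all instance data are illustrative assumptions.

```python
# Sketch (illustrative instance: f_i = 0, Omega(x) = [0, 1], B(x) = x,
# F(x) = P(x) = {x}, m = 2): numerical check of the sandwich inequality (5.1)
# for the D-gap function, with curlywedge = 1.5 > curlyvee = 0.5, using the
# rewritten form of G given above.
import numpy as np

OM_LO, OM_HI = 0.0, 1.0
N = [lambda u, v: u - 0.25, lambda u, v: 2.0 * u + v - 0.9]

def R(x, i, c):
    return x - np.clip(x - c * N[i](x, x), OM_LO, OM_HI)

def r(x, c):
    return min(abs(R(x, i, c)) for i in range(len(N)))

def D_gap(x, wedge, vee):
    return min(N[i](x, x) * (R(x, i, wedge) - R(x, i, vee))
               - R(x, i, wedge) ** 2 / (2.0 * wedge)
               + R(x, i, vee) ** 2 / (2.0 * vee) for i in range(len(N)))

wedge, vee = 1.5, 0.5
coef = 0.5 * (1.0 / vee - 1.0 / wedge)
for x in [-0.5, 0.1, 0.25, 0.6, 1.3]:
    lo = coef * r(x, vee) ** 2
    hi = coef * r(x, wedge) ** 2
    mid = D_gap(x, wedge, vee)
    print(x, lo <= mid + 1e-12, mid <= hi + 1e-12)   # both True at every point
```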

    Now we are in a position to prove that G_{\curlywedge\curlyvee} is a global gap function for (2.1) on the set \mathbf{R}^n.

    Theorem 5.2 For 0 < \curlyvee < \curlywedge , G_{\curlywedge\curlyvee} is a gap function for (2.1) on \mathbf{R}^n.

    Proof. From (5.1), we have

    G_{\curlywedge\curlyvee}(x) \geq 0, \; \; \forall x \in \mathbf{R}^n.

    Suppose that \bar{x} \in \mathbf{R}^n with

    G_{\curlywedge\curlyvee}(\bar{x}) = 0,

    then (5.1) implies that

    r_\curlyvee(\bar{x}) = 0.

    From Theorem 3.1, we know \bar{x} is a solution of (2.1).

    Conversely, if \bar{x} is a solution of (2.1), then from Theorem 3.1, it follows that

    r_\curlywedge(\bar{x}) = 0.

    Obviously, (5.1) shows that

    G_{\curlywedge\curlyvee}(\bar{x}) = 0.

    The proof is completed.

    Using Theorem 3.2 and (5.1), we immediately get a global error bound for (2.1) on \mathbf{R}^n.

    Corollary 5.3 Let \mathbb{F}, \mathbb{P}:\mathbf{R}^n \to \mathbf{R}^n be \mathcal{D} - \vartheta^\mathbb{F} -Lipschitz continuous and \mathcal{D} - \varrho^\mathbb{P} -Lipschitz continuous mappings, respectively. Let \mathbb{N}_i:\mathbf{R}^n\times \mathbf{R}^n\to \mathbf{R}^n\; (i = 1, 2, \cdots, m) be \sigma_i -Lipschitz continuous with respect to the first argument and \wp_i -Lipschitz continuous with respect to the second argument, and \mathbb{B}:\mathbf{R}^n\to \mathbf{R}^n be \ell -Lipschitz continuous. Let (\mathbb{N}_i, \mathbb{B}) be strongly monotone with respect to the first argument of \mathbb{N}_i and \mathbb{B} with constant \mu_i^\mathbb{B} and relaxed monotone with respect to the second argument of \mathbb{N}_i and \mathbb{B} with constant \zeta_i^\mathbb{B}. Let

    \bigcap\limits^m_{i = 1} (\mho^i)\neq \emptyset.

    Assume that there exists \kappa_i \in \left(0, \dfrac{\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}}{\vartheta^\mathbb{F}\sigma_i+\wp_i\varrho^\mathbb{P}}\right) such that

    \|\gimel^{f_i}_{\Omega(x)}z -\gimel^{f_i}_{\Omega(y)}z\|\leq \kappa_i\|x - y\|, \forall x, y \in \mathbf{R}^n, u\in \mathbb{F}(x), v\in \mathbb{P}(x), \; z \in\{w\mid w = \mathbb{B}(x) - \curlyvee \mathbb{N}_i(u, v)\}.

    Then, for any x \in \mathbf{R}^n and

    \curlyvee \gt \dfrac{\kappa_i\ell}{\mu_i^\mathbb{B}-\zeta_i^\mathbb{B}-\kappa_i(\vartheta^\mathbb{F} \sigma_i+\wp_i\varrho^\mathbb{P})},
    d(x, \mho^i) \leq \dfrac{\curlyvee(\vartheta^\mathbb{F} \sigma_i+\varrho^\mathbb{P}\wp_i) + \ell}{\curlyvee(\mu_i^\mathbb{B}-\zeta_i^\mathbb{B} - \kappa_i(\vartheta^\mathbb{F}\sigma_i+\varrho^\mathbb{P}\wp_i)) - \kappa_i\ell}\sqrt{\dfrac{2\curlywedge\curlyvee}{\curlywedge - \curlyvee} G_{\curlywedge\curlyvee}(x)}.

    One of the traditional approaches to solving a variational inequality (VI) and its variants is to transform it into an equivalent optimization problem by means of a gap function. In addition, gap functions play a pivotal role in deriving the so-called error bounds, which provide a measure of the distance between the solution set and an arbitrary feasible point. Motivated and inspired by the research going on in this direction, the main purpose of this paper has been to further study the generalized vector inverse quasi-variational inequality problem (2.1) and to obtain error bounds in terms of the residual gap function, the regularized gap function, and the global gap function by utilizing relaxed monotonicity and Hausdorff Lipschitz continuity. These error bounds provide effective estimates of the distance between an arbitrary feasible point and the solution set of (2.1).

    The authors are very grateful to the referees for their careful reading, comments and suggestions, which improved the presentation of this article.

    This work was supported by the Scientific Research Fund of the Science and Technology Department of Sichuan Province (2018JY0340, 2018JY0334) and the Scientific Research Fund of the Sichuan Provincial Education Department (16ZA0331).

    The authors declare that they have no competing interests.



    [1] X. Li, X. S. Li, N. J. Huang, A generalized f-projection algorithm for inverse mixed variational inequalities, Optim. Lett., 8 (2014), 1063-1076. doi: 10.1007/s11590-013-0635-4
    [2] G. Y. Chen, C. J. Goh, X. Q. Yang, On gap functions for vector variational inequalities. In: F. Giannessi, (ed.), Vector variational inequalities and vector equilibria: mathematical theories, Kluwer Academic Publishers, Boston, 2000.
    [3] X. Q. Yang, J. C. Yao, Gap functions and existence of solutions to set-valued vector variational inequalities, J. Optim. Theory Appl., 115 (2002), 407-417. doi: 10.1023/A:1020844423345
    [4] Salahuddin, Regularization techniques for inverse variational inequalities involving relaxed cocoercive mapping in Hilbert spaces, Nonlinear Anal. Forum, 19 (2014), 65-76.
    [5] S. J. Li, H. Yan, G. Y. Chen, Differential and sensitivity properties of gap functions for vector variational inequalities, Math. Methods Oper. Res., 57 (2003), 377-391. doi: 10.1007/s001860200254
    [6] M. V. Solodov, Merit functions and error bounds for generalized variational inequalities, J. Math. Anal. Appl., 287 (2003), 405-414. doi: 10.1016/S0022-247X(02)00554-1
    [7] D. Aussel, R. Gupta, A. Mehra, Gap functions and error bounds for inverse quasi-variational inequality problems, J. Math. Anal. Appl., 407 (2013), 270-280. doi: 10.1016/j.jmaa.2013.03.049
    [8] B. S. Lee, Salahuddin, Minty lemma for inverted vector variational inequalities, Optimization, 66 (2017), 351-359. doi: 10.1080/02331934.2016.1271799
    [9] J. Chen, E. Kobis, J. C. Yao, Optimality conditions for solutions of constrained inverse vector variational inequalities by means of nonlinear scalarization, J. Nonlinear Var. Anal., 1 (2017), 145-158.
    [10] X. K. Sun, Y. Chai, Gap functions and error bounds for generalized vector variational inequalities, Optim. Lett., 8 (2014), 1663-1673. doi: 10.1007/s11590-013-0685-7
    [11] K. Q. Wu, N. J. Huang, The generalised f-projection operator with an application, Bull. Aust. Math. Soc., 73 (2006), 307-317. doi: 10.1017/S0004972700038892
    [12] C. Q. Li, J. Li, Merit functions and error bounds for constrained mixed set-valued variational inequalities via generalized f-projection operators, Optimization, 65 (2016), 1569-1584. doi: 10.1080/02331934.2016.1163555
    [13] S. S. Chang, Salahuddin, L. Wang, G. Wang, Z. L. Ma, Error bounds for mixed set-valued vector inverse quasi-variational inequalities, J. Inequal. Appl., 2020:160 (2020), 1-16.
    [14] Z. B. Wang, Z. Y. Chen, Z. Chen, Gap functions and error bounds for vector inverse mixed quasi-variational inequality problems, Fixed Point Theory Appl., 2019:14 (2019), 1-14. doi: 10.17654/FP014010001
    [15] F. Giannessi, Vector variational inequalities and vector equilibria: mathematical theories, Kluwer Academic Publishers/Boston/London, Dordrecht, 2000.
    [16] Salahuddin, Solutions of vector variational inequality problems, Korean J. Math., 26 (2018), 299-306.
    [17] X. Li, Y. Z. Zou, Existence result and error bounds for a new class of inverse mixed quasi-variational inequalities, J. Inequal. Appl., 2016 (2016), 1-13. doi: 10.1186/s13660-015-0952-5
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
