
In this paper, an efficient artificial neural network is proposed for solving a generalized vertical complementarity problem. Based on the properties of the log-exponential function, the generalized vertical complementarity problem is reformulated as an unconstrained minimization problem. The existence and convergence of the trajectory of the neural network are addressed in detail. In addition, it is proved that if the neural network has an equilibrium point under some initial condition, then the equilibrium point is asymptotically stable or exponentially stable under certain conditions. At the end of the paper, simulation results for a generalized bimatrix game are presented to show the efficiency of the neural network.
Citation: Bin Hou, Jie Zhang, Chen Qiu. A neural network for a generalized vertical complementarity problem[J]. AIMS Mathematics, 2022, 7(4): 6650-6668. doi: 10.3934/math.2022371
In this paper, a generalized vertical complementarity problem is investigated as follows: find a vector x∈Rn such that
min{Fi1(x),Fi2(x),⋯,Fil(x)}=0,i=1,2,⋯,n, | (1.1) |
where Fij:Rn→R,i=1,2,⋯,n,j=1,2,⋯,l are twice continuously differentiable functions.
The finite-dimensional complementarity problem and its applications in engineering, economics, game theory and networks have become a mature and fruitful area of mathematical programming. When the functions Fij(⋅), i=1,2,⋯,n, j=1,2,⋯,l, in (1.1) are linear, problem (1.1) becomes a generalized vertical linear complementarity problem (GVLCP), which was first proposed in [1]. If, in addition, Fi1(x)=xi, i=1,2,⋯,n, then (1.1) is a vertical linear complementarity problem (VLCP). Several methods are available for solving the VLCP and the GVLCP, such as smoothing Newton methods [2,3], continuation methods [4,5], projected splitting methods [6], and algorithms based on formulating equivalent complementarity problems [7,8,9,10]; other relevant works include [1,11,12,13,14,15,16]. When l=2 and Fi1(x)=xi, i=1,2,⋯,n, (1.1) is a nonlinear complementarity problem (NCP) as in [17,18]. The introduction of [18] surveys well-known approaches to the NCP, including the merit function approach, nonsmooth Newton methods, smoothing methods, the regularization approach, interior-point methods and proximal point algorithms. If, in addition, Fi2(x), i=1,2,⋯,n, are linear, problem (1.1) is a linear complementarity problem (LCP) as in [19,20,21]; smoothing Newton methods, projected methods and many modulus-based algorithms have been proposed for the LCP, see [20,21] and the references therein. If an optimization problem contains the generalized vertical complementarity problem as a constraint, it is called a mathematical program with general vertical complementarity constraints (MPVCC) [22,23,24,25]; such problems arise in equilibrium systems, engineering design, optimal control and game theory [26].
Traditional optimization research was largely confined to theoretical investigation and numerical implementation. Since the 1980s, however, neural networks have been used to solve optimization problems [27,28,29]. The LCP and the NCP were studied through neural networks in [30] and [31], respectively, and another neural network was proposed in [19] for solving the LCP. Neural networks remain popular, and many complex optimization problems have been solved with them in recent years [18,32,33,34,35,36,37,38,39,40]. As shown in [30], the significant and unique feature of neural networks for optimization is that they admit simple, real-time hardware implementation.
In this paper, we focus on solving the generalized vertical complementarity problem (1.1) through neural networks. First, based on the properties of the log-exponential function, we reformulate problem (1.1) as an unconstrained minimization problem and study the consistency of this reformulation. A neural network is then used to solve the unconstrained minimization problem, and under certain conditions its equilibrium point is asymptotically stable or exponentially stable. The results are finally applied to an example of a generalized bimatrix game.
The main contributions of this paper are as follows. First, different from the neural networks in [30] and [31], the proposed neural network is constructed from a smoothing reformulation of the nonsmooth problem (1.1), which avoids computing subgradients in the nonsmooth case. Second, conditions ensuring consistency of the equilibrium point of the neural network with the solution of (1.1) are provided. Moreover, asymptotic stability and exponential stability of the equilibrium point of the differential equation with the initial condition are established under certain conditions. Third, the neural network is used to solve a generalized vertical complementarity problem arising from a generalized bimatrix game.
The main difficulties are as follows. First, the consistency proof combines the definition of convergence of a sequence of sets, the properties of the log-exponential function and the properties of the solution mapping, which requires rather delicate techniques from variational analysis. Second, in approximating the original problem, Φ is smooth when α>0 but nonsmooth when α=0; to guarantee convergence of solutions, both cases must be examined to complete the consistency analysis linking the original problem (1.1), the approximating problem (3.3) and the differential Eq (3.5). Third, in the study of the stability of the neural network in the Lyapunov sense, several definitions and theorems on stability are invoked, and differential equation techniques are needed to combine them into the desired conclusions.
This paper is organized as follows: in Section 2, some preliminary knowledge is provided. In Section 3, we reformulate the complementarity problem as an unconstrained minimization problem and construct a neural network. The consistency and stability results are discussed in Sections 4 and 5, respectively. Simulation results are presented in Section 6. Finally, some concluding remarks are drawn in Section 7.
In this section, some background material is provided.
Let ‖⋅‖ denote the Euclidean norm of a vector or the Frobenius norm of a matrix. Let B denote the closed unit ball and B(x,δ) the closed ball of radius δ>0 around x. Let ∇ϕ(x) denote the gradient of ϕ : Rn→R at x. For a mapping φ : Rn→Rm, Jφ(x) denotes the Jacobian of φ at x. The notion of the Clarke generalized Jacobian [41] is as follows.
Definition 2.1. Let F : Rn→Rm be locally Lipschitz at ˉx∈Rn. By Rademacher's theorem, F is differentiable almost everywhere. Let ωF denote the set of points where F is differentiable. The B-subdifferential of F at ˉx is defined as
∂BF(ˉx)={ limi→∞ JF(xi) ∣ xi→ˉx, xi∈ ωF}, |
and the Clarke subdifferential of F at ˉx as
∂cF(ˉx)=co ∂BF(ˉx), |
where co denotes the convex hull of a set.
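For illustration (an example we add, not taken from [41]): for h(x)=min{x1,x2} on R2 and a point ˉx with ˉx1=ˉx2, the two smooth pieces give ∂Bh(ˉx)={(1,0)T,(0,1)T}, and hence ∂ch(ˉx)={(λ,1−λ)T : λ∈[0,1]}; this is exactly the situation met below for the min-function ϕ0.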
The log-exponential function is a smoothing function for max-type functions. Let V : Rn→R be defined by
V(x)=max{v1(x),v2(x),⋯,vm(x)}, |
where vi, i=1,2,⋯,m, are continuously differentiable functions. Notice that V(x) is continuous on Rn but not necessarily differentiable everywhere. The log-exponential function is defined as follows.
Definition 2.2. [42,43] For any α>0, the log-exponential function of V(x), denoted as ˜V(α,x):Rn+1→R, is defined by
˜V(α,x)=αln(m∑i=1exp(vi(x)/α)), | (2.1) |
Notice that,
0≤˜V(α,x)−V(x)≤αlnm, | (2.2) |
which implies limα→0˜V(α,x)=V(x), and the convergence is uniform with respect to x. From the definition, we know that ˜V(α,x) is a smoothing function with respect to x for any α>0.
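The bound (2.2) follows from exp(V(x)/α)≤∑mi=1exp(vi(x)/α)≤m exp(V(x)/α). The short Python sketch below is an illustration we add (the function and variable names are ours): it evaluates ˜V(α,x) with the usual max-shift for numerical stability and checks the bound on toy values.

```python
import numpy as np

def log_exp_smooth(alpha, values):
    """Log-exponential smoothing (2.1) of max{v_1,...,v_m}, computed with a
    shift by the maximum to avoid overflow."""
    v = np.asarray(values, dtype=float)
    vmax = v.max()
    return vmax + alpha * np.log(np.sum(np.exp((v - vmax) / alpha)))

vals = [1.0, 2.5, 2.5]          # toy values v_i(x) at a fixed point x
V = max(vals)
for alpha in [1.0, 0.1, 0.01]:
    Vt = log_exp_smooth(alpha, vals)
    # Bound (2.2): 0 <= Vt - V <= alpha * ln(m), here m = 3.
    print(alpha, Vt - V, alpha * np.log(len(vals)))
```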
Definition 2.3. [43] For sets Xn and X in Rn with X closed, the sequence {Xn}n∈N is said to converge to X if
lim supn→∞Xn⊆X⊆lim infn→∞Xn |
with
lim supn→∞Xn:={x ∣ ∃ N∈N#∞, ∃ xn∈Xn, n∈N, such that xn→x along N},
lim infn→∞Xn:={x ∣ ∃ N∈N∞, ∃ xn∈Xn, n∈N, such that xn→x along N},
where N∞={N⊆N ∣ N∖N finite}, N#∞={N⊆N ∣ N infinite} and N denotes the set of all positive integers.
Definition 2.4. [44] Let h : Rn→R be locally Lipschitz. We call ˜h : Rn×(0,+∞)→R a smoothing function of h, if ˜h satisfies the following conditions:
1) For any fixed μ∈(0,+∞), ˜h(⋅,μ) is continuously differentiable in Rn, and for any fixed x∈Rn, ˜h(x,⋅) is differentiable in (0,+∞).
2) For any fixed x∈Rn, limμ→0˜h(x,μ)=h(x).
3) {limz→x,μ→0∇z˜h(z,μ)}⊆∂h(x).
4) There is a constant κ>0 such that |∇μ˜h(x,μ)|≤κ for all x∈Rn and μ∈(0,+∞).
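In particular, the log-exponential function (2.1) is a smoothing function of V in the sense of this definition; the following short calculation (added here for completeness) verifies condition 4) with κ=ln m:

$$\nabla_\alpha\tilde V(\alpha,x)=\ln\Big(\sum_{i=1}^{m}e^{v_i(x)/\alpha}\Big)-\frac{1}{\alpha}\sum_{i=1}^{m}\eta_i v_i(x)=-\sum_{i=1}^{m}\eta_i\ln\eta_i\in[0,\ln m],\qquad \eta_i=\frac{e^{v_i(x)/\alpha}}{\sum_{j=1}^{m}e^{v_j(x)/\alpha}},$$

since −∑iηi ln ηi is the entropy of the weight vector η, which is bounded by ln m; hence |∇α˜V(α,x)|≤ln m.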
Some fundamental definitions about differential equations from [45] are reviewed next. Consider the following differential equation
d(x(t))dt=H(x(t)), x(t0)=x0∈Rn. | (2.3) |
Definition 2.5. We say that x∗=x(t∗) is an equilibrium point (or steady state) of the dynamical system (2.3) if H(x∗)=0. Moreover, x∗ is called an isolated equilibrium point if there exists a neighbourhood Ω⊆Rn of x∗ such that H(x∗)=0 but H(x)≠0 for x∈Ω∖{x∗}.
Definition 2.6. An isolated equilibrium point x∗ of (2.3) is asymptotically stable if it is stable in the sense of Lyapunov and there exists a δ>0 such that for any maximal solution x(t), t∈[t0,t1), with ‖x0−x∗‖<δ, one has t1=+∞ and limt→∞x(t)=x∗.
Definition 2.7. An isolated equilibrium point x∗ of (2.3) is said to be exponentially stable if there exist ω<0, κ>0 and δ>0 such that every solution x(t) with initial condition x(t0)=x0, ‖x0−x∗‖<δ, is defined on [t0,+∞) and satisfies
‖x(t)−x∗‖≤κeωt‖x0−x∗‖, t≥t0. |
Definition 2.8. [46] Let L={1,2,⋯,n}. The function F : Rn→Rn is said to be a P0 function if for all x,y∈Rn with x≠y
maxxi≠yi(xi−yi)[Fi(x)−Fi(y)]≥0. |
Let Fj(x)=(F1j(x),F2j(x),⋯,Fnj(x))T, j=1,2,⋯,l. If for each x∈Rn and i∈L, Fi(x)∈{Fi1(x),Fi2(x),⋯,Fil(x)}, then F is a row representative of {F1,F2,⋯,Fl}. We say that {F1,F2,⋯,Fl} has the P0 property if every row representative of {F1,F2,⋯,Fl} is a P0 function.
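For linear families Fj(x)=Mjx, the P0 property can be checked by brute force on small instances: every row representative is a matrix obtained by picking, row by row, the corresponding row of one of the Mj, and a matrix is a P0 matrix exactly when all of its principal minors are nonnegative. The sketch below is ours, for illustration only; the matrices M1, M2 are arbitrary small examples.

```python
import itertools
import numpy as np

def is_P0_matrix(M, tol=1e-12):
    """A square matrix is P0 iff every principal minor is nonnegative."""
    n = M.shape[0]
    return all(np.linalg.det(M[np.ix_(idx, idx)]) >= -tol
               for k in range(1, n + 1)
               for idx in itertools.combinations(range(n), k))

def has_P0_property(mats):
    """Check every row representative of the family {M_1,...,M_l}."""
    n = mats[0].shape[0]
    for choice in itertools.product(range(len(mats)), repeat=n):
        # Row i of the representative R is row i of the chosen matrix M_{choice[i]}.
        R = np.vstack([mats[j][i, :] for i, j in enumerate(choice)])
        if not is_P0_matrix(R):
            return False
    return True

M1 = np.array([[2.0, 1.0], [0.0, 4.0]])   # small example matrices (ours)
M2 = np.array([[1.0, 0.0], [3.0, 2.0]])
print(has_P0_property([M1, M2]))          # True for this particular pair
```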
In this section, (1.1) is approximated as an unconstrained minimization problem by using the log-exponential function. Then, the neural network in the forms of differential equation is proposed for solving the unconstrained minimization problem.
For convenience, in this paper we only consider (1.1) for l=3, that is, finding a vector x∈Rn such that
min{Fk1(x),Fk2(x),Fk3(x)}=0,k=1,2,⋯,n. | (3.1) |
Notice that for each k, min{Fk1(x),Fk2(x),Fk3(x)}=0 can be approximated by the following equation:
ϕα(Fk1(x),Fk2(x),Fk3(x))=0, |
in the sense that
limα→0+ϕα(Fk1(x),Fk2(x),Fk3(x))=ϕ0(Fk1(x),Fk2(x),Fk3(x)), | (3.2) |
where
$$\phi_\alpha(a_1,a_2,a_3)=\begin{cases}-\alpha\ln\left(\displaystyle\sum_{i=1}^{3}\exp(-a_i/\alpha)\right), & \alpha>0,\\[4pt] \min\{a_1,a_2,a_3\}, & \alpha=0,\end{cases}$$
for numbers ai∈R, i=1,2,3.
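Note (a remark we add for later use) that ϕα(a1,a2,a3) is just the negative of the log-exponential function (2.1) applied to the values −a1,−a2,−a3 with m=3, so the bound (2.2) translates into

$$\min\{a_1,a_2,a_3\}-\alpha\ln 3\ \le\ \phi_\alpha(a_1,a_2,a_3)\ \le\ \min\{a_1,a_2,a_3\},\qquad \alpha>0,$$

and the convergence in (3.2) is uniform in (a1,a2,a3).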
Then we obtain the following approximation of problem (3.1):
$$\Phi(x,\alpha)=\begin{pmatrix}\phi_\alpha(F_{11}(x),F_{12}(x),F_{13}(x))\\ \phi_\alpha(F_{21}(x),F_{22}(x),F_{23}(x))\\ \vdots\\ \phi_\alpha(F_{n1}(x),F_{n2}(x),F_{n3}(x))\end{pmatrix}=0.\qquad (3.3)$$
Suppose that xα satisfies Φ(xα,α)=0 for some α>0. Then one has that ϕα(Fk1(x),Fk2(x),Fk3(x)), k=1,2,⋯,n is twice continuously differentiable at xα and
∇xϕα(Fk1(xα),Fk2(xα),Fk3(xα))=3∑i=1ηki(xα,α)∇xFki(xα), |
where
$$\eta_{ki}(x,\alpha)=\frac{\exp(-F_{ki}(x)/\alpha)}{\sum_{j=1}^{3}\exp(-F_{kj}(x)/\alpha)}\in(0,1),\qquad i=1,2,3,$$
with ∑3i=1ηki(x,α)=1.
Notice that (3.3) has a solution if and only if the following least squares problem has zero optimal value:
minx∈Rn f(x,α)=12‖Φ(x,α)‖2. | (3.4) |
It is clear that f(x,α) is continuously differentiable.
Next, the framework of the neural network for solving problem (3.1) is given; it is based on the steepest descent method for the reformulated problem (3.4):
d(x(t))dt=−τ∇xf(x(t),α), α>0, τ>0, | (3.5) |
where τ is a scaling factor; τ>1 indicates that a longer step can be taken in the simulation. To simplify the analysis, we let τ=1 in what follows. A block diagram of the model is shown in Figure 1.
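A minimal numerical sketch of (3.5) is given below (our own illustration, not the authors' implementation). It is written for affine maps Fkj(x)=(Ajx+bj)k, forms ∇xf(x,α)=JxΦ(x,α)TΦ(x,α) from the weights ηki above, and integrates the trajectory with scipy's solve_ivp. All function, variable and parameter names are ours, and the toy data at the end is made up purely for demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_network(A_list, b_list, alpha, tau=1.0):
    """Right-hand side of the network (3.5) for affine maps F_kj(x) = (A_j x + b_j)_k."""
    def rhs(t, x):
        vals = np.column_stack([A @ x + b for A, b in zip(A_list, b_list)])   # n x l values F_kj(x)
        w = -vals / alpha
        w -= w.max(axis=1, keepdims=True)                                     # stable softmax shift
        eta = np.exp(w) / np.exp(w).sum(axis=1, keepdims=True)                # weights eta_kj
        phi = vals.min(axis=1) - alpha * np.log(np.exp(w).sum(axis=1))        # Phi(x, alpha)
        J = sum(eta[:, [j]] * A_list[j] for j in range(len(A_list)))          # Jacobian of Phi
        return -tau * (J.T @ phi)                                             # -tau * grad_x f(x, alpha)
    return rhs

# Toy usage (data made up for illustration): min{x_k, (A2 x + b2)_k} = 0, i.e. an LCP.
A1 = np.eye(2);                          b1 = np.zeros(2)
A2 = np.array([[2.0, 1.0], [1.0, 3.0]]); b2 = np.array([-1.0, -1.0])
rhs = make_network([A1, A2], [b1, b2], alpha=0.01, tau=10.0)
sol = solve_ivp(rhs, [0.0, 20.0], np.array([0.5, 0.5]), rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])   # approximate equilibrium point of (3.5)
```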
We know from Fermat's theorem that, for a fixed α>0, if x(α) is an optimal solution of (3.4), then ∇xf(x(α),α)=0, which means that x(α) is an equilibrium point of (3.5).
The existence of equilibrium of (3.5) is illustrated by the next theorem.
Theorem 3.1. Suppose {F1,F2,⋯,Fl} has P0 property, where Fi(x)=(F1i(x),F2i(x), ⋯,Fni(x))T, i=1,2,⋯,l, then for each α>0, (3.5) has an equilibrium point.
Proof. By [46,Theorem 3.4], if {F1,F2,⋯,Fl} has P0 property, then for each α>0, Φ(x,α)=0 has a unique solution x(α), which means that ∇xf(x(α),α)=0. The proof is completed.
In this section, the relationship between the solutions of problem (3.1) and the solutions of problem (3.3), and the relationship between the solutions of problem (3.1) and the equilibrium points of problem (3.5), are investigated.
Consider the following sets
$$X^1_\alpha=\{x\in\mathbb{R}^n\mid \nabla_x f(x,\alpha)=0,\ \alpha>0\},$$
$$X_\alpha=\{x\in\mathbb{R}^n\mid \phi_\alpha(F_{k1}(x),F_{k2}(x),F_{k3}(x))=0,\ k=1,2,\cdots,n\},$$
$$X_0=\{x\in\mathbb{R}^n\mid \min\{F_{k1}(x),F_{k2}(x),F_{k3}(x)\}=0,\ k=1,2,\cdots,n\}.$$
Lemma 4.1. [24,Lemma 4.1] For a solution mapping Σ : Rn1→Rn2 defined by
Σ(p)={x∈Rn2∣ Ψ(x,p)∈K }, |
where Ψ: Rn2×Rn1→Rn3 is continuous at (x0,p0), the function Ψ(⋅,p) is locally Lipschitz on Rn2 for every p∈Rn1, K is a closed convex set in Rn3, and the map (x,p)⟼∂c1Ψ(x,p), where ∂c1Ψ(x,p) denotes the generalized Jacobian of Ψ(⋅,p) at x, is upper semicontinuous at (x0,p0). Let Ψ0(x)=Ψ(x,p0) for x∈Rn2. If the following regularity condition
0∈int{Ψ0(x0)+ARn2−K } |
holds for every A∈∂cΨ0(x0), then there exist constants μ>0, ε>0 and δ>0 such that
d(x,Σ(p))≤μd(0,Ψ(x,p)−K), ∀x∈B(x0,ε), p∈B(p0,δ). |
For ˉx∈X0, let index sets
K(ˉx)={k∈{1,2,⋯,n} : ∃ i,j∈{1,2,3}, i≠j, such that Fki(ˉx)=Fkj(ˉx)=0},
I(ˉx)={(k,i)∈K(ˉx)c×{1,2,3}: Fki(ˉx)=0 }. |
Next, inspired by [24], a theorem about the convergence of the set Xα as α tends to zero is given.
Theorem 4.2. Suppose the regularity condition
0∈int{Ξ(ˉx,a)Rn− {0}n } | (4.1) |
holds for any ˉx∈X0 and any a∈R3 satisfying ∑3i=1ai=1 and ai≥0 for each i, where Ξ(ˉx,a)∈Rn×n and the k-th row of Ξ(ˉx,a) is
$$\Xi(\bar x,a)_k=\begin{cases}\nabla_x F_{ki_k}(\bar x)^T, & \text{if } k\in K(\bar x)^c,\\[4pt] \displaystyle\sum_{i=1}^{3}a_i\nabla_x F_{ki}(\bar x)^T, & \text{if } k\in K(\bar x),\end{cases}$$
where ik is the index satisfying (k,ik)∈I(ˉx); then
limα→0+Xα=X0. |
Proof. At first, we show that lim supα→0+Xα ⊆ X0. It suffices to prove that if there exist a number sequence {αn}→0 and a sequence {xn} converging to ˉx as n→∞ such that xn∈Xαn for each n, then ˉx∈X0. Indeed, by (2.2), we have for each k,
0≤ϕ0(Fk1(xn),Fk2(xn),Fk3(xn))−ϕαn(Fk1(xn),Fk2(xn),Fk3(xn))≤αnln3,
and ϕαn(Fk1(xn),Fk2(xn),Fk3(xn))=0 because xn∈Xαn; letting n→∞ and using continuity, we obtain min{Fk1(ˉx),Fk2(ˉx),Fk3(ˉx)}=0 for each k, that is, ˉx∈X0.
Next we show that for any ˉx∈X0,ˉx∈lim infα→0+Xα. It suffices to show that for any αn→0, there exists a sequence {xn} satisfying xn∈Xαn for each n such that xn→ˉx. According to Lemma 4.1, if the following condition
0∈int{A(ˉx)Rn− {0}n} | (4.2) |
holds for any A(ˉx)∈∂c[Φ(⋅,0)](ˉx), where Φ is defined as in (3.3), then there exist numbers μ>0,ε>0 and δ>0 such that
d(x,Xα)≤μd(0,Φ(x,α)), ∀x∈B(ˉx,ε), α∈B(0,δ). |
Especially, we have
d(ˉx,Xαn)≤μd(0,Φ(ˉx,αn))≤μ‖Φ(ˉx,0)−Φ(ˉx,αn)‖≤μαn√nln3, |
which means for each n, Xαn≠∅. Therefore, there exists a sequence {xn} satisfying xn∈Xαn for each n and xn→ˉx as n→∞. Then under condition (4.2), ˉx∈lim infα→0Xα. As a result, to obtain the conclusion, we only need to show condition (4.1) is the sufficient condition for (4.2). Consider the index
K(ˉx)={k∈{1,2,⋯,n} :∃ i, j, i≠j,suchthatFki(ˉx)=Fkj(ˉx)=0, i,j=1,2,3 }. |
Since for the index k∈{1,2,⋯,n}∖K(ˉx), there exists only one index i∈{1,2,3} such that Fki(ˉx)=0, we obtain
∂cϕ0(Fk1(ˉx),Fk2(ˉx),Fk3(ˉx))={ ∇xFki(ˉx) }. |
For an index k∈K(ˉx), by Definition 2.1, we have
$$\begin{aligned}\partial_c\phi_0(F_{k1}(\bar x),F_{k2}(\bar x),F_{k3}(\bar x))&=\mathrm{co}\Big\{\lim_{n\to\infty}\nabla_x\phi_0(F_{k1}(x_n),F_{k2}(x_n),F_{k3}(x_n)):\ x_n\to\bar x\ \text{s.t. } F_{ki}(x_n)\neq F_{kj}(x_n)\ \text{for } i\neq j,\ i,j\in J_k(\bar x)\Big\}\\[2pt]&=\mathrm{co}\{\nabla_x F_{ki}(\bar x):\ i\in J_k(\bar x)\}=\Big\{\sum_{i\in J_k(\bar x)}a_i\nabla_x F_{ki}(\bar x):\ \sum_{i\in J_k(\bar x)}a_i=1,\ a_i\ge 0\Big\},\end{aligned}$$
where Jk(ˉx)={i∈{1,2,3}: ∃ j≠i such that Fki(ˉx)=Fkj(ˉx)=0}. So condition (4.1) implies condition (4.2).
According to Theorem 4.2, under the above conditions we have limα→0+Xα=X0. A natural question is then the relationship between X1α and X0.
At the beginning, let us consider the following problem:
$$\min_{x\in\mathbb{R}^n}\ g(x):=\frac{1}{2}\|g_1(x)\|^2,$$
where
$$g_1(x)=\begin{pmatrix}\min\{F_{11}(x),F_{12}(x),F_{13}(x)\}\\ \min\{F_{21}(x),F_{22}(x),F_{23}(x)\}\\ \vdots\\ \min\{F_{n1}(x),F_{n2}(x),F_{n3}(x)\}\end{pmatrix}.$$
It is clear that g1(x) is not differentiable everywhere; although the squared norm is differentiable, the chain rule alone does not guarantee that the composition g(x) is differentiable. In fact, g(x) is continuously differentiable.
Lemma 4.3. [47,48] The function g(x) is continuously differentiable with gradient ∇g(x)=VTg1(x) for an arbitrary element V∈∂cg1(x).
Theorem 4.4. Suppose x(α)∈X1α, limα→0+x(α)=ˉx, if any V∈∂cg1(ˉx) is nonsingular, then ˉx∈X0.
Proof. We know that
∇xf(x,α)=JxΦ(x,α)TΦ(x,α). |
Then
∇xf(x(α),α)=JxΦ(x(α),α)TΦ(x(α),α)=0. |
Notice that
∇g(x)=VTg1(x), V∈∂cg1(x). |
According to Definition 2.4, we have
{limα→0+JxΦ(x(α),α)}⊂∂cg1(ˉx), |
that is, there exists V0∈∂cg1(ˉx) such that
limα→0+‖JxΦ(x(α),α)−V0‖=0. |
We know from the proof of Theorem 4.2 that, as α→0+,
Φ(x(α),α)→g1(ˉx). |
Therefore we have as α→0+,
JxΦ(x(α),α)TΦ(x(α),α)→VT0g1(ˉx). |
That is, VT0g1(ˉx)=0. Since V0 is nonsingular, then g1(ˉx)=0, i.e., ˉx∈X0.
We know from Theorem 4.4 that, under its assumptions, every cluster point of {x(α)} is a solution of the generalized vertical complementarity problem (3.1), where x(α) is an equilibrium point of the neural network (3.5) for α>0.
In Sections 3 and 4, for ease of presentation, the consistency analysis is carried out for problem (1.1) with l=3. In fact, the analysis extends to (1.1) with an arbitrary l; one simply replaces 3 by l in the corresponding theorems and proofs.
In this section, asymptotic stability and exponential stability for differential Eq (3.5) are studied.
Let α>0, suppose that x∗ is an isolated equilibrium point of (3.5), and Ω∗⊆Rn is a neighbourhood of x∗ such that
∇xf(x∗,α)=0, ∇xf(x,α)≠0, ∀x∈Ω∗∖{x∗}. |
First, we give a theorem to show the global convergence of solutions of differential Eq (3.5).
Theorem 5.1. (i) For an arbitrary initial state x0, there exists exactly one maximal solution x(t), t∈[t0,τ(x0)). (ii) If the level set Ω(x0)={x∈Rn∣f(x,α)≤f(x0,α)} is bounded or Fki(x) are Lipschitz continuous, k=1,2,⋯,n,i=1,2,3, then τ(x0)=+∞.
Proof. (i) Since ∇xf(x(t),α) is obviously continuous, the conclusion follows easily from [31, Theorem 2.5]. (ii) Consider first the case where the level set Ω(x0) is bounded. Suppose, to the contrary, that τ(x0)<+∞. Based on [31, Theorem 2.6], we then have limt→τ(x0)‖x(t)‖=+∞. Let
τ0=inf{s≥t0∣s<τ(x0),x(s)∈Ωc(x0)}<∞, |
where Ωc(x0)=Rn∖Ω(x0). By the continuity of f, Ω(x0) is a closed set, and it is bounded by assumption; consequently, Ω(x0) is compact. Hence x(τ0)∈Ω(x0) and τ0<τ(x0), which implies that there exists an s∈[t0,τ(x0)) such that
f(x(s),α)>f(x(τ0),α).
However,
df(x(t),α)dt=∇xf(x(t),α)Td(x(t))dt=−‖∇xf(x(t),α)‖2≤0, |
so f(x(t),α) is nonincreasing in t, which is a contradiction; therefore τ(x0)=+∞. In the second case, if Fki, k=1,2,⋯,n, i=1,2,3, are Lipschitz continuous on Rn, then ∇xf(x,α) is Lipschitz continuous on Rn, and τ(x0)=+∞ follows from [31, Theorem 2.5].
Theorem 5.2. For an arbitrary initial state x0, if the level set Ω(x0) is bounded or Fki, k=1,2,⋯,n, i=1,2,3 are Lipschitz continuous in Rn, then the differential Eq (3.5) has a unique solution on [t0,+∞).
Proof. The proof is the same as that of Theorem 5.1; indeed, the theorem is simply a restatement of Theorem 5.1. If the above conditions hold, then the trajectory of the differential Eq (3.5) exists globally for an arbitrary initial state x0.
Lemma 5.3. [45,Theorem 2.6] An isolated equilibrium point x∗ is asymptotically stable ifthere exists a Lyapunov function over some neighbourhood Ω∗ of x∗, satisfying
dW(x(t))dt<0, ∀x(t)∈Ω∗, x(t)≠x∗. |
Next we show the asymptotic stability of differential Eq (3.5) under mild conditions.
Theorem 5.4. If x∗ is an isolated equilibrium point of differential Eq (3.5), then x∗ is asymptotically stable for (3.5).
Proof. First, we prove that f(x,α) is a Lyapunov function for Eq (3.5) over the set Ω∗. From the definition of f(x,α), we know that f(x,α) is nonnegative on Rn. Since x∗ is an isolated equilibrium point, f(x∗,α)=0 and f(x,α)>0 for any x∈Ω∗∖{x∗}. Now we check the second condition in the definition of a Lyapunov function [45]. Notice that
$$\frac{df(x(t),\alpha)}{dt}=\nabla_x f(x(t),\alpha)^T\frac{dx(t)}{dt}=-\|\nabla_x f(x(t),\alpha)\|^2\le 0.$$
Hence f(x,α) is a Lyapunov function for (3.5) over the set Ω∗. Since x∗ is an isolated equilibrium point, df(x(t),α)/dt<0 for all x∈Ω∗∖{x∗}. It follows from Lemma 5.3 that x∗ is asymptotically stable for (3.5).
Theorem 5.5. If JxΦ(x∗,α) is nonsingular for α>0, then x∗ is exponentially stable for (3.5).
Proof. Since ∇xf(x∗,α)=JxΦ(x∗,α)TΦ(x∗,α)=0 and JxΦ(x∗,α) is nonsingular, x∗ satisfies Φ(x∗,α)=0. Notice that x∗ is an isolated equilibrium point of (3.5); therefore x∗ is asymptotically stable by Theorem 5.4. Let δ>0 be sufficiently small such that x(t)→x∗ as t→+∞ for any solution with x(t)∈B(x∗,δ). By continuity, shrinking δ if necessary, JxΦ(x,α)JxΦ(x,α)T is an n×n nonsingular matrix for x∈B(x∗,δ); hence there exist κ1>0 and κ2>0 such that
κ1‖v‖2≤vTJxΦ(x,α)JxΦ(x,α)Tv≤κ2‖v‖2,∀x∈B(x∗,δ). |
By the smoothness of the function Φ(x,α), we have the following expansion
Φ(x,α)=Φ(x∗,α)+JxΦ(x,α)(x−x∗)+o(‖x−x∗‖). |
Suppose that δ is small enough, such that
|o(‖x−x∗‖)|≤ϵ‖x−x∗‖, |
for some 0<ϵ<κ1 and ∀x∈B(x∗,δ). Now let
Γ(t)=‖x(t)−x∗‖2,t∈[t0,+∞). |
Then
$$\frac{d\Gamma(t)}{dt}=2(x(t)-x^*)^T\frac{dx(t)}{dt}=-2(x(t)-x^*)^T\nabla_x f(x(t),\alpha)=-2(x(t)-x^*)^T J_x\Phi(x(t),\alpha)^T\Phi(x(t),\alpha).$$
Let
ˉτ=inf{t∈[t0,+∞)∣‖x(t)−x∗‖≥δ} |
be the first exit time of the solution from the ball B(x∗,δ). Combining the above formulas with Φ(x∗,α)=0, for arbitrary t∈ˉI=[t0,ˉτ) we obtain
$$\begin{aligned}\frac{d\Gamma(t)}{dt}&=-2(x(t)-x^*)^T J_x\Phi(x(t),\alpha)^T\Phi(x(t),\alpha)\\ &=-2(x(t)-x^*)^T J_x\Phi(x(t),\alpha)^T\big[\Phi(x^*,\alpha)+J_x\Phi(x(t),\alpha)(x(t)-x^*)+o(\|x(t)-x^*\|)\big]\\ &\le-2(x(t)-x^*)^T J_x\Phi(x(t),\alpha)^T J_x\Phi(x(t),\alpha)(x(t)-x^*)+\epsilon\|x(t)-x^*\|^2\\ &\le(-2\kappa_1+\epsilon)\|x(t)-x^*\|^2=(-2\kappa_1+\epsilon)\Gamma(t).\end{aligned}$$
According to [45,Corollary 2.1], the following inequality
Γ(t)≤e(−2κ1+ϵ)tΓ(t0), t∈ˉI, |
is equivalent to the definition of exponential stability
‖x(t)−x∗‖≤eωt‖x(t0)−x∗‖, t∈ˉI, |
where ω=−κ1+ϵ/2<0. If ˉτ<+∞, then
δ≤lim supt→ˉτ‖x(t)−x∗‖≤eωˉτ‖x(t0)−x∗‖<δ, |
that is a contradiction. Thus ˉτ=+∞ and we complete the proof of exponential stability.
Theorem 5.6. Assume that the neural network (3.5) is Lyapunov stable. For an arbitrary initial state x0 and a fixed α∈(0,+∞), if the level set Ω(x0) is bounded or Fki(x), k=1,2,⋯,n, i=1,2,3, are Lipschitz continuous, then the solution trajectory xα(t) of the neural network (3.5) converges globally to a stationary point of (3.4). Furthermore, if JxΦ(x∗,α) is nonsingular, then xα(t) converges globally to the solution of (3.4).
Proof. Under these conditions, the solution interval can be extended to infinity by Theorem 5.1, and the neural network (3.5) has a unique solution, denoted xα(t), which satisfies limt→+∞xα(t)=x∗ with ∇xf(x∗,α)=JxΦ(x∗,α)TΦ(x∗,α)=0. Hence the solution trajectory xα(t) of the neural network (3.5) converges globally to a stationary point of (3.4). Furthermore, if JxΦ(x∗,α) is nonsingular, then Φ(x∗,α)=0, so xα(t) converges globally to the solution of (3.4).
In this section, an example in a generalized bimatrix game is illustrated to test our neural network.
At first, the generalized bimatrix game is described, which is based on a bimatrix game introduced by [49] and a generalized SER-SIT stochastic games model in [50]. Let A and B be two nonempty sets in Rm×n. If Ri∈{Ai:A∈A} for i=1,2,⋯,m, then the m×n matrix R is said to be a row representative of A, where the subscript denotes the corresponding row. In the same way, if Sj∈{Bj:B∈B},j=1,2,⋯,n, then the m×n matrix S is said to be a column representative of B, where the subscript denotes the corresponding column. Suppose A={C,D}, B={E,F}, where
$$C=\begin{pmatrix}2 & 1\\ 2 & 4\end{pmatrix},\qquad D=\begin{pmatrix}1 & 3\\ 3 & 2\end{pmatrix}$$
and
$$E=\begin{pmatrix}1 & 2\\ 3 & 1\end{pmatrix},\qquad F=\begin{pmatrix}5 & 1\\ 2 & 4\end{pmatrix}.$$
Let ˉx∈Rm and ˉy∈Rn. If there exist a row representative G of A and a column representative H of B such that
ˉxTGˉy≤uTRˉy and ˉxTHˉy≤ˉxTSv | (6.1) |
for all probability vectors u and v, for all row representative matrices R of A, and for all column representative matrices S of B, then the pair of mixed strategies (ˉx,ˉy) is said to be a generalized Nash equilibrium pair for the generalized bimatrix game.
The game-theoretic meaning of the above generalized Nash equilibrium pair is as follows. In the game Λ(A,B), player I deals with the rows of the matrices in A while player II deals with the columns of the matrices in B. Player I chooses a row representative G of A and a probability distribution x on the set {1,2}; player II chooses a column representative H of B and a probability distribution y on the set {1,2}. Then the first player's cost is xTGy while the second player's cost is xTHy. The existence of an equilibrium for Λ(A,B) describes the stage at which no player can decrease his cost by unilaterally changing his row/column selection and probability distribution.
By [49, Proposition 1], the existence of a pair of probability vectors (ˉx,ˉy) satisfying (6.1) is equivalent to the existence of a solution (˜x,˜y) of the equation
$$F(x,y)=\begin{pmatrix}\min\{x_1,(\inf_{A\in\mathcal{A}}(Ay-e))_1\}\\ \min\{x_2,(\inf_{A\in\mathcal{A}}(Ay-e))_2\}\\ \min\{y_1,(\inf_{B\in\mathcal{B}}(B^Tx-e))_1\}\\ \min\{y_2,(\inf_{B\in\mathcal{B}}(B^Tx-e))_2\}\end{pmatrix}=0,\qquad (6.2)$$
where e=(1,−1)T. Moreover, if F(˜x,˜y)=0 and ‖˜x‖‖˜y‖≠0, then (˜x/‖˜x‖,˜y/‖˜y‖) is a generalized Nash equilibrium pair for the generalized bimatrix game. Notice that (6.2) is just a generalized vertical complementarity problem:
$$G(z)=\begin{pmatrix}\min\{z_1,(Q_1z+q)_1,(Q_2z+q)_1\}\\ \min\{z_2,(Q_1z+q)_2,(Q_2z+q)_2\}\\ \min\{z_3,(Q_1z+q)_3,(Q_2z+q)_3\}\\ \min\{z_4,(Q_1z+q)_4,(Q_2z+q)_4\}\end{pmatrix}=0,\qquad (6.3)$$
where
$$Q_1=\begin{pmatrix}0 & C\\ D^T & 0\end{pmatrix},\qquad Q_2=\begin{pmatrix}0 & E\\ F^T & 0\end{pmatrix},\qquad q=\begin{pmatrix}-e\\ -e\end{pmatrix},\qquad z=(x_1,x_2,y_1,y_2)^T.$$
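For concreteness, the problem data can be assembled as follows (a small snippet we add; the variable names are ours and np.block simply realizes the block structure displayed above).

```python
import numpy as np

C = np.array([[2.0, 1.0], [2.0, 4.0]]);  D = np.array([[1.0, 3.0], [3.0, 2.0]])
E = np.array([[1.0, 2.0], [3.0, 1.0]]);  F = np.array([[5.0, 1.0], [2.0, 4.0]])
e = np.array([1.0, -1.0])
Z = np.zeros((2, 2))

Q1 = np.block([[Z, C], [D.T, Z]])       # [[0, C], [D^T, 0]]
Q2 = np.block([[Z, E], [F.T, Z]])       # [[0, E], [F^T, 0]]
q = np.concatenate([-e, -e])
```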
To solve the above problem, by introducing the log-exponential function with α>0, we obtain a least squares problem approximating (6.3):
minz∈R4f(z,α)=12‖Φ(z,α)‖2, |
where
$$\Phi(z,\alpha)=\begin{pmatrix}\phi_\alpha(z_1,(Q_1z+q)_1,(Q_2z+q)_1)\\ \phi_\alpha(z_2,(Q_1z+q)_2,(Q_2z+q)_2)\\ \phi_\alpha(z_3,(Q_1z+q)_3,(Q_2z+q)_3)\\ \phi_\alpha(z_4,(Q_1z+q)_4,(Q_2z+q)_4)\end{pmatrix},\qquad \alpha>0.$$
Written out explicitly, the Eq (6.3) becomes
$$G(z)=\begin{pmatrix}\min\{z_1,\ 2z_3+z_4-1,\ z_3+2z_4-2\}\\ \min\{z_2,\ 2z_3+4z_4+1,\ 3z_3+z_4+2\}\\ \min\{z_3,\ z_1+3z_2-1,\ 5z_1+2z_2-2\}\\ \min\{z_4,\ 3z_1+2z_2+1,\ z_1+4z_2+2\}\end{pmatrix}=0.\qquad (6.4)$$
The following neural network is constructed for solving problem (6.4):
d(z(t))dt=−τ∇zf(z(t),α), α>0, τ>0. | (6.5) |
The simulation is based on Matlab version 9.5, and the ode45 solver in Matlab is used to solve the differential equations. The starting point z0 is a uniformly distributed random four-dimensional vector in [0,1]×[0,1]×[0,1]×[0,1].
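The sketch below is a rough Python analogue of the authors' Matlab/ode45 experiment (our own illustration, using scipy.integrate.solve_ivp rather than ode45); the affine pieces are read off directly from (6.4), the parameters mirror one row of the tables, and all names are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Affine pieces of (6.4): component k of G is min{z_k, (A1 z + b1)_k, (A2 z + b2)_k}.
A1 = np.array([[0, 0, 2, 1], [0, 0, 2, 4], [1, 3, 0, 0], [3, 2, 0, 0]], float)
b1 = np.array([-1.0, 1.0, -1.0, 1.0])
A2 = np.array([[0, 0, 1, 2], [0, 0, 3, 1], [5, 2, 0, 0], [1, 4, 0, 0]], float)
b2 = np.array([-2.0, 2.0, -2.0, 2.0])
alpha, tau = 0.01, 100.0

def rhs(t, z):
    vals = np.column_stack([z, A1 @ z + b1, A2 @ z + b2])        # 4 x 3
    w = -vals / alpha
    w -= w.max(axis=1, keepdims=True)
    eta = np.exp(w) / np.exp(w).sum(axis=1, keepdims=True)       # weights eta_ki
    phi = vals.min(axis=1) - alpha * np.log(np.exp(w).sum(axis=1))
    J = eta[:, [0]] * np.eye(4) + eta[:, [1]] * A1 + eta[:, [2]] * A2
    return -tau * (J.T @ phi)                                    # network (6.5)

z0 = np.random.rand(4)                                           # random start in [0,1]^4
sol = solve_ivp(rhs, [0.0, 5.0], z0, rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])   # should be close to z* = (1, 0, 2, 0)^T discussed below
```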
Here are some detailed simulation results in the following tables.
Problem (6.4) has a unique solution z∗=(1,0,2,0)T. According to Table 1, where τ is set equal to 1000, for α≤0.3 the approximate solutions obtained by the neural network are almost the same as z∗, but as α gets larger, the distance between the approximate solutions and z∗ becomes larger. From Table 2, where α is set equal to 0.01 and τ=1,10,100,1000, the approximate solutions are again almost the same as z∗; only the convergence time changes.
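As a quick check (which we add) that z∗=(1,0,2,0)T indeed solves (6.4): componentwise, min{1, 2·2+0−1, 2+2·0−2}=min{1,3,0}=0, min{0, 2·2+0+1, 3·2+0+2}=min{0,5,8}=0, min{2, 1+3·0−1, 5·1+2·0−2}=min{2,0,3}=0 and min{0, 3·1+2·0+1, 1+4·0+2}=min{0,4,3}=0. With ˜x=(1,0)T and ˜y=(2,0)T, normalizing as in the discussion after (6.2) gives the generalized Nash equilibrium pair (ˉx,ˉy)=((1,0)T,(1,0)T).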
Table 1. Simulation results with τ=1000 and varying α.
| τ | α | t | z | f(z,α) |
| --- | --- | --- | --- | --- |
| 1000 | 0.01 | 0.0620 | (1.0000 2.3230e-09 2.0000 2.3230e-09) | 2.2694e-13 |
| 1000 | 0.05 | 0.0771 | (1.0000 3.4470e-09 2.0000 3.4470e-09) | 2.1864e-14 |
| 1000 | 0.1 | 0.0695 | (1.0000 2.4990e-09 2.0000 2.4990e-09) | 1.5218e-13 |
| 1000 | 0.3 | 0.0396 | (1.0000 1.4090e-05 2.0110 1.4090e-05) | 2.5289e-14 |
| 1000 | 0.5 | 0.0456 | (1.0090 0.0013 2.0700 0.0013) | 3.5818e-13 |
| 1000 | 0.7 | 0.0396 | (1.0390 0.0111 2.1650 0.0111) | 1.7911e-11 |
| 1000 | 0.9 | 0.0597 | (1.0920 0.0160 2.0160 0.0373) | 3.7420e-10 |
Table 2. Simulation results with α=0.01 and varying τ.
| τ | α | t | z | f(z,α) |
| --- | --- | --- | --- | --- |
| 1 | 0.01 | 0.0510 | (1.0000 7.6746e-10 2.0000 7.6746e-10) | 1.6197e-13 |
| 10 | 0.01 | 0.0682 | (1.0000 2.7560e-09 2.0000 2.7560e-09) | 8.2078e-14 |
| 100 | 0.01 | 0.0841 | (1.0000 9.5730e-09 2.0000 9.5730e-09) | 5.2607e-14 |
| 1000 | 0.01 | 0.1000 | (1.0000 1.6420e-07 2.0000 1.6420e-07) | 1.7804e-12 |
Moreover, Figures 2 and 3 show the trajectories of z(t) for τ=1000 and α=0.01, 0.05.
The following observations can be made from the above tables and figures: (i) All trajectories converge to their corresponding steady states, and the convergence is faster when a larger scaling factor is applied. (ii) The smaller α is (down to α=0.01), the closer the approximate solutions are to the solution of the true problem; as α gets larger, the distance between the approximate solutions and the true solution becomes larger. These limited numerical experiments verify the stability of the proposed neural network.
In this paper, a neural network is constructed to solve a generalized vertical complementarity problem. The log-exponential function is introduced to reformulate the generalized vertical complementarity problem as an unconstrained minimization problem, and a neural network is then constructed on the basis of this unconstrained minimization problem. Conditions ensuring consistency of the equilibrium point of the neural network with the solution of the generalized vertical complementarity problem are provided. Moreover, the asymptotic stability and exponential stability of the equilibrium point of the neural network are studied in detail. Finally, an example of a generalized bimatrix game is used to test the neural network.
The research is supported by the National Natural Science Foundation of China under Project Grant Nos. 12171219 and 61877032, the Liaoning Revitalization Talents Program No. XLYC2007113, Scientific Research Fund of Liaoning Provincial Education Department under Project No. LJKZ0961 and Liaoning BaiQianWan Talents Program.
The authors declare no conflict of interest.
[1] R. Cottle, G. Dantzig, A generalization of the linear complementarity problem, J. Comb. Theory, 8 (1970), 79–90. http://dx.doi.org/10.1016/S0021-9800(70)80010-2
[2] L. Zhang, Z. Gao, Global linear and quadratic one-step smoothing Newton method for vertical linear complementarity problems, Appl. Math. Mech., 24 (2003), 738–746. http://dx.doi.org/10.1007/BF02437876
[3] H. Qi, L. Liao, A smoothing Newton method for extended vertical linear complementarity problems, SIAM J. Matrix Anal. Appl., 21 (1999), 45–66. http://dx.doi.org/10.1137/S0895479897329837
[4] J. Peng, Z. Lin, A non-interior continuation method for generalized linear complementarity problems, Math. Program., 86 (1999), 533–563. http://dx.doi.org/10.1007/s101070050104
[5] S. C. Fang, J. Han, Z. H. Huang, S. Birbil, On the finite termination of an entropy function based non-interior continuation method for vertical linear complementarity problems, J. Glob. Optim., 33 (2005), 369–391. http://dx.doi.org/10.1007/s10898-004-6098-5
[6] F. Mezzadri, E. Galligani, Projected splitting methods for vertical linear complementarity problems, J. Optim. Theory Appl., in press. http://dx.doi.org/10.1007/s10957-021-01922-y
[7] A. Ebiefung, Nonlinear mappings associated with the generalized linear complementarity problem, Math. Program., 69 (1995), 255–268. http://dx.doi.org/10.1007/BF01585560
[8] S. Mohan, S. Neogy, R. Sridhar, The generalized linear complementarity problem revisited, Math. Program., 74 (1996), 197. http://dx.doi.org/10.1007/BF02592211
[9] S. Mohan, S. Neogy, Algorithms for the generalized linear complementarity problem with a vertical block Z-matrix, SIAM J. Optim., 6 (1996), 994–1006. http://dx.doi.org/10.1137/S1052623494275586
[10] S. Mohan, S. Neogy, Vertical block hidden Z-matrices and the generalized linear complementarity problem, SIAM J. Matrix Anal. Appl., 18 (1997), 181–190. http://dx.doi.org/10.1137/S0895479894271147
[11] A. Ebiefung, Existence theory and Q-matrix characterization for the generalized linear complementarity problem, Linear Algebra Appl., 223 (1995), 155–169. http://dx.doi.org/10.1016/0024-3795(95)00091-5
[12] A. Ebiefung, G. Habetler, M. Kostreva, B. Szanc, A direct algorithm for the vertical generalized complementarity problem associated with P-matrices, Open Journal of Optimization, 6 (2017), 101–114. http://dx.doi.org/10.4236/ojop.2017.63008
[13] G. Habetler, B. Szanc, Existence and uniqueness of solutions for the generalized linear complementarity problem, J. Optim. Theory Appl., 84 (1995), 103–116. http://dx.doi.org/10.1007/BF02191738
[14] A. Ebiefung, M. Kostreva, V. Ramanujam, An algorithm to solve the generalized linear complementarity problem with a vertical block Z-matrix, Optim. Method. Softw., 7 (1997), 123–138. http://dx.doi.org/10.1080/10556789708805648
[15] F. Mezzadri, E. Galligani, A generalization of irreducibility and diagonal dominance with applications to horizontal and vertical linear complementarity problems, Linear Algebra Appl., 621 (2021), 214–234. http://dx.doi.org/10.1016/j.laa.2021.03.016
[16] M. Gowda, R. Sznajder, The generalized order linear complementarity problem, SIAM J. Matrix Anal. Appl., 15 (1994), 779–795. http://dx.doi.org/10.1137/S0895479892237859
[17] F. Facchinei, J. Pang, Finite-dimensional variational inequalities and complementarity problems, New York: Springer, 2003.
[18] J. Alcantara, J. S. Chen, Neural networks based on three classes of NCP-functions for solving nonlinear complementarity problems, Neurocomputing, 359 (2019), 102–113. http://dx.doi.org/10.1016/j.neucom.2019.05.078
[19] L. Yang, J. Li, L. W. Zhang, A novel neural network for linear complementarity problems, Journal of Mathematical Research and Exposition, 27 (2007), 539–546.
[20] A. Hadjidimos, M. Tzoumas, On the solution of the linear complementarity problem by the generalized accelerated overrelaxation iterative method, J. Optim. Theory Appl., 165 (2015), 545–562. http://dx.doi.org/10.1007/s10957-014-0589-4
[21] H. Ren, X. Wang, X. B. Tang, T. Wang, The general two-sweep modulus-based matrix splitting iteration method for solving linear complementarity problems, Comput. Math. Appl., 77 (2019), 1071–1081. http://dx.doi.org/10.1016/j.camwa.2018.10.040
[22] L. Pang, N. Xu, J. Lv, The inexact log-exponential regularization method for mathematical programs with vertical complementarity constraints, J. Ind. Manag. Optim., 15 (2019), 59–79. http://dx.doi.org/10.3934/jimo.2018032
[23] J. Zhang, X. S. He, Q. Wang, A SAA nonlinear regularization method for a stochastic extended vertical linear complementarity problem, Appl. Math. Comput., 232 (2014), 888–897. http://dx.doi.org/10.1016/j.amc.2014.01.121
[24] J. Zhang, S. Lin, L. W. Zhang, A log-exponential regularization method for a mathematical program with general vertical complementarity constraints, J. Ind. Manag. Optim., 9 (2013), 561–577. http://dx.doi.org/10.3934/jimo.2013.9.561
[25] J. Zhang, Y. Q. Zhang, L. W. Zhang, A sample average approximation regularization method for a stochastic mathematical program with general vertical complementarity constraints, J. Comput. Appl. Math., 280 (2015), 202–216. http://dx.doi.org/10.1016/j.cam.2014.11.057
[26] H. Scheel, S. Scholtes, Mathematical programs with complementarity constraints: stationarity, optimality and sensitivity, Math. Oper. Res., 25 (2000), 1–22. http://dx.doi.org/10.1287/moor.25.1.1.15213
[27] L. Chua, G. N. Lin, Nonlinear programming without computation, IEEE T. Circuits, 31 (1984), 182–188. http://dx.doi.org/10.1109/TCS.1984.1085482
[28] J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. USA, 81 (1984), 3088–3092. http://dx.doi.org/10.1073/pnas.81.10.3088
[29] J. Hopfield, D. Tank, "Neural" computation of decisions in optimization problems, Biol. Cybern., 52 (1985), 141–152. http://dx.doi.org/10.1007/bf00339943
[30] L. Z. Liao, H. Qi, A neural network for the linear complementarity problem, Math. Comput. Model., 29 (1999), 9–18. http://dx.doi.org/10.1016/S0895-7177(99)00026-6
[31] L. Z. Liao, H. Qi, L. Qi, Solving nonlinear complementarity problems with neural networks: a reformulation method approach, J. Comput. Appl. Math., 131 (2001), 343–359. http://dx.doi.org/10.1016/S0377-0427(00)00262-4
[32] A. Golbabai, S. Ezazipour, A projection based on recurrent neural network and its application in solving convex quadratic bilevel optimization problems, Neural Comput. Applic., 32 (2020), 3887–3900. http://dx.doi.org/10.1007/s00521-019-04391-7
[33] A. Zazemi, A. Sabeghi, A new neural network framework for solving convex second-order cone constrained variational inequality problems with an application in multi-finger robot hands, J. Exp. Theor. Artif. In., 32 (2020), 181–203. http://dx.doi.org/10.1080/0952813X.2019.1647559
[34] J. Sun, J. S. Chen, C. H. Ko, Neural networks for solving second-order cone constrained variational inequality problem, Comput. Optim. Appl., 51 (2012), 623–648. http://dx.doi.org/10.1007/s10589-010-9359-x
[35] J. Sun, W. Fu, J. Alcantara, J. S. Chen, A neural network based on the metric projector for solving SOCCVI problem, IEEE T. Neur. Net. Lear., 32 (2020), 2886–2900. http://dx.doi.org/10.1109/TNNLS.2020.3008661
[36] S. Wen, S. Xiao, Z. Yan, Z. Zeng, T. Huang, Adjusting learning rate of memristor-based multilayer neural networks via fuzzy method, IEEE T. Comput. Aid. D., 38 (2019), 1084–1094. http://dx.doi.org/10.1109/TCAD.2018.2834436
[37] X. Ju, H. Che, C. Li, X. He, G. Feng, Exponential convergence of a proximal projection neural network for mixed variational inequalities and applications, Neurocomputing, 454 (2021), 54–64. http://dx.doi.org/10.1016/j.neucom.2021.04.059
[38] X. Ju, C. Li, X. He, G. Feng, An inertial projection neural network for solving inverse variational inequalities, Neurocomputing, 406 (2020), 99–105. http://dx.doi.org/10.1016/j.neucom.2020.04.023
[39] Q. Han, L. Z. Liao, H. Qi, L. Qi, Stability analysis of gradient-based neural networks for optimization problems, J. Global Optim., 19 (2001), 363–381. http://dx.doi.org/10.1023/A:1011245911067
[40] M. Xu, B. Du, Dynamic behaviors for reaction-diffusion neural networks with mixed delays, AIMS Mathematics, 5 (2020), 6841–6855. http://dx.doi.org/10.3934/math.2020439
[41] F. Clarke, Optimization and nonsmooth analysis, New York: Society for Industrial and Applied Mathematics, 1990.
[42] R. Rockafellar, Convex analysis, New Jersey: Princeton University Press, 1970.
[43] R. Rockafellar, R. Wets, Variational analysis, Berlin: Springer, 1998. http://dx.doi.org/10.1007/978-3-642-02431-3
[44] W. Bian, X. Chen, Neural network for nonsmooth, nonconvex constrained minimization via smooth approximation, IEEE T. Neur. Net. Lear., 25 (2014), 545–556. http://dx.doi.org/10.1109/TNNLS.2013.2278427
[45] J. Zabczyk, Mathematical control theory, Boston: Birkhäuser, 2020. http://dx.doi.org/10.1007/978-3-030-44778-6
[46] H. Qi, L. Liao, Z. Lin, Regularized smoothing approximations to vertical nonlinear complementarity problems, J. Math. Anal. Appl., 230 (1999), 261–276. http://dx.doi.org/10.1006/jmaa.1998.6205
[47] F. Facchinei, J. Soares, A new merit function for nonlinear complementarity problems and a related algorithm, SIAM J. Optim., 7 (1997), 225–247. http://dx.doi.org/10.1137/S1052623494279110
[48] L. Qi, Convergence analysis of some algorithms for solving nonsmooth equations, Math. Oper. Res., 18 (1993), 227–244. http://dx.doi.org/10.1287/moor.18.1.227
[49] M. Gowda, R. Sznajder, A generalization of the Nash equilibrium theorem on bimatrix games, Int. J. Game Theory, 25 (1996), 1–12. http://dx.doi.org/10.1007/BF01254380
[50] G. Murthy, T. Parthasarthy, D. Sampangi, SER-SIT stochastic games and vertical linear, Proceedings of 14th International Conference on Game Theory, 2003.