Research article

Dynamic allocation of opposition-based learning in differential evolution for multi-role individuals

  • Opposition-based learning (OBL) is an optimization technique widely applied within metaheuristic algorithms. Analysis shows that different OBL variants perform differently on different problems, which makes the cooperative use of multiple OBL strategies crucial. Therefore, this study proposed a differential evolution with dynamic allocation of OBL for multi-role individuals (DAODE). Before the population update in DAODE, individuals in the population played multiple roles and were stored in corresponding archives. Subsequently, the different roles received their respective rewards through an OBL-based comprehensive ranking mechanism, which assigned an OBL strategy so as to maintain a balance between exploration and exploitation within the population. In addition, a mutation strategy based on the multi-role archives was proposed: individuals for the mutation operation were selected from the archives, thereby steering the population toward more promising regions. DAODE was compared with state-of-the-art algorithms on the benchmark suite of the 2017 IEEE Congress on Evolutionary Computation (CEC2017). Furthermore, statistical tests were conducted to examine the significance of the differences between DAODE and the state-of-the-art algorithms. The experimental results indicated that the overall performance of DAODE surpasses all of the compared algorithms on more than half of the test functions, and the statistical tests showed that DAODE consistently ranked first in the comprehensive ranking.
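    As a rough illustration of the basic opposition-based learning operator underlying this class of methods, the following Python sketch pairs each candidate x in [a, b] with its opposite a + b − x inside a plain DE/rand/1/bin generation and keeps the better of the two. The function names and the toy sphere objective are illustrative only and are not part of DAODE itself.

```python
import numpy as np

def opposite(pop, lower, upper):
    """Basic opposition-based learning: reflect each candidate inside [lower, upper]."""
    return lower + upper - pop

def de_obl_step(pop, fitness, objective, lower, upper, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin generation followed by an OBL refresh (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lower, upper)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True          # ensure at least one mutated component
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial <= fitness[i]:
            pop[i], fitness[i] = trial, f_trial
    # Opposition-based refresh: evaluate opposite points and keep the better ones.
    opp = opposite(pop, lower, upper)
    f_opp = np.apply_along_axis(objective, 1, opp)
    better = f_opp < fitness
    pop[better], fitness[better] = opp[better], f_opp[better]
    return pop, fitness

# Toy usage on the sphere function.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lower, upper = -5.0, 5.0
    pop = rng.uniform(lower, upper, size=(20, 10))
    sphere = lambda x: float(np.sum(x ** 2))
    fitness = np.apply_along_axis(sphere, 1, pop)
    for _ in range(100):
        pop, fitness = de_obl_step(pop, fitness, sphere, lower, upper, rng=rng)
    print(fitness.min())
```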

    Citation: Jian Guan, Fei Yu, Hongrun Wu, Yingpin Chen, Zhenglong Xiang, Xuewen Xia, Yuanxiang Li. Dynamic allocation of opposition-based learning in differential evolution for multi-role individuals[J]. Electronic Research Archive, 2024, 32(5): 3241-3274. doi: 10.3934/era.2024149




    The authors are honored to participate in this special issue and express their admiration to Professor Italo Capuzzo-Dolcetta for his mathematical work. The first author met Prof. Dolcetta for the first time at the end of his Ph.D. thesis more than 20 years ago during an extended visit to Roma. Prof. Dolcetta's mentorship at that time influenced his career deeply, and he is particularly grateful for this opportunity.

    Mean-field game (MFG) theory is a tool to study Nash equilibria of infinite populations of rational agents. These agents select their actions based on their state and on statistical information about the population. Here, we study a price formation model for a commodity traded in a market under uncertain supply, which is a common noise shared by the agents. These agents are rational and aim to minimize the average trading cost by selecting their trading rate. The distribution of the agents solves a stochastic partial differential equation. Finally, a market-clearing condition characterizes the price.

    We let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{0\leq t}, \mathbb{P})$ be a complete filtered probability space such that $(\mathcal{F}_t)_{0\leq t}$ is the standard filtration induced by $t \mapsto W_t$, the common noise, which is a one-dimensional Brownian motion. We consider a commodity whose supply process is described by a stochastic differential equation; that is, we are given a drift $b^S:[0,T]\times\mathbb{R}^2\to\mathbb{R}$ and a volatility $\sigma^S:[0,T]\times\mathbb{R}^2\to\mathbb{R}_0^+$, which are smooth functions, and the supply $Q_s$ is determined by the stochastic differential equation

    $dQ_s = b^S(Q_s, \varpi_s, s)\,ds + \sigma^S(Q_s, \varpi_s, s)\,dW_s \quad \text{in } [0,T]$ (1.1)

    with the initial condition $\bar q$. We would like to determine the drift $b^P:[0,T]\times\mathbb{R}^2\to\mathbb{R}$, the volatility $\sigma^P:[0,T]\times\mathbb{R}^2\to\mathbb{R}_0^+$, and $\bar w$ such that the price $\varpi_s$ solves

    $d\varpi_s = b^P(Q_s, \varpi_s, s)\,ds + \sigma^P(Q_s, \varpi_s, s)\,dW_s \quad \text{in } [0,T]$ (1.2)

    with initial condition $\bar w$ and such that a market-clearing condition holds. It may not be possible to find $b^P$ and $\sigma^P$ in a feedback form. However, for linear dynamics, as we show here, we can solve quadratic models, which are of great interest in applications.

    Let $X_s$ be the quantity of the commodity held by an agent at time $s$, for $t \leq s \leq T$. This agent trades this commodity, controlling its rate of change, $v$; thus

    $dX_s = v(s)\,ds \quad \text{in } [t,T].$ (1.3)

    At time $t$, an agent who holds $x$ and observes $q$ and $w$ chooses a control process $v$, progressively measurable with respect to $\mathcal{F}_t$, to minimize the expected cost functional

    $J(x,q,w,t;v) = \mathbb{E}\left[\int_t^T \left(L(X_s, v(s)) + \varpi_s v(s)\right)ds + \Psi(X_T, Q_T, \varpi_T)\right],$ (1.4)

    subject to the dynamics (1.1), (1.2), and (1.3) with initial condition $X_t = x$; the expectation is taken conditionally on $\mathcal{F}_t$. The Lagrangian, $L$, takes into account costs such as market impact or storage, and the terminal cost $\Psi$ stands for the terminal preferences of the agent.

    This control problem determines a Hamilton-Jacobi equation addressed in Section 2.1. In turn, each agent selects an optimal control and uses it to adjust its holdings. Because the source of noise in Qt is common to all agents, the evolution of the probability distribution of agents is not deterministic. Instead, it is given by a stochastic transport equation derived in Section 2.2. Finally, the price is determined by a market-clearing condition that ensures that supply meets demand. We study this condition in Section 2.3.

    Mathematically, the price model corresponds to the following problem.

    Problem 1. Given a Hamiltonian $H:\mathbb{R}^2\to\mathbb{R}$, $H \in C^\infty$, a commodity's supply initial value $\bar q \in \mathbb{R}$, a supply drift $b^S:\mathbb{R}^2\times[0,T]\to\mathbb{R}$, a supply volatility $\sigma^S:\mathbb{R}^2\times[0,T]\to\mathbb{R}$, a terminal cost $\Psi:\mathbb{R}^3\to\mathbb{R}$, $\Psi \in C^\infty(\mathbb{R}^3)$, and an initial distribution of agents $\bar m \in C_c^\infty(\mathbb{R}) \cap \mathcal{P}(\mathbb{R})$, find $u:\mathbb{R}^3\times[0,T]\to\mathbb{R}$, $\mu \in C([0,T]\times\Omega;\mathcal{P}(\mathbb{R}^3))$, $\bar w \in \mathbb{R}$ (the price at $t=0$), the price drift $b^P:\mathbb{R}^2\times[0,T]\to\mathbb{R}$, and the price volatility $\sigma^P:\mathbb{R}^2\times[0,T]\to\mathbb{R}$ solving

    $\begin{cases} -u_t + H(x, w + u_x) = b^S u_q + b^P u_w + \frac{1}{2}(\sigma^S)^2 u_{qq} + \sigma^S\sigma^P u_{qw} + \frac{1}{2}(\sigma^P)^2 u_{ww}\\ d\mu_t = \left(\left(\frac{\mu(\sigma^S)^2}{2}\right)_{qq} + (\mu\sigma^S\sigma^P)_{qw} + \left(\frac{\mu(\sigma^P)^2}{2}\right)_{ww} - \operatorname{div}(\mu b)\right)dt - \operatorname{div}(\mu\sigma)\,dW_t\\ \int_{\mathbb{R}^3}\left(q + D_pH(x, w + u_x(x,q,w,t))\right)\mu_t(dx\times dq\times dw) = 0, \end{cases} \quad \text{a.e. } \omega \in \Omega,\ 0 \leq t \leq T,$ (1.5)

    and the terminal-initial conditions

    $\begin{cases} u(x,q,w,T) = \Psi(x,q,w)\\ \mu_0 = \bar m\times\delta_{\bar q}\times\delta_{\bar w}, \end{cases}$ (1.6)

    where $b = \left(-D_pH(x, w + u_x),\ b^S,\ b^P\right)$, $\sigma = \left(0,\ \sigma^S,\ \sigma^P\right)$, and the divergence is taken w.r.t. $(x,q,w)$.

    Given a solution to the preceding problem, we construct the supply and price processes

    $Q_t = \int_{\mathbb{R}^3} q\,\mu_t(dx\times dq\times dw)$

    and

    $\varpi_t = \int_{\mathbb{R}^3} w\,\mu_t(dx\times dq\times dw),$

    which also solve

    $\begin{cases} dQ_t = b^S(Q_t,\varpi_t,t)\,dt + \sigma^S(Q_t,\varpi_t,t)\,dW_t & \text{in } [0,T]\\ d\varpi_t = b^P(Q_t,\varpi_t,t)\,dt + \sigma^P(Q_t,\varpi_t,t)\,dW_t & \text{in } [0,T] \end{cases}$

    with initial conditions

    $\begin{cases} Q_0 = \bar q\\ \varpi_0 = \bar w \end{cases}$ (1.7)

    and satisfy the market-clearing condition

    $Q_t = -\int_{\mathbb{R}} D_pH\big(x, \varpi_t + u_x(x, Q_t, \varpi_t, t)\big)\,\mu_t(dx).$

    In [10], the authors presented a model where the supply of the commodity was a given deterministic function, and the balance condition between supply and demand gave rise to the price as a Lagrange multiplier. Price formation models were also studied by Markowich et al. [18], Caffarelli et al. [2], and Burger et al. [1]. The behavior of rational agents that control an electric load was considered in [16,17]. For example, turning on or off space heaters controls the electric load, as discussed in [13,14,15]. Previous authors addressed price formation when the demand is a given function of the price [12] or when the price is a function of the demand; see, for example, [5,6,7,8,11]. An N-player version of an economic growth model was presented in [9].

    Noise in the supply together with a balance condition is a central issue in price formation that could not be handled directly with the techniques in previous papers. A probabilistic approach to the common noise is discussed by Carmona et al. in [4]. Another approach is through the master equation, involving derivatives with respect to measures, which can be found in [3]. None of these references, however, addresses problems with integral constraints such as (1.7).

    Our model corresponds to the one in [10] for the deterministic setting when we take the volatility for the supply to be 0. Here, we study the linear-quadratic case, that is, when the cost functional is quadratic, and the dynamics (1.1) and (1.2) are linear. In Section 3.2, we provide a constructive approach to get semi-explicit solutions of price models for linear dynamics and quadratic cost. This approach avoids the use of the master equation. The paper ends with a brief presentation of simulation results in Section 4.

    In this section, we derive Problem 1 from the price model. We begin with standard tools of optimal control theory. Then, we derive the stochastic transport equation, and we end by introducing the market-clearing (balance) condition.

    The value function for an agent who at time $t$ holds an amount $x$ of the commodity, whose instantaneous supply and price are $q$ and $w$, is

    $u(x,q,w,t) = \inf_v J(x,q,w,t;v),$ (2.1)

    where $J$ is given by (1.4) and the infimum is taken over the set $\mathcal{A}((t,T])$ of all functions $v:[t,T]\to\mathbb{R}$ that are progressively measurable w.r.t. $(\mathcal{F}_s)_{t\leq s\leq T}$. Consider the Hamiltonian, $H$, which is the Legendre transform of $L$; that is, for $p \in \mathbb{R}$,

    $H(x,p) = \sup_{v\in\mathbb{R}}\left[-pv - L(x,v)\right].$ (2.2)
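    As a quick worked check, consistent with the quadratic case studied later: for $L(x,v) = \frac{c}{2}v^2$, the supremum in (2.2) is attained at $v = -p/c$, giving

    $H(x,p) = \sup_{v\in\mathbb{R}}\left[-pv - \tfrac{c}{2}v^2\right] = \frac{p^2}{2c}, \qquad D_pH(x,p) = \frac{p}{c},$

    so the optimal feedback appearing below reads $v^* = -\frac{1}{c}(\varpi + u_x)$.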

    Then, from standard stochastic optimal control theory, whenever $L$ is strictly convex, if $u$ is $C^2$, it solves the Hamilton-Jacobi equation in $\mathbb{R}^3\times[0,T)$

    $-u_t + H(x, w + u_x) - b^P u_w - b^S u_q - \frac{(\sigma^P)^2}{2}u_{ww} - \frac{(\sigma^S)^2}{2}u_{qq} - \sigma^P\sigma^S u_{wq} = 0$ (2.3)

    with the terminal condition

    $u(x,q,w,T) = \Psi(x,q,w).$ (2.4)

    Moreover, as the next verification theorem establishes, any $C^2$ solution of (2.3) is the value function.

    Theorem 2.1 (Verification). Let $\tilde u:[0,T]\times\mathbb{R}^3\to\mathbb{R}$ be a smooth solution of (2.3) with terminal condition (2.4). Let $(X^*, Q, \varpi)$ solve (1.3), (1.1), and (1.2), where $X^*$ is driven by the $(\mathcal{F}_t)_{0\leq t}$-progressively measurable control

    $v^*(s) := -D_pH\left(X_s^*, \varpi_s + \tilde u_x(X_s^*, Q_s, \varpi_s, s)\right).$

    Then

    1) $v^*$ is an optimal control for (2.1);

    2) $\tilde u = u$, the value function.

    Theorem 2.1 provides an optimal feedback strategy. As usual in MFG, we assume that the agents are rational and, hence, choose to follow this optimal strategy. This behavior gives rise to a flow that transports the agents and induces a random measure that encodes their distribution. Here, we derive a stochastic PDE solved by this random measure. To this end, let u solve (2.3) and consider the random flow associated with the diffusion

    $\begin{cases} dX_s = -D_pH\left(X_s, \varpi_s + u_x(X_s, Q_s, \varpi_s, s)\right)ds\\ dQ_s = b^S(Q_s,\varpi_s,s)\,ds + \sigma^S(Q_s,\varpi_s,s)\,dW_s\\ d\varpi_s = b^P(Q_s,\varpi_s,s)\,ds + \sigma^P(Q_s,\varpi_s,s)\,dW_s \end{cases}$ (2.5)

    with initial conditions

    $\begin{cases} X_0 = x\\ Q_0 = \bar q\\ \varpi_0 = \bar w. \end{cases}$

    That is, for a given realization $\omega \in \Omega$ of the common noise, the flow maps the initial conditions $(x, \bar q, \bar w)$ to the solution of (2.5) at time $t$, which we denote by $\left(X_t^\omega(x,\bar q,\bar w), Q_t^\omega(\bar q,\bar w), \varpi_t^\omega(\bar q,\bar w)\right)$. Using this map, we define a measure-valued stochastic process $\mu_t$ as follows:

    Definition 2.2. Let $\omega \in \Omega$ denote a realization of the common noise $W$ on $0 \leq s \leq T$. Given a measure $\bar m \in \mathcal{P}(\mathbb{R})$ and initial conditions $\bar q, \bar w \in \mathbb{R}$, take $\bar\mu \in \mathcal{P}(\mathbb{R}^3)$ given by $\bar\mu = \bar m\times\delta_{\bar q}\times\delta_{\bar w}$ and define a random measure $\mu_t$ by the mapping $\omega \mapsto \mu_t^\omega \in \mathcal{P}(\mathbb{R}^3)$, where $\mu_t^\omega$ is characterized as follows:

    for any bounded and continuous function $\psi:\mathbb{R}^3\to\mathbb{R}$,

    $\int_{\mathbb{R}^3}\psi(x,q,w)\,\mu_t^\omega(dx\times dq\times dw) = \int_{\mathbb{R}^3}\psi\left(X_t^\omega(x,q,w), Q_t^\omega(q,w), \varpi_t^\omega(q,w)\right)\bar\mu(dx\times dq\times dw).$

    Remark 2.3. Because $\bar\mu = \bar m\times\delta_{\bar q}\times\delta_{\bar w}$, we have

    $\int_{\mathbb{R}^3}\psi\left(X_t^\omega(x,q,w), Q_t^\omega(q,w), \varpi_t^\omega(q,w)\right)\bar\mu(dx\times dq\times dw) = \int_{\mathbb{R}}\psi\left(X_t^\omega(x,\bar q,\bar w), Q_t^\omega(\bar q,\bar w), \varpi_t^\omega(\bar q,\bar w)\right)\bar m(dx).$

    Moreover, due to the structure of (2.5),

    $\mu_t^\omega = \left(X_t^\omega(\cdot,\bar q,\bar w)_\#\,\bar m\right)\times\delta_{Q_t^\omega(\bar q,\bar w)}\times\delta_{\varpi_t^\omega(\bar q,\bar w)}.$

    Definition 2.4. Let $\bar\mu \in \mathcal{P}(\mathbb{R}^3)$ and write

    $b(x,q,w,s) = \left(-D_pH(x, w + u_x(x,q,w,s)),\ b^S(q,w,s),\ b^P(q,w,s)\right), \qquad \sigma(q,w,s) = \left(0,\ \sigma^S(q,w,s),\ \sigma^P(q,w,s)\right).$

    A measure-valued stochastic process $\mu = \mu(\cdot,t) = \mu_t(\cdot)$ is a weak solution of the stochastic PDE

    $d\mu_t = \left(-\operatorname{div}(\mu b) + \left(\frac{\mu(\sigma^S)^2}{2}\right)_{qq} + (\mu\sigma^S\sigma^P)_{qw} + \left(\frac{\mu(\sigma^P)^2}{2}\right)_{ww}\right)dt - \operatorname{div}(\mu\sigma)\,dW_t,$ (2.6)

    with initial condition $\bar\mu$ if, for any bounded smooth test function $\psi:\mathbb{R}^3\times[0,T]\to\mathbb{R}$,

    $\int_{\mathbb{R}^3}\psi(x,q,w,t)\,\mu_t(dx\times dq\times dw) = \int_{\mathbb{R}^3}\psi(x,q,w,0)\,\bar\mu(dx\times dq\times dw)$ (2.7)
    $\qquad + \int_0^t\int_{\mathbb{R}^3}\left(\partial_t\psi + D\psi\cdot b + \frac{1}{2}\operatorname{tr}\left(\sigma^T\sigma D^2\psi\right)\right)\mu_s(dx\times dq\times dw)\,ds$ (2.8)
    $\qquad + \int_0^t\int_{\mathbb{R}^3} D\psi\cdot\sigma\,\mu_s(dx\times dq\times dw)\,dW_s,$ (2.9)

    where the arguments for $b$, $\sigma$, and $\psi$ are $(x,q,w,s)$ and the differential operators $D$ and $D^2$ are taken w.r.t. the spatial variables $(x,q,w)$.

    Theorem 2.5. Let $\bar m \in \mathcal{P}(\mathbb{R})$ and $\bar q, \bar w \in \mathbb{R}$. The random measure from Definition 2.2 is a weak solution of the stochastic partial differential equation (2.6) with initial condition $\bar\mu = \bar m\times\delta_{\bar q}\times\delta_{\bar w}$.

    Proof. Let $\psi:\mathbb{R}^3\times[0,T]\to\mathbb{R}$ be a bounded smooth test function. Consider the stochastic process $s \mapsto \int_{\mathbb{R}^3}\psi(x,q,w,s)\,\mu_s^\omega(dx\times dq\times dw)$. Let

    $\left(X_t(x,\bar q,\bar w), Q_t(\bar q,\bar w), \varpi_t(\bar q,\bar w)\right)$

    be the flow induced by (2.5). By the definition of $\mu_t^\omega$,

    $\int_{\mathbb{R}^3}\psi(x,q,w,t)\,\mu_t^\omega(dx\times dq\times dw) - \int_{\mathbb{R}^3}\psi(x,q,w,0)\,\bar\mu(dx\times dq\times dw) = \int_{\mathbb{R}}\left[\psi\left(X_t^\omega(x,\bar q,\bar w), Q_t^\omega(\bar q,\bar w), \varpi_t^\omega(\bar q,\bar w), t\right) - \psi(x,\bar q,\bar w, 0)\right]\bar m(dx).$

    Then, applying Itô's formula to the stochastic process

    $s \mapsto \int_{\mathbb{R}}\psi\left(X_s(x,\bar q,\bar w), Q_s(\bar q,\bar w), \varpi_s(\bar q,\bar w), s\right)\bar m(dx),$

    the preceding expression becomes

    $\int_0^t d\left(\int_{\mathbb{R}}\psi\left(X_s(x,\bar q,\bar w), Q_s(\bar q,\bar w), \varpi_s(\bar q,\bar w), s\right)\bar m(dx)\right) = \int_0^t\int_{\mathbb{R}}\left[\partial_t\psi + D\psi\cdot b + \frac{1}{2}\operatorname{tr}\left(\sigma^T\sigma D^2\psi\right)\right]\bar m(dx)\,ds + \int_0^t\int_{\mathbb{R}} D\psi\cdot\sigma\,\bar m(dx)\,dW_s = \int_0^t\int_{\mathbb{R}^3}\left[\partial_t\psi + D\psi\cdot b + \frac{1}{2}\operatorname{tr}\left(\sigma^T\sigma D^2\psi\right)\right]\mu_s(dx\times dq\times dw)\,ds + \int_0^t\int_{\mathbb{R}^3} D\psi\cdot\sigma\,\mu_s(dx\times dq\times dw)\,dW_s,$

    where the arguments of $b$, $\sigma$, and the partial derivatives of $\psi$ in the integrals with respect to $\bar m(dx)$ are $\left(X_s(x,\bar q,\bar w), Q_s(\bar q,\bar w), \varpi_s(\bar q,\bar w), s\right)$, and in the integrals with respect to $\mu_s(dx\times dq\times dw)$ are $(x,q,w,s)$. Therefore,

    $\int_{\mathbb{R}^3}\psi(x,q,w,t)\,\mu_t^\omega(dx\times dq\times dw) - \int_{\mathbb{R}^3}\psi(x,q,w,0)\,\bar\mu(dx\times dq\times dw) = \int_0^t\int_{\mathbb{R}^3}\left[\partial_t\psi + D\psi\cdot b + \frac{1}{2}\operatorname{tr}\left(\sigma^T\sigma D^2\psi\right)\right]\mu_s^\omega(dx\times dq\times dw)\,ds + \int_0^t\int_{\mathbb{R}^3} D\psi\cdot\sigma\,\mu_s^\omega(dx\times dq\times dw)\,dW_s.$

    Hence, (2.7) holds.
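    Definition 2.2 also suggests a direct particle (Monte Carlo) approximation of the random measure: sample the initial holdings from $\bar m$, fix one realization of the common noise, and push every particle through an Euler–Maruyama discretization of (2.5). The sketch below is a minimal illustration under that assumption; it takes the optimal drift as a user-supplied callback `v_star(x, q, w, t)` (in the linear-quadratic case below it is $-\frac{1}{c}(\varpi_t + u_x)$), and all function names and the constant-coefficient usage example are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

def simulate_particles(v_star, b_S, sigma_S, b_P, sigma_P,
                       m_bar_sampler, q0, w0, T=1.0, n_steps=200,
                       n_particles=1000, rng=None):
    """Particle approximation of the random measure mu_t under one common-noise path.

    v_star(x, q, w, t): optimal trading rate (assumed given).
    b_S, sigma_S, b_P, sigma_P: drift/volatility callbacks with signature f(q, w, t).
    Returns the particle positions X at time T and the supply/price paths Q, W.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    X = m_bar_sampler(n_particles, rng)          # samples from the initial distribution m_bar
    Q, W = np.empty(n_steps + 1), np.empty(n_steps + 1)
    Q[0], W[0] = q0, w0
    for k in range(n_steps):
        t = k * dt
        dBt = np.sqrt(dt) * rng.standard_normal()   # one Brownian increment, common to all agents
        X = X + v_star(X, Q[k], W[k], t) * dt        # holdings follow dX = v* dt
        Q[k + 1] = Q[k] + b_S(Q[k], W[k], t) * dt + sigma_S(Q[k], W[k], t) * dBt
        W[k + 1] = W[k] + b_P(Q[k], W[k], t) * dt + sigma_P(Q[k], W[k], t) * dBt
    # The empirical measure of X (with point masses at Q[-1], W[-1]) approximates mu_T.
    return X, Q, W

# Illustrative usage with constant placeholder coefficients (not the model of the last section).
if __name__ == "__main__":
    sample_m = lambda n, rng: rng.standard_normal(n)
    vstar = lambda x, q, w, t: -(w + 0.5 * x)        # placeholder linear feedback
    const = lambda value: (lambda q, w, t: value)
    X, Q, W = simulate_particles(vstar, const(0.1), const(0.2), const(-0.1), const(0.05),
                                 sample_m, q0=1.0, w0=0.0, rng=np.random.default_rng(0))
    print(X.mean(), Q[-1], W[-1])
```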

    The balance condition requires the average trading rate to be equal to the supply. Because agents are rational and, thus, use their optimal strategy, this condition takes the form

    $Q_t = -\int_{\mathbb{R}^3} D_pH\big(x, w + u_x(x,q,w,t)\big)\,\mu_t^\omega(dx\times dq\times dw),$ (2.10)

    where μωt is given by Definition 2.2. Because Qt satisfies a stochastic differential equation, the previous can also be read in differential form as

    $b^S(Q_t,\varpi_t,t)\,dt + \sigma^S(Q_t,\varpi_t,t)\,dW_t = -\,d\int_{\mathbb{R}^3} D_pH\big(x, w + u_x(x,q,w,t)\big)\,\mu_t^\omega(dx\times dq\times dw).$ (2.11)

    The former condition determines $b^P$ and $\sigma^P$. In general, $b^P$ and $\sigma^P$ are only progressively measurable with respect to $(\mathcal{F}_t)_{0\leq t}$ and not in feedback form. In this case, the Hamilton–Jacobi equation (2.3) must be replaced by a stochastic partial differential equation, or the problem must be modeled by the master equation. However, as we discuss next, in the linear-quadratic case, we can find $b^P$ and $\sigma^P$ in feedback form.

    Here, we consider a price model with linear dynamics and quadratic cost. The Hamilton-Jacobi equation admits quadratic solutions. Then, the balance equation determines the dynamics of the price, and the model reduces to a first-order system of ODEs.

    Suppose that $L(x,v) = \frac{c}{2}v^2$ and, thus, $H(x,p) = \frac{1}{2c}p^2$. Accordingly, the corresponding MFG model is

    $\begin{cases} -u_t + \frac{1}{2c}(w + u_x)^2 - b^P u_w - b^S u_q - \frac{1}{2}(\sigma^P)^2 u_{ww} - \frac{1}{2}(\sigma^S)^2 u_{qq} - \sigma^P\sigma^S u_{wq} = 0\\ d\mu_t = \left(\left(\frac{\mu(\sigma^S)^2}{2}\right)_{qq} + (\mu\sigma^S\sigma^P)_{qw} + \left(\frac{\mu(\sigma^P)^2}{2}\right)_{ww} - \operatorname{div}(\mu b)\right)dt - \operatorname{div}(\mu\sigma)\,dW_t\\ Q_t = -\frac{1}{c}\varpi_t - \int_{\mathbb{R}^3}\frac{1}{c}u_x(x,q,w,t)\,\mu_t^\omega(dx\times dq\times dw). \end{cases}$ (3.1)

    Assume further that $\Psi$ is quadratic; that is,

    $\Psi(x,q,w) = c^0 + c_1^1 x + c_2^1 q + c_3^1 w + c_1^2 x^2 + c_2^2 xq + c_3^2 xw + c_4^2 q^2 + c_5^2 qw + c_6^2 w^2.$

    Let

    $\Pi_t = \int_{\mathbb{R}^3} u_x(x,q,w,t)\,\mu_t(dx\times dq\times dw).$

    The balance condition is $Q_t = -\frac{1}{c}(\varpi_t + \Pi_t)$. Furthermore, Definition 2.2 provides the identity

    $\Pi_t = \int_{\mathbb{R}} u_x\left(X_t(x,\bar q,\bar w), Q_t(\bar q,\bar w), \varpi_t(\bar q,\bar w), t\right)\bar m(dx).$

    Lemma 3.1. Let $(X^*, Q, \varpi)$ solve (1.3), (1.1), and (1.2) with $v = v^*$, the optimal control, and initial conditions $\bar q, \bar w \in \mathbb{R}$. Let $u \in C^3(\mathbb{R}^3\times[0,T])$ solve the Hamilton-Jacobi equation (2.3). Then

    $d\Pi_t = \int_{\mathbb{R}}\left(u_{xq}\,\sigma^S + u_{xw}\,\sigma^P\right)\bar m(dx)\,dW_t,$ (3.2)

    where the arguments for the partial derivatives of $u$ are $\left(X_t(x,\bar q,\bar w), Q_t(\bar q,\bar w), \varpi_t(\bar q,\bar w), t\right)$.

    Proof. By Itô's formula, the process $t \mapsto u_x(X_t, Q_t, \varpi_t, t)$ solves

    $d\big(u_x(X_t, Q_t, \varpi_t, t)\big) = \left(u_{xt} + u_{xx}v^* + u_{xq}b^S + u_{xw}b^P + \tfrac{1}{2}u_{xqq}(\sigma^S)^2 + u_{xqw}\sigma^S\sigma^P + \tfrac{1}{2}u_{xww}(\sigma^P)^2\right)dt + \left(u_{xq}\sigma^S + u_{xw}\sigma^P\right)dW_t,$ (3.3)

    with $v^*(t) = -\frac{1}{c}\left(\varpi_t + u_x(X_t, Q_t, \varpi_t, t)\right)$. By differentiating the Hamilton-Jacobi equation, we get

    $-u_{tx} + \frac{1}{c}(\varpi_t + u_x)u_{xx} - b^P u_{wx} - b^S u_{qx} - \frac{(\sigma^P)^2}{2}u_{wwx} - \frac{(\sigma^S)^2}{2}u_{qqx} - \sigma^P\sigma^S u_{wqx} = 0.$

    Substituting the previous expression in (3.3), we have

    $d\left(\int_{\mathbb{R}} u_x\left(X_t(x,\bar q,\bar w), Q_t(\bar q,\bar w), \varpi_t(\bar q,\bar w), t\right)\bar m(dx)\right) = \int_{\mathbb{R}}\left(\frac{1}{c}(\varpi_t + u_x)u_{xx} + u_{xx}v^*\right)\bar m(dx)\,dt + \int_{\mathbb{R}}\left(u_{xq}\sigma^S + u_{xw}\sigma^P\right)\bar m(dx)\,dW_t.$

    Because $v^* = -\frac{1}{c}(\varpi_t + u_x)$, the drift vanishes and the preceding identity simplifies to

    $d\Pi_t = \int_{\mathbb{R}}\left(u_{xq}\sigma^S + u_{xw}\sigma^P\right)\bar m(dx)\,dW_t.$

    Using Lemma 3.1, we have

    $-c\,dQ_t = \int_{\mathbb{R}}\left(u_{xq}\sigma^S + u_{xw}\sigma^P\right)\bar m(dx)\,dW_t + d\varpi_t;$

    that is,

    $-c\,b^S dt - c\,\sigma^S dW_t = \left(\sigma^S\int_{\mathbb{R}} u_{xq}\,\bar m(dx) + \sigma^P\int_{\mathbb{R}} u_{xw}\,\bar m(dx)\right)dW_t + d\varpi_t = b^P dt + \left(\sigma^S\int_{\mathbb{R}} u_{xq}\,\bar m(dx) + \sigma^P\int_{\mathbb{R}} u_{xw}\,\bar m(dx) + \sigma^P\right)dW_t.$

    Thus,

    $b^P = -c\,b^S, \qquad \sigma^P = -\sigma^S\,\dfrac{c + \int_{\mathbb{R}} u_{xq}\,\bar m(dx)}{1 + \int_{\mathbb{R}} u_{xw}\,\bar m(dx)}.$ (3.4)

    If u is a second-degree polynomial with time-dependent coefficients, then

    $\int_{\mathbb{R}} u_{xq}\left(X_t(x,\bar q,\bar w), Q_t(\bar q,\bar w), \varpi_t(\bar q,\bar w), t\right)\bar m(dx)$

    and

    $\int_{\mathbb{R}} u_{xw}\left(X_t(x,\bar q,\bar w), Q_t(\bar q,\bar w), \varpi_t(\bar q,\bar w), t\right)\bar m(dx)$

    are deterministic functions of time. Accordingly, $b^P$ and $\sigma^P$ are given in feedback form by (3.4), which is consistent with the original assumption. Here, we investigate the linear-quadratic case that admits solutions of this form.

    Now, we assume that the dynamics are affine; that is,

    $\begin{cases} b^P(t,q,w) = b_0^P(t) + q\,b_1^P(t) + w\,b_2^P(t)\\ b^S(t,q,w) = b_0^S(t) + q\,b_1^S(t) + w\,b_2^S(t)\\ \sigma^P(t,q,w) = \sigma_0^P(t) + q\,\sigma_1^P(t) + w\,\sigma_2^P(t)\\ \sigma^S(t,q,w) = \sigma_0^S(t) + q\,\sigma_1^S(t) + w\,\sigma_2^S(t). \end{cases}$ (3.5)

    Then, (3.4) gives

    $b_0^P = -c\,b_0^S, \qquad \sigma_0^P = -\sigma_0^S\,\dfrac{c + \int_{\mathbb{R}} u_{xq}\,\bar m(dx)}{1 + \int_{\mathbb{R}} u_{xw}\,\bar m(dx)},$
    $b_1^P = -c\,b_1^S, \qquad \sigma_1^P = -\sigma_1^S\,\dfrac{c + \int_{\mathbb{R}} u_{xq}\,\bar m(dx)}{1 + \int_{\mathbb{R}} u_{xw}\,\bar m(dx)},$
    $b_2^P = -c\,b_2^S, \qquad \sigma_2^P = -\sigma_2^S\,\dfrac{c + \int_{\mathbb{R}} u_{xq}\,\bar m(dx)}{1 + \int_{\mathbb{R}} u_{xw}\,\bar m(dx)}.$

    Because all the terms in the Hamilton-Jacobi equation are at most quadratic, we seek solutions of the form

    $u(t,x,q,w) = a^0(t) + a_1^1(t)x + a_2^1(t)q + a_3^1(t)w + a_1^2(t)x^2 + a_2^2(t)xq + a_3^2(t)xw + a_4^2(t)q^2 + a_5^2(t)qw + a_6^2(t)w^2,$

    where $a_i^j:[0,T]\to\mathbb{R}$. Therefore, the previous identities reduce to

    $b_0^P = -c\,b_0^S, \qquad \sigma_0^P = -\sigma_0^S\,\dfrac{c + a_2^2}{1 + a_3^2},$
    $b_1^P = -c\,b_1^S, \qquad \sigma_1^P = -\sigma_1^S\,\dfrac{c + a_2^2}{1 + a_3^2},$
    $b_2^P = -c\,b_2^S, \qquad \sigma_2^P = -\sigma_2^S\,\dfrac{c + a_2^2}{1 + a_3^2}.$ (3.6)

    Using (3.6) and grouping coefficients in the Hamilton-Jacobi PDE, we obtain the following ODE system

    $\dot a_1^2 = \dfrac{2(a_1^2)^2}{c}$
    $\dot a_2^2 = \dfrac{c^2 a_3^2 b_1^S - c\,a_2^2 b_1^S + 2a_1^2 a_2^2}{c}$
    $\dot a_3^2 = \dfrac{c^2 a_3^2 b_2^S - c\,a_2^2 b_2^S + 2a_1^2 + 2a_1^2 a_3^2}{c}$
    $\dot a_1^1 = \dfrac{c^2 a_3^2 b_0^S - c\,a_2^2 b_0^S + 2a_1^1 a_1^2}{c}$
    $\dot a_4^2 = c\,a_5^2 b_1^S - 2a_4^2 b_1^S + \dfrac{a_5^2(a_2^2+c)(\sigma_1^S)^2}{a_3^2+1} - \dfrac{1}{4}\left(\dfrac{4a_6^2(a_2^2+c)^2(\sigma_1^S)^2}{(a_3^2+1)^2} + 4a_4^2(\sigma_1^S)^2\right) + \dfrac{(a_2^2)^2}{2c}$
    $\dot a_5^2 = 2c\,a_6^2 b_1^S + c\,a_5^2 b_2^S - a_5^2 b_1^S - 2a_4^2 b_2^S - \dfrac{1}{2}\left(\dfrac{4a_6^2(a_2^2+c)^2\sigma_1^S\sigma_2^S}{(a_3^2+1)^2} + 4a_4^2\sigma_1^S\sigma_2^S\right) + \dfrac{2a_5^2(a_2^2+c)\sigma_1^S\sigma_2^S}{a_3^2+1} + \dfrac{a_2^2(a_3^2+1)}{c}$
    $\dot a_6^2 = 2c\,a_6^2 b_2^S - a_5^2 b_2^S - \dfrac{1}{4}\left(\dfrac{4a_6^2(a_2^2+c)^2(\sigma_2^S)^2}{(a_3^2+1)^2} + 4a_4^2(\sigma_2^S)^2\right) + \dfrac{a_5^2(a_2^2+c)(\sigma_2^S)^2}{a_3^2+1} + \dfrac{(a_3^2+1)^2}{2c}$
    $\dot a^0 = c\,a_3^1 b_0^S - a_2^1 b_0^S + \dfrac{a_5^2(a_2^2+c)(\sigma_0^S)^2}{a_3^2+1} - \dfrac{1}{2}\left(\dfrac{2a_6^2(a_2^2+c)^2(\sigma_0^S)^2}{(a_3^2+1)^2} + 2a_4^2(\sigma_0^S)^2\right) + \dfrac{(a_1^1)^2}{2c}$
    $\dot a_2^1 = c\,a_5^2 b_0^S + c\,a_3^1 b_1^S - 2a_4^2 b_0^S - a_2^1 b_1^S + \dfrac{2a_5^2(a_2^2+c)\sigma_0^S\sigma_1^S}{a_3^2+1} - \dfrac{1}{2}\left(\dfrac{4a_6^2(a_2^2+c)^2\sigma_0^S\sigma_1^S}{(a_3^2+1)^2} + 4a_4^2\sigma_0^S\sigma_1^S\right) + \dfrac{a_1^1 a_2^2}{c}$
    $\dot a_3^1 = 2c\,a_6^2 b_0^S + c\,a_3^1 b_2^S - a_5^2 b_0^S - a_2^1 b_2^S - \dfrac{1}{2}\left(\dfrac{4a_6^2(a_2^2+c)^2\sigma_0^S\sigma_2^S}{(a_3^2+1)^2} + 4a_4^2\sigma_0^S\sigma_2^S\right) + \dfrac{2a_5^2(a_2^2+c)\sigma_0^S\sigma_2^S}{a_3^2+1} + \dfrac{a_1^1(a_3^2+1)}{c},$

    with terminal conditions

    $a^0(T) = \Psi(0,0,0) = c^0$
    $a_1^1(T) = D_x\Psi(0,0,0) = c_1^1$
    $a_2^1(T) = D_q\Psi(0,0,0) = c_2^1$
    $a_3^1(T) = D_w\Psi(0,0,0) = c_3^1$
    $a_1^2(T) = \tfrac{1}{2}D_{xx}\Psi(0,0,0) = c_1^2$
    $a_2^2(T) = D_{xq}\Psi(0,0,0) = c_2^2$
    $a_3^2(T) = D_{xw}\Psi(0,0,0) = c_3^2$
    $a_4^2(T) = \tfrac{1}{2}D_{qq}\Psi(0,0,0) = c_4^2$
    $a_5^2(T) = D_{qw}\Psi(0,0,0) = c_5^2$
    $a_6^2(T) = \tfrac{1}{2}D_{ww}\Psi(0,0,0) = c_6^2.$

    While this system has a complex structure, it admits some simplifications. For example, the equation for $a_1^2$ is independent of other terms and has the solution

    $a_1^2(t) = \dfrac{c\,c_1^2}{c + 2c_1^2(T - t)}.$

    Moreover, we can determine $a_2^2$ and $a_3^2$ from the linear system

    $\dfrac{d}{dt}\begin{bmatrix} a_2^2\\ a_3^2 \end{bmatrix} = \begin{bmatrix} -b_1^S + \dfrac{2}{c}a_1^2 & c\,b_1^S\\ -b_2^S & c\,b_2^S + \dfrac{2}{c}a_1^2 \end{bmatrix}\begin{bmatrix} a_2^2\\ a_3^2 \end{bmatrix} + \begin{bmatrix} 0\\ \dfrac{2}{c}a_1^2 \end{bmatrix}.$
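    A minimal numerical sketch of this step, assuming time-independent coefficients $b_1^S$ and $b_2^S$ and using the closed form for $a_1^2$ above: the terminal-value system is integrated backward in time with `scipy.integrate.solve_ivp`. Function names such as `solve_a22_a32` are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def a12(t, c, c12, T):
    """Closed-form coefficient a_1^2(t) = c*c12 / (c + 2*c12*(T - t))."""
    return c * c12 / (c + 2.0 * c12 * (T - t))

def solve_a22_a32(c, c12, bS1, bS2, T=1.0, terminal=(0.0, 0.0)):
    """Integrate the linear terminal-value system for (a_2^2, a_3^2) backward from t = T."""
    def rhs(t, y):
        a22, a32 = y
        a = a12(t, c, c12, T)
        da22 = (-bS1 + 2.0 * a / c) * a22 + c * bS1 * a32
        da32 = -bS2 * a22 + (c * bS2 + 2.0 * a / c) * a32 + 2.0 * a / c
        return [da22, da32]
    # Integrate from T down to 0; solve_ivp accepts decreasing time spans.
    sol = solve_ivp(rhs, (T, 0.0), list(terminal), dense_output=True, max_step=1e-3)
    return sol  # sol.sol(t) evaluates (a_2^2(t), a_3^2(t))

if __name__ == "__main__":
    sol = solve_a22_a32(c=1.0, c12=1.0, bS1=-1.0, bS2=0.0)  # data matching the example below
    print(sol.sol(0.0))
```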

    Lemma 3.1 takes the form

    $d\Pi_t = \left(a_2^2(t)\,\sigma^S(Q_t,\varpi_t,t) + a_3^2(t)\,\sigma^P(Q_t,\varpi_t,t)\right)dW_t.$

    Therefore,

    $\Pi_t = \Pi_0 + \int_0^t\left(a_2^2(r)\,\sigma^S(Q_r,\varpi_r,r) + a_3^2(r)\,\sigma^P(Q_r,\varpi_r,r)\right)dW_r,$

    where

    $\Pi_0 = a_1^1(0) + 2a_1^2(0)\int_{\mathbb{R}} x\,\bar m(dx) + a_2^2(0)\,\bar q + a_3^2(0)\,\bar w.$

    Replacing the above in the balance condition at the initial time, that is, $\bar w = -c\,\bar q - \Pi_0$, we obtain the initial condition for the price

    $\bar w = -\frac{1}{1 + a_3^2(0)}\left(a_1^1(0) + 2a_1^2(0)\int_{\mathbb{R}} x\,\bar m(dx) + (a_2^2(0) + c)\,\bar q\right),$ (3.7)

    where $a_1^1$ can be obtained after solving for $a_1^2$, $a_2^2$, and $a_3^2$.

    Now, we proceed with the price dynamics using the balance condition. Under linear dynamics, we have

    $Q_t = -\frac{1}{c}(\varpi_t + \Pi_0) - \frac{1}{c}\int_0^t\left[a_2^2(r)\left(\sigma_0^S(r) + Q_r\sigma_1^S(r) + \varpi_r\sigma_2^S(r)\right) + a_3^2(r)\left(\sigma_0^P(r) + Q_r\sigma_1^P(r) + \varpi_r\sigma_2^P(r)\right)\right]dW_r.$

    Thus, replacing the price coefficients using (3.6), we obtain

    $d\varpi_t = -c\left(b_0^S(t) + b_1^S(t)Q_t + b_2^S(t)\varpi_t\right)dt - \frac{c + a_2^2(t)}{1 + a_3^2(t)}\left(\sigma_0^S(t) + \sigma_1^S(t)Q_t + \sigma_2^S(t)\varpi_t\right)dW_t, \qquad dQ_t = b^S dt + \sigma^S dW_t,$

    which determines the dynamics for the price.

    In this section, we consider the running cost corresponding to $c = 1$; that is,

    $L(v) = \frac{1}{2}v^2$

    and terminal cost at time $T = 1$

    $\Psi(x) = (x - \alpha)^2.$

    We take $\bar m$ to be a standard normal distribution; that is, with zero mean and unit variance. We assume the dynamics for the normalized supply are mean-reverting:

    $dQ_t = (1 - Q_t)\,dt + Q_t\,dW_t,$

    with initial condition $\bar q = 1$. Therefore, the dynamics for the price become

    $d\varpi_t = -(1 - Q_t)\,dt - \frac{1 + a_2^2}{1 + a_3^2}\,Q_t\,dW_t,$

    with initial condition $\bar w$ given by (3.7), where $a_2^2$ and $a_3^2$ solve

    $\dot a_2^2 = -a_3^2 + a_2^2(1 + 2a_1^2), \qquad \dot a_3^2 = 2a_1^2(1 + a_3^2),$

    with terminal conditions $a_2^2(1) = 0$ and $a_3^2(1) = 0$. We observe that the coefficient multiplying $Q_t$ in the volatility of the price is now time-dependent.
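    A rough sketch of this experiment under the stated data ($c = 1$, $T = 1$, $\bar q = 1$, $\bar m$ standard normal, so $\int x\,\bar m(dx) = 0$): solve the coefficient ODEs backward, form $\bar w$ from (3.7), and run an Euler–Maruyama discretization of the supply and price dynamics. The equation used for $a_1^1$ is the one from the system above specialized to $b_0^S = 1$, $b_1^S = -1$, $b_2^S = 0$; the step count and random seed are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

T, c, alpha = 1.0, 1.0, 0.25
a12 = lambda t: c * 1.0 / (c + 2.0 * 1.0 * (T - t))   # c_1^2 = 1 since Psi = (x - alpha)^2

def rhs(t, y):
    # Backward ODEs for (a_2^2, a_3^2, a_1^1) with b_0^S = 1, b_1^S = -1, b_2^S = 0, c = 1.
    a22, a32, a11 = y
    return [-a32 + a22 * (1.0 + 2.0 * a12(t)),
            2.0 * a12(t) * (1.0 + a32),
            a32 - a22 + 2.0 * a11 * a12(t)]

# Terminal data: a_2^2(1) = a_3^2(1) = 0 and a_1^1(1) = D_x Psi(0) = -2*alpha.
sol = solve_ivp(rhs, (T, 0.0), [0.0, 0.0, -2.0 * alpha], dense_output=True, max_step=1e-3)
a22_0, a32_0, a11_0 = sol.sol(0.0)
q_bar = 1.0
w_bar = -(a11_0 + (a22_0 + c) * q_bar) / (1.0 + a32_0)   # Eq. (3.7) with int x m_bar(dx) = 0

# Euler–Maruyama for the supply and the induced price dynamics.
rng = np.random.default_rng(0)
n = 500
dt = T / n
Q = np.empty(n + 1); P = np.empty(n + 1)
Q[0], P[0] = q_bar, w_bar
for k in range(n):
    t = k * dt
    a22, a32, _ = sol.sol(t)
    dW = np.sqrt(dt) * rng.standard_normal()
    Q[k + 1] = Q[k] + (1.0 - Q[k]) * dt + Q[k] * dW
    P[k + 1] = P[k] - (1.0 - Q[k]) * dt - (1.0 + a22) / (1.0 + a32) * Q[k] * dW
print(w_bar, Q[-1], P[-1])
```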

    For a fixed realization of the supply, we compute the price for different values of $\alpha$. Agents begin with zero average energy. The results are displayed in Figure 1. As expected, the price is negatively correlated with the supply. Moreover, as the storage target $\alpha$ increases, prices increase, which reflects the competition between agents who, on average, want to increase their storage.

    Figure 1.  Supply vs. Price for the values α=0, α=0.1, α=0.25, α=0.5.

    The authors were partially supported by KAUST baseline funds and KAUST OSR-CRG2017-3452.

    All authors declare no conflicts of interest in this paper.



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)