Research article

Multi-objective particle swarm optimization with reverse multi-leaders


  • Although multi-objective particle swarm optimization (MOPSO) is easy to implement and has a fast convergence speed, its balance between convergence and diversity still needs to be improved. A multi-objective particle swarm optimization with reverse multi-leaders (RMMOPSO) is proposed as a solution to this issue. First, a convergence strategy based on global ranking and a diversity strategy based on mean angular distance are proposed, which are used to update the convergence archive and the diversity archive, respectively, to improve the convergence and diversity of the solutions in the archives. Second, a reverse selection method is proposed to select two global leaders for the particles in the population. This is conducive to selecting appropriate learning samples for each particle and leading the particles to fly quickly toward the true Pareto front. Third, an information fusion strategy is proposed to update the personal best and improve the convergence of the algorithm. At the same time, to achieve a better balance between convergence and diversity, a new particle velocity updating method is proposed, in which the two global leaders cooperate to guide the flight of the particles in the population, promoting the exchange of social information. Finally, RMMOPSO is compared with several state-of-the-art MOPSOs and multi-objective evolutionary algorithms (MOEAs) on 22 benchmark problems. The experimental results show that RMMOPSO has better comprehensive performance.

    Citation: Fei Chen, Yanmin Liu, Jie Yang, Meilan Yang, Qian Zhang, Jun Liu. Multi-objective particle swarm optimization with reverse multi-leaders[J]. Mathematical Biosciences and Engineering, 2023, 20(7): 11732-11762. doi: 10.3934/mbe.2023522




    In many industrial applications, due to the ubiquity of stochastic noise and nonlinearity [1,2], real systems are often modelled by stochastic differential equations, which has drawn more and more attention to the control of stochastic systems. Using state-feedback, the closed-loop poles can be assigned arbitrarily to improve the performance of the control system. Therefore, several scholars have studied the problem of state-feedback stabilization for stochastic systems: reference [3] focuses on the cooperative control problem of multiple nonlinear systems perturbed by second-order moment processes in a directed topology; reference [4] considers the case where the diffusion term and the drift term are unknown parameters for stochastic systems with strict feedback; reference [5] studies stochastic high-order systems with state constraints; and [6] discusses output-constrained stochastic systems with low-order and high-order nonlinearities and stochastic inverse dynamics. However, it is often difficult to obtain all the state variables of a system directly: they may be unsuitable for direct measurement, or the measurement equipment may be limited in economy and practicality, so the physical implementation of state-feedback is difficult. One solution to this difficulty is to reconstruct the state of the system with an observer and investigate output-feedback stabilization: reference [7] investigates the prescribed-time stability problem of stochastic nonlinear strict-feedback systems, and reference [8] focuses on stochastic strict-feedback systems with sensor uncertainties. In addition, based on output-feedback, a distributed output-feedback tracking controller for nonlinear multiagent systems is proposed in [9].

    It should be noted that in all of the above results [7,8,9], Markovian switching is not considered in the design of the output-feedback controller. However, as demonstrated by [10], a switching system is a complex hybrid system, which consists of a series of subsystems together with switching rules that coordinate the order of the subsystems. In real life, due to the aging of internal components, high temperature, sudden disturbances of the external environment, operator error and other inevitable factors, the structure of many systems changes suddenly. Such systems can be reasonably modelled as differential equations with Markovian switching, see [11,12]. Recently, references [13] and [14] discussed the adaptive tracking problem and the output tracking problem with Markovian switching, respectively. Besides, as shown in [15], the power of a system may change because of factors such as the aging of the springs inside a boiler-turbine unit. Therefore, research on the stability of stochastic nonlinear systems with time-varying powers has important practical significance. Reference [16] investigates the optimality and stability of high-order stochastic systems with time-varying powers. However, these results do not address output-feedback stabilization for high-order stochastic systems with both Markovian switching and time-varying powers.

    Based on these discussions, we aim to resolve the output-feedback stabilization for higher-order stochastic nonlinear systems with both Markovian switching and time-varying powers. The main contributions and characteristics of this paper are two-fold:

    1) The system model we consider is more general than those in the existing results [7,8,9] and [12,13,14]. Different from the previous results [7,8,9], a stochastic system with Markovian switching is studied in this paper. Unlike the previous studies in [12,13,14], we consider a power that is time-varying. The simultaneous presence of the Markov process and the time-varying power makes the controller design process more complicated and difficult, and more advanced stochastic analysis techniques are needed.

    2) We propose a new observer. The existence of Markovian switching and of a nondifferentiable time-varying power makes the observers constructed in [7,8,9] invalid. We use the bounds of the time-varying power to construct a new observer, which can effectively observe the unmeasurable state and can deal with a nonlinear growth rate, while the existing observers can only deal with constant growth rates.

    The rest of this paper is listed as follows. The problem is formulated in Section 2. In Section 3, an output-feedback controller is designed. Section 4 is the stability analysis. A simulation is given in Section 5. The conclusions are collected in Section 6.

    Notations: $\mathbb{R}^2$ denotes the 2-dimensional Euclidean space and the set of nonnegative real numbers is represented by $\mathbb{R}^+$. For a matrix or vector $X$, its transpose is represented by $X^T$; $|X|$ denotes the Euclidean norm of a vector $X$; when $X$ is square, $Tr\{X\}$ denotes its trace. The set of all functions with continuous $i$th partial derivatives is represented by $C^i$. Let $C^{2,1}(\mathbb{R}^2\times\mathbb{R}^+\times S;\mathbb{R}^+)$ denote the family of all nonnegative functions $V$ on $\mathbb{R}^2\times\mathbb{R}^+\times S$ that are $C^2$ in $x$ and $C^1$ in $t$.

    This paper studies the output-feedback stabilization for stochastic nonlinear systems with both Markovian switching and time-varying powers described by:

    $$ d\zeta_1=[\zeta_2]^{m(t)}dt,\qquad d\zeta_2=[u]^{m(t)}dt+f^{T}_{\gamma(t)}(\bar\zeta_2)d\omega,\qquad y=\zeta_1, \tag{2.1} $$

    where $\zeta=\bar\zeta_2=(\zeta_1,\zeta_2)^T\in\mathbb{R}^2$, $y\in\mathbb{R}$ and $u\in\mathbb{R}$ are the system state, the control output and the input, respectively. The state $\zeta_2$ is unmeasurable. The function $m(t):\mathbb{R}^+\to\mathbb{R}^+$ is continuous and bounded, and satisfies $1\le\underline{m}\le m(t)\le\bar{m}$ with $\underline{m}$ and $\bar{m}$ being constants. The power sign function $[\cdot]^\alpha$ is defined as $[\cdot]^\alpha:=\mathrm{sign}(\cdot)|\cdot|^\alpha$ with $\alpha\in(0,+\infty)$. The functions $f_{\gamma(t)}$ are assumed to be smooth and, for all $t\ge0$, locally Lipschitz continuous in $x$ uniformly in $t$, with $f_{\gamma(t)}(t,0)=0$. $\omega$ is an $r$-dimensional standard Wiener process defined on the complete probability space $(\Omega,\mathcal{F},\mathcal{F}_t,P)$ with the filtration $\mathcal{F}_t$ satisfying the usual conditions. $\gamma(t)$ is a homogeneous Markov process on this probability space taking values in a finite set $S=\{1,2,...,N\}$, with generator $\Gamma=(\lambda_{ij})_{N\times N}$ given by

    $$ P_{ij}(t)=P\{\gamma(t+s)=j\,|\,\gamma(s)=i\}=\begin{cases}\lambda_{ij}t+o(t), & i\neq j,\\ 1+\lambda_{ii}t+o(t), & i=j,\end{cases} \tag{2.2} $$

    where $\lambda_{ij}>0$ is the transition rate from $i$ to $j$ if $i\neq j$, while $\lambda_{ii}=-\sum_{j=1,j\neq i}^{N}\lambda_{ij}$, for any $s,t\ge0$. The Markov process $\gamma(t)$ is assumed to be independent of $\omega(t)$.
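The switching law (2.2) can be sampled directly from the generator: a continuous-time Markov chain stays in state $i$ for an exponential holding time with rate $-\lambda_{ii}$ and then jumps to $j\neq i$ with probability $\lambda_{ij}/(-\lambda_{ii})$. A minimal sketch (the helper name `simulate_ctmc` is ours, not from the paper; the generator shown is the one used later in Section 5):

```python
import random

def simulate_ctmc(Q, i0, T, rng):
    """Sample a path of a continuous-time Markov chain with generator Q
    (Q[i][j] = lambda_ij for i != j, each row sums to zero) on [0, T]."""
    t, i, path = 0.0, i0, [(0.0, i0)]
    while True:
        rate = -Q[i][i]                      # total exit rate of state i
        if rate <= 0.0:                      # absorbing state: stop
            break
        t += rng.expovariate(rate)           # exponential holding time
        if t >= T:
            break
        # jump to j != i with probability Q[i][j] / rate
        u, acc = rng.random() * rate, 0.0
        for j in range(len(Q)):
            if j == i:
                continue
            acc += Q[i][j]
            if u <= acc:
                i = j
                break
        path.append((t, i))
    return path

rng = random.Random(1)
Q = [[-2.0, 2.0], [1.0, -1.0]]               # two-mode generator from Section 5
path = simulate_ctmc(Q, 0, 50.0, rng)
```

With a two-state chain every jump changes the mode, so the sampled path alternates between the two states, as in Figure 2 later on.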

    To implement the controller design, we need the following assumption.

    Assumption 2.1. There exists a non-negative smooth function $\tilde f(\zeta_1)$ such that

    $$ |f_{\gamma(t)}(\bar\zeta_2)|\le\Big(|\zeta_1|^{\frac{m(t)+1}{2}}+|\zeta_2|^{\frac{m(t)+1}{2}}\Big)\tilde f(\zeta_1). \tag{2.3} $$

    Remark 2.1. To the best of our knowledge, the existing results for stochastic systems with time-varying powers (e.g., [16]), whether under state-feedback or output-feedback control, do not consider Markovian switching. However, the structure of many physical systems often mutates in practice, which makes it necessary to study systems with both Markovian switching and time-varying powers. Therefore, compared with [16], the model we consider is more practical and more general.

    Remark 2.2. In Assumption 2.1, the power $m(t)$ is time-varying and the growth rate $\tilde f(\zeta_1)$ is a nonlinear function. When $m(t)=1$ and $\tilde f(\zeta_1)$ is a constant, Assumption 2.1 reduces to the linear growth condition. Since we allow $\tilde f(\zeta_1)$ to be a nonlinear function, which includes the constant case as a special case, the growth condition of Assumption 2.1 is broader than the linear growth condition. The time-varying power $m(t)$ invalidates the designs in [7,8,9] for time-invariant powers. In addition, the nonlinear growth rate $\tilde f(\zeta_1)$ makes the designs in [7,8,9,17,18] for constant growth rates fail, so a new design scheme must be proposed.
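The power sign function $[\cdot]^\alpha$ and a bounded, continuous time-varying power are straightforward to realize numerically; the following is an illustrative sketch (the names `powsign` and `m` are ours, and the sample bounds are chosen only for illustration):

```python
import math

def powsign(x: float, alpha: float) -> float:
    """Power sign function [x]^alpha = sign(x) * |x|^alpha."""
    return math.copysign(abs(x) ** alpha, x)

def m(t: float) -> float:
    """An illustrative bounded continuous power with 1 <= m(t) <= 2."""
    return 1.5 + 0.5 * math.sin(t)

# [x]^alpha is odd and sign-preserving:
print(powsign(-2.0, 1.5), powsign(2.0, 1.5), m(0.0))
```

Note that `powsign` is odd, so `powsign(-x, a) == -powsign(x, a)`, matching the definition above.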

    In this section, we develop an output-feedback controller design for system (2.1). The process is divided into two steps:

    Firstly, we assume that all states are measurable and develop a state-feedback controller using the backstepping technique.

    Secondly, we construct a reduced-order observer with a dynamic gain, and design an output-feedback controller.

    In this part, under Assumption 2.1, our objective is to develop a state-feedback controller design for the system (2.1).

    Step 1. Introducing the coordinate transformation $\xi_1=\zeta_1$ and choosing $V_1=\frac14\xi_1^4$, by using the infinitesimal generator defined in Section 1.8 of [11], we have

    $$ \mathcal{L}V_1\le\xi_1^3[\zeta_2]^{m(t)}+\Pi V_1=\xi_1^3\big([\zeta_2]^{m(t)}-[\zeta_2^*]^{m(t)}\big)+\xi_1^3[\zeta_2^*]^{m(t)}+\Pi V_1, \tag{3.1} $$

    where $\zeta_2^*$ is the virtual controller to be designed and $\Pi V_1$ denotes the Markovian-switching term $\sum_{j\in S}\lambda_{ij}V_1$ of the generator.

    If we choose the virtual controller $\zeta_2^*$ as

    $$ \zeta_2^*=-c_1^{1/\underline{m}}\xi_1:=-\alpha_1\xi_1, \tag{3.2} $$

    we get

    $$ \xi_1^3[\zeta_2^*]^{m(t)}=-\alpha_1^{m(t)}|\xi_1|^{m(t)+3}\le-c_1|\xi_1|^{m(t)+3}, \tag{3.3} $$

    where $\alpha_1=c_1^{1/\underline{m}}$ is a constant with $c_1\ge1$ being a design parameter.

    Substituting (3.3) into (3.1) yields

    $$ \mathcal{L}V_1\le-c_1|\xi_1|^{m(t)+3}+\xi_1^3\big([\zeta_2]^{m(t)}-[\zeta_2^*]^{m(t)}\big)+\Pi V_1. \tag{3.4} $$

    Step 2. Introducing the coordinate transformation $\xi_2=\zeta_2-\zeta_2^*$ and using Itô's differentiation rule, we get

    $$ d\xi_2=\Big([u]^{m(t)}-\frac{\partial\zeta_2^*}{\partial\zeta_1}[\zeta_2]^{m(t)}\Big)dt+f^T_{\gamma(t)}(\bar\zeta_2)d\omega. \tag{3.5} $$

    Choose $V_2=V_1+\frac14\xi_2^4$. From (3.4) and (3.5), we obtain

    $$ \mathcal{L}V_2\le-c_1|\xi_1|^{m(t)+3}+\xi_1^3\big([\zeta_2]^{m(t)}-[\zeta_2^*]^{m(t)}\big)+\xi_2^3[u]^{m(t)}-\xi_2^3\frac{\partial\zeta_2^*}{\partial\zeta_1}[\zeta_2]^{m(t)}+\frac32\xi_2^2|f^T_{\gamma(t)}(\bar\zeta_2)|^2+\Pi V_2. \tag{3.6} $$

    By (3.2) and using Lemma 1 in [19], we have

    $$ \xi_1^3\big([\zeta_2]^{m(t)}-[\zeta_2^*]^{m(t)}\big)\le\bar{m}(2^{\bar{m}-2}+2)\big(|\xi_1|^3|\xi_2|^{m(t)}+\alpha_1^{\bar{m}-1}|\xi_1|^{m(t)+2}|\xi_2|\big). \tag{3.7} $$

    By using Lemma 2.1 in [20], we get

    $$ \begin{aligned} \bar{m}(2^{\bar{m}-2}+2)|\xi_1|^3|\xi_2|^{m(t)}&\le\frac16|\xi_1|^{3+m(t)}+\beta_{211}|\xi_2|^{3+m(t)},\\ \bar{m}(2^{\bar{m}-2}+2)\alpha_1^{\bar{m}-1}|\xi_1|^{m(t)+2}|\xi_2|&\le\frac16|\xi_1|^{3+m(t)}+\beta_{212}|\xi_2|^{3+m(t)}, \end{aligned} \tag{3.8} $$

    where

    $$ \beta_{211}=\frac{\bar{m}}{\underline{m}+3}\big(\bar{m}(2^{\bar{m}-2}+2)\big)^{\frac{3+\bar{m}}{\underline{m}}}\Big(\frac{3+\underline{m}}{18}\Big)^{-\frac{3}{\bar{m}}},\qquad \beta_{212}=\frac{1}{\underline{m}+3}\big(\bar{m}(2^{\bar{m}-2}+2)\alpha_1^{\bar{m}-1}\big)^{\bar{m}+3}\Big(\frac{\underline{m}+3}{6(\bar{m}+2)}\Big)^{-(\bar{m}+2)}. \tag{3.9} $$

    Substituting (3.8) into (3.7) yields

    $$ \xi_1^3\big([\zeta_2]^{m(t)}-[\zeta_2^*]^{m(t)}\big)\le\frac13|\xi_1|^{3+m(t)}+\beta_{21}|\xi_2|^{3+m(t)}, \tag{3.10} $$

    where $\beta_{21}=\beta_{211}+\beta_{212}$ is a positive constant.
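The splittings of the type (3.8) are instances of Young's inequality: for conjugate exponents $p,q$ (with $\frac1p+\frac1q=1$) and any $\varepsilon>0$, $ab\le\varepsilon a^p+\frac1q(\varepsilon p)^{-q/p}b^q$. A numeric spot-check of this generic bound with $a=|\xi_1|^3$, $b=|\xi_2|^{m}$, $p=\frac{3+m}{3}$ and $\varepsilon=\frac16$, using an illustrative fixed power (the exact constants $\beta_{211},\beta_{212}$ above are not reproduced here):

```python
import random

def young_rhs(a, b, p, eps):
    """Right-hand side of Young's inequality a*b <= eps*a**p + C(eps)*b**q."""
    q = p / (p - 1.0)
    C = (1.0 / q) * (eps * p) ** (-q / p)
    return eps * a ** p + C * b ** q

rng = random.Random(0)
m_t = 1.7                       # an illustrative sample value of m(t)
p = (3.0 + m_t) / 3.0           # then a^p = |x1|^(3+m), b^q = |x2|^(3+m)
ok = all(
    abs(x1) ** 3 * abs(x2) ** m_t
    <= young_rhs(abs(x1) ** 3, abs(x2) ** m_t, p, eps=1 / 6) * (1 + 1e-9) + 1e-12
    for x1, x2 in ((rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(10_000))
)
print(ok)
```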

    By (3.2) and using Lemma 5 in [21], we get

    $$ |\zeta_2|^{m(t)}=|\xi_2+\zeta_2^*|^{m(t)}\le\big(|\xi_2|+|\alpha_1\xi_1|\big)^{m(t)}\le2^{\bar{m}-1}\big(|\xi_2|^{m(t)}+|\alpha_1\xi_1|^{m(t)}\big)\le2^{\bar{m}-1}\alpha_1^{\bar{m}}\big(|\xi_2|^{m(t)}+|\xi_1|^{m(t)}\big), \tag{3.11} $$

    which means that

    $$ |\zeta_1|^{m(t)}+|\zeta_2|^{m(t)}\le\varphi_1\big(|\xi_2|^{m(t)}+|\xi_1|^{m(t)}\big), \tag{3.12} $$

    where $\varphi_1=2^{\bar{m}-1}\alpha_1^{\bar{m}}+1>0$ is a constant.

    By (3.11) and using Lemma 1 in [19], we have

    $$ -\xi_2^3\frac{\partial\zeta_2^*}{\partial\zeta_1}[\zeta_2]^{m(t)}\le|\xi_2^3|\Big|\frac{\partial\zeta_2^*}{\partial\zeta_1}\Big|2^{\bar{m}-1}\alpha_1^{\bar{m}}\big(|\xi_2|^{m(t)}+|\xi_1|^{m(t)}\big)\le2^{\bar{m}-1}\alpha_1^{\bar{m}}\Big|\frac{\partial\zeta_2^*}{\partial\zeta_1}\Big|\big(|\xi_2|^{m(t)+3}+|\xi_2^3||\xi_1|^{m(t)}\big). \tag{3.13} $$

    By using Lemma 2.1 in [20], we get

    $$ 2^{\bar{m}-1}\alpha_1^{\bar{m}}\Big|\frac{\partial\zeta_2^*}{\partial\zeta_1}\Big||\xi_2^3||\xi_1|^{m(t)}\le\frac13|\xi_1|^{3+m(t)}+\beta_{221}(\zeta_1)|\xi_2|^{3+m(t)}, \tag{3.14} $$

    where

    $$ \beta_{221}(\zeta_1)=\frac{3}{\underline{m}+3}\Big(2^{\bar{m}-1}\alpha_1^{\bar{m}}\Big|\frac{\partial\zeta_2^*}{\partial\zeta_1}\Big|\Big)^{\frac{\bar{m}+3}{3}}\Big(\frac{\underline{m}+3}{3\bar{m}}\Big)^{-\frac{\bar{m}}{3}}. \tag{3.15} $$

    Substituting (3.14) into (3.13) yields

    $$ -\xi_2^3\frac{\partial\zeta_2^*}{\partial\zeta_1}[\zeta_2]^{m(t)}\le\frac13|\xi_1|^{3+m(t)}+\beta_{22}(\zeta_1)|\xi_2|^{3+m(t)}, \tag{3.16} $$

    where $\beta_{22}(\zeta_1)=2^{\bar{m}-1}\alpha_1^{\bar{m}}\big|\frac{\partial\zeta_2^*}{\partial\zeta_1}\big|+\beta_{221}(\zeta_1)$ is a smooth function independent of $m(t)$.

    By (3.12), using Assumption 2.1 and Lemma 1 in [19], we get

    $$ \frac32\xi_2^2|f^T_{\gamma(t)}(\bar\zeta_2)|^2\le3\tilde f^2(\zeta_1)|\xi_2|^2\big(|\zeta_1|^{m(t)+1}+|\zeta_2|^{m(t)+1}\big)\le3\tilde f^2(\zeta_1)\varphi_2\big(|\xi_2|^{m(t)+3}+|\xi_2|^2|\xi_1|^{m(t)+1}\big), \tag{3.17} $$

    where $\varphi_2=2^{\bar{m}}\alpha_1^{\bar{m}+1}+1>0$ is a constant.

    From Lemma 2.1 in [20], we obtain

    $$ 3\tilde f^2(\zeta_1)\varphi_2|\xi_2|^2|\xi_1|^{m(t)+1}\le\frac13|\xi_1|^{m(t)+3}+\beta_{231}(\zeta_1)|\xi_2|^{m(t)+3}, \tag{3.18} $$

    where

    $$ \beta_{231}(\zeta_1)=\frac{2}{\underline{m}+3}\big(3\tilde f^2(\zeta_1)\varphi_2\big)^{\frac{\bar{m}+3}{2}}\Big(\frac{\underline{m}+3}{3(\bar{m}+1)}\Big)^{-\frac{\bar{m}+1}{2}}. \tag{3.19} $$

    Substituting (3.18) into (3.17) yields

    $$ \frac32\xi_2^2|f^T_{\gamma(t)}(\bar\zeta_2)|^2\le\frac13|\xi_1|^{m(t)+3}+\beta_{23}(\zeta_1)|\xi_2|^{m(t)+3}, \tag{3.20} $$

    where $\beta_{23}(\zeta_1)=3\tilde f^2(\zeta_1)\varphi_2+\beta_{231}(\zeta_1)\ge0$ is a smooth function independent of $m(t)$.

    By using (3.6), (3.10), (3.16) and (3.20), we obtain

    $$ \mathcal{L}V_2\le-(c_1-1)|\xi_1|^{m(t)+3}+\xi_2^3\big([u]^{m(t)}-[x_3^*]^{m(t)}\big)+\xi_2^3[x_3^*]^{m(t)}+\beta_2(\zeta_1)|\xi_2|^{m(t)+3}+\Pi V_2, \tag{3.21} $$

    where $\beta_2(\zeta_1)=\beta_{21}+\beta_{22}(\zeta_1)+\beta_{23}(\zeta_1)$ is a smooth function independent of $m(t)$ and $x_3^*$ is the virtual controller to be constructed.

    Constructing the virtual controller as

    $$ x_3^*=-\big(c_2+\beta_2(\zeta_1)\big)^{\frac{1}{\underline{m}}}\xi_2:=-\alpha_2(\zeta_1)\xi_2, \tag{3.22} $$

    we have

    $$ \xi_2^3[x_3^*]^{m(t)}=-\alpha_2^{m(t)}(\zeta_1)|\xi_2|^{m(t)+3}\le-\big(c_2+\beta_2(\zeta_1)\big)|\xi_2|^{m(t)+3}, \tag{3.23} $$

    where $c_2>0$ is a constant and $\alpha_2(\zeta_1)\ge0$ is a smooth function independent of $m(t)$.

    Substituting (3.23) into (3.21) yields

    $$ \mathcal{L}V_2\le-(c_1-1)|\xi_1|^{m(t)+3}-c_2|\xi_2|^{m(t)+3}+\xi_2^3\big([u]^{m(t)}-[x_3^*]^{m(t)}\big)+\Pi V_2. \tag{3.24} $$

    In this part, we first design a reduced-order observer with a dynamic gain, then we design an output-feedback controller.

    Since $\zeta_2$ is unmeasurable, we construct the following observer:

    $$ d\eta=\Big([u]^{m(t)}-\frac{\partial L(\zeta_1)}{\partial\zeta_1}[\eta+L(\zeta_1)]^{m(t)}\Big)dt, \tag{3.25} $$

    where $L(\zeta_1)$ is a smooth function with $\frac{\partial L(\zeta_1)}{\partial\zeta_1}>0$, independent of $m(t)$.

    Defining $e=\zeta_2-L(\zeta_1)-\eta$, by the construction of the observer we have

    $$ de=\frac{\partial L(\zeta_1)}{\partial\zeta_1}\big([\eta+L(\zeta_1)]^{m(t)}-[\zeta_2]^{m(t)}\big)dt+f^T_{\gamma(t)}(\bar\zeta_2)d\omega. \tag{3.26} $$

    Choose $U=\frac14e^4$. From (3.26), we get

    $$ \mathcal{L}U=e^3\frac{\partial L(\zeta_1)}{\partial\zeta_1}\big([\eta+L(\zeta_1)]^{m(t)}-[\zeta_2]^{m(t)}\big)+\frac32e^2|f^T_{\gamma(t)}(\bar\zeta_2)|^2+\Pi U. \tag{3.27} $$

    By the definition of $e$ and Lemma 2.2 in [22], we have

    $$ e^3\frac{\partial L(\zeta_1)}{\partial\zeta_1}\big([\eta+L(\zeta_1)]^{m(t)}-[\zeta_2]^{m(t)}\big)\le-\frac{1}{2^{\bar{m}-1}}\frac{\partial L(\zeta_1)}{\partial\zeta_1}|e|^{m(t)+3}. \tag{3.28} $$
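The bound (3.28) rests on a standard power-difference inequality of the form $([a]^{m}-[b]^{m})(a-b)\ge2^{1-m}|a-b|^{m+1}$ for $m\ge1$ (this is the role played by Lemma 2.2 in [22]); a random spot-check of this inequality:

```python
import math, random

def powsign(x, alpha):
    """Power sign function [x]^alpha."""
    return math.copysign(abs(x) ** alpha, x)

def gap_ok(a, b, mpow, tol=1e-9):
    """Check ([a]^m - [b]^m)(a - b) >= 2^(1-m) |a - b|^(m+1)."""
    lhs = (powsign(a, mpow) - powsign(b, mpow)) * (a - b)
    rhs = 2.0 ** (1.0 - mpow) * abs(a - b) ** (mpow + 1.0)
    return lhs >= rhs - tol * (1.0 + abs(rhs))

rng = random.Random(42)
checks = all(
    gap_ok(rng.uniform(-4, 4), rng.uniform(-4, 4), rng.uniform(1.0, 3.0))
    for _ in range(20_000)
)
print(checks)
```

Equality is attained at $b=-a$, which is why the constant $2^{1-m}$ cannot be improved.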

    From (3.12), (3.17) and Assumption 2.1, we get

    $$ \frac32e^2|f^T_{\gamma(t)}(\bar\zeta_2)|^2\le3\tilde f^2(\zeta_1)|e|^2\big(|\zeta_1|^{m(t)+1}+|\zeta_2|^{m(t)+1}\big)\le3\tilde f^2(\zeta_1)\varphi_2\big(|e|^2|\xi_2|^{m(t)+1}+|e|^2|\xi_1|^{m(t)+1}\big). \tag{3.29} $$

    By using Lemma 2.1 in [20], we have

    $$ \begin{aligned} 3\tilde f^2(\zeta_1)\varphi_2|e|^2|\xi_1|^{1+m(t)}&\le|\xi_1|^{3+m(t)}+\beta_{31}(\zeta_1)|e|^{3+m(t)},\\ 3\tilde f^2(\zeta_1)\varphi_2|e|^2|\xi_2|^{1+m(t)}&\le\frac12|\xi_2|^{3+m(t)}+\beta_{32}(\zeta_1)|e|^{3+m(t)}, \end{aligned} \tag{3.30} $$

    where

    $$ \beta_{31}(\zeta_1)=\frac{2}{\underline{m}+3}\big(3\tilde f^2(\zeta_1)\varphi_2\big)^{\frac{\bar{m}+3}{2}}\Big(\frac{\underline{m}+3}{\bar{m}+1}\Big)^{-\frac{\bar{m}+1}{2}},\qquad \beta_{32}(\zeta_1)=\frac{2}{\underline{m}+3}\big(3\tilde f^2(\zeta_1)\varphi_2\big)^{\frac{\bar{m}+3}{2}}\Big(\frac{3+\underline{m}}{2(1+\bar{m})}\Big)^{-\frac{1+\bar{m}}{2}}. \tag{3.31} $$

    Substituting (3.30) into (3.29) yields

    $$ \frac32e^2|f^T_{\gamma(t)}(\bar\zeta_2)|^2\le|\xi_1|^{m(t)+3}+\frac12|\xi_2|^{m(t)+3}+\beta_3(\zeta_1)|e|^{m(t)+3}, \tag{3.32} $$

    where $\beta_3(\zeta_1)=\beta_{31}(\zeta_1)+\beta_{32}(\zeta_1)\ge0$ is a smooth function independent of $m(t)$.

    Substituting (3.28), (3.32) into (3.27) yields

    $$ \mathcal{L}U\le|\xi_1|^{m(t)+3}+\frac12|\xi_2|^{m(t)+3}-\Big(\frac{1}{2^{\bar{m}-1}}\frac{\partial L(\zeta_1)}{\partial\zeta_1}-\beta_3(\zeta_1)\Big)|e|^{m(t)+3}+\Pi U. \tag{3.33} $$

    Since $\zeta_2$ is unmeasurable, we replace $\zeta_2$ in the virtual controller $x_3^*$ with $\eta+L(\zeta_1)$, which gives the controller

    $$ u=-\alpha_2(\zeta_1)\big(\eta+L(\zeta_1)+\alpha_1\zeta_1\big). \tag{3.34} $$

    By (3.22), (3.24) and (3.34), we obtain

    $$ \mathcal{L}V_2\le-(c_1-1)|\xi_1|^{m(t)+3}-c_2|\xi_2|^{m(t)+3}+\xi_2^3\alpha_2^{\bar{m}}(\zeta_1)\big([\xi_2]^{m(t)}-[\xi_2-e]^{m(t)}\big)+\Pi V_2. \tag{3.35} $$

    By using Lemma 1 in [19], we have

    $$ \xi_2^3\alpha_2^{\bar{m}}(\zeta_1)\big([\xi_2]^{m(t)}-[\xi_2-e]^{m(t)}\big)\le\alpha_2^{\bar{m}}(\zeta_1)\bar{m}(2^{\bar{m}-2}+2)\big(|\xi_2|^3|e|^{m(t)}+|e||\xi_2|^{m(t)+2}\big). \tag{3.36} $$

    By using Lemma 2.1 in [20], we get

    $$ \begin{aligned} \alpha_2^{\bar{m}}(\zeta_1)\bar{m}(2^{\bar{m}-2}+2)|\xi_2|^3|e|^{m(t)}&\le\frac14|\xi_2|^{3+m(t)}+\beta_{41}(\zeta_1)|e|^{3+m(t)},\\ \alpha_2^{\bar{m}}(\zeta_1)\bar{m}(2^{\bar{m}-2}+2)|e||\xi_2|^{2+m(t)}&\le\frac14|\xi_2|^{3+m(t)}+\beta_{42}(\zeta_1)|e|^{3+m(t)}, \end{aligned} \tag{3.37} $$

    where

    $$ \beta_{41}(\zeta_1)=\frac{\bar{m}}{\underline{m}+3}\big(\alpha_2^{\bar{m}}(\zeta_1)\bar{m}(2^{\bar{m}-2}+2)\big)^{\frac{\bar{m}+3}{\underline{m}}}\Big(\frac{\underline{m}+3}{12}\Big)^{-\frac{3}{\bar{m}}},\qquad \beta_{42}(\zeta_1)=\frac{1}{\underline{m}+3}\big(\alpha_2^{\bar{m}}(\zeta_1)\bar{m}(2^{\bar{m}-2}+2)\big)^{\bar{m}+3}\Big(\frac{\underline{m}+3}{4(\bar{m}+2)}\Big)^{-(\bar{m}+2)}. \tag{3.38} $$

    Substituting (3.37) into (3.36) yields

    $$ \xi_2^3\alpha_2^{\bar{m}}(\zeta_1)\big([\xi_2]^{m(t)}-[\xi_2-e]^{m(t)}\big)\le\frac12|\xi_2|^{3+m(t)}+\beta_4(\zeta_1)|e|^{3+m(t)}, \tag{3.39} $$

    where $\beta_4(\zeta_1)=\beta_{41}(\zeta_1)+\beta_{42}(\zeta_1)\ge0$ is a smooth function independent of $m(t)$.

    By using (3.39) and (3.35), we have

    $$ \mathcal{L}V_2\le-(c_1-1)|\xi_1|^{m(t)+3}-\Big(c_2-\frac12\Big)|\xi_2|^{m(t)+3}+\beta_4(\zeta_1)|e|^{m(t)+3}+\Pi V_2. \tag{3.40} $$

    Choosing $V(\xi_1,\xi_2,e)=V_2(\xi_1,\xi_2)+U(e)$, by (3.33) and (3.40), we obtain

    $$ \mathcal{L}V\le-(c_1-2)|\xi_1|^{m(t)+3}-(c_2-1)|\xi_2|^{m(t)+3}-\Big(\frac{1}{2^{\bar{m}-1}}\frac{\partial L(\zeta_1)}{\partial\zeta_1}-\beta_3(\zeta_1)-\beta_4(\zeta_1)\Big)|e|^{m(t)+3}+\Pi V. \tag{3.41} $$

    Let

    $$ L(\zeta_1)=2^{\bar{m}-1}\Big(c_3\zeta_1+\int_0^{\zeta_1}\big(\beta_3(s)+\beta_4(s)\big)ds\Big), \tag{3.42} $$

    and the controller as

    $$ u=-\alpha_2(\zeta_1)\Big(\eta+2^{\bar{m}-1}\Big(c_3\zeta_1+\int_0^{\zeta_1}\big(\beta_3(s)+\beta_4(s)\big)ds\Big)+\alpha_1\zeta_1\Big), \tag{3.43} $$

    where c3>0 is a design parameter.

    By using (3.41) and (3.42), we can obtain

    $$ \mathcal{L}V\le-(c_1-2)|\xi_1|^{m(t)+3}-(c_2-1)|\xi_2|^{m(t)+3}-c_3|e|^{m(t)+3}+\Pi V. \tag{3.44} $$

    Remark 3.1. If $m(t)$ were time-invariant and the growth rate a constant rather than a smooth function, as in [7,8,9], then from (3.32) and (3.39), $\beta_3$ and $\beta_4$ would be constants independent of $\zeta_1$, and the dynamic gain $L(\zeta_1)$ would be a linear function of $\zeta_1$: we could design $L(\zeta_1)=c\zeta_1$ with a suitable parameter $c$ to make $\mathcal{L}V$ in (3.41) negative definite. In this paper, however, the growth rate $\tilde f(\zeta_1)$ is a nonnegative smooth function and $m(t)$ is time-varying and non-differentiable, which makes deriving the dynamic gain much more difficult. To solve this problem, we introduce the two constants $\underline{m}$ and $\bar{m}$, which are used throughout the design process, see (3.7) and (3.11). In this way, the dynamic gain (3.42) can be designed independently of $m(t)$, which is crucial to ensure the effectiveness of the observer and controller. This is one of the main innovations of this paper.

    In this section, for the closed-loop system (2.1), (3.25) and (3.43), we first give a lemma, which is useful to prove the system has a unique solution. Then, we present the main results of the stability analysis.

    Lemma 4.1. For $\zeta\in\mathbb{R}$, the function $h(\zeta)=[\zeta]^{m(t)}$ satisfies a locally Lipschitz condition.

    Proof. If ζ=0, we can get

    $$ h'_+(0)=\lim_{\zeta\to0^+}\frac{h(\zeta)-h(0)}{\zeta}=0,\qquad h'_-(0)=\lim_{\zeta\to0^-}\frac{h(\zeta)-h(0)}{\zeta}=0. \tag{4.1} $$

    Then, we have

    $$ \frac{dh}{d\zeta}\Big|_{\zeta=0}=h'_+(0)=h'_-(0)=0; \tag{4.2} $$

    thus, $h(\zeta)$ is differentiable at $\zeta=0$ and hence satisfies the locally Lipschitz condition at $\zeta=0$.

    For $\zeta>0$, we get

    $$ h(\zeta)=[\zeta]^{m(t)}=\zeta^{m(t)}. \tag{4.3} $$

    For $m(t)\ge1$, $h(\zeta)$ is differentiable for $\zeta>0$, and so satisfies the locally Lipschitz condition for $\zeta>0$. Similarly, the conclusion is valid for $\zeta<0$.

    Therefore, the conclusion holds for $\zeta\in\mathbb{R}$.
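The Lipschitz property can also be spot-checked numerically: on $[-1,1]$ the derivative of $h$ is bounded by $m\,|\zeta|^{m-1}\le m$, so $m$ itself serves as a Lipschitz constant there. A sketch with an illustrative fixed power:

```python
import math, random

def powsign(x, alpha):
    """Power sign function [x]^alpha."""
    return math.copysign(abs(x) ** alpha, x)

mpow = 1.7                       # an illustrative fixed value of m(t) >= 1
K = mpow                         # |h'(x)| = m|x|^(m-1) <= m on [-1, 1]
rng = random.Random(7)
lipschitz = all(
    abs(powsign(a, mpow) - powsign(b, mpow)) <= K * abs(a - b) + 1e-12
    for a, b in ((rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(20_000))
)
print(lipschitz)
```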

    Next, we give the stability results.

    Theorem 4.1. Under Assumption 2.1, for the system (2.1), using the observer (3.25) and controller (3.43) with

    $$ c_i>3-i,\qquad i=1,2,3, \tag{4.4} $$

    we can get

    1) For each $\zeta(t_0)=\zeta_0\in\mathbb{R}^2$ and $\gamma(t_0)=i_0\in S$, the closed-loop system has an almost surely unique solution on $[0,+\infty)$;

    2) For any $\zeta_0\in\mathbb{R}^2$ and $i_0\in S$, the closed-loop system is almost surely regulated to the equilibrium at the origin.

    Proof. By (2.1), (3.25), (3.43) and Lemma 4.1, the closed-loop system satisfies the locally Lipschitz condition. By (3.2), (3.22), (3.25) and (3.42), whenever $\xi_1,\xi_2,\eta$ are bounded, $\zeta_1$ and $\zeta_2$ are bounded as well, which means that

    $$ V_R=\inf_{t\ge t_0,\,|\zeta|>R}V(\zeta(t))\to\infty\quad\text{as}\quad R\to\infty. \tag{4.5} $$

    Since the Lyapunov function $V$ does not depend on the switching mode, we have $\Pi V=0$, and we choose appropriate design parameters $c_i$ to satisfy (4.4). For each $l>0$, the first exit time is defined as

    $$ \sigma_l=\inf\{t:t\ge t_0,\,|\zeta(t)|\ge l\}. \tag{4.6} $$

    When $t\ge t_0$, choose $t_l=\min\{\sigma_l,t\}$. Then $|\zeta(t)|$ is bounded on the interval $[t_0,t_l]$ a.s., which means that $V(\zeta)$ is bounded on $[t_0,t_l]$ a.s. By (3.44), $\mathcal{L}V$ is bounded on $[t_0,t_l]$ a.s. By using Lemma 1.9 in [11], (3.44) and (4.4), we can obtain

    $$ EV(\zeta(t_l))\le EV(\zeta(t_0)). \tag{4.7} $$

    By (4.5), (4.7) and using Lemma 1 in [23], we can obtain conclusion (1).

    From (3.44) and (4.5), by using Theorem 2.1 in [24], we can prove conclusion 2).

    In this section, a simulation example is given to show the availability of the control method.

    Study the stabilization of a system with two modes. The Markov process $\gamma(t)$ takes values in $S=\{1,2\}$ with generator $\Gamma=(\lambda_{ij})_{2\times2}$ given by $\lambda_{11}=-2$, $\lambda_{12}=2$, $\lambda_{21}=1$ and $\lambda_{22}=-1$, whose stationary distribution is $\pi_1=\frac13$, $\pi_2=\frac23$. When $\gamma(t)=1$, the system can be written as

    $$ d\zeta_1=[\zeta_2]^{\frac32+\frac12\sin t}dt,\qquad d\zeta_2=[u]^{\frac32+\frac12\sin t}dt+\zeta_1\sin\zeta_2\,d\omega,\qquad y=\zeta_1, \tag{5.1} $$

    where $m(t)=\frac32+\frac12\sin t$, $\underline{m}=1$, $\bar{m}=2$. When $\gamma(t)=2$, the system is described by

    $$ d\zeta_1=[\zeta_2]^{2+\sin t}dt,\qquad d\zeta_2=[u]^{2+\sin t}dt+\frac12\zeta_1^2\sin\zeta_2\,d\omega,\qquad y=\zeta_1, \tag{5.2} $$

    where $m(t)=2+\sin t$, $\underline{m}=1$, $\bar{m}=3$. Clearly, systems (5.1) and (5.2) satisfy Assumption 2.1.
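Assumption 2.1 can be spot-checked numerically for both modes. The paper does not state its choice of $\tilde f$; the candidate $\tilde f(\zeta_1)=1+\zeta_1^2$ below is our assumption, verified over random samples:

```python
import math, random

def assumption_holds(z1, z2, mpow, f_val, f_tilde):
    """Check |f| <= (|z1|^((m+1)/2) + |z2|^((m+1)/2)) * f_tilde."""
    p = (mpow + 1.0) / 2.0
    rhs = (abs(z1) ** p + abs(z2) ** p) * f_tilde
    return abs(f_val) <= rhs * (1 + 1e-9) + 1e-12

rng = random.Random(3)
ok = True
for _ in range(20_000):
    z1, z2, t = rng.uniform(-3, 3), rng.uniform(-3, 3), rng.uniform(0, 20)
    f_tilde = 1.0 + z1 * z1                    # candidate growth rate (ours)
    ok &= assumption_holds(z1, z2, 1.5 + 0.5 * math.sin(t),
                           z1 * math.sin(z2), f_tilde)             # mode 1
    ok &= assumption_holds(z1, z2, 2.0 + math.sin(t),
                           0.5 * z1 * z1 * math.sin(z2), f_tilde)  # mode 2
print(ok)
```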

    According to the above design process, when γ(t)=1, the observer is constructed as

    $$ d\eta=\Big([u]^{\frac32+\frac12\sin t}-\frac{\partial L(\zeta_1)}{\partial\zeta_1}[\eta+L(\zeta_1)]^{\frac32+\frac12\sin t}\Big)dt, \tag{5.3} $$

    and the control is

    $$ u=-(c_2+4\zeta_1^2)\big(\eta+L(\zeta_1)+c_1\zeta_1\big), \tag{5.4} $$

    where $L(\zeta_1)=2\big(c_3\zeta_1+6\zeta_1^2\big)$.

    When γ(t)=2, the observer is constructed as

    $$ d\eta=\Big([u]^{2+\sin t}-\frac{\partial L(\zeta_1)}{\partial\zeta_1}[\eta+L(\zeta_1)]^{2+\sin t}\Big)dt, \tag{5.5} $$

    and the control is

    $$ u=-\Big(c_2+4\zeta_1+\frac12\zeta_1^2\Big)\big(\eta+L(\zeta_1)+c_1\zeta_1\big), \tag{5.6} $$

    where $L(\zeta_1)=4\big(c_3\zeta_1+20\zeta_1+4\zeta_1^2\big)$.

    For simulation, we select $c_1=6$, $c_2=6$, $c_3=5$, and the initial conditions $\zeta_1(0)=1$, $\zeta_2(0)=2$, $\eta(0)=5$. Figure 1 shows that all signals of the closed-loop system ($\zeta_1,\zeta_2,u,\eta,e$) converge to zero: the states and the controller converge to zero, and the observation error also converges to zero, which means that the constructed observer and controller are effective. Figure 2 illustrates the jumps of the Markov process $\gamma(t)$ between modes 1 and 2.
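The stationary distribution $\pi_1=\frac13$, $\pi_2=\frac23$ quoted above follows from solving $\pi\Gamma=0$ with $\pi_1+\pi_2=1$; for a two-mode chain this gives $\pi_1=\lambda_{21}/(\lambda_{21}-\lambda_{11})$, which can be checked exactly:

```python
# Exact stationary distribution of the two-mode generator from this section.
from fractions import Fraction

l11, l12, l21, l22 = -2, 2, 1, -1          # Gamma = [[-2, 2], [1, -1]]
# Balance: pi1 * l11 + pi2 * l21 = 0, with pi1 + pi2 = 1
pi1 = Fraction(l21, l21 - l11)
pi2 = 1 - pi1
print(pi1, pi2)
```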

    Figure 1.  The responses of closed-loop systems (5.1)--(5.6).
    Figure 2.  The runs of the Markov process γ(t).

    Remark 5.1. It can be observed from the example that there are time-varying powers and Markovian switching in systems (5.1) and (5.2). For the output-feedback control of systems (5.1) and (5.2), the methods in [7,8,9] fail, since they can only deal with time-invariant powers without Markovian switching. To overcome the difficulties caused by the time-varying powers, we introduce the constants $\underline{m}=1$, $\bar{m}=2$ (for mode 1) and $\underline{m}=1$, $\bar{m}=3$ (for mode 2), so that the design of the observer and controller does not depend on the power. This is one of the characteristics of our controller and observer design scheme (5.3)–(5.6).

    We investigated the output-feedback stabilization of stochastic nonlinear systems with both Markovian switching and time-varying powers. Compared with existing work, the system model considered in this paper is more general because it treats the time-varying power and the Markovian switching simultaneously. To achieve stabilization, we first designed a state observer with a dynamic gain and an output-feedback controller, then used advanced stochastic analysis techniques to prove that the closed-loop system has an almost surely unique solution and that the states are regulated to the origin almost surely. Even without Markovian switching, the results in this paper are new in the sense that we consider a nonlinear growth rate, which is much more general than the constant growth rate cases in [7,8,9].

    There are many related problems to be considered, such as how to extend the result to impulsive systems [25,26,27] and systems with arbitrary order.

    This work is funded by Shandong Province Higher Educational Excellent Youth Innovation team, China (No. 2019KJN017), and Shandong Provincial Natural Science Foundation for Distinguished Young Scholars, China (No. ZR2019JQ22).

    The authors declare there is no conflict of interest.



    [1] Y. Wang, W. Gao, M. Gong, H. Li, J. Xie, A new two-stage based evolutionary algorithm for solving multi-objective optimization problems, Inf. Sci., 611 (2022), 649–659. https://doi.org/10.1016/j.ins.2022.07.180 doi: 10.1016/j.ins.2022.07.180
    [2] Q. Zhu, Q. Lin, W. Chen, K. C. Wong, C. A. C. Coello, J. Li, et al., An external archive-guided multiobjective particle swarm optimization algorithm, IEEE Trans. Cybern., 47 (2017), 2794–2808. https://doi.org/10.1109/TCYB.2017.2710133 doi: 10.1109/TCYB.2017.2710133
    [3] L. Ma, M. Huang, S. Yang, R. Wang, X. Wang, An adaptive localized decision variable analysis approach to large-scale multiobjective and many-objective optimization, IEEE Trans. Cybern., 52 (2021), 6684–6696. https://doi.org/10.1109/TCYB.2020.3041212 doi: 10.1109/TCYB.2020.3041212
    [4] G. Acampora, R. Schiattarella, A. Vitiello, Using quantum amplitude amplification in genetic algorithms, Expert Syst. Appl., 209 (2022), 118203. https://doi.org/10.1016/j.eswa.2022.118203 doi: 10.1016/j.eswa.2022.118203
    [5] H. Zhao, C. Zhang, An ant colony optimization algorithm with evolutionary experience-guided pheromone updating strategies for multi-objective optimization, Expert Syst. Appl., 201 (2022), 117151. https://doi.org/10.1016/j.eswa.2022.117151 doi: 10.1016/j.eswa.2022.117151
    [6] Z. Zeng, M. Zhang, H. Zhang, Z. Hong, Improved differential evolution algorithm based on the sawtooth-linear population size adaptive method, Inf. Sci., 608 (2022), 1045–1071. https://doi.org/10.1016/j.ins.2022.07.003 doi: 10.1016/j.ins.2022.07.003
    [7] R. Nand, B. N. Sharma, K. Chaudhary, Stepping ahead firefly algorithm and hybridization with evolution strategy for global optimization problems, Appl. Soft. Comput., 109 (2021), 107517. https://doi.org/10.1016/j.asoc.2021.107517 doi: 10.1016/j.asoc.2021.107517
    [8] J. Kennedy, R. Eberhart, Particle swarm optimization, in Icnn95-international Conference on Neural Networks, 4 (1995), 1942–1948. https://doi.org/10.1109/ICNN.1995.488968
    [9] C. A. C. Coello, M. S. Lechuga, MOPSO: a proposal for multiple objective particle swarm optimization, in Pro. 2002 Congr. Evol. Comput. CEC'02 (Cat. No. 02TH8600), IEEE, 2 (2002), 1051–1056. https://doi.org/10.1109/CEC.2002.1004388
    [10] Y. Cui, X. Meng, J. Qiao, A multi-objective particle swarm optimization algorithm based on two-archive mechanism, Appl. Soft. Comput., 119 (2022), 108532. https://doi.org/10.1016/j.asoc.2022.108532 doi: 10.1016/j.asoc.2022.108532
    [11] Y. Li, Y. Zhang, W. Hu, Adaptive multi-objective particle swarm optimization based on virtual Pareto front, Inf. Sci., 625 (2023), 206–236. https://doi.org/10.1016/j.ins.2022.12.079 doi: 10.1016/j.ins.2022.12.079
    [12] D. Sharma, S. Vats, S. Saurabh, Diversity preference-based many-objective particle swarm optimization using reference-lines-based framework, Swarm Evol. Comput., 65 (2021), 100910. https://doi.org/10.1016/j.swevo.2021.100910 doi: 10.1016/j.swevo.2021.100910
    [13] Y. Hu, Y. Zhang, D. Gong, Multiobjective particle swarm optimization for feature selection with fuzzy cost, IEEE Trans. Cybern., 51 (2020), 874–888. https://doi.org/10.1109/TCYB.2020.3015756 doi: 10.1109/TCYB.2020.3015756
    [14] L. Li, L. Chang, T. Gu, W. Sheng, W. Wang, On the norm of dominant difference for many-objective particle swarm optimization, IEEE Trans. Cybern., 51 (2019), 2055–2067. https://doi.org/10.1109/TCYB.2019.2922287
    [15] L. Yang, X. Hu, K. Li, A vector angles-based many-objective particle swarm optimization algorithm using archive, Appl. Soft. Comput., 106 (2021), 107299. https://doi.org/10.1016/j.asoc.2021.107299
    [16] B. Wu, W. Hu, J. Hu, G. G. Yen, Adaptive multiobjective particle swarm optimization based on evolutionary state estimation, IEEE Trans. Cybern., 51 (2019), 3738–3751. https://doi.org/10.1109/TCYB.2019.2949204
    [17] H. Han, W. Lu, J. Qiao, An adaptive multiobjective particle swarm optimization based on multiple adaptive methods, IEEE Trans. Cybern., 47 (2017), 2754–2767. https://doi.org/10.1109/TCYB.2017.2692385
    [18] W. Huang, W. Zhang, Adaptive multi-objective particle swarm optimization with multi-strategy based on energy conversion and explosive mutation, Appl. Soft. Comput., 113 (2021), 107937. https://doi.org/10.1016/j.asoc.2021.107937
    [19] K. Li, R. Chen, G. Fu, X. Yao, Two-archive evolutionary algorithm for constrained multiobjective optimization, IEEE Trans. Evol. Comput., 23 (2018), 303–315. https://doi.org/10.1109/TEVC.2018.2855411
    [20] J. Liu, R. Liu, X. Zhang, Recursive grouping and dynamic resource allocation method for large-scale multi-objective optimization problem, Appl. Soft. Comput., 130 (2022), 109651. https://doi.org/10.1016/j.asoc.2022.109651
    [21] M. Ergezer, D. Simon, Mathematical and experimental analyses of oppositional algorithms, IEEE Trans. Cybern., 44 (2014), 2178–2189. https://doi.org/10.1109/TCYB.2014.2303117
    [22] Y. Xiang, Y. Zhou, M. Li, Z. Chen, A vector angle-based evolutionary algorithm for unconstrained many-objective optimization, IEEE Trans. Evol. Comput., 21 (2016), 131–152. https://doi.org/10.1109/TEVC.2016.2587808
    [23] H. Wang, L. Jiao, X. Yao, Two_Arch2: An improved two-archive algorithm for many-objective optimization, IEEE Trans. Evol. Comput., 19 (2014), 524–541. https://doi.org/10.1109/TEVC.2014.2350987
    [24] M. Garza-Fabre, G. T. Pulido, C. A. C. Coello, Ranking methods for many-objective optimization, Mex. Int. Conf. Artif. Intell., 5845 (2009), 633–645. https://doi.org/10.1007/978-3-642-05258-3_56
    [25] W. Huang, W. Zhang, Multi-objective optimization based on an adaptive competitive swarm optimizer, Inf. Sci., 583 (2022), 266–287. https://doi.org/10.1016/j.ins.2021.11.031
    [26] S. Chen, X. Wang, J. Gao, W. Du, X. Gu, An adaptive switching-based evolutionary algorithm for many-objective optimization, Knowl. Based Syst., 248 (2022), 108915. https://doi.org/10.1016/j.knosys.2022.108915
    [27] Y. Liu, D. Gong, J. Sun, Y. Jin, A many-objective evolutionary algorithm using a one-by-one selection strategy, IEEE Trans. Cybern., 47 (2017), 2689–2702. https://doi.org/10.1109/TCYB.2016.2638902
    [28] E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: empirical results, Evol. Comput., 8 (2000), 173–195. https://doi.org/10.1162/106365600568202
    [29] Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, S. Tiwari, Multi-objective optimization test instances for the CEC 2009 special session and competition, Mech. Eng. New York, 264 (2008), 1–30.
    [30] K. Deb, L. Thiele, M. Laumanns, E. Zitzler, Scalable test problems for evolutionary multi-objective optimization, Evol. Mult. Opt. London., (2005), 105–145. https://doi.org/10.1007/1-84628-137-7_6
    [31] A. M. Zhou, Y. C. Jin, Q. F. Zhang, B. Sendhoff, E. Tsang, Combining model-based and genetics-based offspring generation for multi-objective optimization using a convergence criterion, in 2006 IEEE Int. Conf. Evol. Comput., (2006), 892–899. https://doi.org/10.1109/CEC.2006.1688406
    [32] L. While, P. Hingston, L. Barone, S. Huband, A faster algorithm for calculating hypervolume, IEEE Trans. Evol. Comput., 10 (2006), 29–38. https://doi.org/10.1109/TEVC.2005.851275
    [33] Q. Lin, S. Liu, Q. Zhu, C. Tang, R. Song, J. Chen, et al., Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems, IEEE Trans. Evol. Comput., 22 (2018), 32–46. https://doi.org/10.1109/TEVC.2016.2631279
    [34] C. Dai, Y. Wang, M. Ye, A new multi-objective particle swarm optimization algorithm based on decomposition, Inf. Sci., 325 (2015), 541–557. https://doi.org/10.1016/j.ins.2015.07.018
    [35] Q. Lin, J. Li, Z. Du, J. Chen, Z. Ming, A novel multiobjective particle swarm optimization with multiple search strategies, Eur. J. Oper. Res., 247 (2015), 732–744. https://doi.org/10.1016/j.ejor.2015.06.071
    [36] A. J. Nebro, J. J. Durillo, J. Garcia-Nieto, C. A. C. Coello, F. Luna, E. Alba, SMPSO: a new PSO-based metaheuristic for multi-objective optimization, in 2009 IEEE Symp. Comput. Intell. MCDM., (2009), 66–73. https://doi.org/10.1109/MCDM.2009.4938830
    [37] C. He, R. Cheng, D. Yazdani, Adaptive offspring generation for evolutionary large-scale multi-objective optimization, IEEE Trans. Syst. Man Cybern. Syst., 52 (2020), 786–798. https://doi.org/10.1109/TSMC.2020.3003926
    [38] S. Jiang, S. Yang, A strength Pareto evolutionary algorithm based on reference direction for multiobjective and many-objective optimization, IEEE Trans. Evol. Comput., 21 (2017), 329–346. https://doi.org/10.1109/TEVC.2016.2592479
    [39] K. Deb, H. Jain, An evolutionary many-objective optimization algorithm using reference-point-based non-dominated sorting approach, part I: solving problems with box constraints, IEEE Trans. Evol. Comput., 18 (2013), 577–601. https://doi.org/10.1109/TEVC.2013.2281535
    [40] Q. F. Zhang, H. Li, MOEA/D: a multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput., 11 (2007), 712–731. https://doi.org/10.1109/TEVC.2007.892759
    [41] Y. Tian, R. Cheng, X. Zhang, Y. Jin, PlatEMO: a MATLAB platform for evolutionary multi-objective optimization [educational forum], IEEE Comput. Intell. Mag., 12 (2017), 73–87. https://doi.org/10.1109/MCI.2017.2742868
    [42] Y. Zhou, Z. Chen, Z. Huang, Y. Xiang, A multiobjective evolutionary algorithm based on objective-space localization selection, IEEE Trans. Cybern., 52 (2020), 3888–3901. https://doi.org/10.1109/TCYB.2020.3016426
    [43] M. Sheng, Z. Wang, W. Liu, X. Wang, S. Chen, X. Liu, A particle swarm optimizer with multi-level population sampling and dynamic p-learning mechanisms for large-scale optimization, Knowl. Based Syst., 242 (2022), 108382. https://doi.org/10.1016/j.knosys.2022.108382
    [44] J. Lu, J. Zhang, J. Sheng, Enhanced multi-swarm cooperative particle swarm optimizer, Swarm Evol. Comput., 69 (2022), 100989. https://doi.org/10.1016/j.swevo.2021.100989
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)