Some new reverse versions of Hilbert-type inequalities are studied in this paper. The results are established by applying the time scale versions of reverse Hölder's inequality, reverse Jensen's inequality, chain rule on time scales, and the mean inequality. As applications, some particular results (when T=N and T=R) are considered. Our results provide some new estimates for these types of inequalities and improve some of those recently published in the literature.
Citation: Haytham M. Rezk, Mohammed Zakarya, Amirah Ayidh I Al-Thaqfan, Maha Ali, Belal A. Glalah. Unveiling new reverse Hilbert-type dynamic inequalities within the framework of Delta calculus on time scales[J]. AIMS Mathematics, 2025, 10(2): 2254-2276. doi: 10.3934/math.2025104
Let X be a Banach space. Consider the following X-valued left Caputo fractional evolution equation:
$$\begin{cases} {}^{C}D^{\alpha}_{0+}[X](t) + AX(t) = f(t, X(t), u(t)), & t \in (0,T],\\ X(0) = X_0 \in X, \end{cases} \tag{1.1}$$
where CDα0+ with α∈(0,1) denotes the left Caputo derivative, X:[0,T]→X is the state with −A being the generator of an analytic semigroup, and u:[0,T]→U is the control. The problem considered in this paper is to minimize the following Bolza-type Riemann-Liouville (RL) fractional integral objective functional over u(⋅)∈U (note that U denotes the set of admissible controls):
$$(\mathrm{P}) \quad J(X_0; u(\cdot)) = \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)}\, l(s, X(s), u(s))\, ds + m(X_0, X(T)), \tag{1.2}$$
subject to (1.1) and the endpoint state constraint
$$(X_0, X(T)) \in F_0 \times F_T \subset X \times X \quad \text{(endpoint state constraint)}. \tag{1.3}$$
Under (1.1)–(1.3), the precise statement of (P) is as follows:
$$(\mathrm{P}) \quad \inf_{u(\cdot) \in \mathcal{U}} J(X_0; u(\cdot)), \quad \text{subject to (1.1) and (1.3)}.$$
Problem (P) can be regarded as an infinite-dimensional optimal control problem for fractional evolution equations with an endpoint state constraint. The precise problem statement is given in Section 2. The main goal of this paper is to derive the Pontryagin maximum principle for (P), which constitutes a necessary condition for optimality.
Fractional differential and evolution equations can be viewed as generalizations of classical differential and evolution equations using the fractional calculus framework. They can be applied to study and model various situations in economics, mathematical finance, viscoelasticity, aerodynamics, engineering, and biology, where the fractional phenomena can be observed; see [11,24,31,32,47] and the references therein. It is worth noting that (1.1) can be viewed as a class of fractional evolution equations, which covers Caputo fractional differential equations when X=Rn. Indeed, fractional evolution equations can be applied to study fractional partial differential equations (PDEs) and/or delay situations using the generator A and the operator f [8,31,47]. Some numerical approaches for solving fractional PDEs can be found in [20,37] and the references therein.
There are various formulations of, and results on, optimal control of fractional differential equations, which correspond to the case X=Rn in (P). In particular, notable results can be found in [1,2,4,23,28,34,44], where Pontryagin maximum principles were obtained via variational and duality approaches. On the other hand, we mention that the general infinite-dimensional fractional optimal control problem has not yet been sufficiently studied. Indeed, it is quite surprising that there are no concrete results on the infinite-dimensional maximum principle for optimal control of fractional evolution equations except [12,33]. Below, we provide a detailed literature review on optimal control of fractional differential and evolution equations.
As mentioned above, to the best of our knowledge, optimal control of fractional evolution equations has not been fully investigated; we address this gap in this paper through (P) (see Theorem 3.1). There are some results on the controllability and the existence of optimal controls for fractional control problems similar to (P) under additional assumptions [41,47] (see Remark 3.1). However, the earlier works did not study explicit optimality conditions in the form of the Pontryagin maximum principle. Both [33,Theorem 3] and [12,Theorem 5.3] considered the fractional optimal control problem for fractional evolution equations. However, they did not consider state constraints in the corresponding optimal control problems and their maximum principles, i.e., F0=FT=X in (1.3) (see Remark 3.1). In addition, although [12] considered the Volterra integral-type singular evolution equation with the analytic semigroup, the overall state equation as well as the adjoint equation in the corresponding maximum principle are finite-dimensional, R-valued ones, which belong to Lp([0,T];R) (see [12,Theorems 3.1 and 5.3]). Furthermore, the equivalence between the integral-type singular evolution equation in [12] and the Caputo- and RL-type fractional differential equations holds only in the finite-dimensional case (see [12,Section 4]). The preceding discussion implies that (P) is different from the earlier works [12,33]. Indeed, (P) can be viewed as a generalization of [33] to the state-constrained control case.
In the finite-dimensional case (i.e., when X=Rn in (P)), the various versions of maximum principles for fractional optimal control problems were studied in [1,2,4,23,28,34,44]. However, their approaches cannot be applied to solve the infinite-dimensional problem (P), since the infinite-dimensional control problem includes several technical intricacies as discussed in [26,Chapter 4] and [15,Chapter 10]. Moreover, the classical infinite-dimensional maximum principles for nonfractional optimal control problems (e.g., [14,15,17,25,26,30,35]) cannot be used to solve (P) directly, since the fractional control problem requires different techniques using fractional calculus and analysis (see Remark 3.1). Therefore, we have to develop new techniques to obtain the maximum principle of (P).
The preceding discussion can be summarized as follows. It is not possible to use the earlier works [12,33], since they considered the infinite-dimensional fractional optimal control problem without state constraints.* Indeed, they did not need the key concepts and techniques established in this paper, due to the absence of state constraints (see the statements (a)–(c) and (ⅰ)–(ⅲ) below). Moreover, we are not able to use the approaches of the earlier works on the finite-dimensional case (X=Rn) with state constraints (e.g., [1,2,4,23,28,34,44]), as the infinite-dimensional problem requires different technical methods (see [26,Chapter 4] and [15,Chapter 10]). Finally, since the fractional control problem requires fractional calculus and analysis, the approaches for the classical nonfractional infinite-dimensional control problems (e.g., [14,15,17,25,26,30,35]) cannot be used directly to obtain the maximum principle of (P).
*As mentioned above, [12] cannot be viewed as the infinite-dimensional problem, since the overall state equation as well as the adjoint equation in the corresponding maximum principle are R-valued finite dimensional ones, which belong to Lp([0,T];R) (see [12,Theorems 3.1 and 5.3]).
In the following, we state the main contributions of this paper:
(a) We prove the maximum principle for (P) (see Theorem 3.1 and Section 5). Note that the key concept of proving Theorem 3.1 is to apply the Ekeland variational principle under the family of spike variations, where the intrinsic properties of fractional integral and derivative, analysis of variational equations in Lemmas 5.1 and 5.2, and the representation results of linear fractional evolutions in Lemmas B.3–B.5 are essentially required. Indeed, the proof of Theorem 3.1 is divided into nine steps, which is provided in Section 5.
(b) To prove Theorem 3.1, we obtain explicit representation formulas for linear left Caputo fractional evolution equations with initial conditions and right RL fractional evolution equations with terminal conditions using RL fractional state-transition evolution operators (see Lemmas B.3–B.5 in Appendix B).
(c) We apply the maximum principle in Theorem 3.1 to the optimal control problem of fractional diffusion PDEs. In fact, the necessary condition for optimality is derived using the maximum principle in Theorem 3.1 (see Section 4).
The detailed statements of the main results of this paper, including the comparisons with the existing literature, are as follows:
(ⅰ) As mentioned in (a), we state and prove the maximum principle for (P) (see Theorem 3.1). As noted above, our paper extends [33] to the state-constrained case. Indeed, unlike the unconstrained case in [33,Theorem 3] and [12,Theorem 5.3], the maximum principle in Theorem 3.1 requires additional nontriviality and transversality conditions to cope with the state constraint in (1.3). Hence, our proof has to be different from that in [12,33] (see Remarks 3.1 and 3.2).† Also, in contrast to the existing literature on standard nonfractional infinite-dimensional and PDE optimal control problems (e.g., [15,26,30,35,46]), we do not assume the strict convexity of X∗, the dual space of X. This assumption is important in the earlier works, as it guarantees the differentiability of the distance function associated with the endpoint state constraint. In the proof (see Section 5), we dispense with this assumption via a separation argument and the construction of a family of spike variations for the Ekeland variational principle, which can be viewed as an extension of [25] to the fractional control problem (see Remarks 3.1 and 3.2). Subsequently, we prove the maximum principle (see Section 5), including the nontriviality, adjoint equation, transversality, and Hamiltonian maximization conditions, by establishing variational and duality analysis under the finite codimensionality of the initial- and end-point variational sets. Indeed, in our variational analysis, we need to apply fractional calculus to obtain precise estimates of the variational equations under the family of spike variations (see Lemmas 5.1 and 5.2). Furthermore, in our duality analysis, it is essential to obtain explicit representations of the solutions to the linear fractional variational and adjoint equations in terms of the associated RL fractional state-transition evolution operators. These results have not been reported in the existing literature (see also (ⅱ) below and Appendix B).
Hence, the proof in Section 5 has to be different from that for the maximum principles studied in the existing literature.
† As mentioned in Section 1, although [12] considered the infinite-dimensional Volterra integral-type singular evolution equation, the overall state equation as well as the adjoint equation in the corresponding maximum principle are R-valued finite dimensional ones.
(ⅱ) As for (b), we obtain explicit representation results for linear left Caputo fractional evolution equations with initial conditions and right RL fractional evolution equations with terminal conditions (see Lemmas B.3–B.5 in Appendix B). In both cases, to obtain the desired representation formulas, a detailed analysis of the fundamental solutions of the left and right RL fractional state-transition operators has to be carried out (see Lemma B.3 and Remark B.1). Also, the careful use of Fubini's formula, as well as properties of fractional derivatives and integrals, is needed. Notice that the finite-dimensional version of the representation result for forward linear Caputo evolution equations was presented in [19,Theorem 5.1], and Lemma B.4 can be viewed as a generalization of [19,Theorem 5.1] to the infinite-dimensional case. On the other hand, the representation result for linear right RL fractional evolution equations with terminal conditions in Lemma B.5 has not been reported in the existing literature. Note that the results in Appendix B are independent of the main result of this paper, and we provide their complete proofs, which are omitted in [33].
(ⅲ) Finally, we apply Theorem 3.1 to the optimal control problem of fractional diffusion PDEs, which can be used to analyze general diffusion problems appearing in engineering and applied sciences (see Section 4). In particular, we first convert the fractional diffusion PDE into the abstract X-valued left Caputo fractional evolution equation in (1.1). Then the endpoint-constrained optimal control problem is formulated using (1.2) and (1.3). Under this setting, the corresponding optimality condition is derived from the maximum principle of this paper, where a bang-bang type optimal control is obtained via the Hamiltonian maximization condition (see Proposition 4.1). Notice that the result in Section 4 can be viewed as an extension of its nonfractional versions (e.g., [13,15,21,26,30]) to the state-constrained fractional PDE control problem.
Based on the above discussion, we mention that the technique for proving the maximum principle (see Theorem 3.1 and Section 5) is significantly different from the unconstrained case (e.g., [12,33]), the finite-dimensional fractional control problems (e.g., [1,4,23]), and the classical nonfractional infinite-dimensional control problems (e.g., [15,25,26]). Indeed, due to the inherent complex nature of (P), the maximum principle in this paper is new in the optimal control problem context and its detailed proof must be different from that of the existing literature.
The organization of the paper is as follows. We provide the precise problem statement of (P) in Section 2, where the key notation, definitions, and mathematical framework of fractional integrals and derivatives are also given. The statement of the Pontryagin maximum principle is given in Section 3. The application to the fractional diffusion PDE is studied in Section 4. The proof of the maximum principle is given in Section 5. Appendix A provides the proof of the well-posedness of (1.1) (see Theorem 2.1). In Appendix B, we prove the representation results on linear fractional evolution equations. Appendix C provides some technical lemmas used in Section 5.
In this section, we first state the mathematical framework of fractional integrals and derivatives, including their key definitions and the notations employed in the analysis. Then we provide the problem statement of this paper.
Let Rn be the n-dimensional Euclidean space and R:=R1, where the norm in Rn is defined by |⋅|:=|⋅|Rn. Let Γ be the Gamma function. Let [0,T] with T<∞ be the (finite) time-horizon. In this paper, M≥0 is a generic constant, and its exact value varies from line to line.
Let (X,|⋅|X) be a Banach space, and X∗ the dual space of X, i.e., the space of all bounded (equivalently, continuous) linear functionals on X. The norm on X∗ is defined by |f|X∗:=sup{|f(x)| : x∈X, |x|X≤1} for f∈X∗. Let ⟨⋅,⋅⟩X∗,X be the usual duality pairing between X∗ and X. The set of bounded linear operators from X to another Banach space Y is denoted by L(X,Y). Let L(X):=L(X,X). Let |A|L(X,Y) be the (operator) norm of A∈L(X,Y). Let I∈L(X) be the identity operator. Let A∗∈L(Y∗,X∗) be the adjoint operator of A∈L(X,Y), i.e., ⟨A∗y∗,x⟩X∗,X=⟨y∗,Ax⟩Y∗,Y for x∈X and y∗∈Y∗. Clearly, A∗∈L(Y∗,X∗) is also linear and bounded.
We say that f is a Banach space valued function on [0,T] if f:[0,T]→X. The integration of Banach space valued functions is understood in the Bochner sense [26,page 45]. For 1≤p≤∞, let Lp([0,T];X) be the usual Lp-space of X-valued functions, where L∞ is defined with the essential supremum. Let C([0,T];X) be the space of X-valued continuous functions on [0,T] with norm ‖⋅‖∞. Let AC([0,T];X) be the space of absolutely continuous X-valued functions on [0,T].
We state definitions of fractional integrals and derivatives. More detailed results on fractional calculus can be found in [11,24].
Definition 1. (i) For f(⋅)∈L1([0,T];X), the left Riemann-Liouville (RL) fractional integral Iα0+[f] of order α>0 is defined by
$$I^{\alpha}_{0+}[f](t) := \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}\, f(s)\, ds.$$
(ii) For f(⋅)∈L1([0,T];X), the right RL fractional integral IαT−[f] of order α>0 is defined by
$$I^{\alpha}_{T-}[f](t) := \int_t^T \frac{(s-t)^{\alpha-1}}{\Gamma(\alpha)}\, f(s)\, ds.$$
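Since Definition 1 is used throughout the paper, a quick numerical sanity check may help fix ideas. The following sketch is our own illustration, not part of the paper: it approximates the left RL fractional integral by product integration, freezing f at subinterval midpoints while integrating the weakly singular kernel exactly, and checks the classical identities I^α_{0+}[1](t)=t^α/Γ(α+1) and I^α_{0+}[s](t)=t^{1+α}/Γ(α+2).

```python
import math

def rl_left_integral(f, t, alpha, n=1000):
    """Approximate the left Riemann-Liouville fractional integral
    I^alpha_{0+}[f](t) = int_0^t (t - s)^(alpha - 1) / Gamma(alpha) * f(s) ds
    by product integration: f is frozen at the midpoint of each subinterval,
    while the weakly singular kernel is integrated exactly."""
    h = t / n
    total = 0.0
    for j in range(n):
        s0, s1 = j * h, (j + 1) * h
        # exact value of int_{s0}^{s1} (t - s)^(alpha - 1) / Gamma(alpha) ds
        weight = ((t - s0) ** alpha - (t - s1) ** alpha) / math.gamma(alpha + 1)
        total += f(0.5 * (s0 + s1)) * weight
    return total

alpha, t = 0.5, 1.0
# I^alpha[1](t) = t^alpha / Gamma(alpha + 1); the rule is exact for constants
err_const = abs(rl_left_integral(lambda s: 1.0, t, alpha) - t**alpha / math.gamma(alpha + 1))
# I^alpha[s](t) = t^(1 + alpha) / Gamma(alpha + 2)
err_lin = abs(rl_left_integral(lambda s: s, t, alpha) - t**(1 + alpha) / math.gamma(alpha + 2))
print(err_const, err_lin)
```

The midpoint choice keeps the scheme second-order away from the kernel singularity; the telescoping weights make it exact for constant integrands.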
Definition 2. (i) For f(⋅)∈L1([0,T];X), the left RL fractional derivative RLDα0+[f] of order α∈(0,1) is defined by
$${}^{RL}D^{\alpha}_{0+}[f](t) := \frac{d}{dt}\bigl[I^{1-\alpha}_{0+}[f]\bigr](t),$$
where I1−α0+[f](⋅)∈AC([0,T];X).
(ii) For f(⋅)∈L1([0,T];X), the right RL fractional derivative RLDαT−[f] of order α∈(0,1) is defined by
$${}^{RL}D^{\alpha}_{T-}[f](t) := -\frac{d}{dt}\bigl[I^{1-\alpha}_{T-}[f]\bigr](t),$$
where I1−αT−[f](⋅)∈AC([0,T];X).
Let RLDα0+[f(⋅)](⋅):=RLDα0+[f](⋅) and RLDαT−[f(⋅)](⋅):=RLDαT−[f](⋅), where (⋅) in the square bracket is used to emphasize the integration of f with respect to the other variable.
Definition 3. (i) For f(⋅)∈C([0,T];X), the left Caputo fractional derivative CDα0+[f] of order α∈(0,1) is defined by
$${}^{C}D^{\alpha}_{0+}[f](t) := {}^{RL}D^{\alpha}_{0+}[f(\cdot) - f(0)](t),$$
where f(⋅)−f(0) is left RL fractional differentiable in the sense of Definition 2.
(ii) For f(⋅)∈C([0,T];X), the right Caputo fractional derivative CDαT−[f] of order α∈(0,1) is defined by
$${}^{C}D^{\alpha}_{T-}[f](t) := {}^{RL}D^{\alpha}_{T-}[f(\cdot) - f(T)](t),$$
where f(⋅)−f(T) is right RL fractional differentiable in the sense of Definition 2.
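To make Definitions 2 and 3 concrete, here is a worked scalar example (X = R) that we add for orientation; it is a standard computation, not taken from the paper.

```latex
% For f(t) = t^mu, mu > 0, the Beta integral gives
\[
  I^{1-\alpha}_{0+}[t^{\mu}](t)
    = \frac{\Gamma(\mu+1)}{\Gamma(\mu+2-\alpha)}\, t^{\mu+1-\alpha},
  \qquad
  {}^{RL}D^{\alpha}_{0+}[t^{\mu}](t)
    = \frac{\Gamma(\mu+1)}{\Gamma(\mu+1-\alpha)}\, t^{\mu-\alpha}.
\]
% Since t^mu vanishes at t = 0, the Caputo and RL derivatives coincide here;
% in particular,
\[
  {}^{C}D^{\alpha}_{0+}[t](t) = \frac{t^{1-\alpha}}{\Gamma(2-\alpha)}.
\]
% For a constant c, however, the two notions differ:
\[
  {}^{C}D^{\alpha}_{0+}[c](t) = {}^{RL}D^{\alpha}_{0+}[c - c](t) = 0,
  \qquad
  {}^{RL}D^{\alpha}_{0+}[c](t) = \frac{c\, t^{-\alpha}}{\Gamma(1-\alpha)} \neq 0.
\]
```

The vanishing of the Caputo derivative on constants is what makes Definition 3 the natural choice for initial value problems such as (1.1).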
Let (X,|⋅|X) be a Banach space, representing the state space. We consider the X-valued left Caputo fractional evolution equation on [0,T] with order α∈(1/2,1):
$$\begin{cases} {}^{C}D^{\alpha}_{0+}[X](t) + AX(t) = f(t, X(t), u(t)), & t \in (0,T],\\ X(0) = X_0 \in X, \end{cases} \tag{2.1}$$
where X0∈X is the initial condition, −A is the generator of the analytic semigroup, and f:[0,T]×X×U→X is the driver of (2.1) with U being the control space. Let U:=U[0,T]={u:[0,T]→U|u(⋅)is measurable} be a set of admissible controls for (2.1). We sometimes write X(⋅;X0,u):=X(⋅) to emphasize that (2.1) is dependent on (X0,u(⋅))∈X×U.
Assumption 1. (i) A:D(A)⊂X→X, where D(A) is the domain of A (i.e., the subset of X on which A is defined), is a linear operator such that −A is the generator of the compact analytic semigroup (T(t))t≥0 of uniformly bounded linear operators with T:[0,T]→L(X) [36].
(ii) (U,ρ) is a separable metric space.
(iii) t↦f(t,X,u) is measurable for any fixed (X,u)∈X×U, and f(⋅,X,u)∈L∞([0,T];X). (X,u)↦f(t,X,u) is continuous in both variables. There is a constant M≥0 such that |f(t,0,u)|X≤M and |f(t,X,u)−f(t,X′,u′)|X≤M(|X−X′|X+ρ(u,u′)) for any X,X′∈X and u,u′∈U.
(iv) X↦f(t,X,u) is continuously Fréchet differentiable, with derivative denoted by ∂Xf(t,X,u), for any fixed (t,u)∈[0,T]×U, where t↦∂Xf is continuous. Note that ∂Xf(t,X,u)∈L(X). Moreover, there is a constant M≥0 such that |∂Xf(t,X,u)|L(X)≤M.
(v) MTαsupt∈[0,T]|T(t)|XΓ(1+α)<1.
We state the well-posedness result (equivalently, the existence and uniqueness of the solution) of (2.1). The proof is given in Appendix A.
Theorem 2.1. Let Assumption 1 hold. Then (2.1) admits a unique (mild) solution. Moreover, the solution of (2.1) can be expressed by the left RL fractional integral form:
$$X(t) = X_0 - I^{\alpha}_{0+}[AX(\cdot)](t) + I^{\alpha}_{0+}[f(\cdot, X(\cdot), u(\cdot))](t), \quad t \in [0,T]. \tag{2.2}$$
Finally, there is a constant M≥0 such that for any X0,X′0∈X (with X(t):=X(t;X0,u) and X′(t):=X(t;X′0,u)),
$$\begin{aligned} \sup_{t \in [0,T]} |X(t) - X'(t)|_{X} &\le M |X_0 - X_0'|_{X} + \int_0^T \sum_{k=1}^{\infty} \frac{(M\Gamma(\alpha))^{k}}{\Gamma(k\alpha)} (T-s)^{k\alpha-1} |X_0 - X_0'|_{X}\, ds,\\ \sup_{t \in [0,T]} |X(t)|_{X} &\le \bigl(|X_0|_{X} + M\bigr) + \int_0^T \sum_{k=1}^{\infty} \frac{(M\Gamma(\alpha))^{k}}{\Gamma(k\alpha)} (T-s)^{k\alpha-1} \bigl(|X_0|_{X} + M\bigr)\, ds. \end{aligned}$$
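The series in the above estimates is the kernel of the standard fractional Gronwall inequality; resumming it with the two-parameter Mittag-Leffler function E_{a,b}(z) = Σ_{k≥0} z^k/Γ(ka+b) (our reformulation, added for transparency and not taken from the paper) makes the finiteness of the bound explicit.

```latex
% Resummation of the Gronwall kernel (setting c := M\Gamma(\alpha), u := T - s,
% and shifting the summation index k = m + 1):
\[
  \sum_{k=1}^{\infty} \frac{(M\Gamma(\alpha))^{k}}{\Gamma(k\alpha)}\,(T-s)^{k\alpha-1}
  = c\,u^{\alpha-1} \sum_{m=0}^{\infty} \frac{(c\,u^{\alpha})^{m}}{\Gamma(m\alpha+\alpha)}
  = c\,(T-s)^{\alpha-1}\, E_{\alpha,\alpha}\!\bigl(M\Gamma(\alpha)\,(T-s)^{\alpha}\bigr).
\]
% Since E_{\alpha,\alpha} is entire and (T-s)^{\alpha-1} is integrable on [0,T],
% the integrals in the estimates above are finite for every T < \infty.
```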
The objective functional to be minimized over u(⋅)∈U is given by the following left RL fractional integral with order β≥1:
$$J(X_0; u(\cdot)) = I^{\beta}_{0+}[l(\cdot, X(\cdot), u(\cdot))](T) + m(X_0, X(T)), \tag{2.3}$$
where by Definition 1,
$$I^{\beta}_{0+}[l(\cdot, X(\cdot), u(\cdot))](T) = \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)}\, l(s, X(s), u(s))\, ds.$$
In (2.3), l:[0,T]×X×U→R is the running cost, while m:X×X→R is the terminal cost. We introduce the associated endpoint state constraint:
$$(X_0, X(T)) \in F_0 \times F_T \subset X \times X \quad \text{(endpoint state constraint)}. \tag{2.4}$$
Based on the preceding setting, the problem of this paper is stated as follows:
$$(\mathrm{P}) \quad \inf_{u(\cdot) \in \mathcal{U}} J(X_0; u(\cdot)), \quad \text{subject to (2.1) and (2.4)}.$$
Indeed, (P) can be regarded as the optimal control problem for fractional evolution equations with endpoint state constraint. The main goal is to derive the Pontryagin maximum principle for (P).
Assumption 2. (i) t↦l(t,X,u) is measurable for any fixed (X,u)∈X×U, and l(⋅,X,u)∈L∞([0,T];R). (X,u)↦l and (X,X′)↦m are continuous in both variables. There is a constant M≥0 such that |l(t,0,u)|+|m(0,0)|≤M.
(ii) X↦l(t,X,u) is continuously Fréchet differentiable, with derivative denoted by ∂Xl(t,X,u), for any fixed (t,u)∈[0,T]×U, where t↦∂Xl is continuous. Note that ∂Xl(t,X,u)∈L(X,R)=X∗. (X,X′)↦m(X,X′) is continuously Fréchet differentiable, with partial derivatives denoted by ∂Xm(X,X′) and ∂X′m(X,X′), respectively, where ∂Xm(⋅,X′),∂X′m(X,⋅)∈L(X,R)=X∗. There is a constant M≥0 such that |∂Xl(t,X,u)|+|∂Xm(X,X′)|+|∂X′m(X,X′)|≤M.
(iii) F0 and FT are closed convex subsets of X.
Remark 2.1. (i) Assumptions 1 and 2 are standard for fractional evolution equations in Theorem 3.1 and their optimal control problems. Indeed, these assumptions have been used in various types of (finite- and infinite-dimensional) optimal control problems and their maximum principles; see [1,4,12,15,23,25,26,28,34,41] and the references therein.
(ii) As (2.1) is an infinite-dimensional evolution equation driven by a (possibly) unbounded linear operator −A (the generator for the analytic semigroup), (P) is different from the finite-dimensional fractional optimal control problems in [1,4,23] (when X=Rn).
(iii) We note that the analyticity of −A in Assumption 1 is used for the well-posedness of (2.1) in Theorem 2.1 (see [48,Theorem 3.3] and [41,Theorem 3.1]). In fact, (ⅴ) of Assumption 1 is due to [48,Theorem 3.3]. Then Theorem 2.1 is used to prove the well-posedness of linear Caputo fractional evolution equations in Lemma B.4. Note that we can replace the analyticity of −A with −A being a generator of an (equicontinuous) C0-semigroup. In this case, we need additional assumptions on f for the well-posedness of (2.1); see [9,Section 3] and [38,Sections 3 and 4]. See also [47,Chapter 2], [18,Chapters 2 and 3], and [31] for the well-posedness of (2.1) under different assumptions. In this paper, since the main goal is to derive the maximum principle of (P), we do not focus on the generality of the assumptions.
Assume that the pair (¯X(⋅),¯u(⋅))∈C([0,T];X)×U is the optimal solution of (P). We observe that ¯X(⋅)∈C([0,T];X) is the corresponding optimal state trajectory generated by the optimal control ¯u(⋅)∈U satisfying (¯X0,¯X(T))=(¯X0,¯X(T;¯X0,¯u))∈F0×FT. We introduce the notation:
$$\begin{aligned} &\bar{f}(\cdot) := f(\cdot, \bar{X}(\cdot), \bar{u}(\cdot)), \qquad \partial_X \bar{f}(\cdot) := \partial_X f(\cdot, \bar{X}(\cdot), \bar{u}(\cdot)),\\ &\bar{l}(\cdot) := l(\cdot, \bar{X}(\cdot), \bar{u}(\cdot)), \qquad \partial_X \bar{l}(\cdot) := \partial_X l(\cdot, \bar{X}(\cdot), \bar{u}(\cdot)),\\ &\bar{m} := m(\bar{X}_0, \bar{X}(T)), \qquad \partial_{X_0}\bar{m} := \partial_{X_0} m(\bar{X}_0, \bar{X}(T)), \qquad \partial_X \bar{m} := \partial_X m(\bar{X}_0, \bar{X}(T)). \end{aligned}$$
Let NF0(X0) be the normal cone to F0 at X0∈F0 defined by
$$N_{F_0}(X_0) := \bigl\{ X' \in X^* \,\big|\, \langle X', y - X_0 \rangle_{X^*, X} \le 0,\ \forall y \in F_0 \bigr\}.$$
The normal cone NFT(X) to FT at X∈FT can be defined analogously. Let Ω:=∪∞k=1Ωk, where for a positive integer k,
$$\Omega_k := \Bigl\{ \bigl(\{u_j(\cdot)\}_{j=1}^{k}, \{\theta_j\}_{j=1}^{k}\bigr) \Bigm| u_j(\cdot) \in \mathcal{U},\ \theta_j \ge 0,\ \forall j = 1, \dots, k,\ \sum_{j=1}^{k} \theta_j = 1 \Bigr\}.$$
For ({uj(⋅)}kj=1,{θj}kj=1)∈Ω, we define
$$R := \biggl\{ \xi(T) \in X \,\bigg|\, \xi(t) = \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \bigl(-A + \partial_X \bar{f}(s)\bigr) \xi(s)\, ds + \sum_{j=1}^{k} \theta_j \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \bigl(f(s, \bar{X}(s), u_j(s)) - f(s, \bar{X}(s), \bar{u}(s))\bigr)\, ds,\ t \in [0,T],\ \bigl(\{u_j(\cdot)\}_{j=1}^{k}, \{\theta_j\}_{j=1}^{k}\bigr) \in \Omega \biggr\}, \tag{3.1}$$
$$Q := \biggl\{ \hat{X} - \zeta(T) \in X \,\bigg|\, \zeta(t) = \hat{X}_0 + \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \bigl(-A + \partial_X \bar{f}(s)\bigr) \zeta(s)\, ds,\ t \in [0,T],\ (\hat{X}_0, \hat{X}) \in F_0 \times F_T \biggr\}. \tag{3.2}$$
Assumption 3. R−Q is finite codimensional in X.
We state the main result of this paper, i.e., the Pontryagin maximum principle of (P), in Theorem 3.1 below. Indeed, Theorem 3.1 constitutes the necessary condition for optimality of (P). The proof of Theorem 3.1 is relegated to Section 5.
Theorem 3.1. Let Assumptions 1–3 hold. Assume that (¯X(⋅),¯u(⋅))∈C([0,T];X)×U is the optimal solution of (P). Then there exist (p0,φ1,φ2)∈R×X∗×X∗ such that the following conditions hold:
(i) Nontriviality condition: (p0,φ1,φ2)≠0, where p0≤0 and (φ1,φ2)∈NF0(¯X0)×NFT(¯X(T)).
(ii) Adjoint equation: P(⋅)∈L1([0,T];X∗) is the unique solution to the following right RL fractional evolution equation:
$${}^{RL}D^{\alpha}_{T-}[P](t) = -A^* P(t) + \partial_X \bar{f}(t)^* P(t) - p^0 \frac{(T-t)^{\beta-1}}{\Gamma(\beta)}\, \partial_X \bar{l}(t), \quad t \in [0,T). \tag{3.3}$$
(iii) Transversality condition: For any (X′′,X′)∈F0×FT,
$$\bigl\langle I^{1-\alpha}_{T-}[P](0) - p^0 \partial_{X_0}\bar{m},\, X'' - \bar{X}_0 \bigr\rangle_{X^*, X} - \bigl\langle I^{1-\alpha}_{T-}[P](T) - p^0 \partial_X \bar{m},\, X' - \bar{X}(T) \bigr\rangle_{X^*, X} \le 0,$$
and the adjoint equation holds the following boundary conditions:
$$\begin{cases} I^{1-\alpha}_{T-}[P](T) = -\varphi_2 + p^0 \partial_X \bar{m},\\ I^{1-\alpha}_{T-}[P](0) = -\bigl(-\varphi_1 + p^0 \partial_{X_0}\bar{m}\bigr). \end{cases}$$
(iv) Hamiltonian maximization condition: ¯u maximizes the Hamiltonian pointwise, i.e.,
$$\max_{u \in U} H(t, \bar{X}(t), P(t), p^0, u) = H(t, \bar{X}(t), P(t), p^0, \bar{u}(t)), \quad \text{a.e. } t \in [0,T],$$
where H:[0,T]×X×X∗×R×U→R is the Hamiltonian defined by
$$H(t, X, P, p^0, u) := \langle P, f(t, X, u) \rangle_{X^*, X} + p^0 \frac{(T-t)^{\beta-1}}{\Gamma(\beta)}\, l(t, X, u).$$
Some important remarks on Theorem 3.1 are given below.
Remark 3.1. (i) We have assumed the existence of an optimal solution in Theorem 3.1. Indeed, the existence of optimal controls requires a different technical analysis under additional restrictive conditions (e.g., convexity and/or linearity) on (2.1) and (2.3). Some results on these additional conditions, as well as techniques for proving existence (including the finite-dimensional case), can be found in [4,22,41,42,47] and the references therein.
(ii) When there is no endpoint state constraint, i.e., F0=FT=X, Theorem 3.1 is reduced to the unconstrained case studied in [33,Theorem 3]. Moreover, Theorem 3.1 is different from the classical infinite-dimensional nonfractional maximum principles in [15,25,26], where Theorem 3.1 involves the RL fractional adjoint equation and the fractional Hamiltonian. If X=Rn and U⊂Rp in (P), then Theorem 3.1 is reduced to the maximum principle for the finite-dimensional fractional control problem with the state constraint studied in [1,4,23].
(iii) As mentioned in Section 1, the proof of Theorem 3.1 is different from that for the unconstrained case in [33], the finite-dimensional fractional control problem in [1,4,23], and the infinite-dimensional nonfractional control problem in [15,25,26]. Specifically, in this paper, we need to establish new variational and duality analysis via several technical lemmas for fractional evolution equations, which are new in the context of infinite-dimensional optimal control problems.
Remark 3.2. (i) Assumption 3 states that the algebraic difference between initial- and end-point variational sets R and Q is finite codimensional in X (see [26,Definition 1.5,Chapter 4]). Some conditions and approaches to check the finite codimensionality can be found in [26,Chapter 1.6 and 4], [30,Theorems 3.1 and 3.2], and [14,29,39]. Note that [26,Proposition 3.2 and Corollary 3.3,Chapter 4] provide the general conditions for the finite codimensionality, which can be applied to the case of fractional evolution equations. Assumption 3 is very common in various infinite-dimensional maximum principles and optimal control for partial differential equations [5,14,15,21,26,27,29,30,35,39,40,45,46]. In fact, the finite codimensionality guarantees that the variables (p0,φ1,φ2)∈R×X∗×X∗ in Theorem 3.1 are nontrivial, i.e., (p0,φ1,φ2)≠0. On the other hand, in the finite-dimensional case (when X=Rn), the associated nontriviality can be obtained without Assumption 3.
(ii) Unlike [26], we do not need the strict convexity of X∗, the dual space of X. This assumption is satisfied when X is separable or reflexive. The strict convexity of X∗ has been used in various earlier infinite-dimensional maximum principles, where it is crucial to obtain the explicit derivative of the distance function in the variational analysis [5,15,21,26,27,35,40,46]. We relax this assumption in Theorem 3.1 by a separation argument and by constructing a family of spike variations using Ω, which can be viewed as an extension of [25] to the fractional control problem. Specifically, in Section 5.4, our proof shows that the directional derivative of the distance function is expressed by the duality pairing between X and X∗, where the element of X∗ is regarded as the generalized gradient of the distance function. This alternative duality pairing form is used in the rest of the proof to derive the optimality conditions in Theorem 3.1.
In this section, we apply (P) to optimal control of fractional diffusion PDEs, which can be used to analyze general diffusion phenomena appearing in engineering and applied sciences [30,41,48]. The purpose of this section is to show the conversion of the fractional PDE control problem into the abstract infinite-dimensional problem (P) and to demonstrate the application of Theorem 3.1.
Let O⊂R3 be open with smooth boundary ∂O⊂R3. Let Lp(O) be the set of real-valued pth-power integrable functions on O (when p=∞, it is the space of real-valued essentially bounded functions). We denote by Hk(O) the usual Sobolev space of real-valued functions on O whose distributional derivatives, up to order k, are square-integrable, and by Hk0(O) the closure of C∞c(O) in Hk(O), where C∞c(O) is the set of smooth real-valued functions on O having compact support.
Consider the following fractional diffusion PDE:
$$\begin{cases} {}^{C}D^{\alpha}_{0+}[x(\cdot, w)](t) - \Delta x(t, w) = f_1(t, x(t, w)) + f_2(t) v(t, w), & (t, w) \in (0,T) \times O,\\ x(t, w) = 0, & (t, w) \in [0,T] \times \partial O, \end{cases} \tag{4.1}$$
where x:[0,T]×O→R is the state trajectory describing the evolution of the diffusion process with initial and boundary conditions, Δ is the Laplacian operator, and v:[0,T]×O→[−1,1]⊂R is the control input (the external force acting at every point of O). The objective functional is given by
$$J(x_0; v) = \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \int_{O} \bigl(l_1(s, x(s, w)) + l_2(s) v(s, w)\bigr)\, dw\, ds. \tag{4.2}$$
The minimization of (4.2) subject to (4.1) can be regarded as optimal control of fractional diffusion equations, whose standard nonfractional version has been considered in various settings in the existing literature; see [13,26,30] and the references therein. Notice that (4.2) is an objective functional with an RL fractional integral, which can capture general time-fractional and/or nonlocal diffusion effects of fractional diffusion PDEs.
Let X=L^2(O) and X(t):=x(t,⋅)∈X. Let Ax:=−Δx, where x∈D(A)=H^2(O)∩H^1_0(O). By [36,Theorem 2.7,Chapter 7] and the Sobolev embedding theorem, −A is the generator of a compact analytic semigroup of uniformly bounded linear operators, and −A is self-adjoint. Let u(t):=v(t,⋅) and 𝒰=L^∞([0,T];U), where U=L^2(O;[−1,1]) is the space of square-integrable functions on O taking values in [−1,1]. It is easy to see that 𝒰 is a nonempty, closed, bounded, and convex subset of L^∞([0,T];X) with at least one interior point.
Under the above setup, (4.1) can be written as the following X-valued Caputo fractional evolution equation (with the Nemytskii operator f1(t,X(t))(w):=f1(t,x(t,w))):
\begin{align} ^{\mathrm{C}}{\mathrm{D}}{_{0+}^{\alpha}}[X](t) + A X(t) = f_1(t,X(t)) + f_2(t) u(t),\; t \in (0,T]. \end{align} | (4.3) |
Similar to [45,Section 2.4,page 1324], we consider the following endpoint state constraint for (4.1) and (4.2) (note that X(0)=x(0,⋅) and X(T)=x(T,⋅)):
\begin{align} (X(0), X(T)) \in \mathsf{F}_0 \times \mathsf{F}_T, \end{align} | (4.4) |
where for ai,bi∈R and hi(⋅),gi(⋅)∈X, i=1,…,m,
\begin{align*} \mathsf{F}_0 & = \Bigl \{ y^{\prime\prime}(\cdot) \in \mathsf{X} \; \Big| \; \int_{O} y^{\prime\prime}(w) h_i(w) {\rm{d}}w = a_i,\; i = 1,\ldots,m \Bigr \}, \\ \mathsf{F}_T & = \Bigl \{ y^{\prime}(\cdot) \in \mathsf{X} \; \Big| \; \int_{O} y^{\prime}(w) g_i(w) {\rm{d}}w = b_i,\; i = 1,\ldots,m \Bigr \}. \end{align*}
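Since F0 and FT are cut out by finitely many linear integral constraints, membership can be checked by quadrature. The following is a minimal finite-dimensional sketch; the grid discretization, function names, and tolerance are illustrative assumptions, not anything from the paper:

```python
def in_affine_constraint_set(y, gs, bs, dw, tol=1e-9):
    """Check membership in a set of the form
    {y : integral of y(w) g_i(w) dw = b_i, i = 1, ..., m},
    with all functions sampled on a uniform grid of cell size dw
    (a simple Riemann-sum quadrature)."""
    return all(
        abs(dw * sum(yv * gv for yv, gv in zip(y, g)) - b) <= tol
        for g, b in zip(gs, bs)
    )
```

For instance, with y ≡ 1 on [0, 1] discretized into 4 cells and g ≡ 1, the single constraint "integral equals 1" is satisfied, while "integral equals 2" is not.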
We assume that F0 and FT are nonempty. It is easy to see that F0 and FT are closed convex subsets of X. Let L1(s,X(s)):=∫Ol1(s,x(s,w))dw and L2(s,u(s)):=∫Ol2(s)v(s,w)dw. Then (4.2) becomes
\begin{align*} J(X_0;u(\cdot)) = \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \bigl ( L_1(s,X(s)) + L_2(s,u(s)) \bigr ) {\rm{d}}s. \end{align*}
Let us assume that f1(⋅,x),f2(⋅),l1(⋅,x),l2(⋅)∈C([0,T];R) satisfy Assumptions 1 and 2.
Based on the above discussion, (P) can be applied to formulate the optimal control problem of (4.3) with the endpoint state constraint in (4.4):
\begin{align} \inf\limits_{u(\cdot) \in \mathcal{U}} J(X_0;u(\cdot)),\; \text{subject to (4.3) and (4.4)}. \end{align} | (4.5) |
Suppose that (¯X(⋅),¯u(⋅))=(¯x,¯v)∈C([0,T];X)×U is the optimal solution of (4.5). In view of (5.28) and (5.29) with Π from (B.1), R and Q can be written as follows (see also [26,page 160,Chapter 4]):
\begin{align*} \mathsf{R} & = \Bigl \{ \xi(T) \in \mathsf{X} \; \Big| \; \xi(t) = \sum\limits_{j = 1}^k \theta_j \int_0^t \Pi(t,s) f_2(s) (u_j(s) - \overline{u}(s)) {\rm{d}}s,\; t \in [0,T],\; (\{u_j(\cdot)\}_{j = 1}^k, \{\theta_j\}_{j = 1}^k) \in \Omega \Bigr \}, \\ \mathsf{Q} & = \mathsf{F}_T - \Bigl ( I + \int_0^T \Pi(T,s) \bigl ( -A + \partial_X f_1(s,\overline{X}(s)) \bigr ) {\rm{d}}s \Bigr ) \mathsf{F}_0. \end{align*}
For Assumption 3, the finite codimensionality of FT in X is crucial. Indeed, by [26,Proposition 3.4 and page 160,Chapter 4]‡, the finite codimensionality of FT in X implies the finite codimensionality of Q in X, which (by [26,Proposition 3.4,Chapter 4] again) further implies the finite codimensionality of R−Q in X in Assumption 3. Moreover, it follows from [45,Section 2.4,page 1324]§ that FT is indeed finite codimensional in X. Hence, Assumption 3 holds.
‡Note that [26,Proposition 3.4,Chapter 4] does not need the fractional calculus in the proof.
§Note that [45,Section 2.4,page 1324] does not need the fractional calculus in the corresponding result.
Since X∗=X, we have (note that ¯X(0)=¯x(0,⋅) and ¯X(T)=¯x(T,⋅))
\begin{align} N_{\mathsf{F}_0}(\overline{x}(0,\cdot)) = \Bigl \{ z_1(\cdot) \in \mathsf{X} \; \Big| \; \int_{O} z_1(w) (x^{\prime\prime}(w) - \overline{x}(0,w)) {\rm{d}}w \leq 0,\; \forall x^{\prime\prime}(\cdot) \in \mathsf{F}_0 \Bigr \}, \end{align} | (4.6) |
\begin{align} N_{\mathsf{F}_T}(\overline{x}(T,\cdot)) = \Bigl \{ z_2(\cdot) \in \mathsf{X} \; \Big| \; \int_{O} z_2(w) (x^{\prime}(w) - \overline{x}(T,w)) {\rm{d}}w \leq 0,\; \forall x^{\prime}(\cdot) \in \mathsf{F}_T \Bigr \}. \end{align} | (4.7) |
Based on the preceding analysis and Theorem 3.1, we state the following result:
Proposition 4.1. Assume that (¯X(⋅),¯u(⋅))=(¯x,¯v)∈C([0,T];X)×U is the optimal solution of (4.5). Then there exist (p0,φ1,φ2)∈R×X×X such that the following conditions hold:
(i) Nontriviality condition: (p0,φ1,φ2)≠0, where p0≤0 and (φ1,φ2)∈NF0(¯x(0,⋅))×NFT(¯x(T,⋅)).
(ii) Adjoint equation and boundary condition: p∈L1([0,T];X) is the unique solution to the following RL fractional evolution equation with the boundary condition:
\begin{align*} \begin{cases} ^{\mathrm{RL}}{\mathrm{D}}{_{T-}^{\alpha}}[p(\cdot,w)](t) - \Delta p(t,w) = \partial_x f_1(t,\overline{x}(t,w)) p(t,w) - p_0 \frac{(T-t)^{\beta-1}}{\Gamma(\beta)} \partial_x l_1(t,\overline{x}(t,w)), & (t,w) \in (0,T) \times O, \\ p(t,w) = 0, & (t,w) \in [0,T] \times \partial O, \\ \mathrm{I}_{T-}^{1-\alpha}[p(\cdot,w)](T) = -\varphi_2(w), & w \in O, \\ \mathrm{I}_{T-}^{1-\alpha}[p(\cdot,w)](0) = \varphi_1(w), & w \in O. \end{cases} \end{align*}
(iii) Maximum condition: The optimal control ¯v satisfies the following maximum condition:
\begin{align*} & \int_{O} \Bigl ( p(t,w) f_2(t) + p_0 \frac{(T-t)^{\beta-1}}{\Gamma(\beta)} l_2(t) \Bigr ) \overline{v}(t,w) {\rm{d}}w \\ & \geq \int_{O} \Bigl ( p(t,w) f_2(t) + p_0 \frac{(T-t)^{\beta-1}}{\Gamma(\beta)} l_2(t) \Bigr ) v(t,w) {\rm{d}}w,\; \forall v(t,\cdot) \in U,\; \text{a.e.}\; t \in [0,T], \end{align*}
which is equivalent to the bang-bang control:
\begin{align*} \overline{v}(t,w) = \begin{cases} 1, & \text{if}\; p(t,w) f_2(t) + p_0 \frac{(T-t)^{\beta-1}}{\Gamma(\beta)} l_2(t) > 0, \\ -1, & \text{if}\; p(t,w) f_2(t) + p_0 \frac{(T-t)^{\beta-1}}{\Gamma(\beta)} l_2(t) < 0. \end{cases} \end{align*}
Proof. The entire proof is an application of Theorem 3.1. We observe that (i) follows from the nontriviality condition of Theorem 3.1. Then (ii) can be deduced from the adjoint equation and its boundary conditions in Theorem 3.1, where P(⋅)∈L1([0,T];X) with P(t):=p(t,⋅). Note that the inequality in the transversality condition of Theorem 3.1 holds immediately, since by (4.6) and (4.7), for any (x′′(⋅),x′(⋅))∈F0×FT, it follows that
\begin{align*} & \int_{O} \varphi_1(w) (x^{\prime\prime}(w) - \overline{x}(0,w)) {\rm{d}}w + \int_{O} \varphi_2(w) (x^{\prime}(w) - \overline{x}(T,w)) {\rm{d}}w \leq 0 \\ & \Leftrightarrow\; \langle \mathrm{I}_{T-}^{1-\alpha}[P](0), X^{\prime\prime} - \overline{X}_0 \rangle_{\mathsf{X} \times \mathsf{X}^*} - \langle \mathrm{I}_{T-}^{1-\alpha}[P](T), X^{\prime} - \overline{X}(T) \rangle_{\mathsf{X} \times \mathsf{X}^*} \leq 0. \end{align*}
The maximum condition and its equivalent bang-bang control in (iii) are obtained from the Hamiltonian maximization condition in Theorem 3.1. This completes the proof.
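As a numerical illustration of the bang-bang rule in Proposition 4.1, one can evaluate the switching function p(t,w)f2(t) + p0 (T−t)^{β−1}/Γ(β) l2(t) pointwise and take its sign. The sketch below is hypothetical: the inputs p_tw, f2_t, l2_t, p0, beta are placeholder values rather than data from the paper, and the singular case s = 0 (left undetermined by the maximum condition) is returned as 0.0:

```python
import math

def switching(p_tw, f2_t, l2_t, p0, t, T, beta):
    """Switching function s(t, w) = p(t,w) f2(t) + p0 (T-t)^(beta-1)/Gamma(beta) l2(t),
    evaluated at a single point (t, w) from given (hypothetical) sample values."""
    return p_tw * f2_t + p0 * (T - t) ** (beta - 1) / math.gamma(beta) * l2_t

def bang_bang(s):
    """Bang-bang control value from the sign of the switching function.
    The case s == 0 is undetermined by the maximum condition; 0.0 is a placeholder."""
    if s > 0:
        return 1.0
    if s < 0:
        return -1.0
    return 0.0
```

Note that t = T is excluded when β < 1, since the RL weight (T−t)^{β−1} blows up there.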
This section is devoted to proving Theorem 3.1. We use ⟨⋅,⋅⟩:=⟨⋅,⋅⟩X×X∗ when there is no confusion.
The key idea in proving the maximum principle in Theorem 3.1 is to apply the Ekeland variational principle under a family of spike variations; the main techniques are applications of intrinsic properties of fractional integrals and derivatives, the analysis of the variational equations in Lemmas 5.1 and 5.2, and the representation results for linear fractional evolution equations in Lemmas B.3–B.5. Specifically, the proof is divided into nine steps, outlined below:
Step 1. Ekeland variational principle: We formulate the unconstrained fractional optimal control problem of (P) with the penalized objective functional. Then the Ekeland variational principle is applied to obtain the ϵ-optimal solution to the corresponding unconstrained control problem as well as the associated inequalities (see (5.2)).
Step 2. Family of spike variations: By Lemma C.6 and (5.3), the family of spike variations is defined on {E_j^δ⊂[0,T], j=1,…,k}, where the E_j^δ are disjoint and ∑_{j=1}^k meas|E_j^δ|=δT (see (5.4)). Then the estimate of l and f under the spike variations is obtained with respect to δ (see (5.6)).
Step 3. Variational analysis of δ: By Step 2, variational analysis of the state trajectories and the objective functionals controlled by the family of spike variations and the ϵ-optimal solution is obtained with respect to δ (see Lemma 5.1). Based on Lemma 5.1, by letting δ↓0 in the inequalities in Step 1, we obtain the variational inequality as well as the multiplier condition (see (5.12) and (5.13)).
Step 4. Separation: By the separation argument and Lemma C.7, we obtain the alternative forms of (5.12) and (5.13) as well as the transversality-like conditions for the endpoint state constraints (see (5.19)–(5.22)).
Step 5. Variational analysis of ϵ: Similar to Step 3, variational analysis of the state trajectories and the objective functionals controlled by the ϵ-optimal solution and the optimal solution is obtained with respect to ϵ (see Lemma 5.2). Based on Lemma 5.2 and the multiplier condition (5.20) in Step 4, by letting ϵ↓0 in the inequality in Step 4, we obtain the variational inequality (see (5.26)).
Step 6. Proof of (i), Nontriviality condition: By the finite codimensionality in Assumption 3 and [26,Lemma 3.6,Chapter 4], together with the variational inequality in Step 5, the multiplier condition (5.20) in Step 4, and Lemma B.4 (see Appendix 6), we prove (p0,φ1,φ2)≠0 with p0≤0.¶ Then we show (φ1,φ2)∈NF0(¯X0)×NFT(¯X(T)) by the definition of subdifferential of convex functions. From the nontriviality condition, we obtain the limiting form of the variational inequality and multiplier condition in Step 5 with respect to ϵ↓0 (see (5.30)).
¶In this proof, we first show p≥0, and then p0:=−p.
Step 7. Proof of (ii), Duality and adjoint equation: By Lemma B.5, we prove (ii) of Theorem 3.1. Then using Lemmas B.3–B.5, we obtain the duality form of the variational inequality in Step 6 (see (5.33)).
Step 8. Proof of (iii), Transversality condition: By the definitions of (RL and Caputo) fractional integrals, we prove (iii) of Theorem 3.1 by setting u=¯u in the (duality form) variational inequality in Step 7.
Step 9. Proof of (iv), Hamiltonian maximization condition: Using the (duality form) variational inequality in Step 7 and [6,Theorem 5.6.2], we prove (iv) of Theorem 3.1.
Recall that (¯X(⋅),¯u(⋅))∈C([0,T];X)×U is the optimal solution of (P). Let dF0:X→R+ be the distance function of F0 defined by dF0(X):=infy∈F0|y−X| for X∈X. Let dFT:X→R+ be the distance function of FT defined similarly. Then the endpoint state constraint (X0,X(T))∈F0×FT is equivalent to dF0(X0)=0 and dFT(X(T))=0.
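For intuition, here is a minimal finite-dimensional analogue (not the paper's infinite-dimensional X) of a distance function to a closed convex set, using a Euclidean ball as the stand-in constraint set; it illustrates the equivalence d_F(X) = 0 ⟺ X ∈ F used above:

```python
import math

def dist_to_ball(x, center, radius):
    """Distance d_F(x) to the closed ball F = {y : |y - center| <= radius}.
    For this F, d_F(x) = max(|x - center| - radius, 0), and x is in F
    exactly when d_F(x) == 0."""
    gap = math.dist(x, center) - radius
    return max(gap, 0.0)
```

For a general closed convex F the same principle holds with d_F(x) = inf over y in F of |y − x|, as in the text.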
We introduce the penalized objective functional
\begin{align} J_{\epsilon}(X_0;u(\cdot)) = \Bigl ( \bigl ( \bigl [ J(X_0;u(\cdot)) - J(\overline{X}_0;\overline{u}(\cdot)) + \epsilon \bigr ]^+ \bigr)^2 + d_{\mathsf{F}_0}(X_0)^2 + d_{\mathsf{F}_T}(X(T))^2 \Bigr)^{\frac{1}{2}}, \end{align} | (5.1) |
where [\cdot]^{+} : = \max\{\cdot, 0\} . Define the Ekeland metric by
\begin{align*} \tilde{d}(u(\cdot),\widetilde{u}(\cdot)) : = \mathrm{meas}|\{t \in [0,T]\; |\; u(t) \neq \widetilde{u}(t)\}|, \end{align*} |
where \mathrm{meas}|\cdot| is understood as the Lebesgue measure of the corresponding set. Let us define the metric
\begin{align*} \widehat{d}((X_0,X_0^\prime),(u(\cdot),\widetilde{u}(\cdot))) : = |X_0 - X_0^\prime|_{\mathsf{X}} + \tilde{d}(u(\cdot),\widetilde{u}(\cdot)). \end{align*} |
We observe that (\mathsf{X} \times \mathcal{U}, \widehat{d}) is a complete metric space [26,Proposition 3.10,Chapter 4]. Furthermore, by our assumptions and Theorem 2.1, J_{\epsilon} is a continuous functional on (\mathsf{X} \times \mathcal{U}, \widehat{d}) .
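The Ekeland metric measures the set of times on which two controls disagree. A rough numerical sketch, assuming piecewise-constant controls sampled on a uniform grid of [0,T] (so the Lebesgue measure of the disagreement set is approximated by counting grid cells):

```python
def ekeland_metric(u, u_tilde, T):
    """Approximate tilde-d(u, u~) = meas|{t in [0,T] : u(t) != u~(t)}| for
    controls sampled on a uniform grid; each sample represents a cell of
    length T / n. This is only a grid approximation of the Lebesgue measure."""
    assert len(u) == len(u_tilde)
    n = len(u)
    return (T / n) * sum(1 for a, b in zip(u, u_tilde) if a != b)
```

For example, two controls on [0,1] differing on 2 of 4 cells are at Ekeland distance 0.5.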
We can observe that
\begin{align*} J_{\epsilon}(X_0;u(\cdot)) & > 0,\; \forall (X_0,u(\cdot)) \in \mathsf{X} \times \mathcal{U}, \\ J_{\epsilon}(\overline{X}_0;\overline{u}(\cdot)) & = \epsilon \leq \inf\limits_{(X_0,u(\cdot)) \in \mathsf{X} \times \mathcal{U}} J_{\epsilon}(X_0; u(\cdot)) + \epsilon. \end{align*} |
By the Ekeland variational principle [26,Chapter 4.2], there exists a pair (X_0^{\epsilon}, u^{\epsilon}(\cdot)) \in \mathsf{X} \times \mathcal{U} such that
\begin{align} \begin{cases} \widehat{d}((X_0^{\epsilon},\overline{X}_0),(u^{\epsilon}(\cdot),\overline{u}(\cdot))) \leq \sqrt{\epsilon}, \\ J_{\epsilon}(X_0^{\epsilon};u^{\epsilon}(\cdot)) \leq J_{\epsilon}(\overline{X}_0;\overline{u}(\cdot)) = \epsilon, \\ J_{\epsilon}(X_0^{\epsilon};u^{\epsilon}(\cdot)) \leq J_{\epsilon}(X_0;u(\cdot)) + \sqrt{\epsilon} \widehat{d}((X_0^{\epsilon},X_0),(u^{\epsilon}(\cdot),u(\cdot))),\; \forall u(\cdot) \in \mathcal{U}. \end{cases} \end{align} | (5.2) |
Let X^{\epsilon}(\cdot) : = X(\cdot; X_0^{\epsilon}, u^{\epsilon}) \in C([0, T]; \mathsf{X}) be the state trajectory of (2.1) controlled by u^{\epsilon}(\cdot) \in \mathcal{U} with X^{\epsilon}(0) = X_0^{\epsilon} . It follows from Theorem 2.1 that \sup_{t \in [0, T]} |X^{\epsilon}(t)| \leq M .
Recall that \omega : = (\{u_j(\cdot)\}_{j = 1}^{k}, \{\theta_j\}_{j = 1}^{k}) \in \Omega , where \sum_{j = 1}^k \theta_j = 1 and \theta_j \geq 0 . Let us define
\begin{align*} h_j^{\epsilon}(s) & : = \begin{bmatrix} l(s,X^{\epsilon}(s),u_j(s)) - l(s,X^{\epsilon}(s),u^{\epsilon}(s)) \\ f(s,X^{\epsilon}(s),u_j(s)) - f(s,X^{\epsilon}(s),u^{\epsilon}(s)) \end{bmatrix} \in \mathbb{R} \times \mathsf{X}. \end{align*} |
Clearly, h_j^{\epsilon}(\cdot) \in L^1([0, T]; \mathbb{R} \times \mathsf{X}) , j = 1, \ldots, k by Assumptions 1 and 2. Hence, by invoking Lemma C.6 in Appendix 6, for any \delta \in (0, 1) , there are \{E_j^{\delta} \subset [0, T], \; j = 1,\ldots, k\} such that, uniformly with respect to \omega \in \Omega ,
\begin{align} \begin{cases} E_j^{\delta} \cap E_i^{\delta} = \emptyset,\; j \neq i, \\ \sum\limits_{j = 1}^k \mathrm{meas}|E_j^{\delta}| = \delta T, \\ \sup\limits_{t \in [0,T]} \Bigl | \delta \sum\limits_{j = 1}^k \theta_j \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} h_j^{\epsilon}(s) {\rm{d}}s - \sum\limits_{j = 1}^k \int_{E_j^{\delta} \cap [0,t]} \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} h_j^{\epsilon}(s) {\rm{d}}s \Bigr |_{\mathbb{R} \times \mathsf{X}} = o(\delta). \end{cases} \end{align} | (5.3) |
We define the family of spike variations by
\begin{align} u^{\epsilon,\delta}_{\omega}(t) : = \begin{cases} u_j(t), & \text{for}\; t \in E_j^{\delta},\; j = 1,\ldots,k,\\ u^{\epsilon}(t), & \text{for}\; t \in [0,T] \setminus \cup_{j = 1}^k E_j^{\delta}. \end{cases} \end{align} | (5.4) |
Clearly u^{\epsilon, \delta}_{\omega} (\cdot) \in \mathcal{U} and \tilde{d}(u^{\epsilon, \delta}_{\omega}(\cdot), u^\epsilon(\cdot)) \leq \mathrm{meas}|E^{\delta}| = \delta T , where E^{\delta} : = \cup_{j = 1}^k E_j^{\delta} . Note that
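The construction (5.4) can be mimicked numerically for grid-sampled controls. The sketch below is illustrative only: index sets stand in for the measurable sets E_j^δ, which are assumed disjoint as in (5.3):

```python
def spike_variation(u_eps, candidates, E_sets):
    """Build the spike variation: on the (disjoint) index sets E_sets[j] use the
    candidate control candidates[j]; elsewhere keep the base control u_eps.
    All controls are sampled on a common grid of the time interval."""
    u = list(u_eps)
    seen = set()
    for u_j, E_j in zip(candidates, E_sets):
        assert seen.isdisjoint(E_j), "the sets E_j^delta must be disjoint"
        seen.update(E_j)
        for i in E_j:
            u[i] = u_j[i]
    return u
```

The resulting control agrees with u_eps outside the union of the E_j, matching the bound tilde-d ≤ meas|E^δ|.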
\begin{align} \sum\limits_{j = 1}^k \int_{E_j^{\delta} \cap [0,t] } \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} h_j^{\epsilon}(s) {\rm{d}}s = \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} h_{\omega}^{\epsilon,\delta}(s) {\rm{d}}s, \end{align} | (5.5) |
where h^{\epsilon, \delta}_{\omega}(\cdot) \in \mathbb{R} \times \mathsf{X} is defined by
\begin{align*} h^{\epsilon,\delta}_{\omega}(s) : = \begin{bmatrix} l(s,X^{\epsilon}(s),u^{\epsilon,\delta}_{\omega}(s)) - l(s,X^{\epsilon}(s),u^{\epsilon}(s)) \\ f(s,X^{\epsilon}(s),u^{\epsilon,\delta}_{\omega}(s)) - f(s,X^{\epsilon}(s),u^{\epsilon}(s)) \end{bmatrix}. \end{align*} |
Then from (5.3) and (5.5), it follows that
\begin{align} \sup\limits_{(t,\omega) \in [0,T] \times \Omega} \Bigl | \delta \sum\limits_{j = 1}^k \theta_j \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} h_j^{\epsilon}(s) {\rm{d}}s - \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} h^{\epsilon,\delta}_{\omega}(s) {\rm{d}}s \Bigr |_{\mathbb{R} \times \mathsf{X}} = o(\delta). \end{align} | (5.6) |
Notice that (5.6) will be used to prove the variational analysis in Lemma 5.1.
We consider the state variation X^{\epsilon, \delta}_{\omega}(\cdot) : = X(\cdot; X_0^\epsilon+\delta a, u^{\epsilon, \delta}_{\omega}) \in C([0, T]; \mathsf{X}) , where a \in \mathsf{X} . Note that X^{\epsilon, \delta}_{\omega} is the state trajectory under the spike variation (5.4) and the perturbed initial condition X_0^{\epsilon} + \delta a . Let us define (with \widetilde{X}^{\epsilon, \delta}_{\omega}(\cdot) : = X^{\epsilon, \delta}_{\omega}(\cdot) - X^{\epsilon}(\cdot) )
\begin{align*} f^\epsilon(s) &: = f(s,X^{\epsilon}(s),u^\epsilon(s)),\; \partial_X f^\epsilon(s) : = \partial_X f(s,X^{\epsilon}(s),u^\epsilon(s)), \\ \widehat{f}^{\epsilon}_j(s) & : = f(s,X^{\epsilon}(s),u_j(s)) - f(s,X^{\epsilon}(s),u^\epsilon(s)),\; f_X^{\epsilon,\delta}(s) : = \int_0^1 \partial_X f(s,X^\epsilon(s) + r \widetilde{X}^{\epsilon,\delta}_{\omega}(s),u^{\epsilon,\delta}_{\omega}(s)) {\rm{d}}r, \\ l^\epsilon(s) &: = l(s,X^{\epsilon}(s),u^\epsilon(s)),\; \partial_X l^\epsilon(s) : = \partial_X l(s,X^{\epsilon}(s),u^\epsilon(s)) ,\\ \widehat{l}^{\epsilon}_j(s) & : = l(s,X^{\epsilon}(s),u_j(s)) - l(s,X^{\epsilon}(s),u^\epsilon(s)),\; l_X^{\epsilon,\delta}(s) : = \int_0^1 \partial_X l(s,X^\epsilon(s) + r \widetilde{X}^{\epsilon,\delta}_{\omega}(s),u^{\epsilon,\delta}_{\omega}(s)) {\rm{d}}r , \\ m^{\epsilon} & : = m(X_0^\epsilon,X^\epsilon(T)),\; \partial_{X_0} m^{\epsilon} : = \partial_{X_0} m(X_0^\epsilon,X^\epsilon(T)),\; \partial_{X} m^{\epsilon} : = \partial_{X} m(X_0^\epsilon,X^\epsilon(T)) , \\ m_{X_0}^{\epsilon,\delta}(T) & : = \int_0^1 \partial_{X_0} m(X_0^\epsilon + r \delta a, X^{\epsilon}(T) + r \widetilde{X}^{\epsilon,\delta}_{\omega}(T)) {\rm{d}}r,\; m_{X}^{\epsilon,\delta}(T) : = \int_0^1 \partial_{X} m (X_0^\epsilon + r \delta a, X^{\epsilon}(T) + r \widetilde{X}^{\epsilon,\delta}_{\omega}(T)) {\rm{d}}r ,\\ \widehat{f}_j(s) &: = f(s,\overline{X}(s),u_j(s)) - f(s,\overline{X}(s),\overline{u}(s)),\; \widehat{l}_j(s) : = l(s,\overline{X}(s),u_j(s)) - l(s,\overline{X}(s),\overline{u}(s)). \end{align*}
We state the variational analysis with respect to \delta .
Lemma 5.1. The following estimates hold:
\begin{align*} &\sup\limits_{\omega \in \Omega} \sup\limits_{t \in [0,T]} \Biggl | \frac{X_{\omega}^{\epsilon,\delta}(t) - X^{\epsilon}(t)}{\delta} - Z^{\epsilon}_{\omega}(t) \Biggr |_{\mathsf{X}} = o(1),\\ & \sup\limits_{ \omega \in \Omega} \Biggl | \frac{J(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) - J(X_0^{\epsilon}; u^{\epsilon}(\cdot))}{\delta} - \widehat{Z}^{\epsilon}_{\omega} \Biggr | = o(1), \end{align*}
where Z_{\omega}^{\epsilon} is the \mathsf{X} -valued variational equation given by
\begin{align} & \begin{cases} ^{ \mathrm{C}}{ \mathrm{D}}{_{0+}^{\alpha}}[Z^{\epsilon}_{\omega}](t) + A Z^{\epsilon}_{\omega}(t) = \partial_X f^{\epsilon}(t) Z^{\epsilon}_{\omega}(t) + \sum\limits_{j = 1}^k \theta_j \widehat{f}_j^{\epsilon}(t),\; t \in (0,T],\\ Z^{\epsilon}_{\omega}(0) = a \in \mathsf{X}, \end{cases} \end{align} | (5.7) |
and \widehat{Z}_{\omega}^{\epsilon} is the \mathbb{R} -valued variational equation:
\begin{align} & \widehat{Z}^{\epsilon}_{\omega} = \sum\limits_{j = 1}^k \theta_j \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \Bigl ( \partial_X l^{\epsilon}(s) Z^{\epsilon}_{\omega}(s) + \widehat{l}^{\epsilon}_j(s) \Bigr ) {\rm{d}}s + \partial_{X_0} m^{\epsilon} a + \partial_{X} m^{\epsilon} Z^{\epsilon}_{\omega}(T). \end{align} | (5.8) |
Proof. Let Z^{\epsilon, \delta}_{\omega}(\cdot) : = \frac{X^{\epsilon, \delta}_{\omega}(\cdot) - X^{\epsilon}(\cdot)}{\delta} . Then Z^{\epsilon, \delta}_{\omega} satisfies
\begin{align*} Z^{\epsilon,\delta}_{\omega}(t) = & \frac{1}{\delta} \Bigl ( X_0 + \delta a + \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( -A X^{\epsilon,\delta}_{\omega}(s) + f(s,X^{\epsilon,\delta}_{\omega}(s),u^{\epsilon,\delta}_{\omega}(s)) \Bigr ) {\rm{d}}s \nonumber\\ &\; \; \; - X_0 - \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( -A X^{\epsilon}(s) + f(s,X^{\epsilon}(s),u^{\epsilon}(s)) \Bigr ) {\rm{d}}s \Bigr ) \nonumber\\ & = - \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} A Z^{\epsilon,\delta}_{\omega}(s) {\rm{d}}s \nonumber\\ &\; \; \; + \frac{1}{\delta} \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( f(s,X^{\epsilon,\delta}_{\omega}(s),u^{\epsilon,\delta}_{\omega}(s)) - f(s,X^{\epsilon}(s),u^{\epsilon,\delta}_{\omega}(s)) \\ &\; \; \; + f(s,X^{\epsilon}(s),u^{\epsilon,\delta}_{\omega}(s)) - f(s,X^{\epsilon}(s),u^{\epsilon}(s)) \Bigr ) {\rm{d}}s. \end{align*} |
By (5.6), we get
\begin{align*} Z^{\epsilon,\delta}_{\omega}(t) & = \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( - A + f_X^{\epsilon,\delta}(s) \Bigr ) Z^{\epsilon,\delta}_{\omega}(s) {\rm{d}}s + \sum\limits_{j = 1}^k \theta_j \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \widehat{f}^{\epsilon}_j(s) {\rm{d}}s + \frac{o(\delta)}{\delta}. \end{align*} |
We then have
\begin{align*} Z^{\epsilon,\delta}_{\omega}(t) - Z^{\epsilon}_{\omega}(t) & = - \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} A (Z^{\epsilon,\delta}_{\omega}(s) - Z^{\epsilon}_{\omega}(s)) {\rm{d}}s + o(1) \nonumber\\ & + \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} f_X^{\epsilon,\delta}(s) (Z^{\epsilon,\delta}_{\omega}(s) - Z^{\epsilon}_{\omega}(s) ) {\rm{d}}s \nonumber\\ & + \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigr ( f_X^{\epsilon,\delta}(s) - \partial_X f^{\epsilon}(s) \Bigr ) Z^{\epsilon}_{\omega}(s) {\rm{d}}s. \end{align*} |
Since Z^{\epsilon}_{\omega} satisfies a linear Caputo fractional evolution equation, by Theorem 2.1, it follows that
\begin{align} \sup\limits_{(\omega,t) \in \Omega \times [0,T]} |Z^{\epsilon}_{\omega}(t)|_{\mathsf{X}} \leq M. \end{align} | (5.9) |
Similarly, by (5.4) and Theorem 2.1, it follows that
\begin{align} \sup\limits_{(\omega,t) \in \Omega \times [0,T]} |X^{\epsilon,\delta}_{\omega}(t) - X^{\epsilon}(t)|_{\mathsf{X}} \leq M \tilde{d}(u^{\epsilon,\delta}_{\omega}(\cdot),u^{\epsilon}(\cdot)) \leq M \delta. \end{align} | (5.10) |
Recall E^{\delta} : = \cup_{j = 1}^k E_j^{\delta} and \{E_j^{\delta} \subset [0, T], \; j = 1,\ldots, k\} with \sum_{j = 1}^k \mathrm{meas}|E_j^{\delta}| = \delta T . We claim the following estimate:
\begin{align*} &\int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigr ( f_X^{\epsilon,\delta}(s) - \partial_X f^{\epsilon}(s) \Bigr ) Z^{\epsilon}_{\omega}(s) {\rm{d}}s \nonumber\\ & = \int_{[0,t] \cap E^{\delta}} \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigr ( f_X^{\epsilon,\delta}(s) - \partial_X f^{\epsilon}(s) \Bigr ) Z^{\epsilon}_{\omega}(s) {\rm{d}}s \nonumber\\ & + \int_{[0,t] \setminus E^{\delta}} \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigr ( f_X^{\epsilon,\delta}(s) - \partial_X f^{\epsilon}(s) \Bigr ) Z^{\epsilon}_{\omega}(s) {\rm{d}}s = o(1). \end{align*} |
By (5.9) and the fact that \mathrm{meas}|E^{\delta}| = \delta T , we have
\begin{align*} \sup\limits_{(\omega,t) \in \Omega \times [0,T]} \Bigl | \int_{[0,t] \cap E^{\delta}} \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigr ( f_X^{\epsilon,\delta}(s) - \partial_X f^{\epsilon}(s) \Bigr ) Z^{\epsilon}_{\omega}(s) {\rm{d}}s \Bigr |_{\mathsf{X}} = o(1). \end{align*} |
Also, by (5.9) and (5.10), we have
\begin{align*} & \sup\limits_{(\omega,t) \in \Omega \times [0,T]} \Bigl | \int_{[0,t] \setminus E^{\delta}} \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( f_X^{\epsilon,\delta}(s) - \partial_X f^{\epsilon}(s) \Bigr ) Z^{\epsilon}_{\omega}(s) {\rm{d}}s \Bigr |_{\mathsf{X}} \nonumber\\ & \leq M \sup\limits_{(\omega,t) \in \Omega \times [0,T]} \int_{[0,t] \setminus E^{\delta}} \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} |X^{\epsilon,\delta}_{\omega}(s) - X^{\epsilon}(s)|_{\mathsf{X}} |Z^{\epsilon}_{\omega}(s)|_{\mathsf{X}} {\rm{d}}s \leq M \delta = o(1). \end{align*}
Based on the preceding estimates, it follows that
\begin{align*} Z^{\epsilon,\delta}_{\omega}(t) - Z^{\epsilon}_{\omega}(t) \leq \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigl (-A + M \Bigr ) (Z^{\epsilon,\delta}_{\omega}(s) - Z^{\epsilon}_{\omega}(s) ) {\rm{d}}s + o(1). \end{align*} |
By Lemma A.2,
\begin{align*} \sup\limits_{(\omega,t) \in \Omega \times [0,T]} |Z^{\epsilon,\delta}_{\omega}(t) - Z^{\epsilon}_{\omega}(t)|_{\mathsf{X}} \leq o(1) E_{\alpha}(M\Gamma(\alpha)T^{\alpha}). \end{align*} |
As \delta \downarrow 0 , the first estimate follows.
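The Grönwall-type bound above involves the one-parameter Mittag-Leffler function E_α(z) = Σ_{k≥0} z^k/Γ(αk+1). A simple truncated-series evaluation (the truncation level is a numerical choice, not from the paper; for α = 1 the series is exp(z)):

```python
import math

def mittag_leffler(alpha, z, n_terms=100):
    """Truncated series for E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1).
    The series converges for every z; n_terms controls the truncation error.
    For alpha = 1 this reduces to the exponential function."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))
```

In the bound, the constant E_α(MΓ(α)T^α) is finite for every fixed M, α, T, which is what makes the o(1) estimate uniform.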
To prove the second estimate, we note that
\begin{align*} &\frac{J(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) - J(X_0^{\epsilon}; u^{\epsilon}(\cdot))}{\delta} - \widehat{Z}^{\epsilon}_{\omega} \nonumber\\ & = \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} l_X^{\epsilon,\delta}(s) (Z^{\epsilon,\delta}_{\omega}(s) - Z^{\epsilon}_{\omega}(s)) {\rm{d}}s \nonumber\\ & + \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \Bigl ( l_X^{\epsilon,\delta}(s) - \partial_X l^{\epsilon}(s) \Bigr) Z^{\epsilon}_{\omega}(s) {\rm{d}}s + (m_{X_0}^{\epsilon,\delta}(T) - \partial_{X_0} m^{\epsilon}) a \nonumber\\ & + m_X^{\epsilon,\delta}(T) (Z^{\epsilon,\delta}_{\omega}(T) - Z^{\epsilon}_{\omega}(T)) + (m_X^{\epsilon,\delta}(T) - \partial_X m^{\epsilon}) Z^{\epsilon}_{\omega}(T). \end{align*}
Then the rest of the proof is analogous to that for the first estimate. This completes the proof.
Now, consider the situation of \delta \downarrow 0 . For any \omega = (\{u_j(\cdot)\}_{j = 1}^{k}, \{\theta_j\}_{j = 1}^{k}) \in \Omega , in view of (5.2), it follows that
\begin{align*} \frac{J_{\epsilon}(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) - J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))}{\delta} \geq - \frac{\sqrt{\epsilon}}{\delta} \widehat{d}((X_0^{\epsilon} + \delta a,X_0^{\epsilon}),(u^{\epsilon,\delta}_{\omega}(\cdot),u^{\epsilon}(\cdot))) \geq - \sqrt{\epsilon} (T + |a|_{\mathsf{X}}). \end{align*}
We then have
\begin{align} - \sqrt{\epsilon} (T + |a|_{\mathsf{X}}) & \leq \frac{1}{\delta} \frac{1}{J_{\epsilon}(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) + J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} \\ & \times \Biggl ( \Bigl ( \bigl ( \bigl [ J(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) - J(\overline{X}_0; \overline{u}(\cdot)) + \epsilon \bigr ]^+ \bigr)^2 + d_{\mathsf{F}_0}(X_0^{\epsilon} + \delta a)^2 + d_{\mathsf{F}_T}(X^{\epsilon,\delta}_{\omega}(T))^2 \Bigr) \nonumber\\ & - \Bigl ( \bigl ( \bigl [ J(X_0^{\epsilon}; u^{\epsilon}(\cdot)) - J(\overline{X}_0;\overline{u}(\cdot)) + \epsilon \bigr ]^+ \bigr)^2 + d_{\mathsf{F}_0}(X_0^{\epsilon})^2 + d_{\mathsf{F}_T}(X^{\epsilon}(T))^2 \Bigr) \Biggr ). \nonumber \end{align} | (5.11) |
Let us consider the case when \delta \downarrow 0 in (5.11).
(F.1) Note that \lim_{\delta \downarrow 0} J_{\epsilon}(X_0^{\epsilon} + \delta a; u^{\epsilon, \delta}_{\omega}(\cdot)) = J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot)) + o(1) by Lemma 5.1. Hence,
\begin{align*} & \lim\limits_{\delta \downarrow 0} \frac{1}{\delta} \frac{1}{J_{\epsilon}(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) + J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} \nonumber\\ &\; \; \; \; \; \; \; \times \Biggl ( \bigl ( \bigl [ J(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) - J(\overline{X}_0; \overline{u}(\cdot)) + \epsilon \bigr ]^+ \bigr)^2 - \bigl ( \bigl [ J(X_0^{\epsilon}; u^{\epsilon}(\cdot)) - J(\overline{X}_0; \overline{u}(\cdot)) + \epsilon \bigr ]^+ \bigr)^2 \Biggr ) \nonumber\\ & = \lim\limits_{\delta \downarrow 0} \frac{1}{J_{\epsilon}(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) + J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} \nonumber\\ &\; \; \; \; \; \; \; \times \lim\limits_{\delta \downarrow 0} \Biggl ( \bigl [ J(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) - J(\overline{X}_0; \overline{u}(\cdot)) + \epsilon \bigr ]^+ + \bigl [ J(X_0^{\epsilon}; u^{\epsilon}(\cdot)) - J(\overline{X}_0; \overline{u}(\cdot)) + \epsilon \bigr ]^+ \Biggr ) \nonumber\\ &\; \; \; \; \; \; \; \times \lim\limits_{\delta \downarrow 0} \frac{1}{\delta} \Biggl ( \bigl [ J(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) - J(\overline{X}_0; \overline{u}(\cdot)) + \epsilon \bigr ]^+ - \bigl [ J(X_0^{\epsilon}; u^{\epsilon}(\cdot)) - J(\overline{X}_0; \overline{u}(\cdot)) + \epsilon \bigr ]^+ \Biggr )\nonumber \\ & = \frac{\bigl [ J(X_0^{\epsilon}; u^{\epsilon}(\cdot)) - J(\overline{X}_0; \overline{u}(\cdot)) + \epsilon \bigr ]^+}{J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} \widehat{Z}^{\epsilon}_{\omega} = : \lambda_{\epsilon} \widehat{Z}^{\epsilon}_{\omega}, \end{align*}
where by definition, \lambda_{\epsilon} \geq 0 . (F.2) By Lemma 5.1, it follows that
\begin{align*} & \lim\limits_{\delta \downarrow 0} \frac{1}{\delta} \frac{1}{J_{\epsilon}(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) + J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} \Bigl ( d_{\mathsf{F}_T}(X^{\epsilon,\delta}_{\omega}(T))^2 - d_{\mathsf{F}_T}(X^{\epsilon}(T))^2 \Bigr ) \nonumber\\ & = \lim\limits_{\delta \downarrow 0} \frac{1}{J_{\epsilon}(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) + J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} \nonumber \\ &\; \; \; \; \; \; \; \times \lim\limits_{\delta \downarrow 0} \bigl ( d_{\mathsf{F}_T}(X^{\epsilon,\delta}_{\omega}(T)) + d_{\mathsf{F}_T}(X^{\epsilon}(T)) \bigr ) \lim\limits_{\delta \downarrow 0} \frac{1}{\delta} \bigl ( d_{\mathsf{F}_T}(X^{\epsilon,\delta}_{\omega}(T)) - d_{\mathsf{F}_T}(X^{\epsilon}(T)) \bigr ) \nonumber\\ & = \frac{1}{ J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} d_{\mathsf{F}_T}(X^{\epsilon}(T)) \lim\limits_{\delta \downarrow 0} \frac{1}{\delta} \Bigl ( d_{\mathsf{F}_T}(X^{\epsilon}(T) + \delta Z^{\epsilon}_{\omega}(T)) - d_{\mathsf{F}_T}(X^{\epsilon}(T)) \Bigr )\nonumber \\ &\; \; \; + \frac{1}{ J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} d_{\mathsf{F}_T}(X^{\epsilon}(T)) \lim\limits_{\delta \downarrow 0} \frac{1}{\delta} \Bigl ( d_{\mathsf{F}_T}(X^{\epsilon}(T) + \delta Z^{\epsilon}_{\omega}(T) + o(\delta)) - d_{\mathsf{F}_T}(X^{\epsilon}(T) + \delta Z^{\epsilon}_{\omega}(T)) \Bigr ) \nonumber\\ & = \frac{1}{ J_{\epsilon}(X_0^{\epsilon};u^{\epsilon}(\cdot))} d_{\mathsf{F}_T}(X^{\epsilon}(T)) \nabla d_{\mathsf{F}_T}(X^{\epsilon}(T);Z^{\epsilon}_{\omega}(T)) = : \nu_{\epsilon} \nabla d_{\mathsf{F}_T}(X^{\epsilon}(T);Z^{\epsilon}_{\omega}(T)), \end{align*} |
where \nabla d_{\mathsf{F}_T}(X^{\epsilon}(T); Z^{\epsilon}_{\omega}(T)) is the directional (or Gâteaux) derivative of d_{\mathsf{F}_T} at X^{\epsilon}(T) and \nu_{\epsilon} \geq 0 . (F.3) Similarly, we can prove that
\begin{align*} & \lim\limits_{\delta \downarrow 0} \frac{1}{\delta} \frac{1}{J_{\epsilon}(X_0^{\epsilon} + \delta a; u^{\epsilon,\delta}_{\omega}(\cdot)) + J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} \Bigl ( d_{\mathsf{F}_0}(X_0^{\epsilon} + \delta a)^2 - d_{\mathsf{F}_0}(X_0^{\epsilon})^2 \Bigr ) \nonumber\\ & = \frac{1}{ J_{\epsilon}(X_0^{\epsilon};u^{\epsilon}(\cdot))} d_{\mathsf{F}_0}(X_0^{\epsilon}) \nabla d_{\mathsf{F}_0}(X_0^{\epsilon};a) = : \eta_{\epsilon} \nabla d_{\mathsf{F}_0}(X_0^{\epsilon};a), \end{align*} |
where \nabla d_{\mathsf{F}_0}(X_0^{\epsilon}; a) is the directional (or Gâteaux) derivative of d_{\mathsf{F}_0} at X_0^{\epsilon} and \eta_{\epsilon} \geq 0 .
Based on Lemma 5.1 and (F.1)–(F.3) above, as \delta \downarrow 0 , (5.11) becomes
\begin{align} - \sqrt{\epsilon} (T + |a|_{\mathsf{X}}) & \leq \frac{\bigl [ J(X_0^{\epsilon}; u^{\epsilon}(\cdot)) - J(\overline{X}_0;\overline{u}(\cdot)) + \epsilon \bigr ]^+}{J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} \widehat{Z}^{\epsilon}_{\omega} \\ &\; \; \; + \frac{d_{\mathsf{F}_0}(X_0^{\epsilon})}{ J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} \nabla d_{\mathsf{F}_0}(X_0^{\epsilon};a) + \frac{d_{\mathsf{F}_T}(X^{\epsilon}(T))}{ J_{\epsilon}(X_0^{\epsilon}; u^{\epsilon}(\cdot))} \nabla d_{\mathsf{F}_T}(X^{\epsilon}(T);Z^{\epsilon}_{\omega}(T)) \\ & = \lambda_{\epsilon} \widehat{Z}^{\epsilon}_{\omega} + \eta_{\epsilon} \nabla d_{\mathsf{F}_0}(X_0^{\epsilon};a) + \nu_{\epsilon} \nabla d_{\mathsf{F}_T}(X^{\epsilon}(T);Z^{\epsilon}_{\omega}(T)) = : \Psi_{\omega}^{\epsilon}(\lambda_{\epsilon},\eta_{\epsilon},\nu_{\epsilon}), \end{align} | (5.12) |
where \lambda_{\epsilon}, \eta_{\epsilon}, \nu_{\epsilon} \geq 0 and by definition of J_{\epsilon} , the multiplier condition is given by
\begin{align} \lambda_{\epsilon}^2 + \eta_{\epsilon}^2 + \nu_{\epsilon}^2 = 1. \end{align} | (5.13) |
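The normalization (5.13) is immediate from the definition of J_ε in (5.1): λ_ε, η_ε, ν_ε are the three coordinates of a vector divided by its Euclidean norm. A quick numerical check with arbitrary placeholder values (not data from the paper):

```python
import math

def multipliers(j_gap_plus, d0, dT):
    """Given a = [J - J(optimal) + eps]^+, d0 = d_{F_0}(X_0), dT = d_{F_T}(X(T)),
    we have J_eps = sqrt(a^2 + d0^2 + dT^2), and the multipliers are the ratios
    lambda = a / J_eps, eta = d0 / J_eps, nu = dT / J_eps, which automatically
    satisfy lambda^2 + eta^2 + nu^2 = 1 whenever J_eps > 0."""
    j_eps = math.sqrt(j_gap_plus ** 2 + d0 ** 2 + dT ** 2)
    return j_gap_plus / j_eps, d0 / j_eps, dT / j_eps
```

This also makes the nonnegativity of the three multipliers evident, since each numerator is nonnegative.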
This subsection is not needed when X^* , the dual space of X , is strictly convex (see Remark 3.2). Note that the strict convexity of \mathsf{X}^* is crucial in various results on infinite-dimensional maximum principles to obtain the explicit derivative of the distance function. In this subsection, we obtain an alternative expression of (5.12) and (5.13) via the separation argument.
Let us define
\begin{align*} \mathcal{A}^{\epsilon} & : = \{ (a,Z,\widehat{Z},\gamma) \in \mathsf{X} \times \mathsf{X} \times \mathbb{R} \times \mathbb{R}\; | \; \lambda_{\epsilon} \widehat{Z} + \eta_{\epsilon} \nabla d_{\mathsf{F}_0} (X_0^{\epsilon};a) + \nu_{\epsilon} \nabla d_{\mathsf{F}_T} (X^{\epsilon}(T);Z) \leq \gamma \}, \\ \mathcal{B}^{\epsilon} & : = \{ (\bar{a},\bar{Z},\widehat{\bar{Z}},\bar{\gamma}) \in \mathsf{X} \times \mathsf{X} \times \mathbb{R} \times \mathbb{R}\; |\; (\bar{a},\bar{Z},\widehat{\bar{Z}},\bar{\gamma}) = (\bar{a},Z^{\epsilon}_{\omega}(T),\widehat{Z}^{\epsilon}_{\omega},- \sqrt{\epsilon}(T + |\bar{a}|_{\mathsf{X}})),\; \bar{a} \in \mathsf{X},\; \omega \in \Omega \}. \end{align*}
By Lemma C.7 in Appendix 6, \mathcal{B}^{\epsilon} is a convex subset of \mathsf{X} \times \mathsf{X} \times \mathbb{R} \times \mathbb{R} . In addition, \mathcal{A}^{\epsilon} is convex and nonempty, where the former follows from the fact that the Gâteaux derivative of the distance function is (Lipschitz) continuous and convex in its direction, while the latter is due to the finiteness of the Gâteaux derivatives [10,Chapter 2]. Note also that \mathcal{B}^{\epsilon} contains no interior points of \mathcal{A}^{\epsilon} .
By the separation theorem (also known as the geometric version of Hahn-Banach theorem) for convex sets, there exist (\varphi_1^{\epsilon}, \varphi_2^{\epsilon}, \mathsf{p}^{\epsilon}, \mathsf{q}^{\epsilon}) \in \mathsf{X}^* \times \mathsf{X}^* \times \mathbb{R} \times \mathbb{R} with (\varphi_1^{\epsilon}, \varphi_2^{\epsilon}, \mathsf{p}^{\epsilon}, \mathsf{q}^{\epsilon}) \neq 0 such that for any (a, Z, \widehat{Z}, \gamma) \in \mathcal{A}^{\epsilon} and (\bar{a}, \bar{Z}, \widehat{\bar{Z}}, - \sqrt{\epsilon}(T + |\bar{a}|_{\mathsf{X}})) \in \mathcal{B}^{\epsilon} ,
\begin{align} & \langle \varphi_1^{\epsilon}, a \rangle + \langle \varphi_2^{\epsilon}, Z \rangle + \mathsf{p}^{\epsilon} \widehat{Z} + \mathsf{q}^{\epsilon} \gamma \leq \langle \varphi_1^{\epsilon}, \bar{a} \rangle + \langle \varphi_2^{\epsilon}, \bar{Z} \rangle + \mathsf{p}^{\epsilon} \widehat{\bar{Z}} - \mathsf{q}^{\epsilon} \sqrt{\epsilon}(T + |\bar{a}|_{\mathsf{X}}). \end{align} | (5.14) |
Notice that (0, 0, 0, - \sqrt{\epsilon}(T + |\bar{a}|_{\mathsf{X}})) \in \mathcal{B}^{\epsilon} , which corresponds to the case of no variations.
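Although the separation argument above takes place in the infinite-dimensional space \mathsf{X} \times \mathsf{X} \times \mathbb{R} \times \mathbb{R} , its geometric content is already visible in \mathbb{R}^2 . The following sketch is purely illustrative and not part of the proof: the convex set (a closed disk), the exterior point, and the functional phi obtained from the metric projection are all our own ad hoc choices.

```python
import numpy as np

# Illustrative separation of a closed convex set (the disk of center c,
# radius r) from an exterior point p: the functional phi is the unit
# normal obtained from the metric projection of p onto the disk.
c, r = np.zeros(2), 1.0          # disk center and radius (arbitrary)
p = np.array([2.0, 0.5])         # a point outside the disk

proj = c + r * (p - c) / np.linalg.norm(p - c)   # projection of p onto the disk
phi = (p - proj) / np.linalg.norm(p - proj)      # separating functional (unit norm)

# sup over the disk of <phi, x> equals <phi, c> + r (attained on the boundary)
sup_disk = phi @ c + r
assert sup_disk <= phi @ p - 1e-12               # strict separation holds here

# spot-check the separating inequality on sampled boundary points
for t in np.linspace(0.0, 2 * np.pi, 100):
    x = c + r * np.array([np.cos(t), np.sin(t)])
    assert phi @ x <= phi @ p + 1e-9
```

Here strict separation is possible because the point lies outside the closed convex set; in the proof above, only (non-strict) separation of \mathcal{A}^{\epsilon} and \mathcal{B}^{\epsilon} is available, since \mathcal{B}^{\epsilon} merely avoids the interior of \mathcal{A}^{\epsilon} .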
Suppose that the left-hand side of (5.14) is strictly positive for some (a, Z, \widehat{Z}, \gamma) \in \mathcal{A}^{\epsilon} , i.e.,
\begin{align*} \langle \varphi_1^{\epsilon}, a \rangle + \langle \varphi_2^{\epsilon}, Z \rangle + \mathsf{p}^{\epsilon} \widehat{Z} + \mathsf{q}^{\epsilon} \gamma > 0 . \end{align*} |
Since (ka, kZ, k\widehat{Z}, k\gamma) \in \mathcal{A}^{\epsilon} for k \geq 1 , it follows that \langle \varphi_1^{\epsilon}, ka \rangle + \langle \varphi_2^{\epsilon}, kZ \rangle + k \mathsf{p}^{\epsilon} \widehat{Z} + k\mathsf{q}^{\epsilon} \gamma \rightarrow \infty as k \rightarrow \infty , which contradicts (5.14). Therefore, for any (a, Z, \widehat{Z}, \gamma) \in \mathcal{A}^{\epsilon} and (\bar{a}, \bar{Z}, \widehat{\bar{Z}}, - \sqrt{\epsilon}(T + |\bar{a}|_{\mathsf{X}})) \in \mathcal{B}^{\epsilon} , (5.14) can be rewritten as
\begin{align} \langle \varphi_1^{\epsilon}, a \rangle + \langle \varphi_2^{\epsilon}, Z \rangle + \mathsf{p}^{\epsilon} \widehat{Z} + \mathsf{q}^{\epsilon} \gamma \leq 0 \leq \langle \varphi_1^{\epsilon}, \bar{a} \rangle + \langle \varphi_2^{\epsilon}, \bar{Z} \rangle + \mathsf{p}^{\epsilon} \widehat{\bar{Z}} - \mathsf{q}^{\epsilon} \sqrt{\epsilon}(T + |\bar{a}|_{\mathsf{X}}). \end{align} | (5.15) |
By (5.15) and (5.12), for a fixed \omega \in \Omega , it follows that
\begin{align*} & \langle \varphi_1^{\epsilon}, a \rangle + \langle \varphi_2^{\epsilon} , Z^{\epsilon}_{\omega}(T) \rangle + \mathsf{p}^{\epsilon} \widehat{Z}^{\epsilon}_{\omega} + \mathsf{q}^{\epsilon} \Psi_{\omega}^{\epsilon}(\lambda_{\epsilon},\eta_{\epsilon},\nu_{\epsilon}) \leq \langle \varphi_1^{\epsilon}, a \rangle + \langle \varphi_2^{\epsilon} , Z^{\epsilon}_{\omega}(T) \rangle + \mathsf{p}^{\epsilon} \widehat{Z}^{\epsilon}_{\omega} - \mathsf{q}^{\epsilon} (\sqrt{\epsilon}(T + |a|_{\mathsf{X}})), \end{align*} |
which leads to \mathsf{q}^{\epsilon} \Psi_{\omega}^{\epsilon}(\lambda_{\epsilon}, \eta_{\epsilon}, \nu_{\epsilon}) \leq - \mathsf{q}^{\epsilon} (\sqrt{\epsilon}(T + |a|_{\mathsf{X}})) . By (5.12), this implies \mathsf{q}^{\epsilon} \leq 0 , and after normalization we may take \mathsf{q}^{\epsilon} \in \{0, -1\} . Suppose \mathsf{q}^{\epsilon} = 0 ; then from (5.15), we have
\begin{align*} \langle \varphi_1^{\epsilon}, a \rangle + \langle \varphi_2^{\epsilon} , Z \rangle + \mathsf{p}^{\epsilon} \widehat{Z} \leq 0,\; \forall (a,Z,\widehat{Z}) \in \mathsf{X} \times \mathsf{X} \times \mathbb{R}. \end{align*} |
This is possible only if \varphi_1^{\epsilon} = \varphi_2^{\epsilon} = 0 \in \mathsf{X}^* and \mathsf{p}^{\epsilon} = 0 , which contradicts the nontriviality of (\varphi_1^{\epsilon}, \varphi_2^{\epsilon}, \mathsf{p}^{\epsilon}, \mathsf{q}^{\epsilon}) . Hence, only \mathsf{q}^{\epsilon} = -1 is possible in (5.15).
From (5.15), it follows that for any \omega \in \Omega and (\bar{a}, \bar{Z}, \widehat{\bar{Z}}, - \sqrt{\epsilon}(T + |\bar{a}|_{\mathsf{X}})) \in \mathcal{B}^{\epsilon} ,
\begin{align*} 0 \leq \langle \varphi_1^{\epsilon}, \bar{a} \rangle + \langle \varphi_2^{\epsilon} , \bar{Z} \rangle + \mathsf{p}^{\epsilon} \widehat{\bar{Z}} + \sqrt{\epsilon}(T + |\bar{a}|_{\mathsf{X}}), \end{align*} |
which implies
\begin{align} 0 \leq \langle \varphi_1^{\epsilon}, a \rangle + \langle \varphi_2^{\epsilon} , Z^{\epsilon}_{\omega}(T) \rangle + \mathsf{p}^{\epsilon} \widehat{Z}^{\epsilon}_{\omega} + \sqrt{\epsilon}(T + |a|_{\mathsf{X}}). \end{align} | (5.16) |
Also, from (5.15) and the definition of \mathcal{A}^{\epsilon} , for any (a, Z, \widehat{Z}) \in \mathsf{X} \times \mathsf{X} \times \mathbb{R} ,
\begin{align} \langle \varphi_1^{\epsilon}, a \rangle + \langle \varphi_2^{\epsilon}, Z \rangle + \mathsf{p}^{\epsilon} \widehat{Z} \leq \lambda_{\epsilon} \widehat{Z} + \eta_{\epsilon} \nabla d_{\mathsf{F}_0}(X_0^{\epsilon};a) + \nu_{\epsilon} \nabla d_{\mathsf{F}_T}(X^{\epsilon}(T);Z). \end{align} | (5.17) |
Thus, with a = Z = 0 \in \mathsf{X} in (5.17), we have \mathsf{p}^{\epsilon} \widehat{Z} \leq \lambda_{\epsilon} \widehat{Z} for all \widehat{Z} \in \mathbb{R} , which leads to \mathsf{p}^{\epsilon} = \lambda_{\epsilon} \geq 0 . Therefore, taking \widehat{Z} = 0 , it follows that for (a, Z) \in \mathsf{X} \times \mathsf{X} , (5.17) becomes
\begin{align} \langle \varphi_1^{\epsilon}, a \rangle + \langle \varphi_2^{\epsilon}, Z \rangle \leq \eta_{\epsilon} \nabla d_{\mathsf{F}_0}(X_0^{\epsilon};a) + \nu_{\epsilon} \nabla d_{\mathsf{F}_T}(X^{\epsilon}(T);Z). \end{align} | (5.18) |
We now consider three different cases of (5.18) to obtain an alternative form of (5.13):
● When \eta_{\epsilon} = \nu_{\epsilon} = 0 , (5.18) holds for any (a, Z) \in \mathsf{X} \times \mathsf{X} . Hence, \varphi_1^{\epsilon} = \varphi_2^{\epsilon} = 0 \in \mathsf{X}^* , so that \lambda_{\epsilon}^2 = |\mathsf{p}^{\epsilon}|^2 = 1 with \lambda_{\epsilon} = \mathsf{p}^{\epsilon} \geq 0 .
● When \eta_{\epsilon} = 0 and \nu_{\epsilon} \neq 0 , we have \varphi_1^{\epsilon} = 0 \in \mathsf{X}^* and (5.18) becomes
\begin{align*} \Bigl \langle \frac{\varphi_2^{\epsilon}}{\nu_{\epsilon}}, Z \Bigr \rangle \leq \nabla d_{\mathsf{F}_T}(X^{\epsilon}(T);Z). \end{align*} |
Then by [10,Proposition 2.1.2], \frac{\varphi_2^{\epsilon}}{\nu_{\epsilon}} \in \partial d_{\mathsf{F}_T}(X^{\epsilon}(T)) , i.e., \frac{\varphi_2^{\epsilon}}{\nu_{\epsilon}} \in \mathsf{X}^* is a generalized gradient of d_{\mathsf{F}_T} at X^{\epsilon}(T) , and |\frac{\varphi_2^{\epsilon}}{\nu_{\epsilon}}|_{\mathsf{X}^*} = 1 , i.e., |\varphi_2^{\epsilon}|_{\mathsf{X}^*} = \nu_{\epsilon} . Similarly, when \eta_{\epsilon} \neq 0 and \nu_{\epsilon} = 0 , we have \varphi_2^{\epsilon} = 0 \in \mathsf{X}^* and \frac{\varphi_1^{\epsilon}}{\eta_{\epsilon}} \in \partial d_{\mathsf{F}_0}(X_0^{\epsilon}) , implying |\frac{\varphi_1^{\epsilon}}{\eta_{\epsilon}}|_{\mathsf{X}^*} = 1 , i.e., |\varphi_1^{\epsilon}|_{\mathsf{X}^*} = \eta_{\epsilon} .
● When \eta_{\epsilon} \neq 0 and \nu_{\epsilon} \neq 0 , by (5.18), we have
\begin{align*} \eta_{\epsilon} \Bigl \langle \frac{\varphi_1^{\epsilon}}{\eta_{\epsilon}}, a \Bigr \rangle + \nu_{\epsilon} \Bigl \langle \frac{\varphi_2^{\epsilon}}{\nu_{\epsilon}}, Z \Bigr \rangle \leq \eta_{\epsilon} \nabla d_{\mathsf{F}_0}(X_0^{\epsilon};a) + \nu_{\epsilon} \nabla d_{\mathsf{F}_T}(X^{\epsilon}(T);Z). \end{align*} |
Analogously, \frac{\varphi_1^{\epsilon}}{\eta_{\epsilon}} \in \partial d_{\mathsf{F}_0}(X_0^{\epsilon}) and \frac{\varphi_2^{\epsilon}}{\nu_{\epsilon}} \in \partial d_{\mathsf{F}_T}(X^{\epsilon}(T)) , which imply |\varphi_1^{\epsilon}|_{\mathsf{X}^*} = \eta_{\epsilon} and |\varphi_2^{\epsilon}|_{\mathsf{X}^*} = \nu_{\epsilon} .
Hence, by (5.16), for any \omega \in \Omega , we have
\begin{align} - \sqrt{\epsilon}(T + |a|_{\mathsf{X}}) \leq \langle \varphi_1^{\epsilon}, a \rangle_{\mathsf{X} \times \mathsf{X}^*} + \langle \varphi_2^{\epsilon} , Z^{\epsilon}_{\omega}(T) \rangle_{\mathsf{X} \times \mathsf{X}^*} + \mathsf{p}^{\epsilon} \widehat{Z}^{\epsilon}_{\omega}, \end{align} | (5.19) |
where in view of all three cases discussed above and (5.13), it follows that the multiplier condition is equivalent to (note that \mathsf{p}^{\epsilon} \geq 0 )
\begin{align} |\mathsf{p}^{\epsilon}|^2 + |\varphi_1^{\epsilon}|^2_{\mathsf{X}^*} + |\varphi_2^{\epsilon}|^2_{\mathsf{X}^*} = 1. \end{align} | (5.20) |
To obtain the transversality condition, note that since \mathsf{F}_0 and \mathsf{F}_{T} are convex, the definition of the subdifferential for convex functions (see [10,page 36]) implies that for any (X^{\prime \prime}, X^\prime) \in \mathsf{X} \times \mathsf{X} ,
\begin{align*} \Bigl \langle \frac{\varphi_1^{\epsilon}}{\eta_{\epsilon}}, X^{\prime \prime} - X_0^{\epsilon} \Bigr \rangle & \leq d_{\mathsf{F}_0}(X^{\prime \prime}) - d_{\mathsf{F}_0}(X_0^{\epsilon}), \\ \Bigl \langle \frac{\varphi_2^{\epsilon}}{\nu_{\epsilon}}, X^\prime - X^{\epsilon}(T) \Bigr \rangle & \leq d_{\mathsf{F}_T}(X^\prime) - d_{\mathsf{F}_T}(X^{\epsilon}(T)). \end{align*} |
Hence, for (X^{\prime \prime}, X^\prime) \in \mathsf{F}_0 \times \mathsf{F}_T , we have (see (F.2) and (F.3))
\begin{align} \frac{d_{\mathsf{F}_0}(X_0^{\epsilon})^2}{ J_{\epsilon}(X_0^{\epsilon};u^{\epsilon}(\cdot))} + \langle \varphi_1^{\epsilon}, X^{\prime \prime} - X_0^{\epsilon} \rangle & \leq 0 , \end{align} | (5.21) |
\begin{align} \frac{d_{\mathsf{F}_T}(X^{\epsilon}(T))^2}{ J_{\epsilon}(X_0^{\epsilon};u^{\epsilon}(\cdot))} + \langle \varphi_2^{\epsilon}, X^\prime - X^{\epsilon}(T) \rangle & \leq 0. \end{align} | (5.22) |
The following lemma provides the variational analysis with respect to \epsilon . The proof is analogous to that of Lemma 5.1.
Lemma 5.2. The following estimates hold:
\begin{align*} \sup\limits_{\omega \in \Omega} \sup\limits_{t \in [0,T]} | Z^{\epsilon}_{\omega}(t) - Z_{\omega}(t) |_{\mathsf{X}} = o(1),\; \; \; \; \; \; \; \sup\limits_{ \omega \in \Omega} | \widehat{Z}^{\epsilon}_{\omega} - \widehat{Z}_{\omega} | = o(1), \end{align*} |
where Z_{\omega} is the solution of the \mathsf{X} -valued variational equation
\begin{align} \begin{cases} ^{ \mathrm{C}}{ \mathrm{D}}{_{0+}^{\alpha}}[Z_{\omega}](t) + A Z_{\omega}(t) = \partial_X \overline{f}(t) Z_{\omega}(t) + \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(t),\; t \in (0,T],\\ Z_{\omega}(0) = a \in \mathsf{X}, \end{cases} \end{align} | (5.23) |
and \widehat{Z}_{\omega} satisfies the \mathbb{R} -valued variational equation:
\begin{align} \widehat{Z}_{\omega} & = \sum\limits_{j = 1}^k \theta_j \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \Bigl ( \langle \partial_X \overline{l}(s), Z_{\omega}(s) \rangle + \widehat{l}_j(s) \Bigr ) {\rm{d}}s + \langle \partial_{X_0} \overline{m}, a \rangle + \langle \partial_{X} \overline{m}, Z_{\omega}(T) \rangle. \end{align} | (5.24) |
From (5.21), (5.22) and the fact that | \varphi_1^{\epsilon}|_{\mathsf{X}^*} \leq 1 and | \varphi_2^{\epsilon}|_{\mathsf{X}^*} \leq 1 , for (X^{\prime \prime}, X^\prime) \in \mathsf{F}_0 \times \mathsf{F}_T , as \epsilon \downarrow 0 ,
\begin{align} \begin{cases} \langle \varphi_1^{\epsilon}, X^{\prime \prime} - \overline{X}_0 \rangle = \langle \varphi_1^{\epsilon}, X^{\prime \prime} - X_0^{\epsilon} \rangle + \langle \varphi_1^{\epsilon}, X_0^{\epsilon} - \overline{X}_0 \rangle \leq | \varphi_1^{\epsilon}|_{\mathsf{X}^*} |X_0^{\epsilon} - \overline{X}_0|_{\mathsf{X}} \rightarrow 0, \\ \langle \varphi_2^{\epsilon}, X^\prime - \overline{X}(T) \rangle \leq | \varphi_2^{\epsilon}|_{\mathsf{X}^*} |X^{\epsilon}(T) - \overline{X}(T) |_{\mathsf{X}} \rightarrow 0. \end{cases} \end{align} | (5.25) |
By Lemma 5.2, for any \omega \in \Omega , (5.19) yields
\begin{align*} & \langle \varphi_1^{\epsilon},a \rangle + \langle \varphi_2^{\epsilon},Z_{\omega}(T) \rangle + \mathsf{p}^{\epsilon} \widehat{Z}_{\omega} \\ & = \langle \varphi_1^{\epsilon},a \rangle + \langle \varphi_2^{\epsilon},Z_{\omega}^{\epsilon}(T) \rangle + \mathsf{p}^{\epsilon} \widehat{Z}_{\omega}^{\epsilon} - \langle \varphi_2^{\epsilon}, Z_{\omega}^{\epsilon}(T) - Z_{\omega}(T) \rangle - \mathsf{p}^{\epsilon} (\widehat{Z}_{\omega}^{\epsilon} - \widehat{Z}_{\omega}) \\ & \geq - \sqrt{\epsilon}(T + |a|_{\mathsf{X}}) - |\varphi_2^{\epsilon}|_{\mathsf{X}^*} |Z_{\omega}^{\epsilon}(T) - Z_{\omega}(T)|_{\mathsf{X}} - |\mathsf{p}^{\epsilon}| |\widehat{Z}_{\omega}^{\epsilon} - \widehat{Z}_{\omega}| \\ & \geq - \sqrt{\epsilon}(T + |a|_{\mathsf{X}}) - |Z_{\omega}^{\epsilon}(T) - Z_{\omega}(T)|_{\mathsf{X}} - |\widehat{Z}_{\omega}^{\epsilon} - \widehat{Z}_{\omega}| = : c_{\epsilon}. \end{align*} |
Note that by Lemma 5.2, for any \omega \in \Omega , we have \lim_{\epsilon \downarrow 0} c_{\epsilon} = 0 . Then using (5.25), it follows that for any \omega \in \Omega and (X^{\prime \prime}, X^\prime) \in \mathsf{F}_0 \times \mathsf{F}_T ,
\begin{align} \langle \varphi_1^{\epsilon},a - (X^{\prime \prime} - \overline{X}_0) \rangle_{\mathsf{X} \times \mathsf{X}^*} + \langle \varphi_2^{\epsilon},Z_{\omega}(T) - (X^\prime - \overline{X}(T) ) \rangle_{\mathsf{X} \times \mathsf{X}^*} + \mathsf{p}^{\epsilon} \widehat{Z}_{\omega} \geq c_{\epsilon} \rightarrow 0\; \text{as } \epsilon \downarrow 0 . \end{align} | (5.26) |
Recall from Lemma B.4 in Appendix 6 that the variational equation in (5.23) can be written as
\begin{align} Z_{\omega}(t) & = a + \int_0^t \Pi(t,s)\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) a {\rm{d}}s + \sum\limits_{j = 1}^k \theta_j \int_0^t \Pi(t,s) \widehat{f}_j(s) {\rm{d}}s,\; t \in [0,T]. \end{align} | (5.27) |
In view of (5.27) and Lemma B.4, \mathcal{R} and \mathcal{Q} in (3.1) and (3.2) can be rewritten as
\begin{align} & \mathcal{R} = \Biggl \{ \xi(T) \in \mathsf{X}\; |\; \xi(t) = \sum\limits_{j = 1}^k \theta_j \int_0^t \Pi(t,s) \widehat{f}_j(s) {\rm{d}}s,\; t \in [0,T],\; \Bigl ( \{u_j(\cdot)\}_{j = 1}^{k}, \{\theta_j\}_{j = 1}^{k} \Bigr ) \in \Omega \Biggr \} , \end{align} | (5.28) |
\begin{align} &\mathcal{Q} = \Biggl \{ \hat{X} - \hat{X}_0 - \int_0^T \Pi(T,s)\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) \hat{X}_0 {\rm{d}}s \in \mathsf{X}\; |\; (\hat{X}_0,\hat{X}) \in \mathsf{F}_0 \times \mathsf{F}_T \Biggr \}. \end{align} | (5.29) |
Let us define
\begin{align*} \widehat{\mathcal{R}} : = \Biggl \{(\hat{X}_0,\hat{X}) \in \mathsf{X} \times \mathsf{X}\; |\; \hat{X} = \hat{X}_0 + \int_0^T \Pi(T,s)\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) \hat{X}_0 {\rm{d}}s + \xi,\; \xi \in \mathcal{R},\; \hat{X}_0 \in \mathsf{X} \Biggr \}. \end{align*} |
Since \mathcal{R}-\mathcal{Q} is finite codimensional in \mathsf{X} , by [26,Proposition 3.5,Chapter 4], it follows that \widehat{\mathcal{R}} - \mathsf{F} (note that \mathsf{F} = \mathsf{F}_0 \times \mathsf{F}_T ) is finite codimensional in \mathsf{X} \times \mathsf{X} . Then by [26,Proposition 3.4,Chapter 4], \widehat{\mathcal{R}} - \mathsf{F} + \begin{bmatrix}\overline{X}_0 \\ \overline{X}(T) \end{bmatrix} is also finite codimensional in \mathsf{X} \times \mathsf{X} . Hence, (5.26) can be written as follows: for any \omega \in \Omega ,
\begin{align*} \Bigl \langle \begin{bmatrix} \varphi_1^{\epsilon} \\ \varphi_2^{\epsilon} \end{bmatrix}, \begin{bmatrix} X^{\prime \prime} \\ X^\prime \end{bmatrix} \Bigr \rangle + \mathsf{p}^{\epsilon} \widehat{Z}_{\omega} \geq c_{\epsilon},\; \forall \begin{bmatrix} X^{\prime \prime} \\ X^\prime \end{bmatrix} \in \widehat{\mathcal{R}} - \mathsf{F} + \begin{bmatrix}\overline{X}_0 \\ \overline{X}(T) \end{bmatrix}. \end{align*} |
Let (\mathsf{p}^{\epsilon_k}, \varphi_1^{\epsilon_k}, \varphi_2^{\epsilon_k}) \in \mathbb{R} \times \mathsf{X}^* \times \mathsf{X}^* be the sequence satisfying (5.20) for k \geq 1 , where \epsilon_k \downarrow 0 as k \rightarrow \infty . Then by the Banach-Alaoglu theorem and (5.20), we can extract a subsequence, still denoted by (\mathsf{p}^{\epsilon_k}, \varphi_1^{\epsilon_k}, \varphi_2^{\epsilon_k}) \in \mathbb{R} \times \mathsf{X}^* \times \mathsf{X}^* , such that (\mathsf{p}^{\epsilon_k}, \varphi_1^{\epsilon_k}, \varphi_2^{\epsilon_k}) \rightarrow (\mathsf{p}, \varphi_1, \varphi_2) \neq 0 . Notice that \mathsf{p} \geq 0 and the convergence of (\varphi_1^{\epsilon_k}, \varphi_2^{\epsilon_k}) to (\varphi_1, \varphi_2) as k \rightarrow \infty is understood in the weak-* sense. Here, the nontriviality \mathbb{R} \times \mathsf{X}^* \times \mathsf{X}^* \ni (\mathsf{p}, \varphi_1, \varphi_2) \neq 0 follows from [26,Lemma 3.6,Chapter 4] and the fact that \widehat{\mathcal{R}} - \mathsf{F} is finite codimensional in \mathsf{X} \times \mathsf{X} . In particular, when \mathsf{p} \neq 0 (i.e., \mathsf{p} > 0 ), we are done. Otherwise, if \mathsf{p}^{\epsilon_k} \rightarrow 0 as k \rightarrow \infty , then |\varphi_1^{\epsilon_k}|_{\mathsf{X}^*} + |\varphi_2^{\epsilon_k}|_{\mathsf{X}^*} > 0 for sufficiently large k , which implies the nontriviality \mathsf{X}^* \times \mathsf{X}^* \ni (\varphi_1, \varphi_2) \neq 0 [26,Lemma 3.6,Chapter 4].
Hence, as \epsilon \downarrow 0 , (5.26) can be written as follows: for any \omega \in \Omega and (X^{\prime \prime}, X^\prime) \in \mathsf{F}_0 \times \mathsf{F}_T ,
\begin{align} \langle \varphi_1,a - (X^{\prime \prime} - \overline{X}_0) \rangle + \langle \varphi_2,Z_{\omega}(T) - (X^\prime - \overline{X}(T) ) \rangle + \mathsf{p} \widehat{Z}_{\omega} \geq 0, \end{align} | (5.30) |
where (\mathsf{p}, \varphi_1, \varphi_2) \in \mathbb{R} \times \mathsf{X}^* \times \mathsf{X}^* with (\mathsf{p}, \varphi_1, \varphi_2) \neq 0 , \mathsf{p} \geq 0 , and |\mathsf{p}|^2 + |\varphi_1|_{\mathsf{X}^*}^2 + |\varphi_2|_{\mathsf{X}^*}^2 \leq 1 from (5.20). Notice that by (5.25), it follows that for any (X^{\prime \prime}, X^{\prime}) \in \mathsf{F}_0 \times \mathsf{F}_T ,
\begin{align*} \langle \varphi_1, X^{\prime \prime} - \overline{X}_0 \rangle \leq 0,\; \langle \varphi_2, X^{\prime} - \overline{X}(T) \rangle \leq 0. \end{align*} |
Then by the definition of the subdifferential of convex functions, it follows that
(\varphi_1,\varphi_2) \in N_{\mathsf{F}_0}(\overline{X}_0) \times N_{\mathsf{F}_T}(\overline{X}(T)), |
see [10,Proposition 2.4.4]. This proves the nontriviality condition of Theorem 3.1.
Let \mathsf{p}^{0} : = - \mathsf{p} \leq 0 . The proof of (ⅱ) is given in Lemma B.5 in Appendix 6, from which we recall that the adjoint equation in (3.3) can be written as
\begin{align} P(t) & = \Pi(T,t)^* \Bigl ( -\varphi_2 + \mathsf{p}^{0} \partial_X \overline{m} \Bigr ) + \mathsf{p}^{0} \int_t^{T} \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \Pi(s,t)^* \partial_X \overline{l}(s) {\rm{d}}s. \end{align} | (5.31) |
Then for any \omega \in \Omega and (X^{\prime \prime}, X^\prime) \in \mathsf{F}_0 \times \mathsf{F}_T , (5.30) becomes
\begin{align} \langle \varphi_1, (X^{\prime \prime} - \overline{X}_0) \rangle + \langle \varphi_1, -a \rangle - \langle -\varphi_2, (X^\prime - \overline{X}(T) ) \rangle - \langle -\varphi_2, - Z_{\omega}(T) \rangle + \mathsf{p}^{0} \widehat{Z}_{\omega} \leq 0. \end{align} | (5.32) |
By (5.27) and \widehat{Z}_{\omega} in (5.24), we have
\begin{align*} & \langle \varphi_1, -a \rangle - \langle -\varphi_2, - Z_{\omega}(T) \rangle + \mathsf{p}^{0} \widehat{Z}_{\omega} \\ & = \langle -\varphi_1 + \mathsf{p}^{0} \partial_{X_0} \overline{m} , a \rangle + \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \sum\limits_{j = 1}^k \theta_j \widehat{l}_j(s) {\rm{d}}s \\ & + \langle - \varphi_2 + \mathsf{p}^{0} \partial_X \overline{m}, a \rangle + \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \langle \partial_X \overline{l}(s), a \rangle {\rm{d}}s \\ & + \Bigl \langle - \varphi_2 + \mathsf{p}^{0} \partial_X \overline{m}, \int_0^T \Pi(T,s)\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) a {\rm{d}}s \Bigr \rangle \\ & + \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \Bigl \langle \partial_X \overline{l}(s), \int_0^s \Pi(s,\tau) \Bigl (-A + \partial_X \overline{f}(\tau) \Bigr )a {\rm{d}}\tau \Bigr \rangle {\rm{d}}s \\ & + \Bigl \langle - \varphi_2 + \mathsf{p}^{0} \partial_X \overline{m}, \sum\limits_{j = 1}^k \theta_j \int_0^T \Pi(T,s) \widehat{f}_j(s) {\rm{d}}s \Bigr \rangle \\ & + \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \Bigl \langle \partial_X \overline{l}(s), \int_0^s \Pi(s,\tau) \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(\tau) {\rm{d}}\tau \Bigr \rangle {\rm{d}}s. \end{align*} |
By Fubini's theorem ([6,Theorem 3.4.4]),
\begin{align*} & \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \Bigl \langle \partial_X \overline{l}(s), \int_0^s \Pi(s,\tau) \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(\tau) {\rm{d}}\tau \Bigr \rangle {\rm{d}}s \\ & = \int_0^T \Bigl \langle \int_s^T \frac{(T-\tau)^{\beta-1}}{\Gamma(\beta)} \Pi(\tau,s)^* \partial_X \overline{l}(\tau) {\rm{d}}\tau, \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(s) \Bigr \rangle {\rm{d}}s. \end{align*} |
Then using (5.31) and Lemma B.3 in Appendix 6, we have
\begin{align*} & \Bigl \langle - \varphi_2 + \mathsf{p}^{0} \partial_X \overline{m}, \sum\limits_{j = 1}^k \theta_j \int_0^T \Pi(T,s) \widehat{f}_j(s) {\rm{d}}s \Bigr \rangle \\ &\; \; \; + \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \Bigl \langle \partial_X \overline{l}(s), \int_0^s \Pi(s,\tau) \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(\tau) {\rm{d}}\tau \Bigr \rangle {\rm{d}}s \\ & = \int_0^T \Bigl \langle \Pi(T,s)^*\Bigl (- \varphi_2 + \mathsf{p}^{0} \partial_X \overline{m} \Bigr) \\ &\; \; \; + \mathsf{p}^{0} \int_s^T \frac{(T-\tau)^{\beta-1}}{\Gamma(\beta)} \Pi(\tau,s)^* \partial_X \overline{l}(\tau) {\rm{d}}\tau, \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(s) \Bigr \rangle {\rm{d}}s \\ & = \int_0^T \Bigl \langle P(s), \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(s) \Bigr \rangle_{\mathsf{X} \times \mathsf{X}^*} {\rm{d}}s, \end{align*} |
and similarly
\begin{align*} & \Bigl \langle - \varphi_2 + \mathsf{p}^{0} \partial_X \overline{m}, \int_0^T \Pi(T,s)\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) a {\rm{d}}s \Bigr \rangle \\ &\; \; \; + \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \Bigl \langle \partial_X \overline{l}(s), \int_0^s \Pi(s,\tau) \Bigl (-A + \partial_X \overline{f}(\tau) \Bigr )a {\rm{d}}\tau \Bigr \rangle {\rm{d}}s \\ & = \int_0^T \Bigl \langle \Pi(T,s)^* \Bigl ( - \varphi_2 + \mathsf{p}^{0} \partial_X \overline{m} \Bigr ) + \mathsf{p}^{0} \int_s^T \frac{(T-\tau)^{\beta-1}}{\Gamma(\beta)} \Pi(\tau,s)^* \partial_X \overline{l}(\tau) {\rm{d}}\tau, \Bigl (-A + \partial_X \overline{f}(s) \Bigr ) a \Bigr \rangle {\rm{d}}s \\ & = \int_0^T \Bigl \langle P(s),\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) a \Bigr \rangle_{\mathsf{X} \times \mathsf{X}^*} {\rm{d}}s. \end{align*} |
Using the preceding analysis, (5.32) can be written in the following duality form:
\begin{align} & \langle \varphi_1, (X^{\prime \prime} - \overline{X}_0) \rangle + \langle \varphi_1, -a \rangle - \langle -\varphi_2, (X^\prime - \overline{X}(T) ) \rangle - \langle -\varphi_2, - Z_{\omega}(T) \rangle + \mathsf{p}^{0} \widehat{Z}_{\omega} \\ & = \langle \varphi_1, (X^{\prime \prime} - \overline{X}_0) \rangle - \langle -\varphi_2, (X^\prime - \overline{X}(T) ) \rangle + \langle -\varphi_1 + \mathsf{p}^{0} \partial_{X_0} \overline{m} - \varphi_2 + \mathsf{p}^{0} \partial_X \overline{m}, a \rangle \\ &\; \; \; + \int_0^T \Bigl \langle P(s),\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) a \Bigr \rangle {\rm{d}}s + \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \langle \partial_X \overline{l}(s), a \rangle {\rm{d}}s \\ &\; \; \; + \int_0^T \Bigl \langle P(s), \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(s) \Bigr \rangle {\rm{d}}s + \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \sum\limits_{j = 1}^k \theta_j \widehat{l}_j(s) {\rm{d}}s \leq 0. \end{align} | (5.33) |
We use (5.33) to derive the transversality condition as well as the Hamiltonian maximization condition in Theorem 3.1.
When u = \overline{u} , (5.33) becomes
\begin{align} &\langle \varphi_1, (X^{\prime \prime} - \overline{X}_0) \rangle - \langle -\varphi_2, (X^\prime - \overline{X}(T) ) \rangle + \langle -\varphi_1 + \mathsf{p}^{0} \partial_{X_0} \overline{m} - \varphi_2 + \mathsf{p}^{0} \partial_X \overline{m} , a \rangle \\ &\; \; \; + \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \langle \partial_X \overline{l}(s), a \rangle {\rm{d}}s + \int_0^T \Bigl \langle P(s),\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) a \Bigr \rangle {\rm{d}}s \leq 0. \end{align} | (5.34) |
Let us take
\begin{align} \begin{cases} \mathrm{I}_{T-}^{1-\alpha}[P](T) = -\varphi_2 + \mathsf{p}^{0} \partial_X \overline{m}, \\ \mathrm{I}_{T-}^{1-\alpha}[P](0) = -(-\varphi_1 + \mathsf{p}^{0} \partial_{X_0} \overline{m}). \end{cases} \end{align} | (5.35) |
Using the adjoint equation in (3.3),
\begin{align*} & \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \langle \partial_X \overline{l}(s), a \rangle {\rm{d}}s + \int_0^T \Bigl \langle P(s),\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) a \Bigr \rangle {\rm{d}}s \\ & = \int_0^T \Bigl \langle ^{RL}{ \mathrm{D}}{_{T-}^{\alpha}}[P](s), a \Bigr \rangle {\rm{d}}s = \int_0^T \Bigl \langle - \frac{{\rm{d}}}{{\rm{d}}s} \Bigl [ \mathrm{I}_{T-}^{1-\alpha}[P] \Bigr ](s),a \Bigr \rangle {\rm{d}}s \\ & = \Bigl \langle \mathrm{I}_{T-}^{1-\alpha}[P] (0) - \mathrm{I}_{T-}^{1-\alpha}[P](T), a \Bigr \rangle = \Bigl \langle \Bigl ( -(-\varphi_1 + \mathsf{p}^{0} \partial_{X_0} \overline{m}) \Bigr) - \Bigl ( -\varphi_2 + \mathsf{p}^{0} \partial_X \overline{m} \Bigr ), a \Bigr \rangle. \end{align*} |
Then (5.34) becomes for (X^{\prime \prime}, X^{\prime}) \in \mathsf{F}_0 \times \mathsf{F}_T ,
\begin{align} &\langle \mathrm{I}_{T-}^{1-\alpha}[P](0) - \mathsf{p}^{0} \partial_{X_0} \overline{m}, (X^{\prime \prime} - \overline{X}_0) \rangle - \langle \mathrm{I}_{T-}^{1-\alpha}[P](T) - \mathsf{p}^{0} \partial_X \overline{m}, (X^\prime - \overline{X}(T) ) \rangle \leq 0. \end{align} | (5.36) |
Therefore, (5.35) and (5.36) prove the transversality condition in Theorem 3.1.
We now prove the Hamiltonian maximization condition in Theorem 3.1. Notice that when (X^{\prime \prime}, X^\prime) = (\overline{X}_0, \overline{X}(T)) and a = 0 , (5.33) becomes
\begin{align*} \int_0^T \Bigl \langle P(s), \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(s) \Bigr \rangle {\rm{d}}s + \mathsf{p}^{0} \int_0^T \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \sum\limits_{j = 1}^k \theta_j \widehat{l}_j(s) {\rm{d}}s \leq 0. \end{align*} |
Take k = 1 . The definition of the Hamiltonian in Theorem 3.1 implies
\begin{align*} \int_0^T H(s,\overline{X}(s),P(s), \mathsf{p}^{0},u(s)) {\rm{d}}s \leq \int_0^T H(s,\overline{X}(s),P(s), \mathsf{p}^{0}, \overline{u}(s)) {\rm{d}}s. \end{align*} |
Since \mathsf{U} is separable, there exists a countable dense set \mathsf{U}_0 = \{u_i, \; i \geq 1\} \subset \mathsf{U} . Moreover, there exists a measurable set S_i \subset [0, T] with |S_i| = T such that any t \in S_i is a Lebesgue point of s \mapsto H(s, \overline{X}(s), P(s), \mathsf{p}^{0}, u(s)) , due to the fact that H(\cdot, X, P, \mathsf{p}^{0}, u) \in L^1([0, T]; \mathbb{R}) ; that is, we have \lim_{\tau \downarrow 0} \frac{1}{2\tau} \int_{t-\tau}^{t+\tau} H(s, \overline{X}(s), P(s), \mathsf{p}^{0}, u(s)) {\rm{d}}s = H(t, \overline{X}(t), P(t), \mathsf{p}^{0}, u(t)) [6,Theorem 5.6.2]. We fix u_i \in \mathsf{U}_0 . For any t \in S_i , define
\begin{align*} u(s) : = \begin{cases} \overline{u}(s), & s \in [0,T] \setminus (t-\tau,t+\tau),\\ u_i, & s \in (t-\tau,t+\tau). \end{cases} \end{align*} |
It then follows that
\begin{align*} & \lim\limits_{\tau \downarrow 0} \frac{1}{2\tau} \int_{t - \tau}^{t + \tau} H(s,\overline{X}(s),P(s), \mathsf{p}^{0},u(s)) {\rm{d}}s \\ &\leq \lim\limits_{\tau \downarrow 0} \frac{1}{2\tau} \int_{t - \tau}^{t + \tau} H(s,\overline{X}(s),P(s), \mathsf{p}^{0},\overline{u}(s)) {\rm{d}}s \\ & \Rightarrow \; H(t,\overline{X}(t),P(t), \mathsf{p}^{0},u(t)) \leq H(t,\overline{X}(t),P(t), \mathsf{p}^{0},\overline{u}(t)). \end{align*} |
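The passage from the integral inequality to the pointwise one rests on the Lebesgue differentiation step above. The following is a purely numerical illustration of that step (the integrand H below is an arbitrary smooth stand-in, not the Hamiltonian of Theorem 3.1): the shrinking window averages converge to the pointwise value.

```python
import math

# For a continuous integrand H, the average over (t - tau, t + tau)
# converges to H(t) as tau -> 0; every t is then a Lebesgue point.
def H(s):
    # arbitrary illustrative integrand
    return math.sin(3.0 * s) + 0.5 * s

def window_average(f, t, tau, n=2000):
    # midpoint rule for (1 / (2 tau)) * int_{t-tau}^{t+tau} f(s) ds
    h = 2.0 * tau / n
    return sum(f(t - tau + (i + 0.5) * h) for i in range(n)) * h / (2.0 * tau)

t = 0.7
errs = [abs(window_average(H, t, tau) - H(t)) for tau in (0.1, 0.01, 0.001)]
assert errs[0] > errs[1] > errs[2]   # averages approach H(t) as tau shrinks
assert errs[-1] < 1e-4
```

For a merely integrable H, the same convergence holds at almost every t, which is exactly what the sets S_i above encode.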
Since \cap_{i \geq 1} S_i has full Lebesgue measure in [0, T] , H is continuous in u \in \mathsf{U} , and \mathsf{U} is separable, it follows that
\begin{align*} \max\limits_{u \in \mathsf{U}} H(t,\overline{X}(t),P(t), \mathsf{p}^{0},u) = H(t,\overline{X}(t),P(t), \mathsf{p}^{0},\overline{u}(t)),\; \mathrm{a.e.}\; t \in [0,T], \end{align*} |
which shows the Hamiltonian maximization condition. This completes the proof of Theorem 3.1.
We conclude by stating several potential problems for future research:
● One important future research problem is to study the finite codimensionality in Assumption 3 for fractional evolution equations. This should require techniques different from those for the nonfractional case studied in the earlier works (see Remark 3.2 and [5,14,15,21,26,27,29,30,35,39,40,45,46]). Indeed, it is important to study the equivalence between finite codimensionality and finite codimensional exact controllability for fractional evolution equations, as in the nonfractional case in [30,Theorem 3.2] and [29,Corollary 4.1].
● We may consider the applicability of the derived maximum principle to specific fractional evolution equations arising in different scientific and engineering applications, including various classes of fractional PDEs and differential equations with delay.
● It would be interesting to consider more general endpoint state constraints, such as time-dependent or nonlinear constraints, and to investigate the stability and sensitivity analysis of the fractional optimal control problem in the presence of uncertainties or disturbances.
● Finally, we plan to study the control problem in a distributed control framework and analyze the optimal control of fractional evolution equations on networks or graphs.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research was supported in part by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea (RS-2023-00235742), in part by the National Research Foundation of Korea (NRF) Grant funded by the Ministry of Science and ICT, South Korea (NRF-2021R1A2C2094350) and in part by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01373, Artificial Intelligence Graduate School Program (Hanyang University)).
The authors would like to thank four anonymous reviewers for their careful reading and suggestions, which have helped to improve the paper. The authors particularly express thanks to the reviewer who pointed out the conditions in Theorem 2.1 and the important issue of the finite codimensionality in Section 4 of the earlier version of the manuscript.
All authors declare no conflicts of interest in this paper.
Lemma A.1. [24,Lemma 2.3] and [3,page 10] For any f(\cdot) \in L^1([0, T]; \mathsf{X}) and \alpha, \beta > 0 , it holds that
\mathrm{I}_{0+}^{\alpha} [ \mathrm{I}_{0+}^{\beta}[f] ](\cdot) = \mathrm{I}_{0+}^{\alpha+\beta}[f](\cdot) = \mathrm{I}_{0+}^{\beta + \alpha}[f](\cdot) = \mathrm{I}_{0+}^{\beta} [ \mathrm{I}_{0+}^{\alpha}[f] ](\cdot) |
and
\mathrm{I}_{T-}^{\alpha} [ \mathrm{I}_{T-}^{\beta}[f] ](\cdot) = \mathrm{I}_{T-}^{\alpha+\beta}[f](\cdot) = \mathrm{I}_{T-}^{\beta + \alpha}[f](\cdot) = \mathrm{I}_{T-}^{\beta} [ \mathrm{I}_{T-}^{\alpha}[f] ](\cdot). |
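The semigroup property of Lemma A.1 can be sanity-checked on monomials, for which the left RL fractional integral has the closed form \mathrm{I}_{0+}^{\alpha}[s^{\mu}](t) = \frac{\Gamma(\mu+1)}{\Gamma(\mu+1+\alpha)} t^{\mu+\alpha} . The following sketch (the orders \alpha, \beta , exponent \mu , and evaluation point are arbitrary choices made here, not data from the paper) verifies \mathrm{I}_{0+}^{\alpha}[ \mathrm{I}_{0+}^{\beta}[f] ] = \mathrm{I}_{0+}^{\alpha+\beta}[f] for such an f :

```python
from math import gamma

# Closed form of the left RL fractional integral on monomials:
#   I_{0+}^{alpha}[s^mu](t) = Gamma(mu+1)/Gamma(mu+1+alpha) * t^(mu+alpha)
def frac_int_monomial(alpha, mu, t):
    return gamma(mu + 1.0) / gamma(mu + 1.0 + alpha) * t ** (mu + alpha)

alpha, beta, mu, t = 0.6, 0.7, 2.0, 1.5   # arbitrary test parameters

# I^beta maps s^mu to a constant times s^(mu+beta); composing the closed
# forms gives I^alpha[I^beta[s^mu]](t).
lhs = gamma(mu + 1.0) / gamma(mu + 1.0 + beta) * frac_int_monomial(alpha, mu + beta, t)
rhs = frac_int_monomial(alpha + beta, mu, t)   # I^(alpha+beta)[s^mu](t)
assert abs(lhs - rhs) < 1e-12 * abs(rhs)       # semigroup property
```

The cancellation of the intermediate Gamma factors mirrors the Beta-function identity underlying the proof of the semigroup property.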
Lemma A.2. [43,Theorem 1] Let z(\cdot) \in L^1([0, T]; \mathbb{R}) and b (\cdot) \in L^1([0, T]; \mathbb{R}) be nonnegative functions on [0, T] , and h(\cdot) \in C([0, T]; \mathbb{R}) be nonnegative and nondecreasing. Assume that
z(t) \leq b(t) + h(t) \int_{0}^{t} (t-s)^{\alpha-1} z(s) {\rm{d}}s \quad \text{for } t \in [0,T].
Then it holds that z(t) \leq b(t) + \int_{0}^{t} \sum_{k = 1}^{\infty} \frac{(h(t)\Gamma(\alpha))^{k}}{\Gamma (k \alpha)} (t-s)^{k\alpha-1} b(s) {\rm{d}}s for t \in [0, T] . In addition, when b is nondecreasing, we have z(t) \leq b(t) E_{\alpha} [ h(t) \Gamma(\alpha) t^{\alpha} ] , where E_{\alpha}(t) : = \sum_{k = 0}^{\infty} \frac{t^k}{\Gamma(\alpha k +1)} is the Mittag-Leffler function.
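The Mittag-Leffler function appearing in the bound of Lemma A.2 can be evaluated by truncating its defining series. The sketch below (the truncation order is an ad hoc choice) checks the reduction E_1(t) = e^t and the termwise comparison E_{\alpha}(t) \geq e^t for 0 < \alpha < 1 and t > 0 , which follows from \Gamma(\alpha k + 1) \leq \Gamma(k+1) :

```python
from math import gamma, exp, isclose

# Truncated defining series of the Mittag-Leffler function:
#   E_alpha(t) = sum_{k >= 0} t^k / Gamma(alpha*k + 1)
def mittag_leffler(alpha, t, terms=80):
    return sum(t ** k / gamma(alpha * k + 1.0) for k in range(terms))

# For alpha = 1 the series reduces to the exponential
assert isclose(mittag_leffler(1.0, 2.0), exp(2.0), rel_tol=1e-10)

# For 0 < alpha < 1 each term dominates the corresponding term of e^t,
# so E_alpha(t) >= e^t for t > 0
assert mittag_leffler(0.5, 1.0) > exp(1.0)
```

This growth beyond the exponential is consistent with the fractional Gronwall bound z(t) \leq b(t) E_{\alpha}[h(t)\Gamma(\alpha) t^{\alpha}] reducing to the classical Gronwall estimate when \alpha = 1 .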
Proof of Theorem 2.1. The existence and uniqueness of the (mild) solution to (2.1) under (ⅴ) of Assumption 1 follows from [48,Theorem 3.3] (see also [41,Theorem 3.1]), in which the analyticity of -A , the Laplace transformation, and the contraction mapping theorem are key ingredients of the proof. The integral expression in (2.2) can be obtained from Definition 1 and the semigroup property of fractional integrals in Lemma A.1. The last two estimates can be shown using Assumption 1 and Lemma A.2. In particular, by Assumption 1 (including the compactness of (\mathscr{T}(t))_{t \geq 0} ), we can show that
\begin{align*} |X(t) - X^\prime(t)|_{\mathsf{X}} & \leq M |X_0 - X_0^\prime|_{\mathsf{X}} + M \int_{0}^{t} (t-s)^{\alpha-1} |X(s) - X^\prime(s)|_{\mathsf{X}} {\rm{d}}s. \end{align*} |
Then we apply Lemma A.2 to get the first estimate. A similar approach can be applied to show the second one. This completes the proof.
This appendix provides the representation results on linear fractional evolution equations. We provide the complete proofs, which are omitted in [33].
Lemma B.3. Let \Pi : [0, T] \times [0, T] \rightarrow \mathcal{L}(\mathsf{X}) be the left RL fractional state-transition evolution operator defined by
\begin{align} \Pi(t,s)x & = \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \mathrm{I} x + \int_{s}^{t} \frac{(t-r)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( -A + \partial_X \overline{f}(r) \Bigr ) \Pi(r,s) x {\rm{d}}r. \end{align} (B.1)
Then (B.1) can be written as the following right RL fractional state-transition evolution operator form:
\begin{align} \Pi(t,s)x & = \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \mathrm{I}x + \int_{s}^{t} \frac{(r-s)^{\alpha-1}}{\Gamma(\alpha)} \Pi(t,r) \Bigl ( -A + \partial_X \overline{f}(r) \Bigr ) x {\rm{d}}r. \end{align} (B.2)
Proof. Motivated by the right-hand side of (B.2), define
\begin{align} \widehat{\Pi}(t,\tau) & : = \frac{(t-\tau)^{\alpha-1}}{\Gamma(\alpha)} \mathrm{I} + \int_{\tau}^{t} \frac{(r-\tau)^{\alpha-1}}{\Gamma(\alpha)} \Pi(t,r) \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) {\rm{d}}r. \end{align} (B.3)
It then suffices to show that (B.1) and (B.3) coincide. To this end, we prove the following identity:
\begin{align} \int_{\tau}^{t} \frac{(t-r)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) \widehat{\Pi}(r,\tau) {\rm{d}}r & = \int_{\tau}^{t} \frac{(r-\tau)^{\alpha-1}}{\Gamma(\alpha)} \Pi(t,r) \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) {\rm{d}}r. \end{align} (B.4)
Clearly, (B.4) holds when t = \tau . Then (B.4) follows by Fubini's formula (see [6,Theorem 3.4.4]), since we can show that
\begin{align*} & \int_{\tau}^{t} \frac{(t-r)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) \widehat{\Pi}(r,\tau) {\rm{d}}r \\ & = \int_{\tau}^{t} \frac{(t-r)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) \frac{(r-\tau)^{\alpha-1}}{\Gamma(\alpha)} \mathrm{I} {\rm{d}}r \\ &\; \; \; + \int_{\tau}^{t} \frac{(t-r)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) \int_{\tau}^{r} \frac{(\nu-\tau)^{\alpha-1}}{\Gamma(\alpha)} \Pi(r,\nu) \Bigl ( - A + \partial_X \overline{f}(\nu) \Bigr ) {\rm{d}}\nu {\rm{d}}r \\ & = \int_{\tau}^{t} \frac{(t-r)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) \frac{(r-\tau)^{\alpha-1}}{\Gamma(\alpha)} \mathrm{I} {\rm{d}}r \\ &\; \; \; + \int_{\tau}^{t} \int_{r}^{t} \frac{(t-\nu)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( - A + \partial_X \overline{f}(\nu) \Bigr ) \Pi(\nu,r) {\rm{d}}\nu \frac{(r-\tau)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) {\rm{d}}r \\ & = \int_{\tau}^{t} \frac{(t-r)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) \frac{(r-\tau)^{\alpha-1}}{\Gamma(\alpha)} \mathrm{I} {\rm{d}}r \\ &\; \; \; + \int_{\tau}^{t} \Bigl ( \Pi(t,r) - \frac{(t-r)^{\alpha-1}}{\Gamma(\alpha)} \mathrm{I} \Bigr ) \frac{(r-\tau)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) {\rm{d}}r \\ & = \int_{\tau}^{t} \frac{(r-\tau)^{\alpha-1}}{\Gamma(\alpha)} \Pi(t,r) \Bigl ( - A + \partial_X \overline{f}(r) \Bigr ) {\rm{d}}r. \end{align*}
This completes the proof.
Remark B.1. Notice that by Definition 1, \mathrm{I}_{s+}^{1-\alpha}[\Pi](s, s) = \mathrm{I} and \mathrm{I}_{t-}^{1-\alpha}[\Pi](t, t) = \mathrm{I} for s, t \in [0, T] with s \leq t . Hence, we can see that (B.1) is the left RL fractional state-transition evolution operator with the initial condition, whereas (B.2) is the right RL fractional state-transition evolution operator with the terminal condition.
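In the scalar, constant-coefficient case (a toy reduction we use purely for illustration, with \mathsf{X} = \mathbb{R} and a constant c standing in for -A + \partial_X \overline{f} ), Picard iteration on (B.1) yields the kernel \Pi(t,s) = (t-s)^{\alpha-1} E_{\alpha,\alpha}(c(t-s)^{\alpha}) , where E_{\alpha,\alpha} is the two-parameter Mittag-Leffler function. The following Python sketch verifies termwise, using the exact fractional-integral power rule, that this series is a fixed point of (B.1); all sample values are our assumptions:

```python
from math import gamma

alpha, c = 0.7, -0.5   # sample order and scalar coefficient (illustrative only)
N = 25

# Series for Pi(t,s): sum_{k>=0} c^k (t-s)^{(k+1)alpha - 1} / Gamma((k+1)alpha),
# stored as (coefficient, exponent) pairs in the variable u = t - s.
series = [(c**k / gamma((k + 1) * alpha), (k + 1) * alpha - 1) for k in range(N)]

def frac_int(terms, a):
    """Termwise left RL fractional integral via the exact power rule:
    I^a[u^m] = Gamma(m+1)/Gamma(m+1+a) * u^(m+a)."""
    return [(co * gamma(m + 1) / gamma(m + 1 + a), m + a) for co, m in terms]

# Right-hand side of (B.1): free term (u^{alpha-1}/Gamma(alpha)) + c * I^alpha[Pi]
rhs = [(1.0 / gamma(alpha), alpha - 1.0)] + [(c * co, m) for co, m in frac_int(series, alpha)]

# Fixed-point property: rhs reproduces the series, up to the truncation tail
for (co1, m1), (co2, m2) in zip(series, rhs):
    assert abs(co1 - co2) < 1e-12 and abs(m1 - m2) < 1e-12
```

In this scalar setting the left form (B.1) and the right form (B.2) trivially agree, since the coefficient commutes with the kernel; Lemma B.3 is precisely the operator-valued generalization of that agreement.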
The next result provides the representation of solutions to linear forward Caputo fractional evolution equations. The finite-dimensional case was reported in [7,19].
Lemma B.4. The (mild) solution to the variational equation in (5.23) is as follows:
\begin{align} Z_{\omega}(t) = a + \int_0^t \Pi(t,s)\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) a {\rm{d}}s + \sum\limits_{j = 1}^k \theta_j \int_0^t \Pi(t,s) \widehat{f}_j(s) {\rm{d}}s,\; t \in [0,T], \end{align} (B.5)
where \Pi is the \mathcal{L}(\mathsf{X}) -valued left RL fractional state-transition evolution operator in (B.1).
Proof. Since (5.23) is a linear fractional evolution equation, the existence and uniqueness of the solution follows from Theorem 2.1 under (v) of Assumption 1.
We now prove (B.5). By Theorem 2.1, it follows that
\begin{align*} Z_{\omega}(t) = a + \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( -A + \partial_X \overline{f}(s) \Bigr ) Z_{\omega}(s) {\rm{d}}s + \sum\limits_{j = 1}^k \theta_j \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \widehat{f}_j(s) {\rm{d}}s. \end{align*}
Hence, to prove (B.5), we have to show the following equivalence:
\begin{align} & \int_0^t \Pi(t,s)\Bigl (-A + \partial_X \overline{f}(s) \Bigr ) a {\rm{d}}s + \sum\limits_{j = 1}^k \theta_j \int_0^t \Pi(t,s) \widehat{f}_j(s) {\rm{d}}s \\ & = \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( -A + \partial_X \overline{f}(s) \Bigr ) Z_{\omega}(s) {\rm{d}}s + \sum\limits_{j = 1}^k \theta_j \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \widehat{f}_j(s) {\rm{d}}s. \end{align} (B.6)
Clearly, it holds when t = 0 . Then by Fubini's formula,
\begin{align*} & \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigl [ \Bigl ( -A + \partial_X \overline{f}(s) \Bigr ) Z_{\omega}(s) + \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(s) \Bigr ] {\rm{d}}s \\ & = \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( -A + \partial_X \overline{f}(s) \Bigr ) a {\rm{d}}s + \sum\limits_{j = 1}^k \theta_j \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \widehat{f}_j(s) {\rm{d}}s \\ &\; \; \; + \int_0^t \int_{s}^t \frac{(t-\tau)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( -A + \partial_X \overline{f}(\tau) \Bigr ) \Pi(\tau,s) {\rm{d}}\tau \Bigl ( -A + \partial_X \overline{f}(s) \Bigr ) a {\rm{d}}s \\ &\; \; \; + \int_0^t \int_{s}^{t} \frac{(t-\tau)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( -A + \partial_X \overline{f}(\tau) \Bigr ) \Pi(\tau,s) {\rm{d}}\tau \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(s) {\rm{d}}s. \end{align*}
Rearranging the right-hand side of the above expression yields
\begin{align*} &\int_0^t \Biggl [ \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \mathrm{I} + \int_{s}^t \frac{(t-\tau)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( -A + \partial_X \overline{f}(\tau) \Bigr ) \Pi(\tau,s) {\rm{d}}\tau \Biggr ] \Bigl ( -A + \partial_X \overline{f}(s) \Bigr ) a {\rm{d}}s \\ &\; \; \; + \int_0^t \Biggl [ \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} \mathrm{I} + \int_{s}^{t} \frac{(t-\tau)^{\alpha-1}}{\Gamma(\alpha)} \Bigl ( -A + \partial_X \overline{f}(\tau) \Bigr ) \Pi(\tau,s) {\rm{d}}\tau \Biggr ] \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(s) {\rm{d}}s \\ & = \int_0^t \Pi(t,s) \Bigl ( -A + \partial_X \overline{f}(s) \Bigr ) a {\rm{d}}s + \int_0^t \Pi(t,s) \sum\limits_{j = 1}^k \theta_j \widehat{f}_j(s) {\rm{d}}s. \end{align*}
Hence, (B.6) holds and the conclusion follows. This completes the proof.
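Lemma B.4 can likewise be sanity-checked in the scalar, constant-coefficient case with no forcing (an illustrative reduction under our own sample data, not the paper's setting): with \Pi(t,s) = (t-s)^{\alpha-1} E_{\alpha,\alpha}(c(t-s)^{\alpha}) , the representation (B.5) should reproduce the classical solution Z(t) = a E_{\alpha}(c t^{\alpha}) . A Python sketch with termwise (exact) integration of the kernel series:

```python
from math import gamma

alpha, c, a0, t = 0.7, -0.8, 2.0, 1.5   # sample data (illustrative only)
N = 60

def ml(al, be, z, terms=N):
    """Two-parameter Mittag-Leffler series E_{al,be}(z)."""
    return sum(z**k / gamma(al * k + be) for k in range(terms))

# int_0^t Pi(t,s) ds, computed termwise on the series for Pi (exact per term):
# int_0^t (t-s)^{(k+1)alpha - 1} ds = t^{(k+1)alpha} / ((k+1)alpha)
integral = sum(c**k * t**((k + 1) * alpha) / gamma((k + 1) * alpha + 1)
               for k in range(N))

lhs = a0 + c * a0 * integral             # representation (B.5), no forcing terms
rhs = a0 * ml(alpha, 1.0, c * t**alpha)  # classical Mittag-Leffler solution
assert abs(lhs - rhs) < 1e-9
```

The agreement reflects exactly the identity 1 + z \sum_{k \geq 0} z^k / \Gamma((k+1)\alpha + 1) = E_{\alpha}(z t^{\alpha}) underlying the proof above.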
The following result shows the backward representation of the adjoint equation, which has not been reported in the existing literature.
Lemma B.5. The (mild) solution to the adjoint equation in (3.3) can be written as follows:
\begin{align} \begin{cases} P(t) = \Pi(T,t)^* P(T) + \mathsf{p}^{0} \int_t^{T} \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \Pi(s,t)^* \partial_X \overline{l}(s) {\rm{d}}s, \; t \in [0,T), \\ \mathrm{I}_{T-}^{1-\alpha}[P](T) = P(T), \end{cases} \end{align} (B.7)
where \Pi is the \mathcal{L}(\mathsf{X}) -valued right RL fractional state-transition evolution operator in (B.2).
Proof. Notice that by the definition of Fréchet differentiation, we have \partial_X \overline{l}(\cdot) \in \mathsf{X}^* = \mathcal{L}(\mathsf{X}, \mathbb{R}) [16,page 166]. Then by the definition of \Pi in (B.2), together with Assumptions 1 and 2, (B.7) is a well-defined RL fractional integral equation. We can easily observe that P is an \mathsf{X}^* -valued function and P(\cdot) \in L^1([0, T]; \mathsf{X}^*) [24,Theorem 3.3,Chapter 3]. Hence, if (B.7) is the integral representation of (3.3), then (ii) of Theorem 3.1 follows.
To verify (B.7), we recover (3.3) from it. Using (B.2) and writing W(s) : = \frac{(T-s)^{\beta-1}}{\Gamma(\beta)} \partial_X \overline{l}(s) , we have
\begin{align*} P(t) & = \frac{(T-t)^{\alpha-1}}{\Gamma(\alpha)} P(T) + \mathsf{p}^{0}\int_t^{T} \frac{(s-t)^{\alpha-1}}{\Gamma(\alpha)} W(s) {\rm{d}}s \\ & \; \; \; + \int_{t}^{T} \frac{(r-t)^{\alpha-1}}{\Gamma(\alpha)} \Bigl (-A^* + \partial_X \overline{f}(r)^* \Bigr ) \Pi(T,r)^* P(T) {\rm{d}}r \\ &\; \; \; + \mathsf{p}^{0} \int_t^{T} \int_t^{s} \frac{(r-t)^{\alpha-1}}{\Gamma(\alpha)} \Bigl (-A^* + \partial_X \overline{f}(r)^* \Bigr ) \Pi(s,r)^* {\rm{d}}r W(s) {\rm{d}}s. \end{align*}
By Fubini's formula,
\begin{align*} & \int_t^{T} \int_t^{s} \frac{(r-t)^{\alpha-1}}{\Gamma(\alpha)} \Bigl (-A^* + \partial_X \overline{f}(r)^* \Bigr ) \Pi(s,r)^*{\rm{d}}r W(s) {\rm{d}}s \\ & = \int_{t}^{T} \frac{(s-t)^{\alpha-1}}{\Gamma(\alpha)} \Bigl (-A^* + \partial_X \overline{f}(s)^* \Bigr ) \int_{s}^{T} \Pi(r,s)^* W(r) {\rm{d}}r {\rm{d}}s. \end{align*}
Hence, from Definitions 1 and 2, it follows that
\begin{align*} P(t) & = \frac{(T-t)^{\alpha-1}}{\Gamma(\alpha)} P(T) + \mathsf{p}^{0} \int_t^{T} \frac{(s-t)^{\alpha-1}}{\Gamma(\alpha)} W(s) {\rm{d}}s + \int_t^T \frac{(s-t)^{\alpha-1}}{\Gamma(\alpha)} (-A^* + \partial_X \overline{f}(s)^* ) P(s) {\rm{d}}s \\ & = ^{RL}{ \mathrm{D}}{_{T-}^{1-\alpha}}[P(T)](t) + \mathrm{I}_{T-}^{\alpha}[- A^* P(\cdot) + \partial_X \overline{f}(\cdot)^* P(\cdot) + \mathsf{p}^{0} W(\cdot)](t). \end{align*}
Definitions 1 and 2 and Lemma A.1 lead to
\begin{align*} - \mathrm{I}_{T-}^{1-\alpha}[P](t) & = - \mathrm{I}_{T-}^{1-\alpha} \Bigl [ - \frac{{\rm{d}}}{{\rm{d}}t} [ \mathrm{I}_{T-}^{\alpha}[P(T)] ] (\cdot) \Bigr ] (t) - \mathrm{I}_{T-}^{1}[- A^* P(\cdot) + \partial_X \overline{f}(\cdot)^* P(\cdot) + \mathsf{p}^{0} W(\cdot)](t) \\ & = - P(T) - \mathrm{I}_{T-}^{1} [- A^* P(\cdot) + \partial_X \overline{f}(\cdot)^* P(\cdot) + \mathsf{p}^{0} W(\cdot) ](t). \end{align*}
Note that \mathrm{I}_{T-}^{1-\alpha}[P](T) = P(T) . This, together with Definition 2 and the fact that \mathrm{I}_{T-}^{1-\alpha}[P] (\cdot) \in \mathrm{AC}([0, T]; \mathsf{X}^*) by our assumptions, implies
\begin{align*} - \frac{{\rm{d}}}{{\rm{d}}t} \Bigl [ \mathrm{I}_{T-}^{1-\alpha}[P] (\cdot) \Bigr ](t) = ^{RL}{ \mathrm{D}}{_{T-}^{1-\alpha}}[P](t) & = - \frac{{\rm{d}}}{{\rm{d}}t} \Bigl [ \mathrm{I}_{T-}^{1}[- A^* P(\cdot) + \partial_X \overline{f}(\cdot)^* P(\cdot) + \mathsf{p}^{0} W(\cdot)] \Bigr ] (t) \\ & = - A^* P(t) + \partial_X \overline{f}(t)^* P(t) + \mathsf{p}^{0} W(t). \end{align*}
This shows that (B.7) is the solution to the adjoint equation in (3.3). This completes the proof.
In this appendix, we provide two technical lemmas used in duality and variational analysis.
Lemma C.6. Let h_j(\cdot) \in L^1([0, T]; \mathsf{X}) , j = 1, \ldots, k , and \sum_{j = 1}^k \theta_j = 1 with \theta_j \geq 0 . Then for any \delta \in (0, 1) , there exist measurable sets E_j^{\delta} \subset [0, T] , j = 1, \ldots, k , such that
\begin{cases} E_j^{\delta} \cap E_i^{\delta} = \emptyset,\; j \neq i, \\ \text{meas}\bigl( \cup_{j = 1}^k E_j^{\delta} \bigr) = \delta T, \\ \sup\limits_{t \in [0,T]} \Bigl | \delta \sum\limits_{j = 1}^k \theta_j \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} h_j(s) {\rm{d}}s - \sum\limits_{j = 1}^k \int_{E_j^{\delta} \cap [0,t]} \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} h_j(s) {\rm{d}}s \Bigr |_{\mathsf{X}} = o(\delta). \end{cases} (C.1)
Proof. Notice that for each t \in [0, T] , \frac{(t-\cdot)^{\alpha-1}}{\Gamma(\alpha)} h_j(\cdot) is Bochner integrable. Then the result follows from [27,Lemma 5.5]. This completes the proof.
Lemma C.7. For any \epsilon > 0 , \{(a, Z^{\epsilon}_{\omega}(T), \widehat{Z}^{\epsilon}_{\omega})\; |\; \omega \in \Omega\} \subset \mathsf{X} \times \mathsf{X} \times \mathbb{R} is convex.
Proof. Notice that by Theorem 2.1, (Z^{\epsilon}_{\omega}(\cdot), \widehat{Z}^{\epsilon}_{\omega}) admit unique solutions in C([0, T]; \mathsf{X}) \times \mathbb{R} for any (t, a, \omega) \in [0, T] \times \mathsf{X} \times \Omega . For i = 1, \ldots, l , consider \omega_k^{(i)} : = (\{u_j^{(i)}(\cdot)\}_{j = 1}^{k}, \{\theta_j^{(i)}\}_{j = 1}^{k}) \in \Omega . Let (Z^{\epsilon, (i)}_{\omega}(\cdot), \widehat{Z}^{\epsilon, (i)}_{\omega}) be the solutions of (5.7) and (5.8) under \omega_k^{(i)} . Let a^{(i)} \in \mathsf{X} . With \gamma^{(i)} \in [0, 1] and \sum_{i = 1}^l \gamma^{(i)} = 1 , we define
\begin{align} a = \sum\limits_{i = 1}^l \gamma^{(i)} a^{(i)},\; Z^{\epsilon}_{\omega}(T) = \sum\limits_{i = 1}^l \gamma^{(i)} Z^{\epsilon,(i)}_{\omega}(T),\; \widehat{Z}^{\epsilon}_{\omega} = \sum\limits_{i = 1}^l \gamma^{(i)} \widehat{Z}^{\epsilon,(i)}_{\omega}. \end{align} (C.2)
By defining
\omega_k : = (\{\gamma^{(i)} \omega_k^{(i)} \}_{i = 1}^l) = (\{ \{u_j^{(i)}(\cdot)\}_{j = 1}^{k}, \{\gamma^{(i)} \theta_j^{(i)}\}_{j = 1}^{k} \}_{i = 1}^l ) \in \Omega,
we observe that the quantities in (C.2) are well defined and a \in \mathsf{X} . This completes the proof.