Research article

Stochastic differential equations in infinite dimensional Hilbert space and its optimal control problem with Lévy processes

  • Received: 12 April 2021 Accepted: 04 November 2021 Published: 12 November 2021
  • MSC : 60H10, 93E24

• The paper is concerned with a class of stochastic differential equations in infinite dimensional Hilbert space with random coefficients driven by Teugels martingales, which are more general processes, and with the corresponding optimal control problems. Here Teugels martingales are a family of pairwise strongly orthonormal martingales associated with Lévy processes (see Nualart and Schoutens [21]). There are three major ingredients. The first is to prove the existence and uniqueness of the solutions by a continuous dependence theorem for solutions combined with the parameter extension method. The second is to establish the stochastic maximum principle and a verification theorem for our optimal control problem by the classical convex variation method and dual techniques. The third is to present an example of a Cauchy problem for a controlled stochastic partial differential equation driven by Teugels martingales which our theoretical results can solve.

    Citation: Meijiao Wang, Qiuhong Shi, Maoning Tang, Qingxin Meng. Stochastic differential equations in infinite dimensional Hilbert space and its optimal control problem with Lévy processes[J]. AIMS Mathematics, 2022, 7(2): 2427-2455. doi: 10.3934/math.2022137




This paper is concerned with the following stochastic differential equation in infinite dimensional Hilbert space

$$\begin{cases}dX(t)=[A(t)X(t)+b(t,X(t))]dt+[B(t)X(t)+g(t,X(t))]dW(t)+\sum\limits_{i=1}^{\infty}\sigma_i(t,X(t))dH^i(t),\\ X(0)=x\in H,\quad t\in[0,T],\end{cases}\tag{1.1}$$

in the framework of a Gelfand triple $V\subset H=H^*\subset V^*$, where $H$ and $V$ are two given Hilbert spaces. Here, on a given filtered probability space $(\Omega,\mathscr{F},\{\mathscr{F}_t\}_{0\le t\le T},P)$, $W$ is a one-dimensional Brownian motion, $\{H^i(t),0\le t\le T\}_{i=1}^{\infty}$ is the family of Teugels martingales associated with a one-dimensional Lévy process $\{L(t),0\le t\le T\}$, and $A:[0,T]\times\Omega\rightarrow\mathcal{L}(V,V^*)$, $B:[0,T]\times\Omega\rightarrow\mathcal{L}(V,H)$, $b:[0,T]\times\Omega\times H\rightarrow H$, $g:[0,T]\times\Omega\times H\rightarrow H$ and $\sigma_i:[0,T]\times\Omega\times H\rightarrow H$ are given random mappings. Here we denote by $\mathcal{L}(V,V^*)$ the space of bounded linear transformations of $V$ into $V^*$, and by $\mathcal{L}(V,H)$ the space of bounded linear transformations of $V$ into $H$. An adapted solution of (1.1) is a $V$-valued, $\{\mathscr{F}_t\}_{0\le t\le T}$-adapted process $X(\cdot)$ which satisfies (1.1) in an appropriate sense. A model such as (1.1) covers a large class of stochastic partial differential equations, for instance the nonlinear filtering equation and other stochastic parabolic PDEs (cf. [6]), but it is by no means the most general one: unlike ordinary differential equations, partial differential equations are too diverse to be covered by a single model.

In 2000, Nualart and Schoutens [21] obtained a martingale representation theorem for a class of Lévy processes through Teugels martingales, which are a family of pairwise strongly orthonormal martingales associated with Lévy processes. Later, they proved in [22] the existence and uniqueness theory of BSDEs driven by Teugels martingales. These results were further extended to one-dimensional BSDEs driven by Teugels martingales and an independent multi-dimensional Brownian motion by Bahlali et al. [2]. One can refer to [10,11,28,29] for more results on such BSDEs.

In the meantime, stochastic optimal control problems related to Teugels martingales were studied, for example in [33]. In 2008, a stochastic linear-quadratic problem with Lévy processes was considered by Mitsui and Tabata [20], who established the closeness property of a multi-dimensional backward stochastic Riccati differential equation (BSRDE) with Teugels martingales and proved the existence and uniqueness of the solution to such a one-dimensional BSRDE; moreover, their paper demonstrated an application of BSDEs to a financial problem with full and partial observations. Motivated by [20], Meng and Tang [19] studied the general stochastic optimal control problem for forward stochastic systems driven by Teugels martingales and an independent multi-dimensional Brownian motion, for which necessary and sufficient optimality conditions in the form of a stochastic maximum principle with a convex control domain were obtained. In 2012, Tang and Zhang [35] studied the optimal control problem of backward stochastic systems driven by Teugels martingales and an independent multi-dimensional Brownian motion and obtained the corresponding stochastic maximum principle.

Due to their interesting analytical content and wide applications in various sciences such as physics, mechanical engineering, control theory and economics, the theory of SPDEs driven by Wiener processes or Gaussian random processes has been investigated extensively and has achieved fruitful results on existence, uniqueness, stability, invariant measures and other quantitative and qualitative properties of solutions. There is a great amount of literature on this topic, for example [6,7,30] and the references therein. On the other hand, non-Gaussian random processes play an increasing role in modeling stochastic dynamical systems. Typical examples of non-Gaussian stochastic processes are Lévy processes and processes arising from Poisson random measures. In neurophysiology, the driving noise of the cable equation is basically impulsive, e.g., of Poisson type (see [36]); Woyczyński describes in [37] a number of phenomena from fluid mechanics, solid state physics, polymer chemistry, economic science, etc., for which non-Gaussian Lévy processes can be used as mathematical models of the related stochastic behavior. Thus, from the point of view of applications, the restriction to Wiener processes or Gaussian noise is unsatisfactory; to handle such cases one can replace the Wiener process or Gaussian noise by a Poisson random measure. Most recently, thanks to comprehensive practical applications, much attention has been paid to SPDEs driven by jump processes (cf., for example, [1,27,30-32,38-40] and the references therein). It is worth mentioning that Röckner and Zhang [30] established the existence and uniqueness theory for solutions of stochastic evolution equations of type (1.1) by successive approximations, in the case where the operator B is absent.

One purpose of this paper is to establish the continuous dependence of the solution on the coefficients and the existence and uniqueness of solutions to the stochastic evolution equation (1.1). It is well known that there are two different methods for analyzing SPDEs: the semigroup (or mild solution) approach (cf. [7]) and the variational approach (cf. [26]). For (1.1), since its coefficients are allowed to be random, we need to use the variational approach in the weak solution framework (in the PDE sense) of the Gelfand triple and cannot use the mild solution approach. In fact, when the coefficients are deterministic, one usually studies the stochastic evolution equation in the mild solution framework. However, due to the randomness of the coefficients, it seems very difficult or even impossible to tackle the problem in the mild solution sense: if we define the mild solution as usual, the adaptedness of the integrand in the stochastic integral may fail due to the randomness of the operator A. The advantage of the variational approach is that a version of Itô's formula exists in the context of the Gelfand triple of Hilbert spaces (see [14] for details). Such a formula will play an important role in proving the main results throughout this paper.

Another purpose of this paper is to establish the maximum principle and verification theorem for the optimal control problem whose state process is driven by the controlled stochastic evolution equation (1.1). A classical approach to optimal control problems is to derive necessary conditions satisfied by an optimal control, such as Pontryagin's maximum principle. Since the 1970s, the maximum principle has been extensively studied for stochastic control systems. In the finite-dimensional case it was solved by Peng [25] in a general setting where the control is allowed to take values in a nonconvex set and to enter the diffusion, while in the infinite-dimensional case the existing literature, e.g., [3,15,16,41], required at least one of the following three assumptions: (1) the control domain is convex; (2) the diffusion does not depend on the control; (3) the state equation and cost functional are both linear in the state variable. The general maximum principle for infinite-dimensional stochastic control systems, the counterpart of Peng's result, remained open for a long time, until recently Du and Meng attempted to fill this gap in [8], where they developed a new procedure to perform the second-order duality analysis by virtue of the Lebesgue differentiation theorem and an approximation argument, and established the corresponding maximum principle. Meanwhile, other very important works concerned with the general stochastic maximum principle in infinite dimensions were given in [12,18] besides [8]. As the above references show, works on optimal control problems for infinite-dimensional stochastic evolution equations or stochastic partial differential equations are mainly concerned with systems driven only by Wiener processes. In contrast, there have been few results on the optimal control of stochastic partial differential equations driven by jump processes.
In 2005, Øksendal, Proske and Zhang [23] studied the optimal control problem of quasilinear semielliptic SPDEs driven by Poisson random measures and gave sufficient maximum principle results, but not necessary ones. In 2017, Tang and Meng [34] studied the optimal control problem of more general stochastic evolution equations driven by Poisson random measures with random coefficients and gave necessary and sufficient maximum principle results. In this paper, for the controlled stochastic evolution equation (1.1), we suppose that the control domain is convex. We adopt the convex variation method and the first-order adjoint duality analysis to prove a necessary maximum principle, where the continuous dependence theorem (see Theorem 3.2) plays a key role in proving the variational inequality for the cost functional (see Lemma 6.2). Under convexity assumptions on the Hamiltonian and the terminal cost, we provide a sufficient maximum principle for this optimal control problem, the so-called verification theorem. It is worth mentioning that if the admissible control set is non-convex and the diffusion terms of the state equation are independent of the control variable, we can use the first-order spike variation method to obtain the maximum principle in global form by establishing some subtle $L^2$-estimates for the state equation. The details will be given in a forthcoming paper. In the general setting, however, it seems very difficult or even impossible to obtain the corresponding maximum principle, because it seems impossible to establish the $L^p$ ($p>2$) estimates, as in [8], for the state process which play a key role in the second variation analysis. Finally, to illustrate our results, we apply the stochastic maximum principles to solve an optimal control problem for a Cauchy problem of a controlled stochastic linear partial differential equation.
Furthermore, it is worth mentioning that, under non-classical conditions, the optimal control of stochastic partial differential equations has become a research hotspot and has been widely studied; for the relevant literature we refer to [4,13,17,24]. We will also pursue this research direction further.

The rest of this paper is structured as follows. In Section 2, we provide the basic notation and recall the Itô formula for Teugels martingales in Hilbert space used frequently in this paper. Section 3 establishes the continuous dependence and the existence and uniqueness of solutions to the stochastic evolution equation (1.1). Section 4 formulates the optimal control problem and specifies the hypotheses. In Section 5, the adjoint equation is introduced, which turns out to be a backward stochastic evolution equation driven by Teugels martingales. In Section 6, we establish the stochastic maximum principle by the classical convex variation method. In Section 7, the verification theorem for optimal controls is obtained by dual techniques. In Section 8, we present an application of our results. The final section concludes the paper.

Let $(\Omega,\mathscr{F},\{\mathscr{F}_t\}_{0\le t\le T},P)$ be a complete filtered probability space on which a one-dimensional Lévy process $\{L(t),0\le t\le T\}$ and a one-dimensional standard Brownian motion $\{W(t),0\le t\le T\}$ are defined, with $\{\mathscr{F}_t\}_{0\le t\le T}$ being the natural filtration completed by the totality $\mathcal{N}$ of all null sets of $\mathscr{F}_T$. For the Lévy process $\{L(t),0\le t\le T\}$, we assume that its characteristic function is given by

$$E\big[e^{i\theta L(t)}\big]=\exp\Big[ia\theta t-\frac{1}{2}\sigma^2\theta^2 t+t\int_{\mathbb{R}}\big(e^{i\theta x}-1-i\theta x I_{\{|x|<1\}}\big)\nu(dx)\Big],\quad\theta\in\mathbb{R}.$$

Here $\sigma>0$, $a\in\mathbb{R}$ and $\nu$ is a measure on $\mathbb{R}$ satisfying (i) there exist $\delta>0$ and $\lambda>0$ such that $\int_{(-\delta,\delta)^c}e^{\lambda|x|}\nu(dx)<\infty$, and (ii) $\int_{\mathbb{R}}(1\wedge x^2)\nu(dx)<\infty$. In view of these conditions, it is easy to check that the random variables $L(t)$ have moments of all orders. Denote by $\mathcal{P}$ the predictable sub-$\sigma$-field of $\mathcal{B}([0,T])\times\mathscr{F}$; then we introduce the following notation used throughout this paper.
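As a numerical sanity check on the representation above (our own illustration, not from the paper; the Lévy density $\nu(dx)=e^{-|x|}dx$ and the constants $a,\sigma$ are hypothetical choices, and $\nu$ satisfies conditions (i) and (ii)), the second derivative of the characteristic exponent $\psi(\theta)=\log E[e^{i\theta L(1)}]$ at $\theta=0$ should recover $-\operatorname{Var}L(1)=-(\sigma^2+\int_{\mathbb{R}}x^2\nu(dx))$:

```python
import numpy as np

# Hypothetical Levy triplet: drift a, Gaussian part sigma, nu(dx) = exp(-|x|) dx
a, sigma = 0.3, 0.7
x = np.linspace(-20.0, 20.0, 400001)   # truncated grid for the jump measure
dx = x[1] - x[0]
nu = np.exp(-np.abs(x))                # density of nu on the grid

def psi(theta):
    """Levy-Khintchine exponent, with the small-jump compensation on |x| < 1."""
    integrand = np.exp(1j * theta * x) - 1 - 1j * theta * x * (np.abs(x) < 1)
    return 1j * a * theta - 0.5 * sigma**2 * theta**2 + np.sum(integrand * nu) * dx

h = 1e-3
second = (psi(h) - 2 * psi(0.0) + psi(-h)) / h**2   # central difference for psi''(0)
var_expected = sigma**2 + np.sum(x**2 * nu) * dx    # sigma^2 + int x^2 nu(dx)
```

Since the drift enters only the mean, the check is that `second.real` agrees with `-var_expected` up to discretization error.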

$X$: a Hilbert space with norm $\|\cdot\|_X$.

$(\cdot,\cdot)_X$: the inner product in the Hilbert space $X$.

$l^2$: the space of all real-valued sequences $x=(x_n)_{n\ge1}$ satisfying

$$\|x\|_{l^2}\triangleq\Big(\sum_{i=1}^{\infty}x_i^2\Big)^{1/2}<+\infty.$$

$l^2(X)$: the space of all $X$-valued sequences $f=\{f_i\}_{i\ge1}$ satisfying

$$\|f\|_{l^2(X)}\triangleq\Big(\sum_{i=1}^{\infty}\|f_i\|_X^2\Big)^{1/2}<+\infty.$$

$l^2_{\mathscr{F}}(0,T;X)$: the space of all $l^2(X)$-valued and $\mathscr{F}_t$-predictable processes $f=\{f_i(t,\omega),\ (t,\omega)\in[0,T]\times\Omega\}_{i\ge1}$ satisfying

$$\|f\|_{l^2_{\mathscr{F}}(0,T;X)}\triangleq\Big(E\int_0^T\sum_{i=1}^{\infty}\|f_i(t)\|_X^2\,dt\Big)^{1/2}<\infty.$$

$M^2_{\mathscr{F}}(0,T;X)$: the space of all $X$-valued and $\mathscr{F}_t$-adapted processes $f=\{f(t,\omega),\ (t,\omega)\in[0,T]\times\Omega\}$ satisfying

$$\|f\|_{M^2_{\mathscr{F}}(0,T;X)}\triangleq\Big(E\int_0^T\|f(t)\|_X^2\,dt\Big)^{1/2}<\infty.$$

$S^2_{\mathscr{F}}(0,T;X)$: the space of all $X$-valued and $\mathscr{F}_t$-adapted càdlàg processes $f=\{f(t,\omega),\ (t,\omega)\in[0,T]\times\Omega\}$ satisfying

$$\|f\|_{S^2_{\mathscr{F}}(0,T;X)}\triangleq\Big(E\sup_{0\le t\le T}\|f(t)\|_X^2\Big)^{1/2}<+\infty.$$

$L^2(\Omega,\mathscr{F},P;X)$: the space of all $X$-valued random variables $\xi$ on $(\Omega,\mathscr{F},P)$ satisfying

$$\|\xi\|_{L^2(\Omega,\mathscr{F},P;X)}\triangleq\big(E\|\xi\|_X^2\big)^{1/2}<\infty.$$

We denote by $\{H^i(t),0\le t\le T\}_{i=1}^{\infty}$ the Teugels martingales associated with the Lévy process $\{L(t),0\le t\le T\}$. $H^i(t)$ is given by

$$H^i(t)=c_{i,i}Y^{(i)}(t)+c_{i,i-1}Y^{(i-1)}(t)+\cdots+c_{i,1}Y^{(1)}(t),$$

where $Y^{(i)}(t)=L^{(i)}(t)-E[L^{(i)}(t)]$ for all $i\ge1$, the $L^{(i)}(t)$ are the so-called power-jump processes with $L^{(1)}(t)=L(t)$ and $L^{(i)}(t)=\sum_{0<s\le t}(\Delta L(s))^i$ for $i\ge2$, and the coefficients $c_{ij}$ correspond to the orthonormalization of the polynomials $1,x,x^2,\dots$ with respect to the measure $\mu(dx)=x^2\nu(dx)+\sigma^2\delta_0(dx)$. The Teugels martingales $\{H^i(t)\}_{i=1}^{\infty}$ are pairwise strongly orthogonal and their predictable quadratic variation processes are given by

$$\langle H^i(t),H^j(t)\rangle=\delta_{ij}t.$$

For more details on Teugels martingales, we refer the reader to Nualart and Schoutens [21].
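The orthonormalization coefficients $c_{i,j}$ can be computed numerically by Gram–Schmidt with respect to a discretized version of $\mu$. The sketch below is our own illustration, not part of the paper: the Lévy density $e^{-|x|}$ and $\sigma=0.5$ are hypothetical choices, and the measure is approximated on a finite grid.

```python
import numpy as np

def teugels_coefficients(x, w, n):
    """Gram-Schmidt orthonormalization of the monomials 1, x, ..., x^(n-1)
    with respect to the discrete measure sum_k w[k] * delta_{x[k]}.
    Row i of the returned array C gives the polynomial
    q_i(t) = sum_j C[i, j] * t**j, so that (q_i, q_j)_mu = delta_ij."""
    def inner(p, q):  # p, q: coefficient vectors in the monomial basis
        return np.sum(np.polyval(p[::-1], x) * np.polyval(q[::-1], x) * w)
    C = np.zeros((n, n))
    for i in range(n):
        v = np.zeros(n)
        v[i] = 1.0                        # start from the monomial x^i
        for j in range(i):
            v -= inner(v, C[j]) * C[j]    # remove the components along q_j
        C[i] = v / np.sqrt(inner(v, v))   # normalize
    return C

# Discretized toy measure mu(dx) = x^2 nu(dx) + sigma^2 delta_0(dx),
# with the hypothetical choice nu(dx) = exp(-|x|) dx and sigma = 0.5.
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]
w = x**2 * np.exp(-np.abs(x)) * dx
w[np.argmin(np.abs(x))] += 0.5**2        # the atom sigma^2 * delta_0 at x = 0
C = teugels_coefficients(x, w, 4)
```

The rows of `C` play the role of the coefficient vectors $(c_{i,1},\dots,c_{i,i})$ for the first few Teugels martingales under this discretization.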

Let $V$ and $H$ be two separable (real) Hilbert spaces such that $V$ is densely embedded in $H$. We identify $H$ with its dual space by the Riesz mapping. Then we can take $H$ as a pivot space and obtain a Gelfand triple $V\subset H=H^*\subset V^*$, where $H^*$ and $V^*$ denote the dual spaces of $H$ and $V$, respectively. Denote by $\|\cdot\|_V$, $\|\cdot\|_H$ and $\|\cdot\|_{V^*}$ the norms of $V$, $H$ and $V^*$, respectively, by $(\cdot,\cdot)_H$ the inner product in $H$, and by $\langle\cdot,\cdot\rangle$ the duality product between $V$ and $V^*$. Moreover, we denote by $\mathcal{L}(V,V^*)$ the space of bounded linear transformations of $V$ into $V^*$. Throughout this paper, we let $C$ and $K$ be two generic positive constants, which may differ from line to line.

Now we present an Itô formula in Hilbert space which will be used frequently in this paper.

Lemma 2.1. Let $\varphi\in L^2(\Omega,\mathscr{F}_0,P;H)$. Let $Y$, $Z$, $\Gamma$ and $R\triangleq(R^i)_{i=1}^{\infty}$ be progressively measurable stochastic processes defined on $[0,T]\times\Omega$ with values in $V$, $H$, $V^*$ and $l^2(H)$, respectively, such that $Y\in M^2_{\mathscr{F}}(0,T;V)$, $Z\in M^2_{\mathscr{F}}(0,T;H)$, $\Gamma\in M^2_{\mathscr{F}}(0,T;V^*)$ and $R\in l^2_{\mathscr{F}}(0,T;H)$. Suppose that for every $\eta\in V$ and almost every $(\omega,t)\in\Omega\times[0,T]$ it holds that

$$(\eta,Y(t))_H=(\eta,\varphi)_H+\int_0^t\langle\eta,\Gamma(s)\rangle ds+\int_0^t(\eta,Z(s))_H\,dW(s)+\sum_{i=1}^{\infty}\int_0^t(\eta,R^i(s))_H\,dH^i(s).$$

Then $Y$ is an $H$-valued strongly càdlàg $\mathscr{F}_t$-adapted process and the following Itô formula holds:

$$\begin{aligned}\|Y(t)\|_H^2=&\ \|\varphi\|_H^2+2\int_0^t\langle\Gamma(s),Y(s)\rangle ds+2\int_0^t(Y(s),Z(s))_H\,dW(s)+\int_0^t\|Z(s)\|_H^2\,ds\\&+2\sum_{i=1}^{\infty}\int_0^t(Y(s),R^i(s))_H\,dH^i(s)+\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\int_0^t(R^i(s),R^j(s))_H\,d[H^i(s),H^j(s)].\end{aligned}\tag{2.1}$$

    Proof. The proof follows that of Theorem 1 in Gyöngy and Krylov [13].

In this section, we present some preliminary results on the following stochastic evolution equation (SEE for short) in infinite dimensional Hilbert space driven by the Brownian motion $\{W(t),0\le t\le T\}$ and the Teugels martingales $\{H^i(t),0\le t\le T\}_{i=1}^{\infty}$:

$$\begin{cases}dX(t)=[A(t)X(t)+b(t,X(t))]dt+[B(t)X(t)+g(t,X(t))]dW(t)+\sum\limits_{i=1}^{\infty}\sigma_i(t,X(t))dH^i(t),\\ X(0)=x\in H,\quad t\in[0,T],\end{cases}\tag{3.1}$$

where $A$, $B$, $b$, $g$ and $\sigma\triangleq(\sigma_i)_{i=1}^{\infty}$ are given random mappings satisfying the following standard assumptions.

Assumption 3.1. The operator processes $A:[0,T]\times\Omega\rightarrow\mathcal{L}(V,V^*)$ and $B:[0,T]\times\Omega\rightarrow\mathcal{L}(V,H)$ are weakly predictable, i.e., $\langle A(\cdot)x,y\rangle$ and $(B(\cdot)x,y)_H$ are predictable processes for every $x,y\in V$, and satisfy the coercivity condition: there exist constants $C,\alpha>0$ and $\lambda$ such that for any $x\in V$ and each $(t,\omega)\in[0,T]\times\Omega$,

$$\langle -A(t)x,x\rangle+\lambda\|x\|_H^2\ge\alpha\|x\|_V^2+\|B(t)x\|_H^2,\tag{3.2}$$

and

$$\sup_{(t,\omega)\in[0,T]\times\Omega}\|A(t,\omega)\|_{\mathcal{L}(V,V^*)}+\sup_{(t,\omega)\in[0,T]\times\Omega}\|B(t,\omega)\|_{\mathcal{L}(V,H)}\le C.\tag{3.3}$$

Assumption 3.2. The mappings $b:[0,T]\times\Omega\times H\rightarrow H$ and $g:[0,T]\times\Omega\times H\rightarrow H$ are both $\mathcal{P}\times\mathcal{B}(H)/\mathcal{B}(H)$-measurable with $b(\cdot,0),g(\cdot,0)\in M^2_{\mathscr{F}}(0,T;H)$; the mapping $\sigma:[0,T]\times\Omega\times H\rightarrow l^2(H)$ is $\mathcal{P}\times\mathcal{B}(H)/\mathcal{B}(l^2(H))$-measurable with $\sigma(\cdot,0)\in l^2_{\mathscr{F}}(0,T;H)$. Moreover, there exists a constant $C$ such that for all $x,\bar{x}\in V$ and a.e. $(t,\omega)\in[0,T]\times\Omega$,

$$\|b(t,x)-b(t,\bar{x})\|_H+\|g(t,x)-g(t,\bar{x})\|_H+\|\sigma(t,x)-\sigma(t,\bar{x})\|_{l^2(H)}\le C\|x-\bar{x}\|_H.\tag{3.4}$$

Definition 3.1. A $V$-valued, $\{\mathscr{F}_t\}_{0\le t\le T}$-adapted process $X(\cdot)$ is said to be a solution to the SEE (3.1) if $X(\cdot)\in M^2_{\mathscr{F}}(0,T;V)$ and for every $\phi\in V$ and a.e. $(t,\omega)\in[0,T]\times\Omega$ it holds that

$$\begin{aligned}(X(t),\phi)_H=&\ (x,\phi)_H+\int_0^t\big[\langle A(s)X(s),\phi\rangle+(b(s,X(s)),\phi)_H\big]ds\\&+\int_0^t\big[(B(s)X(s),\phi)_H+(g(s,X(s)),\phi)_H\big]dW(s)+\sum_{i=1}^{\infty}\int_0^t(\sigma_i(s,X(s)),\phi)_H\,dH^i(s),\end{aligned}\tag{3.5}$$

or alternatively, $X(\cdot)$ satisfies the following Itô equation in $V^*$:

$$\begin{cases}X(t)=x+\int_0^tA(s)X(s)ds+\int_0^tb(s,X(s))ds+\int_0^t\big[B(s)X(s)+g(s,X(s))\big]dW(s)+\sum\limits_{i=1}^{\infty}\int_0^t\sigma_i(s,X(s))dH^i(s),\\ X(0)=x\in H,\quad t\in[0,T].\end{cases}\tag{3.6}$$

    Now we state our main result.

Theorem 3.1. Let Assumptions 3.1 and 3.2 be satisfied by the given coefficients $(A,B,b,g,\sigma)$ of the SEE (3.1). Then for any initial value $X(0)=x\in H$, the SEE (3.1) admits a unique solution $X(\cdot)\in M^2_{\mathscr{F}}(0,T;V)$.

    To prove this theorem, we first show the following result on the continuous dependence of the solution to the SEE (3.1).

Theorem 3.2. Let $X(\cdot)$ be a solution to the SEE (3.1) with initial value $X(0)=x$ and coefficients $(A,B,b,g,\sigma)$ satisfying Assumptions 3.1 and 3.2. Then the following estimate holds:

$$\sup_{0\le t\le T}E\big[\|X(t)\|_H^2\big]+E\Big[\int_0^T\|X(t)\|_V^2\,dt\Big]\le K\Big\{\|x\|_H^2+E\Big[\int_0^T\|b(t,0)\|_H^2\,dt\Big]+E\Big[\int_0^T\|g(t,0)\|_H^2\,dt\Big]+E\Big[\int_0^T\|\sigma(t,0)\|_{l^2(H)}^2\,dt\Big]\Big\}.\tag{3.7}$$

Furthermore, suppose that $\bar{X}(\cdot)$ is a solution to the SEE (3.1) with initial value $\bar{X}(0)=\bar{x}\in H$ and coefficients $(A,B,\bar{b},\bar{g},\bar{\sigma})$ satisfying Assumptions 3.1 and 3.2. Then we have

$$\begin{aligned}&\sup_{0\le t\le T}E\big[\|X(t)-\bar{X}(t)\|_H^2\big]+E\Big[\int_0^T\|X(t)-\bar{X}(t)\|_V^2\,dt\Big]\\&\quad\le K\Big\{\|x-\bar{x}\|_H^2+E\Big[\int_0^T\|b(t,\bar{X}(t))-\bar{b}(t,\bar{X}(t))\|_H^2\,dt\Big]+E\Big[\int_0^T\|g(t,\bar{X}(t))-\bar{g}(t,\bar{X}(t))\|_H^2\,dt\Big]\\&\qquad+E\Big[\int_0^T\|\sigma(t,\bar{X}(t))-\bar{\sigma}(t,\bar{X}(t))\|_{l^2(H)}^2\,dt\Big]\Big\}.\end{aligned}\tag{3.8}$$

Proof. The estimate (3.7) follows directly from (3.8) by taking the initial value $\bar{X}(0)=0$ and the coefficients $(A,B,\bar{b},\bar{g},\bar{\sigma})=(A,B,0,0,0)$, which imply $\bar{X}(\cdot)\equiv0$. Therefore, it suffices to prove (3.8). For simplicity, in the following discussion we use the shorthand notation

$$\hat{X}(t)\triangleq X(t)-\bar{X}(t),\qquad\hat{x}\triangleq x-\bar{x},$$
$$\Lambda\triangleq\|x-\bar{x}\|_H^2+E\Big[\int_0^T\|b(t,\bar{X}(t))-\bar{b}(t,\bar{X}(t))\|_H^2\,dt\Big]+E\Big[\int_0^T\|g(t,\bar{X}(t))-\bar{g}(t,\bar{X}(t))\|_H^2\,dt\Big]+E\Big[\int_0^T\|\sigma(t,\bar{X}(t))-\bar{\sigma}(t,\bar{X}(t))\|_{l^2(H)}^2\,dt\Big],\tag{3.9}$$

and for $\phi=b,g,\sigma$,

$$\tilde{\phi}(t)\triangleq\phi(t,X(t))-\bar{\phi}(t,\bar{X}(t)),\qquad\hat{\phi}(t)\triangleq\phi(t,\bar{X}(t))-\bar{\phi}(t,\bar{X}(t)),\qquad\Delta\phi(t)\triangleq\phi(t,X(t))-\phi(t,\bar{X}(t)),\quad t\in[0,T],\tag{3.10}$$

where, when $\phi=\sigma$, the terms $X(t)$ and $\bar{X}(t)$ are replaced by $X(t-)$ and $\bar{X}(t-)$, respectively.

Applying the Itô formula in Lemma 2.1 to $\|\hat{X}(t)\|_H^2$ and using Assumptions 3.1 and 3.2 together with the elementary inequalities $|a+b|^2\le 2a^2+2b^2$ and $2ab\le a^2+b^2$, $a,b>0$, we get

$$\begin{aligned}\|\hat{X}(t)\|_H^2+\alpha\int_0^t\|\hat{X}(s)\|_V^2\,ds\le&\ \|\hat{x}\|_H^2+K\int_0^t\|\hat{X}(s)\|_H^2\,ds+K\int_0^t\big(\|\hat{b}(s)\|_H^2+\|\hat{g}(s)\|_H^2+\|\hat{\sigma}(s)\|_{l^2(H)}^2\big)ds\\&+2\int_0^t\big(\hat{X}(s),B(s)\hat{X}(s)+\tilde{g}(s)\big)_H\,dW(s)+2\sum_{i=1}^{\infty}\int_0^t\big(\hat{X}(s-),\tilde{\sigma}_i(s)\big)_H\,dH^i(s)\\&+\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\int_0^t\big(\tilde{\sigma}_i(s),\tilde{\sigma}_j(s)\big)_H\,d\big([H^i(s),H^j(s)]-\delta_{ij}s\big).\end{aligned}\tag{3.11}$$

Taking expectation on both sides of the above inequality, we get

$$E\big[\|\hat{X}(t)\|_H^2\big]+\alpha E\Big[\int_0^t\|\hat{X}(s)\|_V^2\,ds\Big]\le K\Lambda+KE\Big[\int_0^t\|\hat{X}(s)\|_H^2\,ds\Big].\tag{3.12}$$

Then, applying Grönwall's inequality to $E[\|\hat{X}(t)\|_H^2]$, we obtain

$$\sup_{0\le t\le T}E\big[\|\hat{X}(t)\|_H^2\big]+E\Big[\int_0^T\|\hat{X}(s)\|_V^2\,ds\Big]\le K\Lambda.\tag{3.13}$$

    The proof is complete.
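The closing step is the integral form of Grönwall's inequality: if $\varphi(t)\le a+K\int_0^t\varphi(s)\,ds$, then $\varphi(t)\le a\,e^{Kt}$. A minimal numerical illustration of this mechanism on a discrete time grid (our own sketch, with arbitrary constants $a$, $K$; the factor $0.9$ just builds a function satisfying the inequality with slack):

```python
import numpy as np

# Build a function phi satisfying phi(t) <= a + K * int_0^t phi(s) ds
# and check that it stays below the Gronwall bound a * exp(K t).
a, K, T, n = 2.0, 1.5, 1.0, 10000
t = np.linspace(0.0, T, n + 1)
dt = T / n
phi = np.empty(n + 1)
phi[0] = a
integral = 0.0
for k in range(1, n + 1):
    integral += phi[k - 1] * dt           # left-endpoint quadrature
    phi[k] = 0.9 * (a + K * integral)     # inequality satisfied with slack
bound = a * np.exp(K * t)                 # the Gronwall bound
```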

In the following, we give the existence and uniqueness result for the solution of the SEE (3.1) in a simple case where the coefficients $(b,g,\sigma)$ are independent of the variable $x$.

Lemma 3.3. Let $b$, $g$ and $\sigma$ be three stochastic processes such that

$$b(\cdot)\in M^2_{\mathscr{F}}(0,T;H),\qquad g(\cdot)\in M^2_{\mathscr{F}}(0,T;H)$$

and

$$\sigma(\cdot)\in M^2_{\mathscr{F}}(0,T;l^2(H)).$$

Suppose that the operators $A$ and $B$ satisfy Assumption 3.1. Then there exists a unique solution $X(\cdot)\in M^2_{\mathscr{F}}(0,T;V)$ to the following SEE:

$$\begin{cases}dX(t)=[A(t)X(t)+b(t)]dt+[B(t)X(t)+g(t)]dW(t)+\sum\limits_{i=1}^{\infty}\sigma_i(t)dH^i(t),\\ X(0)=x,\quad t\in[0,T].\end{cases}\tag{3.14}$$

Proof. The proof is obtained by Galerkin approximations in the same way as the proof of Theorem 3.2 in [5], with minor changes. First of all, we fix a complete orthonormal basis $\{e_i\,|\,i=1,2,3,\dots\}$ of the space $H$ whose linear span is dense in the space $V$. For any $n$, consider the following finite-dimensional stochastic differential equation in $\mathbb{R}^n$: for $k=1,\dots,n$,

$$x_k^n(t)=(x,e_k)_H+\int_0^t\Big[\sum_{j=1}^nx_j^n(s)\langle A(s)e_j,e_k\rangle+(b(s),e_k)_H\Big]ds+\int_0^t\Big[\sum_{j=1}^nx_j^n(s)(B(s)e_j,e_k)_H+(g(s),e_k)_H\Big]dW(s)+\sum_{i=1}^{\infty}\int_0^t(\sigma_i(s),e_k)_H\,dH^i(s).\tag{3.15}$$

Under Assumptions 3.1 and 3.2, by the existence and uniqueness theory for finite-dimensional SDEs driven by Teugels martingales, the above equation admits a unique strong solution $x^n(\cdot)\in M^2_{\mathscr{F}}(0,T;\mathbb{R}^n)$, where $x^n(\cdot)=(x_1^n(\cdot),\dots,x_n^n(\cdot))$.
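For intuition, a finite-dimensional Galerkin system of this type can be simulated directly. The sketch below is our own illustration, not from the paper: it applies the Euler–Maruyama scheme to a toy linear system $dx=(Ax+b)dt+(Bx+g)dW$ with $A$ the discretized Dirichlet Laplacian, keeping only the Brownian part of the noise (the Teugels sum is truncated away). With the noise switched off, the scheme must reproduce the heat semigroup, which gives a concrete correctness check.

```python
import numpy as np

def euler_maruyama(A, B, b, g, x0, T, n_steps, rng=None):
    """Euler-Maruyama scheme for the finite-dimensional system
        dx(t) = (A x(t) + b) dt + (B x(t) + g) dW(t),
    a toy analogue of the Galerkin system (Brownian part only).
    A, B are (n, n) arrays; b, g, x0 are (n,) arrays."""
    if rng is None:
        rng = np.random.default_rng(0)
    dt = T / n_steps
    x = x0.copy()
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        x = x + (A @ x + b) * dt + (B @ x + g) * dW
    return x

# Galerkin matrix: Dirichlet Laplacian on (0, 1) with n interior nodes
n = 8
h = 1.0 / (n + 1)
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2
zeros_mat, zeros_vec = np.zeros((n, n)), np.zeros(n)
# Noise switched off: the scheme should then reproduce exp(t * lap) x0
xT = euler_maruyama(lap, zeros_mat, zeros_vec, zeros_vec,
                    np.ones(n), T=0.05, n_steps=50000)
```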

Now we define an approximate solution to (3.14) by

$$X^n(t):=\sum_{i=1}^nx_i^n(t)e_i,$$

where

$$X^n(0):=\sum_{i=1}^n(x,e_i)_He_i.$$

Then Eq (3.15) can be written as

$$(X^n(t),e_i)_H=(X^n(0),e_i)_H+\int_0^t\big[\langle A(s)X^n(s),e_i\rangle+(b(s),e_i)_H\big]ds+\int_0^t\big[(B(s)X^n(s),e_i)_H+(g(s),e_i)_H\big]dW(s)+\sum_{j=1}^{\infty}\int_0^t(\sigma_j(s),e_i)_H\,dH^j(s),\quad i=1,\dots,n.\tag{3.16}$$

Now, applying the Itô formula to $\|X^n(t)\|_H^2$, we get

$$\begin{aligned}\|X^n(t)\|_H^2=&\ \|X^n(0)\|_H^2+2\int_0^t\langle A(s)X^n(s),X^n(s)\rangle ds+2\int_0^t(X^n(s),b(s))_H\,ds+\int_0^t\|B(s)X^n(s)+g(s)\|_H^2\,ds\\&+\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\int_0^t(\sigma_i(s),\sigma_j(s))_H\,d[H^i(s),H^j(s)]+2\int_0^t(X^n(s),B(s)X^n(s)+g(s))_H\,dW(s)\\&+2\sum_{i=1}^{\infty}\int_0^t(X^n(s),\sigma_i(s))_H\,dH^i(s).\end{aligned}$$

Therefore, under Assumptions 3.1 and 3.2, arguing as in the proof of the estimate (3.7) and using Grönwall's inequality, we easily get the estimate

$$E\Big[\int_0^T\|X^n(t)\|_V^2\,dt\Big]\le K\Big\{\|x\|_H^2+E\Big[\int_0^T\|b(t)\|_H^2\,dt\Big]+E\Big[\int_0^T\|g(t)\|_H^2\,dt\Big]+E\Big[\int_0^T\|\sigma(t)\|_{l^2(H)}^2\,dt\Big]\Big\}.\tag{3.17}$$

This inequality implies that there exist a subsequence $\{n'\}$ of $\{n\}$ and a process $X(\cdot)\in M^2_{\mathscr{F}}(0,T;V)$ such that

$$X^{n'}\rightarrow X\ \text{weakly in}\ M^2_{\mathscr{F}}(0,T;V).\tag{3.18}$$

Let $\Pi$ be an arbitrary bounded random variable on $(\Omega,\mathscr{F})$ and $\psi$ an arbitrary bounded measurable function on $[0,T]$. From the equality (3.16), for every basis vector $e_i$ and all $n'\ge i$, we have

$$\begin{aligned}E\Big[\int_0^T\Pi\psi(t)(X^{n'}(t),e_i)_H\,dt\Big]=E\Big[\int_0^T\Pi\psi(t)\Big\{&(X^{n'}(0),e_i)_H+\int_0^t\big[\langle A(s)X^{n'}(s),e_i\rangle+(b(s),e_i)_H\big]ds\\&+\int_0^t\big[(B(s)X^{n'}(s),e_i)_H+(g(s),e_i)_H\big]dW(s)\\&+\sum_{j=1}^{\infty}\int_0^t(\sigma_j(s),e_i)_H\,dH^j(s)\Big\}dt\Big].\end{aligned}\tag{3.19}$$

Now we let $n'\rightarrow\infty$ on both sides of the above equation. First, from the weak convergence of $\{X^{n'}\}$ in $M^2_{\mathscr{F}}(0,T;V)$, we have

$$\begin{aligned}\lim_{n'\rightarrow\infty}E\Big[\int_0^T\Pi\psi(t)(X^{n'}(t),e_i)_H\,dt\Big]&=\lim_{n'\rightarrow\infty}E\Big[\int_0^TE[\Pi|\mathscr{F}_t]\psi(t)(X^{n'}(t),e_i)_H\,dt\Big]\\&=\lim_{n'\rightarrow\infty}E\Big[\int_0^T(X^{n'}(t),E[\Pi|\mathscr{F}_t]\psi(t)e_i)_H\,dt\Big]\\&=E\Big[\int_0^T(X(t),E[\Pi|\mathscr{F}_t]\psi(t)e_i)_H\,dt\Big]=E\Big[\int_0^T\Pi\psi(t)(X(t),e_i)_H\,dt\Big],\end{aligned}\tag{3.20}$$

and

$$\begin{aligned}\lim_{n'\rightarrow\infty}E\Big[\int_0^t\Pi\langle A(s)X^{n'}(s),e_i\rangle ds\Big]&=\lim_{n'\rightarrow\infty}E\Big[\int_0^tE[\Pi|\mathscr{F}_s]\langle A(s)X^{n'}(s),e_i\rangle ds\Big]\\&=\lim_{n'\rightarrow\infty}E\Big[\int_0^t\langle X^{n'}(s),A^*(s)E[\Pi|\mathscr{F}_s]e_i\rangle ds\Big]\\&=E\Big[\int_0^t\langle X(s),A^*(s)E[\Pi|\mathscr{F}_s]e_i\rangle ds\Big]=E\Big[\int_0^t\Pi\langle A(s)X(s),e_i\rangle ds\Big].\end{aligned}\tag{3.21}$$

In view of (3.3) and (3.17), we conclude that

$$E\Big[\Big|\int_0^t\Pi\langle A(s)X^{n'}(s),e_i\rangle ds\Big|\Big]\le C\Big\{E\Big[\int_0^T\|X^{n'}(s)\|_V^2\,ds\Big]\Big\}^{\frac12}<\infty,$$

where the constant $C$ is independent of $n'$. Hence, from Fubini's theorem and Lebesgue's dominated convergence theorem, we have

$$\begin{aligned}\lim_{n'\rightarrow\infty}E\Big[\int_0^T\Pi\psi(t)\int_0^t\langle A(s)X^{n'}(s),e_i\rangle ds\,dt\Big]&=\lim_{n'\rightarrow\infty}\int_0^T\psi(t)E\Big[\int_0^t\Pi\langle A(s)X^{n'}(s),e_i\rangle ds\Big]dt\\&=\int_0^T\psi(t)E\Big[\int_0^t\Pi\langle A(s)X(s),e_i\rangle ds\Big]dt\\&=E\Big\{\int_0^T\psi(t)\Big[\int_0^t\Pi\langle A(s)X(s),e_i\rangle ds\Big]dt\Big\}.\end{aligned}\tag{3.22}$$

Similarly, in view of (3.3), from (3.17) and (3.18) it is easy to check that

$$(BX^{n'},e_i)_H\rightarrow(BX,e_i)_H\ \text{weakly in}\ M^2_{\mathscr{F}}(0,t;\mathbb{R}).$$

Since the stochastic integral with respect to the Brownian motion $W$ is a linear and strongly continuous mapping from $M^2_{\mathscr{F}}(0,t;\mathbb{R})$ to $L^2(\Omega,\mathscr{F}_T,P;\mathbb{R})$, it is also weakly continuous. Therefore,

$$\lim_{n'\rightarrow\infty}E\Big[\Pi\int_0^t(B(s)X^{n'}(s),e_i)_H\,dW(s)\Big]=E\Big[\Pi\int_0^t(B(s)X(s),e_i)_H\,dW(s)\Big].\tag{3.23}$$

Moreover, from (3.17), we get

$$\psi(t)E\Big[\Pi\int_0^t(B(s)X^{n'}(s),e_i)_H\,dW(s)\Big]\le\frac12\psi^2(t)E|\Pi|^2+CE\Big[\int_0^T\|B(s)X^{n'}(s)\|_H^2\,ds\Big]\le C.$$

Hence, by Fubini's theorem and Lebesgue's dominated convergence theorem, we have

$$\begin{aligned}\lim_{n'\rightarrow\infty}E\Big[\int_0^T\psi(t)\Pi\int_0^t(B(s)X^{n'}(s),e_i)_H\,dW(s)\,dt\Big]&=\lim_{n'\rightarrow\infty}\int_0^T\psi(t)E\Big[\Pi\int_0^t(B(s)X^{n'}(s),e_i)_H\,dW(s)\Big]dt\\&=\int_0^T\psi(t)E\Big[\Pi\int_0^t(B(s)X(s),e_i)_H\,dW(s)\Big]dt\\&=E\Big[\int_0^T\psi(t)\Pi\int_0^t(B(s)X(s),e_i)_H\,dW(s)\,dt\Big].\end{aligned}\tag{3.24}$$

Therefore, combining (3.20), (3.22)–(3.24) and letting $n'\rightarrow\infty$ in (3.19), we conclude that

$$\begin{aligned}E\Big[\int_0^T\Pi\psi(t)(X(t),e_i)_H\,dt\Big]=E\Big[\int_0^T\Pi\psi(t)\Big\{&(X(0),e_i)_H+\int_0^t\big[\langle A(s)X(s),e_i\rangle+(b(s),e_i)_H\big]ds\\&+\int_0^t\big[(B(s)X(s),e_i)_H+(g(s),e_i)_H\big]dW(s)\\&+\sum_{j=1}^{\infty}\int_0^t(\sigma_j(s),e_i)_H\,dH^j(s)\Big\}dt\Big].\end{aligned}\tag{3.25}$$

This implies, thanks to the arbitrariness of $\Pi$ and $\psi(\cdot)$, that for a.e. $(t,\omega)\in[0,T]\times\Omega$,

$$(X(t),e_i)_H=(X(0),e_i)_H+\int_0^t\big[\langle A(s)X(s),e_i\rangle+(b(s),e_i)_H\big]ds+\int_0^t\big[(B(s)X(s),e_i)_H+(g(s),e_i)_H\big]dW(s)+\sum_{j=1}^{\infty}\int_0^t(\sigma_j(s),e_i)_H\,dH^j(s).\tag{3.26}$$

Since the linear span of the basis $\{e_i\,|\,i=1,2,3,\dots\}$ is dense in $V$, it follows that for every $\phi\in V$ and a.e. $(t,\omega)\in[0,T]\times\Omega$,

$$(X(t),\phi)_H=(X(0),\phi)_H+\int_0^t\big[\langle A(s)X(s),\phi\rangle+(b(s),\phi)_H\big]ds+\int_0^t\big[(B(s)X(s),\phi)_H+(g(s),\phi)_H\big]dW(s)+\sum_{j=1}^{\infty}\int_0^t(\sigma_j(s),\phi)_H\,dH^j(s).\tag{3.27}$$

Therefore, from Definition 3.1, we conclude that the process $X(\cdot)$ is a solution to the SEE (3.14). Thus the existence is proved. The uniqueness follows directly from the a priori estimate (3.8). The proof is complete.

Proof of Theorem 3.1. The uniqueness of the solution to the SEE (3.1) follows directly from the a priori estimate (3.8). For $\rho\in[0,1]$ and any three given stochastic processes $b_0\in M^2_{\mathscr{F}}(0,T;H)$, $g_0\in M^2_{\mathscr{F}}(0,T;H)$ and $\sigma_0\in M^2_{\mathscr{F}}(0,T;l^2(H))$, we introduce a family of parameterized SEEs as follows:

$$\begin{aligned}X(t)=&\ x+\int_0^tA(s)X(s)ds+\int_0^t\big[\rho b(s,X(s))+b_0(s)\big]ds+\int_0^t\big[B(s)X(s)+\rho g(s,X(s))+g_0(s)\big]dW(s)\\&+\sum_{i=1}^{\infty}\int_0^t\big[\rho\sigma_i(s,X(s))+\sigma_0^i(s)\big]dH^i(s).\end{aligned}\tag{3.28}$$

It is easy to see that when we take the parameter $\rho=1$ and $b_0\equiv0$, $g_0\equiv0$, $\sigma_0\equiv0$, the SEE (3.28) reduces to the original SEE (3.1). Obviously, the coefficients of the SEE (3.28) satisfy Assumptions 3.1 and 3.2 with $(A,B,b,g,\sigma)$ replaced by $(A,B,\rho b+b_0,\rho g+g_0,\rho\sigma+\sigma_0)$. Suppose that, for some parameter $\rho=\rho_0$ and any $b_0\in M^2_{\mathscr{F}}(0,T;H)$, $g_0\in M^2_{\mathscr{F}}(0,T;H)$, $\sigma_0\in M^2_{\mathscr{F}}(0,T;l^2(H))$, there exists a unique solution $X(\cdot)\in M^2_{\mathscr{F}}(0,T;V)$ to the SEE (3.28). For any parameter $\rho$, the SEE (3.28) can be rewritten as

$$\begin{aligned}X(t)=&\ x+\int_0^tA(s)X(s)ds+\int_0^t\big[\rho_0b(s,X(s))+b_0(s)+(\rho-\rho_0)b(s,X(s))\big]ds\\&+\int_0^t\big[B(s)X(s)+\rho_0g(s,X(s))+g_0(s)+(\rho-\rho_0)g(s,X(s))\big]dW(s)\\&+\sum_{i=1}^{\infty}\int_0^t\big[\rho_0\sigma_i(s,X(s))+\sigma_0^i(s)+(\rho-\rho_0)\sigma_i(s,X(s))\big]dH^i(s).\end{aligned}\tag{3.29}$$

Therefore, by the above assumption, for any $x(\cdot)\in M^2_{\mathscr{F}}(0,T;V)$, the following SEE

$$\begin{aligned}X(t)=&\ x+\int_0^tA(s)X(s)ds+\int_0^t\big[\rho_0b(s,X(s))+b_0(s)+(\rho-\rho_0)b(s,x(s))\big]ds\\&+\int_0^t\big[B(s)X(s)+\rho_0g(s,X(s))+g_0(s)+(\rho-\rho_0)g(s,x(s))\big]dW(s)\\&+\sum_{i=1}^{\infty}\int_0^t\big[\rho_0\sigma_i(s,X(s))+\sigma_0^i(s)+(\rho-\rho_0)\sigma_i(s,x(s))\big]dH^i(s)\end{aligned}\tag{3.30}$$

admits a unique solution $X(\cdot)\in M^2_{\mathscr{F}}(0,T;V)$. This defines a mapping from $M^2_{\mathscr{F}}(0,T;V)$ into itself, denoted by

$$X(\cdot)=\Gamma(x(\cdot)).$$

Then for any $x_i(\cdot)\in M^2_{\mathscr{F}}(0,T;V)$, $i=1,2$, from the Lipschitz continuity of $b$, $g$, $\sigma$ and the a priori estimate (3.8), it follows that

$$\|\Gamma(x_1(\cdot))-\Gamma(x_2(\cdot))\|^2_{M^2_{\mathscr{F}}(0,T;V)}=\|X_1(\cdot)-X_2(\cdot)\|^2_{M^2_{\mathscr{F}}(0,T;V)}\le K|\rho-\rho_0|^2\|x_1(\cdot)-x_2(\cdot)\|^2_{M^2_{\mathscr{F}}(0,T;V)}.$$

Here $K$ is a positive constant independent of $\rho$. If $|\rho-\rho_0|<\frac{1}{\sqrt{2K}}$, the mapping $\Gamma$ is strictly contractive in $M^2_{\mathscr{F}}(0,T;V)$, which implies that the SEE (3.28) with the coefficients $(A,B,\rho b+b_0,\rho g+g_0,\rho\sigma+\sigma_0)$ admits a unique solution $X(\cdot)\in M^2_{\mathscr{F}}(0,T;V)$. From Lemma 3.3, the existence and uniqueness of the solution holds for $\rho=0$. Then, starting from $\rho=0$, one can reach $\rho=1$ in finitely many steps, which finishes the proof of the solvability of the SEE (3.1). This completes the proof.
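The parameter-extension (method of continuation) argument above can be mimicked in one dimension: starting from a solvable problem at $\rho=0$, step $\rho$ toward $1$, solving at each step by Banach fixed-point iteration started from the previous solution. The following scalar sketch is our own illustration, with a hypothetical $0.9$-Lipschitz nonlinearity $f$ in place of the coefficients $(b,g,\sigma)$:

```python
import math

def picard(gamma, x_init, tol=1e-12, max_iter=10000):
    """Banach fixed-point iteration for a contractive map gamma."""
    x = x_init
    for _ in range(max_iter):
        x_new = gamma(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# Toy analogue of the continuation step: solve X = 1 + rho * f(X) at rho = 1
# by increasing rho in steps, re-solving each time by a contraction started
# from the previous solution.
f = lambda t: 0.9 * math.sin(t)        # hypothetical 0.9-Lipschitz coefficient
x, rho, rho_step = 1.0, 0.0, 0.5       # x = 1 solves the equation for rho = 0
while rho < 1.0:
    rho = min(1.0, rho + rho_step)
    x = picard(lambda t, r=rho: 1.0 + r * f(t), x)
```

At exit, `x` satisfies the full ($\rho=1$) fixed-point equation up to the iteration tolerance.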

Let $U$ be a real Hilbert space standing for the control space, and let $\mathcal{U}$ be a nonempty closed convex subset of $U$. An admissible control process $u(\cdot)\triangleq\{u(t),0\le t\le T\}$ is defined as follows.

Definition 4.1. A stochastic process $u(\cdot)$ defined on $[0,T]\times\Omega$ is called an admissible control process if it is a predictable process such that $u(\cdot)\in M^2_{\mathscr{F}}(0,T;U)$ and $u(t)\in\mathcal{U}$, a.e. $t\in[0,T]$, $P$-a.s. We write $\mathcal{A}$ for the set of all admissible control processes.

In the Gelfand triple $(V,H,V^*)$, for any admissible control $u(\cdot)\in\mathcal{A}$, we consider the following controlled SEE driven by Teugels martingales:

$$\begin{cases}dX(t)=[A(t)X(t)+b(t,X(t),u(t))]dt+[B(t)X(t)+g(t,X(t),u(t))]dW(t)+\sum\limits_{i=1}^{\infty}\sigma_i(t,X(t),u(t))dH^i(t),\\ X(0)=x,\quad t\in[0,T],\end{cases}\tag{4.1}$$

with the cost functional

$$J(u(\cdot))=E\Big[\int_0^Tl(t,X(t),u(t))dt+\Phi(X(T))\Big],\tag{4.2}$$

    where the coefficients satisfy the following basic assumptions:

    Assumption 4.1.

(i) $A:[0,T]\times\Omega\rightarrow\mathcal{L}(V,V^*)$ and $B:[0,T]\times\Omega\rightarrow\mathcal{L}(V,H)$ are operator-valued stochastic processes satisfying Assumption 3.1;

$$b,g:[0,T]\times\Omega\times H\times U\rightarrow H$$

are $\mathcal{P}\times\mathcal{B}(H)\times\mathcal{B}(U)/\mathcal{B}(H)$-measurable mappings and

$$\sigma=(\sigma_i)_{i=1}^{\infty}:[0,T]\times\Omega\times H\times U\rightarrow l^2(H)$$

is a $\mathcal{P}\times\mathcal{B}(H)\times\mathcal{B}(U)/\mathcal{B}(l^2(H))$-measurable mapping such that

$$b(\cdot,0,0),\ g(\cdot,0,0)\in M^2_{\mathscr{F}}(0,T;H),\qquad\sigma(\cdot,0,0)\in M^2_{\mathscr{F}}(0,T;l^2(H)).$$

Moreover, for almost all $(t,\omega)\in[0,T]\times\Omega$, $b$, $g$ and $\sigma$ are Gâteaux differentiable in $(x,u)$ with continuous and bounded Gâteaux derivatives $b_x,g_x,\sigma_x,b_u,g_u$ and $\sigma_u$;

(ii) $l:[0,T]\times\Omega\times H\times U\rightarrow\mathbb{R}$ is a $\mathcal{P}\otimes\mathcal{B}(H)\otimes\mathcal{B}(U)/\mathcal{B}(\mathbb{R})$-measurable mapping and $\Phi:\Omega\times H\rightarrow\mathbb{R}$ is an $\mathscr{F}_T\otimes\mathcal{B}(H)/\mathcal{B}(\mathbb{R})$-measurable mapping. For almost all $(t,\omega)\in[0,T]\times\Omega$, $l$ is continuously Gâteaux differentiable in $(x,u)$ with continuous Gâteaux derivatives $l_x$ and $l_u$, and $\Phi$ is Gâteaux differentiable in $x$ with continuous Gâteaux derivative $\Phi_x$. Moreover, for almost all $(t,\omega)\in[0,T]\times\Omega$, there exists a constant $C>0$ such that for all $(x,u)\in H\times U$,

$$|l(t,x,u)|\le C\big(1+\|x\|_H^2+\|u\|_U^2\big),$$
$$\|l_x(t,x,u)\|_H+\|l_u(t,x,u)\|_U\le C\big(1+\|x\|_H+\|u\|_U\big),$$

and

$$|\Phi(x)|\le C\big(1+\|x\|_H^2\big),$$
$$\|\Phi_x(x)\|_H\le C\big(1+\|x\|_H\big).$$

For any admissible control u(⋅), the solution of the system (4.1), denoted by Xu(⋅), or simply X(⋅) if its dependence on u(⋅) is clear from the context, is called the state process corresponding to the control process u(⋅), and (u(⋅);X(⋅)) is called an admissible pair. The following result gives the well-posedness of the state equation as well as some useful estimates.

    Lemma 4.1. Let Assumption 4.1 be satisfied. Then for any admissible control u(), the state equation (4.1) has a unique solution Xu()M2F(0,T;V). Moreover, the following estimate holds

sup0≤t≤TE[‖Xu(t)‖2H]+E[∫T0‖Xu(t)‖2Vdt]≤K{1+‖x‖2H+E[∫T0‖u(t)‖2Udt]} (4.3)

    and

|J(u(⋅))|<∞. (4.4)

    Furthermore, let Xv() be the state process corresponding to another admissible control v(), then

sup0≤t≤TE[‖Xu(t)−Xv(t)‖2H]+E[∫T0‖Xu(t)−Xv(t)‖2Vdt]≤KE[∫T0‖u(t)−v(t)‖2Udt]. (4.5)

Proof. Under Assumption 4.1, by Theorem 3.1 we directly obtain the existence and uniqueness of the solution of the state equation (4.1). By Assumption 4.1 and the estimates (3.7) and (3.8), we get that

sup0≤t≤TE[‖X(t)‖2H]+E[∫T0‖X(t)‖2Vdt]≤K{‖x‖2H+E[∫T0‖b(t,u(t))‖2Hdt]+E[∫T0‖g(t,u(t))‖2Hdt]+E[∫T0‖σ(t,u(t))‖2l2(H)dt]}≤K{1+‖x‖2H+E[∫T0‖u(t)‖2Udt]} (4.6)

    and

sup0≤t≤TE[‖Xu(t)−Xv(t)‖2H]+E[∫T0‖Xu(t)−Xv(t)‖2Vdt]≤K{E[∫T0‖b(t,Xu(t),u(t))−b(t,Xv(t),v(t))‖2Hdt]+E[∫T0‖g(t,Xu(t),u(t))−g(t,Xv(t),v(t))‖2Hdt] (4.7)
+E[∫T0‖σ(t,Xu(t),u(t))−σ(t,Xv(t),v(t))‖2l2(H)dt]}≤KE[∫T0‖u(t)−v(t)‖2Udt], (4.8)

    which implies that (4.3) and (4.5) hold.

    Furthermore, from Assumption 4.1 and the estimate (4.3), it follows that

|J(u(⋅))|≤K{sup0≤t≤TE[‖X(t)‖2H]+E[∫T0‖u(t)‖2Udt]+1}≤K{1+‖x‖2H+E[∫T0‖u(t)‖2Udt]}<∞. (4.9)

    The proof is complete.
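The moment bound in the lemma above can be illustrated numerically on a one-dimensional toy analogue of the state equation, with constant coefficients and a single compensated Poisson process serving as a crude stand-in for the Teugels family. All coefficient values, rates and grid sizes below are illustrative choices, not quantities from the paper.

```python
import random, math

random.seed(0)

# One-dimensional toy analogue of the controlled state equation:
#   dX = (a*X + u) dt + (b*X + u) dW + (c*X + u) dM,
# where M is a compensated Poisson process with rate lam, used here as a
# crude stand-in for the first Teugels martingale.
a, b, c, lam = -0.5, 0.3, 0.2, 1.0
T, N, paths = 1.0, 400, 1000
dt = T / N

def second_moment_sup(x0, u):
    # Monte Carlo estimate of E[ sup_{t<=T} |X(t)|^2 ] via Euler-Maruyama,
    # with jumps approximated by a Bernoulli draw per time step.
    acc = 0.0
    for _ in range(paths):
        x, sup2 = x0, x0 * x0
        for _ in range(N):
            dW = random.gauss(0.0, math.sqrt(dt))
            jump = 1.0 if random.random() < lam * dt else 0.0
            dM = jump - lam * dt  # compensated Poisson increment
            x += (a * x + u) * dt + (b * x + u) * dW + (c * x + u) * dM
            sup2 = max(sup2, x * x)
        acc += sup2
    return acc / paths

m = second_moment_sup(x0=1.0, u=0.5)
print(m)  # finite, as the a priori moment estimate predicts
```

The estimate stays bounded in terms of the initial datum and the control energy, mirroring the structure of the bound in the lemma.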

Therefore, by Lemma 4.1, the cost functional (4.2) is well-defined. Our optimal control problem can be stated as follows.

Problem 4.2. Find an admissible control process ˉu(⋅)∈A such that

J(ˉu(⋅))=inf u(⋅)∈A J(u(⋅)). (4.10)

    The admissible control ˉu() satisfying (4.10) is called an optimal control process of Problem 4.2. Correspondingly, the state process ˉX() associated with ˉu() is called an optimal state process. Then (ˉu();ˉX()) is called an optimal pair of Problem 4.2.

For any admissible pair (ˉu(⋅);ˉX(⋅)), the corresponding adjoint process is defined as a triple (ˉp(⋅),ˉq(⋅),ˉr(⋅)) of stochastic processes that solves the following backward stochastic evolution equation (BSEE for short) driven by Teugels martingales, called the adjoint equation:

{dˉp(t)=−[A∗(t)ˉp(t)+b∗x(t,ˉX(t),ˉu(t))ˉp(t)+B∗(t)ˉq(t)+g∗x(t,ˉX(t),ˉu(t))ˉq(t)+∑∞i=1σ∗ix(t,ˉX(t),ˉu(t))ˉri(t)+lx(t,ˉX(t),ˉu(t))]dt+ˉq(t)dW(t)+∑∞i=1ˉri(t)dHi(t), 0≤t≤T, ˉp(T)=Φx(ˉX(T)). (5.1)

Here A∗ denotes the adjoint operator of the operator A; the adjoint operators of the other coefficients are defined similarly.

Under Assumption 4.1, we have the following basic result for the adjoint process.

Lemma 5.1. Let Assumption 4.1 be satisfied. Then for any admissible pair (ˉu(⋅);ˉX(⋅)), there exists a unique adjoint process (ˉp(⋅),ˉq(⋅),ˉr(⋅))∈M2F(0,T;V)×M2F(0,T;H)×M2F(0,T;l2(H)). Moreover, the following estimate holds:

E[∫T0‖ˉp(t)‖2Vdt]+E[∫T0‖ˉq(t)‖2Hdt]+E[∫T0‖ˉr(t)‖2l2(H)dt]≤K{E[∫T0‖lx(t,ˉX(t),ˉu(t))‖2Hdt]+E[‖Φx(ˉX(T))‖2H]}. (5.2)

Proof. From the properties of adjoint operators, the adjoint operator A∗ of A and the adjoint operator B∗ of B also satisfy (i) in Assumption 3.1. Therefore, similarly to Theorem 3.1, the existence and uniqueness of the solution can be proved by Galerkin approximation and the parameter extension method. Define the Hamiltonian H:[0,T]×Ω×H×U×H×H×l2(H)→R by

H(t,x,u,p,q,r):=(b(t,x,u),p)H+(g(t,x,u),q)H+(σ(t,x,u),r)l2(H)+l(t,x,u). (5.3)

Using the Hamiltonian H, the adjoint equation (5.1) can be rewritten in the following form:

{dˉp(t)=−[A∗(t)ˉp(t)+B∗(t)ˉq(t)+ˉHx(t)]dt+ˉq(t)dW(t)+∑∞i=1ˉri(t)dHi(t), 0≤t≤T, ˉp(T)=Φx(ˉX(T)), (5.4)

    where we denote

ˉH(t)≜H(t,ˉX(t),ˉu(t),ˉp(t),ˉq(t),ˉr(t)). (5.5)

Let (ˉu(⋅);ˉX(⋅)) be an optimal pair of Problem 4.2. Define a convex perturbation of ˉu(⋅) as follows:

uε(⋅)≜ˉu(⋅)+ε(v(⋅)−ˉu(⋅)), 0≤ε≤1,

where v(⋅) is an arbitrary admissible control. Since the control domain U is convex, uε(⋅) is also an element of A. We denote by Xε(⋅) the state process corresponding to the control uε(⋅). Now we introduce the following first-order variational equation:

{dY(t)= [A(t)Y(t)+bx(t,ˉX(t),ˉu(t))Y(t)+bu(t,ˉX(t),ˉu(t))(v(t)−ˉu(t))]dt+[B(t)Y(t)+gx(t,ˉX(t),ˉu(t))Y(t)+gu(t,ˉX(t),ˉu(t))(v(t)−ˉu(t))]dW(t)+∑∞i=1[σix(t,ˉX(t),ˉu(t))Y(t)+σiu(t,ˉX(t),ˉu(t))(v(t)−ˉu(t))]dHi(t), Y(0)=0. (6.1)

    Under Assumption 4.1, by Theorem 3.1, we see that the variation equation (6.1) has a unique solution Y()M2F(0,T;V).

    Lemma 6.1. Let Assumption 4.1 be satisfied. Then we have the following estimates:

sup0≤t≤TE[‖Xε(t)−ˉX(t)‖2H]+E[∫T0‖Xε(t)−ˉX(t)‖2Vdt]=O(ε2), (6.2)
sup0≤t≤TE[‖Xε(t)−ˉX(t)−εY(t)‖2H]+E[∫T0‖Xε(t)−ˉX(t)−εY(t)‖2Vdt]=o(ε2). (6.3)

    Proof. From the estimate (4.5), we have

sup0≤t≤TE[‖Xε(t)−ˉX(t)‖2H]+E[∫T0‖Xε(t)−ˉX(t)‖2Vdt]≤KE[∫T0‖uε(t)−ˉu(t)‖2Udt]=Kε2E[∫T0‖v(t)−ˉu(t)‖2Udt]=O(ε2). (6.4)

    Denote

Ξε(t):=Xε(t)−ˉX(t)−εY(t). (6.5)

By Taylor expansion, we have

{dΞε(t)= [A(t)Ξε(t)+bx(t,ˉX(t),ˉu(t))Ξε(t)+αε(t)]dt+[B(t)Ξε(t)+gx(t,ˉX(t),ˉu(t))Ξε(t)+βε(t)]dW(t)+∑∞i=1[σix(t,ˉX(t),ˉu(t))Ξε(t)+γiε(t)]dHi(t), Ξε(0)=0, t∈[0,T], (6.6)

    where

αε(t)=∫10[bx(t,ˉX(t)+λ(Xε(t)−ˉX(t)),ˉu(t)+λ(uε(t)−ˉu(t)))−bx(t,ˉX(t),ˉu(t))]dλ(Xε(t)−ˉX(t))+ε∫10[bu(t,ˉX(t)+λ(Xε(t)−ˉX(t)),ˉu(t)+λ(uε(t)−ˉu(t)))−bu(t,ˉX(t),ˉu(t))]dλ(v(t)−ˉu(t)), and βε(t), γiε(t) are defined analogously with b replaced by g and σi, respectively. (6.7)

From the estimates (3.7) and (6.2) and the Lebesgue dominated convergence theorem, we get that

sup0≤t≤TE[‖Ξε(t)‖2H]+E[∫T0‖Ξε(t)‖2Vdt]≤K{E[∫T0‖αε(t)‖2Hdt]+E[∫T0‖βε(t)‖2Hdt]+E[∫T0‖γε(t)‖2l2(H)dt]}=o(ε2). (6.8)

    The proof is complete.
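The quadratic rate in Lemma 6.1 can be checked numerically on a deterministic, one-dimensional toy system (an illustrative choice, with no noise): state dx/dt = sin(x) + u and variational equation dY/dt = cos(x̄)Y + (v − ū). Halving ε should divide the defect sup|x^ε − x̄ − εY| by roughly four.

```python
import math

# Deterministic toy check of the first-order expansion in Lemma 6.1:
# state  dx/dt = sin(x) + u,  x(0) = 1, on [0, 1] (illustrative choice);
# variational equation dY/dt = cos(xbar)*Y + (v - ubar), Y(0) = 0.
N = 2000
dt = 1.0 / N
ubar, v = 0.5, 1.0

def state(u):
    xs, x = [1.0], 1.0
    for _ in range(N):
        x += (math.sin(x) + u) * dt
        xs.append(x)
    return xs

xbar = state(ubar)

# First-order variation Y along (xbar, ubar) in direction v - ubar.
Y, y = [0.0], 0.0
for k in range(N):
    y += (math.cos(xbar[k]) * y + (v - ubar)) * dt
    Y.append(y)

def defect(eps):
    # sup_t | x^eps(t) - xbar(t) - eps*Y(t) |, expected O(eps^2)
    xe = state(ubar + eps * (v - ubar))
    return max(abs(xe[k] - xbar[k] - eps * Y[k]) for k in range(N + 1))

r = defect(0.1) / defect(0.05)
print(r)  # close to 4: the quadratic rate of (6.3)
```

Because the discrete Euler map is smooth in ε and Y is its exact first derivative at ε = 0, the defect is exactly second order in ε for this toy system.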

    Lemma 6.2. Let Assumption 4.1 be satisfied. Let (ˉu();ˉX()) be an optimal pair of Problem 4.2 associated with the first order variation process Y() (see (6.1)). Then,

J(uε(⋅))−J(ˉu(⋅))=εE[(Φx(ˉX(T)),Y(T))H]+εE[∫T0(lx(t,ˉX(t),ˉu(t)),Y(t))Hdt]+εE[∫T0(lu(t,ˉX(t),ˉu(t)),v(t)−ˉu(t))Udt]+o(ε). (6.9)

    Proof. From the definition of the cost functional (see (4.2)), we have

J(uε(⋅))−J(ˉu(⋅))=I1+I2, (6.10)

    where

I1=E[∫T0(l(t,Xε(t),uε(t))−l(t,ˉX(t),ˉu(t)))dt], I2=E[Φ(Xε(T))−Φ(ˉX(T))]. (6.11)

Let us concentrate on I1. By the Taylor expansion, Lemma 6.1 and the dominated convergence theorem, we have

I1=E[∫T0∫10(lx(t,ˉX(t)+λ(Xε(t)−ˉX(t)),ˉu(t)+λ(uε(t)−ˉu(t)))−lx(t,ˉX(t),ˉu(t)),Xε(t)−ˉX(t))Hdλdt]+E[∫T0∫10(lu(t,ˉX(t)+λ(Xε(t)−ˉX(t)),ˉu(t)+λ(uε(t)−ˉu(t)))−lu(t,ˉX(t),ˉu(t)),uε(t)−ˉu(t))Udλdt]+E[∫T0(lx(t,ˉX(t),ˉu(t)),Ξε(t))Hdt]+εE[∫T0(lx(t,ˉX(t),ˉu(t)),Y(t))Hdt]+εE[∫T0(lu(t,ˉX(t),ˉu(t)),v(t)−ˉu(t))Udt]=εE[∫T0(lx(t,ˉX(t),ˉu(t)),Y(t))Hdt]+εE[∫T0(lu(t,ˉX(t),ˉu(t)),v(t)−ˉu(t))Udt]+o(ε). (6.12)

    Similarly, we have

I2=εE[(Φx(ˉX(T)),Y(T))H]+o(ε). (6.13)

    Then putting (6.12) and (6.13) into (6.10), we get (6.9). The proof is complete.

Now we are in a position to state and prove the maximum principle for Problem 4.2.

    Theorem 6.3 (Maximum Principle). Let Assumption 4.1 be satisfied. Let (ˉu();ˉX()) be an optimal pair of Problem 4.2 associated with the adjoint processes (ˉp(),ˉq(),ˉr()). Then the following minimum condition holds:

(Hu(t,ˉX(t),ˉu(t),ˉp(t),ˉq(t),ˉr(t)),v−ˉu(t))U≥0, ∀v∈U, a.e. t∈[0,T], P-a.s. (6.14)

Proof. Recalling the adjoint equation (5.4) and the first-order variational equation (6.1), and then applying Itô's formula to (ˉp(t),Y(t))H, we have

E[(Φx(ˉX(T)),Y(T))H]=E[(ˉp(T),Y(T))H]=−E[∫T0(lx(t,ˉX(t),ˉu(t)),Y(t))Hdt]+E[∫T0(v(t)−ˉu(t),b∗u(t,ˉX(t),ˉu(t))ˉp(t)+g∗u(t,ˉX(t),ˉu(t))ˉq(t)+∑∞i=1σ∗iu(t,ˉX(t),ˉu(t))ˉri(t))Udt]. (6.15)

    Since ˉu() is the optimal control, from (6.9), the duality relation (6.15) and the definition of the Hamiltonian H (see (5.3)), we have

0≤limε→0+ [J(uε(⋅))−J(ˉu(⋅))]/ε=E[(Φx(ˉX(T)),Y(T))H]+E[∫T0(lx(t,ˉX(t),ˉu(t)),Y(t))Hdt]+E[∫T0(lu(t,ˉX(t),ˉu(t)),v(t)−ˉu(t))Udt]=E[∫T0(v(t)−ˉu(t),b∗u(t,ˉX(t),ˉu(t))ˉp(t)+g∗u(t,ˉX(t),ˉu(t))ˉq(t)+∑∞i=1σ∗iu(t,ˉX(t),ˉu(t))ˉri(t))Udt]+E[∫T0(lu(t,ˉX(t),ˉu(t)),v(t)−ˉu(t))Udt]=E[∫T0(v(t)−ˉu(t),Hu(t,ˉX(t),ˉu(t),ˉp(t),ˉq(t),ˉr(t)))Udt]. (6.16)

This implies the minimum condition (6.14), since v(⋅) is an arbitrary admissible control.
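Computation (6.16) identifies Hu along the optimal pair as the gradient of the cost, which suggests a simple numerical scheme: a forward state sweep, a backward adjoint sweep, and a gradient step on the control until Hu vanishes. The sketch below applies this to a toy deterministic discrete linear-quadratic problem (all dynamics and numbers are illustrative choices, not the paper's system).

```python
# Toy deterministic analogue of using the minimum condition computationally:
#   x_{k+1} = x_k + h*u_k,   J = sum_k h*(x_k^2 + u_k^2) + x_N^2.
# The adjoint sweep gives the gradient H_u = 2*u_k + p_{k+1}; gradient
# descent drives it to zero.
N = 100
h = 1.0 / N
u = [0.0] * N

def forward(u):
    xs, x = [1.0], 1.0
    for k in range(N):
        x += h * u[k]
        xs.append(x)
    return xs

def backward(xs):
    # Discrete adjoint equation: p_N = 2*x_N, p_k = p_{k+1} + h*2*x_k.
    p = [0.0] * (N + 1)
    p[N] = 2.0 * xs[N]
    for k in range(N - 1, -1, -1):
        p[k] = p[k + 1] + h * 2.0 * xs[k]
    return p

for _ in range(400):
    xs = forward(u)
    p = backward(xs)
    grad = [2.0 * u[k] + p[k + 1] for k in range(N)]  # H_u along the pair
    u = [u[k] - 0.25 * grad[k] for k in range(N)]

xs = forward(u)
p = backward(xs)
residual = max(abs(2.0 * u[k] + p[k + 1]) for k in range(N))
print(residual)  # near zero: the discrete minimum condition H_u = 0 holds
```

Since the toy problem is unconstrained and strictly convex in the control, driving Hu to zero is both necessary and sufficient, mirroring Theorems 6.3 and 7.1.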

In the following, we give a sufficient condition for the optimality of an admissible control of Problem 4.2, the so-called verification theorem.

Theorem 7.1 (Verification Theorem). Let Assumption 4.1 be satisfied. Let (ˉu(⋅);ˉX(⋅)) be an admissible pair of Problem 4.2 associated with the adjoint processes (ˉp(⋅),ˉq(⋅),ˉr(⋅)). Suppose that H(t,x,u,ˉp(t),ˉq(t),ˉr(t)) is convex in (x,u) and Φ(x) is convex in x; moreover, assume that the following optimality condition holds for almost all (t,ω)∈[0,T]×Ω:

H(t,ˉX(t),ˉu(t),ˉp(t),ˉq(t),ˉr(t))=min u∈U H(t,ˉX(t),u,ˉp(t),ˉq(t),ˉr(t)). (7.1)

    Then (ˉu();ˉX()) is an optimal pair of Problem 4.2.

Proof. Let (u(⋅);X(⋅)) be any given admissible pair. To simplify our notation, we define

    b(t)b(t,X(t),u(t)),ˉb(t)b(t,ˉX(t),ˉu(t)),g(t)g(t,X(t),u(t)),ˉg(t)g(t,ˉX(t),ˉu(t)),σi(t)σi(t,X(t),u(t)),ˉσi(t)σi(t,ˉX(t),ˉu(t)),H(t)H(t,X(t),u(t),ˉp(t),ˉq(t),ˉr(t)),ˉH(t)H(t,ˉX(t),ˉu(t),ˉp(t),ˉq(t),ˉr(t)). (7.2)

From the definitions of the cost functional J(u(⋅)) and the Hamiltonian H (see (4.2) and (5.3)), we can represent J(u(⋅))−J(ˉu(⋅)) as follows:

J(u(⋅))−J(ˉu(⋅))=E[∫T0(H(t)−ˉH(t)−(ˉp(t),b(t)−ˉb(t))H−(ˉq(t),g(t)−ˉg(t))H−∑∞i=1(ˉri(t),σi(t)−ˉσi(t))H)dt]+E[Φ(X(T))−Φ(ˉX(T))]. (7.3)

Then recalling the adjoint equation (5.1) and applying Itô's formula to (ˉp(t),X(t)−ˉX(t))H, we get that

E[∫T0((ˉp(t),b(t)−ˉb(t))H+(ˉq(t),g(t)−ˉg(t))H+∑∞i=1(ˉri(t),σi(t)−ˉσi(t))H)dt]=E[∫T0(ˉHx(t),X(t)−ˉX(t))Hdt]+E[(Φx(ˉX(T)),X(T)−ˉX(T))H]. (7.4)

    Then substituting (7.4) into (7.3) leads to

J(u(⋅))−J(ˉu(⋅))=E[∫T0(H(t)−ˉH(t)−(ˉHx(t),X(t)−ˉX(t))H)dt]+E[Φ(X(T))−Φ(ˉX(T))−(Φx(ˉX(T)),X(T)−ˉX(T))H]. (7.5)

On the other hand, the convexity of H in (x,u) and of Φ in x yields

H(t)−ˉH(t)≥(ˉHx(t),X(t)−ˉX(t))H+(ˉHu(t),u(t)−ˉu(t))U, (7.6)

    and

Φ(X(T))−Φ(ˉX(T))≥(Φx(ˉX(T)),X(T)−ˉX(T))H. (7.7)

    In addition, the optimality condition (7.1) and the convex optimization principle (see Proposition 2.21 of [9]) yield that for almost all (t,ω)[0,T]×Ω,

(ˉHu(t),u(t)−ˉu(t))U≥0. (7.8)

    Then putting (7.6)–(7.8) into (7.5), we get that

J(u(⋅))−J(ˉu(⋅))≥0. (7.9)

    Therefore, since u() is arbitrary, ˉu() is an optimal control process and (ˉu();ˉX()) is an optimal pair. The proof is complete.

In this section, we apply our theoretical results to a specific example: an optimal control problem for a controlled Cauchy problem driven by Teugels martingales.

First of all, let us recall some preliminaries on Sobolev spaces. For m=0,1, we define the space Hm≜{ϕ:∂αzϕ∈L2(Rd) for every multi-index α:=(α1,…,αd) with |α|:=α1+⋯+αd≤m} with the norm

‖ϕ‖m≜{∑|α|≤m∫Rd|∂αzϕ(z)|2dz}1/2.

We denote by H−1 the dual space of H1 and set V=H1, H=H0, V∗=H−1. Then (V,H,V∗) is a Gelfand triple.

We take the control space U=H and the control domain U=U, so the control is unconstrained. The admissible control set A is defined as M2F(0,T;U). For any admissible control u(⋅,⋅)∈M2F(0,T;U), we consider a controlled Cauchy problem whose state is governed by a stochastic partial differential equation driven by the Brownian motion W and the Teugels martingales (Hi)∞i=1 in the following divergence form:

{dy(t,z)= {∂zi[aij(t,z)∂zjy(t,z)]+bi(t,z)∂ziy(t,z)+c(t,z)y(t,z)+u(t,z)}dt+{∂zi[ηi(t,z)y(t,z)]+ρ(t,z)y(t,z)+u(t,z)}dW(t)+∑∞i=1[Γi(t,z)y(t,z)+u(t,z)]dHi(t), (t,z)∈[0,T]×Rd, y(0,z)= ξ(z), z∈Rd, (8.1)

where the coefficients aij,bi,ηi,c,ρ:[0,T]×Ω×Rd→R and Γi:[0,T]×Ω×Rd→R are given random mappings satisfying suitable measurability conditions. Here we use the Einstein summation convention in ∂zi[aij(t,z)∂zjy(t,z)], bi(t,z)∂ziy(t,z) and ∂zi[ηi(t,z)y(t,z)].
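The deterministic core of (8.1) is a divergence-form parabolic equation. The sketch below solves a heavily simplified instance in one space dimension (constant a = 1, Dirichlet conditions on [0,1], no noise, no lower-order or control terms, all of which are illustrative simplifications) with explicit finite differences and compares against the exact Fourier-mode solution.

```python
import math

# Deterministic core of the Cauchy problem (8.1) in one space dimension:
#   dy/dt = d/dz ( a * dy/dz ),  y(0,z) = sin(pi*z),
# with a = 1 and Dirichlet conditions on [0,1].
# The exact solution is exp(-pi^2 t) * sin(pi*z).
a = 1.0
M = 100                    # spatial intervals
dz = 1.0 / M
dt = 0.4 * dz * dz / a     # explicit-scheme stability: dt <= dz^2 / (2a)
T = 0.01
steps = int(round(T / dt))

y = [math.sin(math.pi * k * dz) for k in range(M + 1)]
for _ in range(steps):
    y = [0.0] + [
        y[k] + a * dt / dz**2 * (y[k + 1] - 2 * y[k] + y[k - 1])
        for k in range(1, M)
    ] + [0.0]

t = steps * dt
err = max(
    abs(y[k] - math.exp(-math.pi**2 * t) * math.sin(math.pi * k * dz))
    for k in range(M + 1)
)
print(err)  # small discretization error
```

The stability restriction dt ≤ dz²/(2a) on the explicit scheme is a discrete shadow of the parabolicity of the operator; the super-parabolicity condition of Assumption 8.2 below plays the analogous role for the stochastic problem.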

For any admissible control u(⋅,⋅)∈A, the following definition gives the generalized weak solution to (8.1).

Definition 8.1. An R-valued, P×B(Rd)-measurable process y(⋅,⋅) is called a solution to (8.1) if y(⋅,⋅)∈M2F(0,T;H) and, for every ϕ∈V and a.e. (t,ω)∈[0,T]×Ω, it holds that

∫Rdy(t,z)ϕ(z)dz= ∫Rdξ(z)ϕ(z)dz−∫t0∫Rdaij(s,z)∂zjy(s,z)∂ziϕ(z)dzds+∫t0∫Rd[bi(s,z)∂ziy(s,z)+c(s,z)y(s,z)+u(s,z)]ϕ(z)dzds−∫t0∫Rdηi(s,z)y(s,z)∂ziϕ(z)dzdW(s)+∫t0∫Rd[ρ(s,z)y(s,z)+u(s,z)]ϕ(z)dzdW(s)+∑∞i=1∫t0∫Rd[Γi(s,z)y(s,z)+u(s,z)]ϕ(z)dzdHi(s). (8.2)

    For any admissible control process u(,) and the solution y(,) of the corresponding state equation (8.1), the objective of the control problem is to minimize the following cost functional

J(u(⋅))=E[∫Rdy2(T,z)dz+∫[0,T]×Rdy2(s,z)dsdz+∫[0,T]×Rdu2(s,z)dsdz]. (8.3)

To make the control problem well-defined, we impose the following assumptions on the coefficients a, b, c, η, ρ, Γ, for some fixed constants K∈(1,∞) and κ∈(0,1):

Assumption 8.1. The functions a, b, c, η and ρ are P×B(Rd)-measurable with values in the set of real symmetric d×d matrices, Rd, R, Rd and R, respectively, and are bounded by K. The function Γ is P×B(Rd)-measurable with values in l2(R) and is bounded by K. ξ∈L2(Rd).

    Assumption 8.2. The super-parabolic condition holds, i.e.,

κI+η(t,z)(η(t,z))⊤≤2a(t,ω,z)≤KI, ∀(t,ω,z)∈[0,T]×Ω×Rd,

    where I is the (d×d)-identity matrix.
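For a given coefficient pair the two matrix inequalities of Assumption 8.2 can be checked pointwise: both 2a − (κI + ηη⊤) and KI − 2a must be positive semidefinite. The sketch below does this for a sample pair in d = 2 (all numbers are illustrative choices), using the fact that a symmetric 2×2 matrix is positive semidefinite iff its trace and determinant are nonnegative.

```python
# Pointwise check of the super-parabolicity condition (Assumption 8.2) for
# a sample coefficient pair in d = 2:
#   kappa*I + eta*eta^T <= 2a <= K*I
# holds iff the two difference matrices below are positive semidefinite.
kappa, K = 0.1, 10.0
a = [[1.0, 0.2], [0.2, 0.8]]      # symmetric, uniformly elliptic sample
eta = [0.5, -0.3]

def psd2(m):
    # A symmetric 2x2 matrix is PSD iff trace >= 0 and det >= 0.
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return tr >= 0 and det >= 0

lower = [[2 * a[i][j] - kappa * (i == j) - eta[i] * eta[j] for j in range(2)]
         for i in range(2)]
upper = [[K * (i == j) - 2 * a[i][j] for j in range(2)] for i in range(2)]

print(psd2(lower), psd2(upper))  # True True: Assumption 8.2 holds here
```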

In order to apply the abstract theoretical results of Sections 6 and 7 to this optimal control problem, we now transform (8.1) into an SEE driven by Teugels martingales of the form (3.1). Set

X(t)≜y(t,⋅),
(A(t)ϕ)(z)≜∂zi[aij(t,z)∂zjϕ(z)]+bi(t,z)∂ziϕ(z)+c(t,z)ϕ(z), ϕ∈V,
(B(t)ϕ)(z)≜∂zi[ηi(t,z)ϕ(z)]+ρ(t,z)ϕ(z), ϕ∈V,
b(t,ϕ,u)≜u, ϕ∈H, u∈U,
g(t,ϕ,u)≜u, ϕ∈H, u∈U,
σi(t,ϕ,u)≜Γi(t)ϕ+u, ϕ∈H, u∈U,
l(t,ϕ,u)≜(ϕ,ϕ)H+(u,u)U, ϕ∈H, u∈U,
Φ(ϕ)≜(ϕ,ϕ)H, ϕ∈H.

In the Gelfand triple (V,H,V∗), using the above notation, we can rewrite the state equation (8.1) as follows:

{dX(t)= [A(t)X(t)+b(t,X(t),u(t))]dt+[B(t)X(t)+g(t,X(t),u(t))]dW(t)+∑∞i=1σi(t,X(t),u(t))dHi(t), t∈[0,T], X(0)= x, (8.4)

    and the cost functional (8.3) can be rewritten as

J(u(⋅))=E[∫T0l(t,X(t),u(t))dt+Φ(X(T))], (8.5)

    where we set

l(t,x,u)≜(x,x)H+(u,u)U, x∈H, u∈U, Φ(x)≜(x,x)H, x∈H. (8.6)

Thus this optimal control problem is transformed into a special case of Problem 4.2. Under Assumptions 8.1 and 8.2, it is easy to check that the coefficients of this optimal control problem satisfy Assumption 4.1, so Theorems 6.3 and 7.1 apply in this case. Moreover, from the a priori estimate (3.8), it is easy to see that the cost functional J(u(⋅)) is a strictly convex, coercive, lower-semicontinuous functional on the reflexive Banach space M2F(0,T;U). Therefore the existence and uniqueness of the optimal control follow from the convex optimization principle (see Proposition 2.12 of [9]). Let (ˉu(⋅),ˉX(⋅)) be the optimal pair. In the following, we give a duality characterization of the optimal control ˉu(⋅) via the maximum principle. More precisely, in this case the corresponding Hamiltonian H becomes

H(t,x,u,p,q,r):=(u,p)H+(u,q)H+∑∞i=1(Γi(t)x+u,ri)H+(x,x)H+(u,u)U. (8.7)

The corresponding adjoint equation becomes

{dˉp(t)=−[A∗(t)ˉp(t)+B∗(t)ˉq(t)+∑∞i=1Γ∗i(t)ˉri(t)+2ˉX(t)]dt+ˉq(t)dW(t)+∑∞i=1ˉri(t)dHi(t), 0≤t≤T, ˉp(T)=2ˉX(T), (8.8)

    where

A∗(t)ϕ(z)≜∂zi[aij(t,z)∂zjϕ(z)]−∂zi[bi(t,z)ϕ(z)]+c(t,z)ϕ(z), ϕ∈V,
B∗(t)ϕ(z)≜−ηi(t,z)∂ziϕ(z)+ρ(t,z)ϕ(z), ϕ∈V,
Γ∗i(t)ϕ(z)≜Γi(t,z)ϕ(z), ϕ∈H.

Since U=U, there is no constraint on the control, and therefore the minimum condition (6.14) reduces to

    Hu(t,ˉX(t),ˉu(t),ˉp(t),ˉq(t),ˉr(t))=0, (8.9)

which implies that

2ˉu(t)+ˉp(t)+ˉq(t)+∑∞i=1ˉri(t)=0, (8.10)

a.e. t∈[0,T], P-a.s. Thus the optimal control ˉu(⋅) is given by

ˉu(t)=−(1/2)[ˉp(t)+ˉq(t)+∑∞i=1ˉri(t)].

Remark 8.1. The above example can be regarded as a special case of the infinite-dimensional linear-quadratic control problem driven by Teugels martingales, which can also be applied to more practical problems such as partially observed optimal control driven by Teugels martingales and optimal harvesting problems associated with Lévy processes. We will give detailed investigations of these applications in future publications.

In this paper, we have studied an infinite-dimensional optimal control problem for stochastic evolution systems driven by Teugels martingales, in which the control variable enters the diffusion term of the state equation and the control domain is convex. We first provided existence, uniqueness and continuous dependence theorems for solutions of SEEs driven by Teugels martingales. Then we established necessary and sufficient conditions for optimal controls in the form of maximum principles by the convex variational technique. As an application, we considered an optimal control problem for a Cauchy problem of a controlled stochastic partial differential equation and obtained the dual characterization of the optimal control in terms of the solution of the corresponding stochastic Hamiltonian system. Further investigations of the optimal control problem under more general control domain assumptions, and of more practical applications, will be carried out in our future publications.

    The authors would like to thank anonymous referees for helpful comments and suggestions which improved the original version of the paper. Q. Meng was supported by the Key Projects of Natural Science Foundation of Zhejiang Province (No. Z22A013952) and the National Natural Science Foundation of China (No. 11871121). Q. Shi was supported by the Natural Science Foundation of Zhejiang Province (No. LQ19A010001). Maoning Tang was supported by the Natural Science Foundation of Zhejiang Province (No. LY21A010001).

    The authors declare that they have no competing interests.



[1] S. Albeverio, J. L. Wu, T. S. Zhang, Parabolic SPDEs driven by Poisson white noise, Stochastic Processes Appl., 74 (1998), 21-36. doi: 10.1016/S0304-4149(97)00112-9.
[2] K. Bahlali, M. Eddahbi, E. Essaky, BSDE associated with Lévy processes and application to PDIE, J. Appl. Math. Stochastic Anal., 16 (2003), 1-17. doi: 10.1155/s1048953303000017.
[3] A. Bensoussan, Stochastic maximum principle for distributed parameter systems, J. Franklin Inst., 315 (1983), 387-406. doi: 10.1016/0016-0032(83)90059-5.
[4] P. Benner, C. Trautwein, A linear quadratic control problem for the stochastic heat equation driven by Q-Wiener processes, J. Math. Anal. Appl., 457 (2018), 776-802. doi: 10.1016/j.jmaa.2017.08.052.
[5] S. Chen, S. Tang, Semi-linear backward stochastic integral partial differential equations driven by a Brownian motion and a Poisson point process, arXiv. Available from: https://arXiv.org/abs/1007.3201.
[6] P. L. Chow, Stochastic partial differential equations, CRC Press, 2014. doi: 10.1201/9781420010305.
[7] G. Da Prato, J. Zabczyk, Stochastic equations in infinite dimensions, Cambridge University Press, 2014. doi: 10.1017/CBO9781107295513.
[8] K. Du, Q. Meng, A maximum principle for optimal control of stochastic evolution equations, SIAM J. Control Optim., 51 (2013), 4343-4362. doi: 10.1137/120882433.
[9] I. Ekeland, R. Témam, Convex analysis and variational problems, North-Holland, Amsterdam, 1999. doi: 10.1137/1.9781611971088.
[10] M. El Otmani, Generalized BSDE driven by a Lévy process, Int. J. Stochastic Anal., 2006 (2006), 085407. doi: 10.1155/JAMSA/2006/85407.
[11] M. El Otmani, Backward stochastic differential equations associated with Lévy processes and partial integro-differential equations, Commun. Stochastic Anal., 2 (2008), 277-288. doi: 10.31390/cosa.2.2.07.
[12] M. Fuhrman, Y. Hu, G. Tessitore, Stochastic maximum principle for optimal control of SPDEs, Appl. Math. Optim., 68 (2013), 181-217. doi: 10.1016/j.crma.2012.07.009.
[13] M. Fuhrman, C. Orrieri, Stochastic maximum principle for optimal control of a class of nonlinear SPDEs with dissipative drift, SIAM J. Control Optim., 54 (2016), 341-371. doi: 10.1137/15m1012888.
[14] I. Gyöngy, N. V. Krylov, On stochastic equations with respect to semimartingales II. Itô formula in Banach spaces, Stochastics, 6 (1982), 153-173. doi: 10.1080/17442508208833202.
[15] Y. Hu, N-person differential games governed by semilinear stochastic evolution systems, Appl. Math. Optim., 24 (1991), 257-271. doi: 10.1007/BF01447745.
[16] Y. Hu, S. Peng, Maximum principle for semilinear stochastic evolution control systems, Stochastics Stochastic Rep., 33 (1990), 159-180. doi: 10.1080/17442509008833671.
[17] S. Lenhart, J. Xiong, J. Yong, Optimal controls for stochastic partial differential equations with an application in population modeling, SIAM J. Control Optim., 54 (2016), 495-535. doi: 10.1137/15m1010233.
[18] Q. Lü, X. Zhang, General Pontryagin-type stochastic maximum principle and backward stochastic evolution equations in infinite dimensions, Springer, 2014. doi: 10.1007/978-3-319-06632-5.
[19] Q. Meng, M. Tang, Necessary and sufficient conditions for optimal control of stochastic systems associated with Lévy processes, Sci. China Ser. F-Inf. Sci., 52 (2009), 1982. doi: 10.1007/s11432-009-0191-9.
[20] K. Mitsui, Y. Tabata, A stochastic linear-quadratic problem with Lévy processes and its application to finance, Stochastic Processes Appl., 118 (2008), 120-152. doi: 10.1016/j.spa.2007.03.011.
[21] D. Nualart, W. Schoutens, Chaotic and predictable representations for Lévy processes, Stochastic Processes Appl., 90 (2000), 109-122. doi: 10.1016/s0304-4149(00)00035-1.
[22] D. Nualart, W. Schoutens, Backward stochastic differential equations and Feynman-Kac formula for Lévy processes, with applications in finance, Bernoulli, 7 (2001), 761-776. doi: 10.2307/3318541.
[23] B. Øksendal, F. Proske, T. Zhang, Backward stochastic partial differential equations with jumps and application to optimal control of random jump fields, Stochastics Int. J. Probab. Stochastic Processes, 77 (2005), 381-399. doi: 10.1080/17442500500213797.
[24] C. Orrieri, P. Veverka, Necessary stochastic maximum principle for dissipative systems on infinite time horizon, ESAIM: Control Optim. Calculus Var., 23 (2017), 337-371. doi: 10.1051/cocv/2015054.
[25] S. Peng, A general stochastic maximum principle for optimal control problems, SIAM J. Control Optim., 28 (1990), 966-979. doi: 10.1137/0328054.
[26] C. Prévôt, M. Röckner, A concise course on stochastic partial differential equations, Berlin: Springer, 2007. doi: 10.1007/978-3-540-70781-3.
[27] Y. Ren, H. Dai, R. Sakthivel, Approximate controllability of stochastic differential systems driven by a Lévy process, Int. J. Control, 86 (2013), 1158-1164. doi: 10.1080/00207179.2013.786188.
[28] Y. Ren, M. El Otmani, Generalized reflected BSDEs driven by a Lévy process and an obstacle problem for PDIEs with a nonlinear Neumann boundary condition, J. Comput. Appl. Math., 233 (2010), 2027-2043. doi: 10.1016/j.cam.2009.09.037.
[29] Y. Ren, X. Fan, Reflected backward stochastic differential equations driven by a Lévy process, ANZIAM J., 50 (2009), 486-500. doi: 10.1017/s1446181109000303.
[30] M. Röckner, T. Zhang, Stochastic evolution equations of jump type: Existence, uniqueness and large deviation principles, Potential Anal., 26 (2007), 255-279. doi: 10.1007/s11118-006-9035-z.
[31] Y. Ren, R. Sakthivel, Existence, uniqueness, and stability of mild solutions for second-order neutral stochastic evolution equations with infinite delay and Poisson jumps, J. Math. Phys., 53 (2012), 073517. doi: 10.1063/1.4739406.
[32] R. Sakthivel, Y. Ren, Exponential stability of second-order stochastic evolution equations with Poisson jumps, Commun. Nonlinear Sci. Numer. Simul., 17 (2012), 4517-4523. doi: 10.1016/j.cnsns.2012.04.020.
[33] H. Tang, Z. Wu, Stochastic differential equations and stochastic linear quadratic optimal control problem with Lévy processes, J. Syst. Sci. Complex., 22 (2009), 122-136. doi: 10.1007/s11424-009-9151-0.
[34] M. Tang, Q. Meng, Stochastic evolution equations of jump type with random coefficients: Existence, uniqueness and optimal control, Sci. China Inf. Sci., 60 (2017). doi: 10.1007/s11432-016-9107-1.
[35] M. Tang, Q. Zhang, Optimal variational principle for backward stochastic control systems associated with Lévy processes, Sci. China Math., 55 (2012), 745-761. doi: 10.1007/s11425-012-4370-6.
[36] J. B. Walsh, Finite element methods for parabolic stochastic PDE's, Potential Anal., 23 (2005), 1-43. doi: 10.1007/s11118-004-2950-y.
[37] W. A. Woyczyński, Lévy processes in the physical sciences, In: O. E. Barndorff-Nielsen, S. I. Resnick, T. Mikosch, Lévy processes, Boston: Birkhäuser, (2001), 241-266. doi: 10.1007/978-1-4612-0197-7_11.
[38] X. Yang, J. Zhai, T. Zhang, Large deviations for SPDEs of jump type, Stoch. Dynam., 15 (2015), 1550026. doi: 10.1142/s0219493715500264.
[39] H. Zhao, S. Xu, Freidlin-Wentzell's large deviations for stochastic evolution equations with Poisson jumps, Adv. Pure Math., 6 (2016), 676-694. doi: 10.4236/apm.2016.610056.
[40] J. Zhai, T. Zhang, Large deviations for 2-D stochastic Navier-Stokes equations driven by multiplicative Lévy noises, Bernoulli, 21 (2015), 2351-2392. doi: 10.3150/14-BEJ647.
[41] X. Zhou, On the necessary conditions of optimal controls for stochastic partial differential equations, SIAM J. Control Optim., 31 (1993), 1462-1478. doi: 10.1137/0331068.
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)