In this paper we establish and prove two new sufficiency theorems for weak and strong minima for an optimal control problem of Lagrange with fixed end-points, nonlinear dynamics, nonlinear isoperimetric inequality and equality constraints, and pointwise mixed nonlinear time-state-control inequality and equality restrictions. The proof of the sufficiency theorems is independent of the classical methods used to obtain sufficiency in optimal control problems of this type; see for example [32], where the embedding of the original optimal control problem in a Banach space is a fundamental component of the corresponding sufficiency theory; [16], where the construction of a bounded solution of a matrix Riccati equation is crucial; or [8,19], where a verification function and a quadratic function satisfying a Hamilton-Jacobi inequality are indispensable tools. Concretely, the sufficiency theorems of this article state that if an admissible process satisfies a first order sufficient condition related to the Pontryagin maximum principle, a hypothesis similar to the necessary Legendre-Clebsch condition, the positivity of a quadratic integral on the cone of critical directions, and several Weierstrass conditions on certain functions, one of which plays a role similar to that of the Hamiltonian of the problem, then that admissible process is a strict local minimum. In addition, the set of active indices of the mixed time-state-control inequality constraints must be piecewise constant on the underlying time interval, and the Lagrange multipliers of the inequality constraints must be nonnegative and must vanish whenever the associated index is inactive.
Additionally, the proposed optimal controls need not be continuous on the underlying time interval but only measurable; see for example [7,8,9,13,14,15,16,17,18,19,21,22,25,26,27,29,32], where the continuity of the proposed optimal control is a crucial assumption in sufficiency theories for optimal control problems having the same degree of generality as those studied in this article. In contrast, in Examples 2.3 and 2.4 we exhibit two proposed optimal processes whose controls are merely measurable and satisfy all the hypotheses of Theorems 2.1 and 2.2, and which are therefore strict local minima.
It is also worth mentioning that in the new sufficiency theorems for local minima presented in this paper, all the premises that an admissible process must satisfy in order to be an optimal process are imposed as hypotheses of the theorems. This contrasts with other second order necessary and sufficient theories that depend upon the verifiability of some preliminary assumptions; see for example [2,3,5,6,11,24], where the necessary second order conditions for optimality rely on previous hypotheses involving the full rank of a matrix arising from the linear independence of the gradients of the active inequality and equality constraints, together with further assumptions involving notions of regularity or normality of a solution; or [28], where the corresponding sufficiency theory depends upon the existence of a continuous function dominating the norm of one of the partial derivatives of the dynamics of the problem. Another remarkable feature of this theory is that our sufficiency treatment not only provides sufficient conditions for strict local minima but also allows one to measure the deviation between admissible costs and the optimal cost. This deviation involves a functional playing the role of the square of a norm of the Banach space $ L^1 $; see for example [1,23], where similar estimates of the growth of the objective functional around the optimal control are established.
On the other hand, it is worth pointing out the existence of some recent optimal control theories which also study optimal control problems with functional inequality or equality restrictions such as the isoperimetric constraints of this paper. Concretely, in [20], necessary optimality conditions for a Mayer optimal control problem involving semilinear unbounded evolution inclusions and inequality and equality Lipschitzian restrictions are obtained by constructing a sequence of discrete approximations and proving that the optimal solutions of the discrete approximations converge uniformly to a given optimal process for the primary continuous-time problem. In [12], necessary and sufficient optimality conditions for Mayer optimal control problems involving differential inclusions and functional inequality constraints are presented, and the authors study Mayer optimal control problems with higher order differential inclusions and inequality functional constraints. The necessary conditions for optimality obtained in [12] are important generalizations of the first order differential inclusion optimality settings established in [4,10,20]. The sufficiency conditions obtained in [12] include second order discrete inclusions with inequality end-point constraints, and convex and nonsmooth analysis play a crucial role in that sufficiency treatment. Moreover, one of the fundamental novelties of the work in [12] concerns the derivation of sufficient optimality conditions for Mayer optimal control problems governed by $ m $-th order ordinary differential inclusions with $ m\ge3 $.
The paper is organized as follows. In Section 2, we pose the problem we shall deal with together with some basic definitions, the statement of the main results, and two examples illustrating the sufficiency theorems of the article. Section 3 is devoted to stating an auxiliary lemma on which the proof of Theorem 2.1, given in the same section, strongly relies. Section 4 states another auxiliary result on which the proof of Theorem 2.2, again given in that section, is based.
Suppose we are given an interval $ T: = [t_0, t_1] $ in $ {{\bf R}} $, two fixed points $ \xi_0 $, $ \xi_1 $ in $ {{\bf R}^n} $, functions $ L $, $ L_\gamma $ $ (\gamma = 1, \ldots, K) $ mapping $ {T \times {{\bf R}^n} \times {{\bf R}^m}} $ to $ {{\bf R}} $, two functions $ f $ and $ \varphi = (\varphi_1, \ldots, \varphi_s) $ mapping $ {T \times {{\bf R}^n} \times {{\bf R}^m}} $ to $ {{\bf R}^n} $ and $ {{\bf R}^s} $ respectively. Let
$ {{\cal A}}: = \{(t, x, u)\in{T \times {{\bf R}^n} \times {{\bf R}^m}} \mid \varphi_\alpha(t, x, u)\le0 \, (\alpha\in R), \, \varphi_\beta(t, x, u) = 0 \, (\beta\in S) \} $ |
where $ R: = \{1, \ldots, r\} $ and $ S: = \{r+1, \ldots, s\} $ $ (r = 0, 1, \ldots, s) $. If $ r = 0 $ then $ R = \emptyset $ and we disregard statements involving $ \varphi_\alpha $. Similarly, if $ r = s $ then $ S = \emptyset $ and we disregard statements involving $ \varphi_\beta $.
Let $ \{\Lambda_n\} $ be a sequence of measurable functions and let $ \Lambda $ be a measurable function. We shall say that the sequence of measurable functions $ \{\Lambda_n\} $ converges almost uniformly to a function $ \Lambda $ on $ T $, if given $ {\epsilon > 0} $, there exists a measurable set $ \Upsilon_\epsilon\subset T $ with $ m(\Upsilon_\epsilon) < \epsilon $ such that $ \{\Lambda_n\} $ converges uniformly to $ \Lambda $ on $ T\setminus \Upsilon_\epsilon $. We will also denote uniform convergence by $ \Lambda_n \buildrel \hbox{u} \over \longrightarrow \Lambda $, almost uniform convergence by $ \Lambda_n \buildrel \hbox{au} \over \longrightarrow \Lambda $, strong convergence in $ L^p $ by $ \Lambda_n \buildrel L^p \over \longrightarrow \Lambda $ and weak convergence in $ L^p $ by $ \Lambda_n \buildrel L^p \over \rightharpoonup \Lambda $. From now on we shall not relabel the subsequences of a given sequence since this fact will not alter our results.
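The notion of almost uniform convergence can be illustrated numerically. The following sketch uses a standard textbook example of our own choice (not taken from the paper): $ \Lambda_n(t) = t^n $ on $ T = [0, 1] $ converges pointwise to $ 0 $ on $ [0, 1) $ but not uniformly on all of $ T $, since $ \sup_t|t^n| = 1 $ for every $ n $; deleting the small set $ \Upsilon_\epsilon = (1-\epsilon, 1] $ restores uniform convergence.

```python
import numpy as np

# Hedged illustration (our own example, not the paper's): Lambda_n(t) = t**n
# on T = [0, 1] converges pointwise to 0 on [0, 1) but not uniformly on T.
# Off the small set Upsilon_eps = (1 - eps, 1] the convergence is uniform,
# which is exactly the almost-uniform convergence defined above.
def sup_error(n, eps):
    """sup of |Lambda_n - 0| on a fine grid of T \\ Upsilon_eps."""
    t = np.linspace(0.0, 1.0 - eps, 10_001)
    return float(np.max(t ** n))

eps = 0.1
sups = [sup_error(n, eps) for n in (1, 10, 100, 1000)]
assert all(b < a for a, b in zip(sups, sups[1:]))  # errors strictly decrease
assert sups[-1] < 1e-40                            # uniform limit off Upsilon_eps
```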
It will be assumed throughout the paper that $ L $, $ L_\gamma $ $ (\gamma = 1, \ldots, K) $, $ f $ and $ \varphi $ have first and second derivatives with respect to $ x $ and $ u $. Also, if we denote by $ b(t, x, u) $ either $ L(t, x, u) $, $ L_\gamma(t, x, u) $ $ (\gamma = 1, \ldots, K) $, $ f(t, x, u) $, $ \varphi(t, x, u) $ or any of their partial derivatives of order less than or equal to two with respect to $ x $ and $ u $, we shall assume that if $ {{\cal B}} $ is any bounded subset of $ {T \times {{\bf R}^n} \times {{\bf R}^m}} $, then $ |b({{\cal B}})| $ is a bounded subset of $ {{\bf R}} $. Additionally, we shall assume that if $ \{(\Phi_q, \Psi_q)\} $ is any sequence in $ {AC}(T; {{\bf R}^n})\times L^\infty(T; {{\bf R}^m}) $ such that for some measurable $ \Upsilon\subset T $ and some $ (\Phi_0, \Psi_0)\in{AC}(T; {{\bf R}^n})\times L^\infty(T; {{\bf R}^m}) $, $ (\Phi_q, \Psi_q) \buildrel L^\infty \over \longrightarrow (\Phi_0, \Psi_0) $ on $ \Upsilon $, then for all $ q\in{{\bf N}} $, $ b(\cdot, \Phi_q(\cdot), \Psi_q(\cdot)) $ is measurable on $ \Upsilon $ and
$ b(\cdot, \Phi_q(\cdot), \Psi_q(\cdot)) \buildrel L^\infty \over \longrightarrow b(\cdot, \Phi_0(\cdot), \Psi_0(\cdot)) \ \hbox{on $\Upsilon$}. $ |
Note that all conditions given above are satisfied if the functions $ L $, $ L_\gamma $ $ (\gamma = 1, \ldots, K) $, $ f $ and $ \varphi $ and their first and second derivatives with respect to $ x $ and $ u $ are continuous on $ {T \times {{\bf R}^n} \times {{\bf R}^m}} $.
The fixed end-point optimal control problem we shall deal with, denoted by (P), is that of minimizing the functional
$ I(x, u): = {\int_{t_0}^{t_1}} L(t, x(t), u(t))dt $ |
over all couples $ (x, u) $ with $ x\colon T\to{{\bf R}^n} $ absolutely continuous and $ u\colon T\to{{\bf R}^m} $ essentially bounded, satisfying the constraints
$ \left\{ \begin{array}{l} \dot x(t) = f(t, x(t), u(t)) \ (\hbox{a.e. in}\ T). \\ x(t_0) = \xi_0, \quad x(t_1) = \xi_1. \\ I_i(x, u): = {\int_{t_0}^{t_1}} L_i(t, x(t), u(t))dt \le 0 \quad (i = 1, \ldots, k). \\ I_j(x, u): = {\int_{t_0}^{t_1}} L_j(t, x(t), u(t))dt = 0 \quad (j = k+1, \ldots, K). \\ (t, x(t), u(t)) \in {{\cal A}} \quad (t\in T). \end{array} \right. $
Denote by $ {{\cal X}} $ the space of all absolutely continuous functions mapping $ T $ to $ {{\bf R}^n} $ and by $ {{\cal U}}_c: = L^\infty(T; {{\bf R}}^c) $ $ (c\in{{\bf N}}) $. Elements of $ {{\cal X}}\times {{\cal U}}_m $ will be called processes and a process $ (x, u) $ is admissible if it satisfies the constraints. A process $ (x, u) $ solves (P) if it is admissible and $ I(x, u)\le I(y, v) $ for all admissible processes $ (y, v) $. An admissible process $ (x, u) $ is called a strong minimum of (P) if it is a minimum of $ I $ with respect to the norm
$ \|x\|: = \sup\limits_{t\in T}|x(t)|, $ |
that is, if for any $ {\epsilon > 0} $, $ I(x, u)\le I(y, v) $ for all admissible processes $ (y, v) $ satisfying $ \|y-x\| < \epsilon $. An admissible process $ (x, u) $ is called a weak minimum of (P) if it is a minimum of $ I $ with respect to the norm
$ \|(x, u)\|: = \|x\|+\|u\|_\infty, $ |
that is, if for any $ {\epsilon > 0} $, $ I(x, u)\le I(y, v) $ for all admissible processes $ (y, v) $ satisfying $ \|(y, v)-(x, u)\| < \epsilon $. It is a strict minimum if $ I(x, u) = I(y, v) $ only in case $ (x, u) = (y, v) $. Note that the crucial difference between strong and weak minima lies in the norm measuring closeness: if $ I $ affords a strong minimum at $ (x_0, u_0) $, then $ I(x, u)\ge I(x_0, u_0) $ for every admissible $ (x, u) $ for which the quantity $ \|x-x_0\|_\infty $ alone is sufficiently small, whereas if $ I $ affords a weak minimum at $ (x_0, u_0) $, the same conclusion holds when both quantities $ \|x-x_0\|_\infty $ and $ \|u-u_0\|_\infty $ are sufficiently small.
The following definitions will be useful in order to continue with the development of this theory.
$ {\bullet} $ For any $ (x, u)\in {{\cal X}}\times {{\cal U}}_m $ we shall use the notation $ ({{\tilde x}}(t)) $ to represent $ (t, x(t), u(t)) $. Similarly $ ({{\tilde x}}_0(t)) $ represents $ (t, x_0(t), u_0(t)) $. Throughout the paper the notation "$ \ast $" will denote transpose.
$ {\bullet} $ Given $ K $ real numbers $ \lambda_1, \ldots, \lambda_K $, consider the functional $ I_0\colon{{\cal X}}\times{{\cal U}}_m\to{{\bf R}} $ defined by
$ I_0(x, u): = I(x, u)+\sum\limits_{\gamma = 1}^K\lambda_\gamma I_\gamma(x, u) = {\int_{t_0}^{t_1}} L_0({{\tilde x}}(t))dt, $ |
where $ L_0\colon{T \times {{\bf R}^n} \times {{\bf R}^m}}\to{{\bf R}} $ is given by
$ L_0(t, x, u): = L(t, x, u)+\sum\limits_{\gamma = 1}^K\lambda_\gamma L_\gamma(t, x, u). $ |
$ \bullet $ For all $ (t, x, u, \rho, \mu) \in T \times {{\bf R}^n} \times {{\bf R}^m} \times {{\bf R}^n}\times {{\bf R}^s} $, set
$ H(t, x, u, \rho, \mu): = \rho^\ast f(t, x, u) - L_0(t, x, u) - \mu^\ast\varphi(t, x, u). $ |
Given $ \rho \in {{\cal X}} $ and $ \mu\in {{\cal U}}_s $ define, for all $ (t, x, u) \in {T \times {{\bf R}^n} \times {{\bf R}^m}} $,
$ F_0(t, x, u): = -H(t, x, u, \rho(t), \mu(t)) - {\dot \rho}^\ast(t)x $ |
and let
$ J_0(x, u): = \rho^\ast(t_1)\xi_1 - \rho^\ast(t_0)\xi_0 + {\int_{t_0}^{t_1}} F_0({{\tilde x}}(t))dt. $ |
$ \bullet $ Consider the first variations of $ J_0 $ and $ I_\gamma $ $ (\gamma = 1, \ldots, K) $ with respect to $ (x, u) \in {{\cal X}}\times{{\cal U}}_m $ over $ (y, v)\in {{\cal X}}\times L^2(T; {{\bf R}^m}) $ which are given, respectively, by
$ {J^\prime}_0((x, u);(y, v)) : = {\int_{t_0}^{t_1}} \{ F_{0x}({{\tilde x}}(t))y(t) + F_{0u}({{\tilde x}}(t))v(t) \} dt, $ |
$ {I^\prime}_\gamma((x, u);(y, v)): = {\int_{t_0}^{t_1}} \{ L_{\gamma x}({{\tilde x}}(t))y(t) + L_{\gamma u}({{\tilde x}}(t))v(t) \} dt. $ |
The second variation of $ J_0 $ with respect to $ (x, u) \in {{\cal X}}\times{{\cal U}}_m $ over $ (y, v)\in {{\cal X}} \times L^2(T; {{\bf R}^m}) $ is given by
$ {J^{\prime\prime}}_0((x, u);(y, v)): = {\int_{t_0}^{t_1}} 2 \Omega_0 ({{\tilde x}}(t);t, y(t), v(t))dt $ |
where, for all $ (t, y, v) \in {T \times {{\bf R}^n} \times {{\bf R}^m}} $,
$ 2\Omega_0({{\tilde x}}(t);t, y, v): = y^\ast F_{0xx}({{\tilde x}}(t))y + 2y^\ast F_{0xu}({{\tilde x}}(t))v + v^\ast F_{0uu}({{\tilde x}}(t))v. $ |
$ \bullet $ Denote by $ E_0 $ the Weierstrass excess function of $ F_0 $, given by
$ E_0(t, x, u, v): = F_0(t, x, v)-F_0(t, x, u)-F_{0u}(t, x, u)(v-u). $ |
Similarly, the Weierstrass excess function of $ L_\gamma $ $ (\gamma = 1, \ldots, K) $ corresponds to
$ E_\gamma(t, x, u, v): = L_\gamma(t, x, v)-L_\gamma(t, x, u)-L_{\gamma u}(t, x, u)(v-u). $ |
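The excess function measures the gap between the integrand and its tangent plane in the control variable. The following hedged toy computation (the Lagrangian here is our own choice, not one from the paper) shows the classical fact that for an integrand convex in $ u $ the excess function is nonnegative; for $ L(t, x, u) = u^2 $ one gets $ E = v^2 - u^2 - 2u(v-u) = (v-u)^2 $.

```python
import numpy as np

# Hedged toy computation (our own Lagrangian, not the paper's): when
# L(t, x, u) is convex in u, the Weierstrass excess function
#   E(t, x, u, v) = L(t, x, v) - L(t, x, u) - L_u(t, x, u)(v - u)
# is the gap between L and its tangent at u, hence E >= 0.
# For L(t, x, u) = u**2 one has L_u = 2u and E = (v - u)**2.
def excess(u, v):
    L = lambda w: w ** 2          # L depends only on the control here
    return L(v) - L(u) - 2.0 * u * (v - u)

rng = np.random.default_rng(0)
for u, v in rng.normal(size=(1000, 2)):
    assert abs(excess(u, v) - (v - u) ** 2) < 1e-9
    assert excess(u, v) >= -1e-9   # nonnegative up to rounding
```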
$ \bullet $ For all $ (x, u) \in {{\cal X}}\times L^1(T; {{\bf R}^m}) $ let
$ D(x, u): = \max\{D_1(x), D_2(u)\} $ |
where
$ D_1(x): = V(x(t_0))+{\int_{t_0}^{t_1}} V({{\dot x}}(t))dt\quad\hbox{and}\quad D_2(u): = {\int_{t_0}^{t_1}} V(u(t))dt, $ |
where $ V(\pi): = (1 + |\pi|^2)^{1/2} - 1 $ with $ \pi: = (\pi_1, \ldots, \pi_n)^\ast\in{{\bf R}^n} $ or $ \pi: = (\pi_1, \ldots, \pi_m)^\ast\in{{\bf R}^m} $.
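The function $ V $ behaves like half the squared norm near the origin, which is what lets $ D_1 $, $ D_2 $ play the role of squared norms in the estimates below. A hedged numerical check of the two elementary bounds $ 0\le V(\pi)\le|\pi|^2/2 $ (our remark, shown here in the scalar case) is:

```python
import numpy as np

# Hedged check (our own remark, scalar case): with
#   V(p) = sqrt(1 + |p|**2) - 1,
# one has 0 <= V(p) <= |p|**2 / 2, and V(p) ~ |p|**2 / 2 as |p| -> 0,
# so D_1, D_2 behave like squared L^2-type quantities near the origin.
V = lambda p: np.sqrt(1.0 + np.asarray(p, dtype=float) ** 2) - 1.0

p = np.linspace(-10.0, 10.0, 2001)
assert np.all(V(p) >= 0.0)
assert np.all(V(p) <= 0.5 * p ** 2 + 1e-12)
small = np.linspace(-1e-3, 1e-3, 101)
assert np.allclose(V(small), 0.5 * small ** 2, rtol=1e-5, atol=1e-12)
```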
Finally, for all $ (t, x, u)\in{T \times {{\bf R}^n} \times {{\bf R}^m}} $, denote by
$ {{\cal I}}_a(t, x, u): = \{ \alpha\in R \mid \varphi_\alpha(t, x, u) = 0 \}, $ |
the set of active indices of $ (t, x, u) $ with respect to the mixed inequality constraints. For all $ (x, u)\in{{\cal X}}\times{{\cal U}}_m $, denote by
$ i_a(x, u): = \{ i = 1, \ldots, k \mid I_i(x, u) = 0 \}, $ |
the set of active indices of $ (x, u) $ with respect to the isoperimetric inequality constraints. Given $ (x, u)\in{{\cal X}}\times{{\cal U}}_m $, let $ {{\cal Y}}(x, u) $ be the set of all $ (y, v)\in {{\cal X}}\times L^2(T; {{\bf R}^m}) $ satisfying
$ \left\{ \begin{array}{l} \dot y(t) = f_x({{\tilde x}}(t))y(t) + f_u({{\tilde x}}(t))v(t) \ (\hbox{a.e. in}\ T), \quad y(t_i) = 0 \ (i = 0, 1). \\ {I^\prime}_i((x, u);(y, v)) \le 0 \ (i\in i_a(x, u)), \quad {I^\prime}_j((x, u);(y, v)) = 0 \ (j = k+1, \ldots, K). \\ \varphi_{\alpha x}({{\tilde x}}(t))y(t) + \varphi_{\alpha u}({{\tilde x}}(t))v(t) \le 0 \ (\hbox{a.e. in}\ T, \ \alpha\in{{\cal I}}_a({{\tilde x}}(t))). \\ \varphi_{\beta x}({{\tilde x}}(t))y(t) + \varphi_{\beta u}({{\tilde x}}(t))v(t) = 0 \ (\hbox{a.e. in}\ T, \ \beta\in S). \end{array} \right. $
The set $ {{\cal Y}}(x, u) $ is called the cone of critical directions along $ (x, u) $.
Now we are in a position to state the main results of the article: two sufficiency results for strict local minima of problem (P). Given an admissible process $ (x_0, u_0) $, whose proposed optimal control $ u_0 $ need not be continuous but only measurable, the hypotheses include two conditions related to the Pontryagin maximum principle, an assumption similar to the necessary Legendre-Clebsch condition, the positivity of the second variation on the cone of critical directions, and some conditions involving the Weierstrass functions delimiting problem (P). It is worth observing that the sufficiency theorems not only give sufficient conditions for strict local minima but also provide some information concerning the deviation between optimal and feasible costs. This deviation is measured by the functionals $ D_i $ $ (i = 1, 2) $, which play the role of the square of the norm of the Banach space $ L^1 $.
The following theorem provides sufficient conditions for a strict strong minimum of problem (P).
Theorem 2.1 Let $ (x_0, u_0) $ be an admissible process. Assume that $ {{\cal I}}_a({{\tilde x}}_0(\cdot)) $ is piecewise constant on $ T $ and suppose that there exist $ \rho\in {{\cal X}} $, $ \mu\in {{\cal U}}_s $ with $ \mu_\alpha(t)\ge0 $ and $ \mu_\alpha(t)\varphi_\alpha({{\tilde x}}_0(t)) = 0 $ $ (\alpha\in R, t\in T) $, two positive numbers $ \delta, \epsilon $, and multipliers $ \lambda_1, \ldots, \lambda_K $ with $ \lambda_i \ge 0 $ and $ \lambda_i I_i(x_0, u_0) = 0 $ $ (i = 1, \ldots, k) $ such that
$ \dot\rho(t) = -H_x^\ast({{\tilde x}}_0(t), \rho(t), \mu(t)) \ ({{a.e.\; in}\ T}), $ |
$ H_u^\ast({{\tilde x}}_0(t), \rho(t), \mu(t)) = 0 \ (t\in T), $ |
and the following holds:
(ⅰ) $ H_{uu}({{\tilde x}}_0(t), \rho(t), \mu(t)) \le 0 $ $ ({{a.e. \; in}\ T}) $.
(ⅱ) $ {J^{\prime\prime}}_0((x_0, u_0);(y, v)) > 0 $ for all $ (y, v)\not = (0, 0) $, $ (y, v) \in {{\cal Y}}(x_0, u_0) $.
(ⅲ) If $ (x, u) $ is admissible with $ \|x-x_0\| < \epsilon $, then
a. $ E_0(t, x(t), u_0(t), u(t)) \ge 0 $ $ ({{a.e. \; in}\ T}) $.
b. $ {\int_{t_0}^{t_1}} E_0(t, x(t), u_0(t), u(t))dt \ge \delta \max\{ {\int_{t_0}^{t_1}} V({{\dot x}}(t)-{{\dot x}}_0(t))dt, {\int_{t_0}^{t_1}} V(u(t)-u_0(t))dt \} $.
c. $ {\int_{t_0}^{t_1}} E_0(t, x(t), u_0(t), u(t)) dt \ge \delta|{\int_{t_0}^{t_1}} E_\gamma(t, x(t), u_0(t), u(t))dt| $ $ (\gamma = 1, \ldots, K) $.
In this case, there exist $ \theta_1, \theta_2 > 0 $ such that if $ (x, u) $ is admissible with $ \|x-x_0\| < \theta_1 $,
$ I(x, u)\ge I(x_0, u_0)+\theta_2 D(x-x_0, u-u_0). $ |
In particular, $ (x_0, u_0) $ is a strict strong minimum of $ \rm(P) $.
The theorem below gives sufficient conditions for weak minima of problem (P).
Theorem 2.2 Let $ (x_0, u_0) $ be an admissible process. Assume that $ {{\cal I}}_a({{\tilde x}}_0(\cdot)) $ is piecewise constant on $ T $ and suppose that there exist $ \rho\in {{\cal X}} $, $ \mu\in {{\cal U}}_s $ with $ \mu_\alpha(t)\ge0 $ and $ \mu_\alpha(t)\varphi_\alpha({{\tilde x}}_0(t)) = 0 $ $ (\alpha\in R, t\in T) $, two positive numbers $ \delta, \epsilon $, and multipliers $ \lambda_1, \ldots, \lambda_K $ with $ \lambda_i \ge 0 $ and $ \lambda_i I_i(x_0, u_0) = 0 $ $ (i = 1, \ldots, k) $ such that
$ \dot\rho(t) = -H_x^\ast({{\tilde x}}_0(t), \rho(t), \mu(t)) \ ({{a.e. \; in}\ T}), $ |
$ H_u^\ast({{\tilde x}}_0(t), \rho(t), \mu(t)) = 0 \ (t\in T), $ |
and the following holds:
(ⅰ) $ H_{uu}({{\tilde x}}_0(t), \rho(t), \mu(t)) \le 0 $ $ ({{a.e.\; in}\ T}) $.
(ⅱ) $ {J^{\prime\prime}}_0((x_0, u_0);(y, v)) > 0 $ for all $ (y, v)\not = (0, 0) $, $ (y, v) \in {{\cal Y}}(x_0, u_0) $.
(ⅲ) If $ (x, u) $ is admissible with $ \|(x, u)-(x_0, u_0)\| < \epsilon $, then
$ {\bf a^\prime.} $ $ {\int_{t_0}^{t_1}} E_0(t, x(t), u_0(t), u(t))dt \ge \delta{\int_{t_0}^{t_1}} V(u(t)-u_0(t))dt $.
$ {\bf b^\prime.} $ $ {\int_{t_0}^{t_1}} E_0(t, x(t), u_0(t), u(t)) dt \ge \delta|{\int_{t_0}^{t_1}} E_\gamma(t, x(t), u_0(t), u(t))dt| $ $ (\gamma = 1, \ldots, K) $.
In this case, there exist $ \theta_1, \theta_2 > 0 $ such that if $ (x, u) $ is admissible with $ \|(x, u)-(x_0, u_0)\| < \theta_1 $,
$ I(x, u)\ge I(x_0, u_0)+\theta_2 D_2(u-u_0). $ |
In particular, $ (x_0, u_0) $ is a strict weak minimum of $ \rm(P) $.
Examples 2.3 and 2.4 illustrate Theorems 2.1 and 2.2, respectively. It is worth mentioning that the sufficiency theory of [28] cannot be applied to either example. Indeed, if $ f $ denotes the dynamics of the problems then, as one readily verifies, in both examples we have that
$ f_u(t, x, u) = (u_2, u_1) \ \hbox{for all $(t, x, u)\in[0, 1]\times{{\bf R}}\times{{\bf R}}^2$}, $ |
and hence there does not exist a continuous function $ \psi\colon[0, 1]\times{{\bf R}}\to{{\bf R}} $ such that
$ |f_u(t, x, u)|\le\psi(t, x) \ \hbox{for all $(t, x, u)\in[0, 1]\times{{\bf R}}\times{{\bf R}}^2$}. $ |
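The obstruction above can be checked directly: $ |f_u(t, x, u)| = |(u_2, u_1)| $ depends only on $ u $ and grows without bound, so no continuous $ \psi(t, x) $ can dominate it. A hedged numerical sketch:

```python
import math

# Hedged illustration of the obstruction above: in both examples
# f_u(t, x, u) = (u2, u1), whose Euclidean norm is independent of (t, x)
# and unbounded in u, so no continuous psi(t, x) can dominate it.
def f_u_norm(u1, u2):
    return math.hypot(u2, u1)   # |f_u(t, x, u)|, independent of (t, x)

norms = [f_u_norm(c, c) for c in (1.0, 10.0, 100.0, 1000.0)]
assert all(b > a for a, b in zip(norms, norms[1:]))   # strictly increasing
assert f_u_norm(1e6, 0.0) > 1e5   # exceeds any fixed candidate bound
```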
Example 2.3 Let $ u_{02}\colon [0, 1]\to{{\bf R}} $ be any measurable function whose range lies in the set $ \{-1, 1\}. $
Consider problem (P) of minimizing
$ I(x, u): = {\int_0^1} \{ \sinh(u_1(t))+u_1^2(t)\cos(2\pi u_2(t))-x^2(t) \} dt $ |
over all couples $ (x, u) $ with $ x\colon[0, 1]\to{{\bf R}} $ absolutely continuous and $ u\colon[0, 1]\to{{\bf R}}^2 $ essentially bounded satisfying the constraints
$ \left\{ \begin{array}{l} x(0) = x(1) = 0. \\ \dot x(t) = u_1(t)u_2(t) + {{ {{1} \over {2}}}} x(t) \ (\hbox{a.e. in}\ [0, 1]). \\ I_1(x, u): = {\int_0^1} \{ {{ {{1} \over {4}}}} x^2(t) + x(t)u_1(t)u_2(t) \} dt \le 0. \\ (t, x(t), u(t)) \in {{\cal A}} \ (t\in[0, 1]) \end{array} \right. $
where
$ {{\cal A}}: = \{ (t, x, u)\in[0, 1]\times{{\bf R}}\times{{\bf R}}^2 \mid u_1\ge0, \ (u_2-u_{02}(t))^2\le1, \ u_2^2 = 1 \}. $ |
For this case, $ T = [0, 1] $, $ n = 1 $, $ m = 2 $, $ r = 2 $, $ s = 3 $, $ k = K = 1 $, $ \xi_0 = \xi_1 = 0 $,
$ L(t, x, u) = \sinh(u_1)+u_1^2\cos(2\pi u_2)-x^2, \quad f(t, x, u) = u_1u_2+{{ {{1} \over {2}}}} x, $ |
$ L_1(t, x, u) = {{ {{1} \over {4}}}} x^2+xu_1u_2, \quad L_0(t, x, u) = \sinh(u_1)+u_1^2\cos(2\pi u_2)-x^2+\lambda_1[{{ {{1} \over {4}}}} x^2+xu_1u_2], $ |
$ \varphi_1(t, x, u) = -u_1, \quad \varphi_2(t, x, u) = (u_2-u_{02}(t))^2-1, \quad \varphi_3(t, x, u) = u_2^2-1. $ |
Clearly, $ L $, $ L_1 $, $ f $ and $ \varphi = (\varphi_1, \varphi_2, \varphi_3) $ satisfy the hypotheses imposed in the statement of the problem.
Also, as one readily verifies, the process $ (x_0, u_0) = (x_0, u_{01}, u_{02})\equiv(0, 0, u_{02}) $ is admissible.
Moreover,
$ \begin{array}{rcl} H(t, x, u, \rho, \mu) & = & \rho u_1u_2 + {{ {{1} \over {2}}}}\rho x - \sinh(u_1) - u_1^2\cos(2\pi u_2) + x^2 - \lambda_1[{{ {{1} \over {4}}}} x^2 + xu_1u_2] \\ & & {} + \mu_1u_1 - \mu_2[(u_2-u_{02}(t))^2-1] - \mu_3[u_2^2-1], \end{array} $
$ H_x(t, x, u, \rho, \mu) = {{ {{1} \over {2}}}}\rho+2x-\lambda_1[{{ {{1} \over {2}}}} x+u_1u_2], $ |
$ H_u(t, x, u, \rho, \mu) = \left( \begin{array}{c} \rho u_2 - \cosh(u_1) - 2u_1\cos(2\pi u_2) - \lambda_1xu_2 + \mu_1 \\ \rho u_1 + 2\pi u_1^2\sin(2\pi u_2) - \lambda_1xu_1 - 2\mu_2(u_2-u_{02}(t)) - 2\mu_3u_2 \end{array} \right)^\ast. $
Therefore, if we set $ \rho\equiv0 $, $ \mu_1\equiv1 $, $ \mu_2 = \mu_3\equiv0 $ and $ \lambda_1 = 0 $, we have
$ \dot\rho(t) = -H_x({{\tilde x}}_0(t), \rho(t), \mu(t)) \ ({\hbox{a.e. in}\ T}), \quad H_u({{\tilde x}}_0(t), \rho(t), \mu(t)) = (0, 0) \ (t\in T), $ |
and hence the first order sufficient conditions involving the Hamiltonian of problem (P) are verified. Moreover, if we set $ R: = \{1, 2\} $, observe that
$ \lambda_1\ge0, \quad \lambda_1I_1(x_0, u_0) = 0, $ |
$ \mu_\alpha(t)\ge0, \quad \mu_\alpha(t)\varphi_\alpha({{\tilde x}}_0(t)) = 0 \quad (\alpha\in R, \, t\in T). $ |
Additionally, $ {{\cal I}}_a({{\tilde x}}_0(\cdot))\equiv\{1\} $ is constant on $ T $. Also, it is readily seen that for all $ t\in T $,
$ H_{uu}({{\tilde x}}_0(t), \rho(t), \mu(t)) = \left( \begin{array}{cc} -2 & 0 \\ 0 & 0 \end{array} \right), $
and so condition (ⅰ) of Theorem 2.1 is satisfied. Observe that, for all $ t\in T $,
$ f_x({{\tilde x}}_0(t)) = {{ {{1} \over {2}}}}, \quad f_u({{\tilde x}}_0(t)) = (u_{02}(t), 0), \quad L_{1x}({{\tilde x}}_0(t)) = 0, \quad L_{1u}({{\tilde x}}_0(t)) = (0, 0), $ |
$ \varphi_{1x}({{\tilde x}}_0(t)) = 0, \quad \varphi_{1u}({{\tilde x}}_0(t)) = (-1, 0), \quad \varphi_{3x}({{\tilde x}}_0(t)) = 0, \quad \varphi_{3u}({{\tilde x}}_0(t)) = (0, 2u_{02}(t)). $ |
Thus, $ {{\cal Y}}(x_0, u_0) $ is given by all $ (y, v)\in{{\cal X}}\times L^2(T; {{\bf R}}^2) $ satisfying
$ \left\{ \begin{array}{l} y(0) = y(1) = 0. \\ \dot y(t) = {{ {{1} \over {2}}}} y(t) + u_{02}(t)v_1(t) \ (\hbox{a.e. in}\ T). \\ -v_1(t) \le 0 \ (\hbox{a.e. in}\ T). \\ 2u_{02}(t)v_2(t) = 0 \ (\hbox{a.e. in}\ T). \end{array} \right. $
Moreover, note that, for all $ (t, x, u)\in T\times{{\bf R}}\times{{\bf R}}^2 $,
$ F_0(t, x, u) = -H(t, x, u, \rho(t), \mu(t))-\dot\rho(t)x = \sinh(u_1)+u_1^2\cos(2\pi u_2)-x^2-u_1, $ |
and so, for all $ t\in T $,
$ F_{0xx}({{\tilde x}}_0(t)) = -2, \quad F_{0xu}({{\tilde x}}_0(t)) = (0, 0), \quad F_{0uu}({{\tilde x}}_0(t)) = \left( \begin{array}{cc} 2 & 0 \\ 0 & 0 \end{array} \right). $
Consequently, we have
$ \begin{array}{rcl} {{ {{1} \over {2}}}}{J^{\prime\prime}}_0((x_0, u_0);(y, v)) & = & {\int_0^1} \{ v_1^2(t) - y^2(t) \} dt = {\int_0^1} \{ (\dot y(t) - {{ {{1} \over {2}}}} y(t))^2 - y^2(t) \} dt \\ & = & {\int_0^1} \{ \dot y^2(t) - y(t)\dot y(t) - {{ {{3} \over {4}}}} y^2(t) \} dt = {\int_0^1} \{ \dot y^2(t) - {{ {{3} \over {4}}}} y^2(t) \} dt > 0 \end{array} $
for all $ (y, v)\not = (0, 0) $, $ (y, v)\in{{\cal Y}}(x_0, u_0) $. Hence, condition (ⅱ) of Theorem 2.1 is verified.
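The final positivity claim rests on the classical Poincaré inequality: for $ y $ with $ y(0) = y(1) = 0 $, $ \int_0^1\dot y^2(t)dt \ge \pi^2\int_0^1 y^2(t)dt $, and $ \pi^2 > 3/4 $. A hedged numerical check with test functions of our own choice:

```python
import numpy as np

# Hedged numerical check (test functions are our own choice): for y with
# y(0) = y(1) = 0, the Poincare inequality gives
#   int(y'(t)**2) >= pi**2 * int(y(t)**2),
# and pi**2 > 3/4, so int(y'**2 - (3/4)*y**2) dt > 0 unless y vanishes.
t = np.linspace(0.0, 1.0, 200_000, endpoint=False)
dt = t[1] - t[0]
integral = lambda f: np.sum(f) * dt            # left Riemann sum on [0, 1)

for k in range(1, 6):                          # test functions y = sin(k*pi*t)
    y = np.sin(k * np.pi * t)
    dy = k * np.pi * np.cos(k * np.pi * t)
    assert integral(dy ** 2 - 0.75 * y ** 2) > 0.0
    # the ratio int(dy**2)/int(y**2) equals (k*pi)**2 >= pi**2 for these y
    assert integral(dy ** 2) / integral(y ** 2) >= np.pi ** 2 - 1e-3
```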
Additionally, observe that for all $ (x, u) $ admissible and all $ t\in T $,
$ E_0(t, x(t), u_0(t), u(t)) = \sinh(u_1(t))+u_1^2(t)\cos(2\pi u_2(t))-u_1(t) \ge u_1^2(t)\cos(2\pi u_{02}(t)) = u_1^2(t)\ge0, $ |
and so condition (ⅲ)(a) of Theorem 2.1 is satisfied for any $ {\epsilon > 0} $. Now, if $ (x, u) $ is admissible, then
$ u-u_0 = (u_1-u_{01}, u_2-u_{02}) = (u_1, u_{02}-u_{02}) = (u_1, 0) $ |
and so, if $ (x, u) $ is admissible,
$ {\int_0^1} E_0(t, x(t), u_0(t), u(t)) dt \ge {\int_0^1} u_1^2(t)dt\ge{\int_0^1} V(u_1(t))dt = {\int_0^1} V(u(t)-u_0(t))dt. $ |
Also, if $ (x, u) $ is admissible, then
$ \begin{array}{rcl} {\int_0^1} E_0(t, x(t), u_0(t), u(t))dt & \ge & {\int_0^1} u_1^2(t)dt = {\int_0^1} \{ (u_1(t)u_2(t) + {{ {{1} \over {2}}}} x(t))^2 - x(t)u_1(t)u_2(t) - {{ {{1} \over {4}}}} x^2(t) \} dt \\ & = & {\int_0^1} \{ \dot x^2(t) - x(t)u_1(t)u_2(t) - {{ {{1} \over {4}}}} x^2(t) \} dt \\ & \ge & {\int_0^1} \dot x^2(t)dt \ge {\int_0^1} V(\dot x(t) - \dot x_0(t))dt. \end{array} $
Therefore, if $ (x, u) $ is admissible, then
$ {\int_0^1} E_0(t, x(t), u_0(t), u(t)) dt \ge\max\biggl\{ {\int_0^1} V({{\dot x}}(t)-{{\dot x}}_0(t))dt, {\int_0^1} V(u(t)-u_0(t)) dt \biggr\}, $ |
and hence, condition (ⅲ)(b) of Theorem 2.1 is verified for any $ {\epsilon > 0} $ and $ \delta = 1 $. Finally, if $ (x, u) $ is admissible, note that
$ \begin{array}{rcl} \biggl| {\int_0^1} E_1(t, x(t), u_0(t), u(t))dt \biggr| & = & \biggl| {\int_0^1} x(t)u_1(t)u_2(t)dt \biggr| = \biggl| {\int_0^1} \{ x(t)\dot x(t) - {{ {{1} \over {2}}}} x^2(t) \} dt \biggr| \\ & = & \biggl| {\int_0^1} -{{ {{1} \over {2}}}} x^2(t)dt \biggr| = {{ {{1} \over {2}}}}{\int_0^1} x^2(t)dt \le {\int_0^1} \dot x^2(t)dt \\ & = & {\int_0^1} (u_1(t)u_2(t) + {{ {{1} \over {2}}}} x(t))^2dt = {\int_0^1} u_1^2(t)dt + {\int_0^1} \{ x(t)u_1(t)u_2(t) + {{ {{1} \over {4}}}} x^2(t) \} dt \\ & \le & {\int_0^1} u_1^2(t)dt + {\int_0^1} x(t)\dot x(t)dt = {\int_0^1} u_1^2(t)dt \le {\int_0^1} E_0(t, x(t), u_0(t), u(t))dt, \end{array} $
implying that condition (ⅲ)(c) of Theorem 2.1 holds for any $ {\epsilon > 0} $ and $ \delta = 1 $. By Theorem 2.1, $ (x_0, u_0) $ is a strict strong minimum of (P).
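The key pointwise estimate in this example can be spot-checked numerically. On the admissible set $ u_1\ge0 $ and $ u_2\in\{-1, 1\} $, so $ \cos(2\pi u_2) = 1 $ and $ E_0 = \sinh(u_1) + u_1^2\cos(2\pi u_2) - u_1 \ge u_1^2 $, because $ \sinh(u_1)\ge u_1 $ for $ u_1\ge0 $. A hedged check on a grid of our own choice:

```python
import numpy as np

# Hedged spot-check of the pointwise inequality behind condition (iii)(a)
# of Example 2.3: for u1 >= 0 and u2 in {-1, 1}, cos(2*pi*u2) = 1 and
#   E_0 = sinh(u1) + u1**2 * cos(2*pi*u2) - u1 >= u1**2,
# since sinh(u1) >= u1 for u1 >= 0. Grid values are our own choice.
u1 = np.linspace(0.0, 5.0, 2001)
for u2 in (-1.0, 1.0):
    E0 = np.sinh(u1) + u1 ** 2 * np.cos(2.0 * np.pi * u2) - u1
    assert np.all(E0 >= u1 ** 2 - 1e-9)
assert np.all(np.sinh(u1) - u1 >= 0.0)
```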
Example 2.4 Consider problem (P) of minimizing
$ I(x, u): = {\int_0^1} \{ \sinh(u_1(t)+u_1(t)x^3(t))+{{ {{1} \over {2}}}} u_1^2(t)\cos(2\pi u_2(t))-\cosh(x(t))+1 \} dt $ |
over all couples $ (x, u) $ with $ x\colon[0, 1]\to{{\bf R}} $ absolutely continuous and $ u\colon[0, 1]\to{{\bf R}}^2 $ essentially bounded satisfying the constraints
$ \left\{ \begin{array}{l} x(0) = x(1) = 0. \\ \dot x(t) = u_1(t)u_2(t) + x(t) \ (\hbox{a.e. in}\ [0, 1]). \\ I_1(x, u): = {\int_0^1} \{ \sin(u_1(t)) - \sinh(u_1(t) + u_1(t)x^3(t)) \} dt \le 0. \\ (t, x(t), u(t)) \in {{\cal A}} \ (t\in[0, 1]) \end{array} \right. $
where
$ {{\cal A}}: = \{ (t, x, u)\in[0, 1]\times{{\bf R}}\times{{\bf R}}^2 \mid \sin(u_1)\ge0, \ u_2^2 = 1 \}. $ |
For this case, $ T = [0, 1] $, $ n = 1 $, $ m = 2 $, $ r = 1 $, $ s = 2 $, $ k = K = 1 $, $ \xi_0 = \xi_1 = 0 $,
$ L(t, x, u) = \sinh(u_1+u_1x^3)+{{ {{1} \over {2}}}} u_1^2\cos(2\pi u_2)-\cosh(x)+1, \quad f(t, x, u) = u_1u_2+x, $ |
$ L_1(t, x, u) = \sin(u_1)-\sinh(u_1+u_1x^3), $ |
$ L_0(t, x, u) = \sinh(u_1+u_1x^3)+{{ {{1} \over {2}}}} u_1^2\cos(2\pi u_2)-\cosh(x)+1+\lambda_1[\sin(u_1)-\sinh(u_1+u_1x^3)], $ |
$ \varphi_1(t, x, u) = -\sin(u_1), \quad \varphi_2(t, x, u) = u_2^2-1. $ |
Clearly, $ L $, $ L_1 $, $ f $ and $ \varphi = (\varphi_1, \varphi_2) $ satisfy the hypotheses imposed in the statement of the problem.
Let $ u_{02}\colon T\to{{\bf R}} $ be any measurable function whose range lies in the set $ \{-1, 1\}. $
Clearly, the process $ (x_0, u_0) = (x_0, u_{01}, u_{02})\equiv(0, 0, u_{02}) $ is admissible.
Moreover,
$ \begin{array}{rcl} H(t, x, u, \rho, \mu) & = & \rho u_1u_2 + \rho x - \sinh(u_1 + u_1x^3) - {{ {{1} \over {2}}}} u_1^2\cos(2\pi u_2) + \cosh(x) - 1 \\ & & {} - \lambda_1[\sin(u_1) - \sinh(u_1 + u_1x^3)] + \mu_1\sin(u_1) - \mu_2[u_2^2-1], \end{array} $
$ H_x(t, x, u, \rho, \mu) = \rho-3x^2u_1\cosh(u_1+u_1x^3)+\sinh(x)+3\lambda_1x^2u_1\cosh(u_1+u_1x^3), $ |
$ \begin{array}{rcl} H_{u_1}(t, x, u, \rho, \mu) & = & \rho u_2 - [1+x^3]\cosh(u_1 + u_1x^3) - u_1\cos(2\pi u_2) \\ & & {} - \lambda_1[\cos(u_1) - \{1+x^3\}\cosh(u_1 + u_1x^3)] + \mu_1\cos(u_1), \end{array} $
$ H_{u_2}(t, x, u, \rho, \mu) = \rho u_1+\pi u_1^2\sin(2\pi u_2)-2\mu_2u_2. $ |
Therefore, if we set $ \rho\equiv0 $, $ \mu_1\equiv1 $, $ \mu_2\equiv0 $ and $ \lambda_1 = 0 $, we have
$ \dot\rho(t) = -H_x({{\tilde x}}_0(t), \rho(t), \mu(t)) \ ({\hbox{a.e. in}\ T}), \quad H_u({{\tilde x}}_0(t), \rho(t), \mu(t)) = (0, 0) \ (t\in T), $ |
and hence the first order sufficient conditions involving the Hamiltonian of problem (P) are verified. Additionally, observe that
$ \lambda_1\ge0, \quad \lambda_1I_1(x_0, u_0) = 0, $ |
$ \mu_1(t)\ge0, \quad \mu_1(t)\varphi_1({{\tilde x}}_0(t)) = 0 \quad (t\in T). $ |
Also, $ {{\cal I}}_a({{\tilde x}}_0(\cdot))\equiv\{1\} $ is constant on $ T $. Moreover, it is readily seen that for all $ t\in T $,
$ H_{uu}({{\tilde x}}_0(t), \rho(t), \mu(t)) = \left( \begin{array}{cc} -1 & 0 \\ 0 & 0 \end{array} \right), $
and so condition (ⅰ) of Theorem 2.2 is satisfied. Observe that, for all $ t\in T $,
$ f_x({{\tilde x}}_0(t)) = 1, \quad f_u({{\tilde x}}_0(t)) = (u_{02}(t), 0), \quad L_{1x}({{\tilde x}}_0(t)) = 0, \quad L_{1u}({{\tilde x}}_0(t)) = (0, 0), $ |
$ \varphi_{1x}({{\tilde x}}_0(t)) = 0, \quad \varphi_{1u}({{\tilde x}}_0(t)) = (-1, 0), \quad \varphi_{2x}({{\tilde x}}_0(t)) = 0, \quad \varphi_{2u}({{\tilde x}}_0(t)) = (0, 2u_{02}(t)). $ |
Thus, $ {{\cal Y}}(x_0, u_0) $ is given by all $ (y, v)\in{{\cal X}}\times L^2(T; {{\bf R}}^2) $ satisfying
$ \left\{ \begin{array}{l} y(0) = y(1) = 0. \\ \dot y(t) = y(t) + u_{02}(t)v_1(t) \ (\hbox{a.e. in}\ T). \\ -v_1(t) \le 0 \ (\hbox{a.e. in}\ T). \\ 2u_{02}(t)v_2(t) = 0 \ (\hbox{a.e. in}\ T). \end{array} \right. $
Also, note that, for all $ (t, x, u)\in T\times{{\bf R}}\times{{\bf R}}^2 $,
$ F_0(t, x, u) = -H(t, x, u, \rho(t), \mu(t))-\dot\rho(t)x = \sinh(u_1+u_1x^3)+{{ {{1} \over {2}}}} u_1^2\cos(2\pi u_2)-\cosh(x)+1-\sin(u_1), $ |
and so, for all $ t\in T $,
$ F_{0xx}({{\tilde x}}_0(t)) = -1, \quad F_{0xu}({{\tilde x}}_0(t)) = (0, 0), \quad F_{0uu}({{\tilde x}}_0(t)) = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right). $ |
Consequently, we have
$ {J^{\prime\prime}}_0((x_0, u_0);(y, v)) = {\int_0^1}\{v_1^2(t)-y^2(t)\}dt = {\int_0^1}\{({{\dot y}}(t)-y(t))^2-y^2(t)\}dt = {\int_0^1}\{{{\dot y}}^2(t)-2y(t){{\dot y}}(t)\}dt = {\int_0^1}{{\dot y}}^2(t)dt > 0 $ |
for all $ (y, v)\not = (0, 0) $, $ (y, v)\in{{\cal Y}}(x_0, u_0) $. Hence, condition (ⅱ) of Theorem 2.2 is verified.
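As a quick numerical sanity check of the chain of equalities above (illustrative only, and not part of the verification): take the hypothetical test direction $ y(t) = \sin(\pi t) $, which satisfies $ y(0) = y(1) = 0 $, and assume $ u_{02}\equiv1 $ (any admissible $ u_{02} $ with $ u_{02}^2 = 1 $ behaves identically), so that $ v_1 = ({{\dot y}}-y)/u_{02} = {{\dot y}}-y $.

```python
import numpy as np

# Sanity check of J''_0 along a hypothetical direction y(t) = sin(pi t),
# assuming u_{02} = 1, so that v_1 = dy/dt - y.
def trap(f, t):
    # composite trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

t = np.linspace(0.0, 1.0, 20001)
y = np.sin(np.pi * t)
ydot = np.pi * np.cos(np.pi * t)   # exact derivative of y
v1 = ydot - y

lhs = trap(v1**2 - y**2, t)        # \int (v_1^2 - y^2)
rhs = trap(ydot**2, t)             # \int \dot y^2 after integration by parts
assert abs(lhs - rhs) < 1e-6       # the by-parts step uses y(0) = y(1) = 0
assert rhs > 0.0                   # J''_0 > 0 along this nonzero direction
```

The integration-by-parts step relies only on the boundary conditions $ y(0) = y(1) = 0 $, which is why positivity holds on all of $ {{\cal Y}}(x_0, u_0) $.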
Additionally, observe that for any $ \epsilon\in(0, 1) $, all $ (x, u) $ admissible satisfying $ \|(x, u)-(x_0, u_0)\| < \epsilon $ and all $ t\in T $,
$ E_0(t, x(t), u_0(t), u(t)) = \sinh(u_1(t)+u_1(t)x^3(t))+{{ {{1} \over {2}}}} u_1^2(t)\cos(2\pi u_2(t))-\sin(u_1(t)) = \sinh(u_1(t)+u_1(t)x^3(t))+{{ {{1} \over {2}}}} u_1^2(t)\cos(2\pi u_{02}(t))-\sin(u_1(t)) = \sinh(u_1(t)+u_1(t)x^3(t))+{{ {{1} \over {2}}}} u_1^2(t)-\sin(u_1(t)). $ |
Therefore, for any $ \epsilon\in(0, 1) $ and all $ (x, u) $ admissible satisfying $ \|(x, u)-(x_0, u_0)\| < \epsilon $,
$ {\int_0^1} E_0(t, x(t), u_0(t), u(t))dt = {\int_0^1}\{\sinh(u_1(t)+u_1(t)x^3(t))-\sin(u_1(t))+{{ {{1} \over {2}}}} u_1^2(t)\}dt \ge {\int_0^1}{{ {{1} \over {2}}}} u_1^2(t)dt \ge {\int_0^1}V(u_1(t))dt = {\int_0^1}V(u(t)-u_0(t))dt, $ |
and hence condition (ⅲ)(a$ ^\prime $) of Theorem 2.2 is satisfied for any $ \epsilon\in(0, 1) $ and $ \delta = 1 $.
Finally, if $ (x, u) $ is admissible, note that
$ {\int_0^1} E_0(t, x(t), u_0(t), u(t))dt \ge \biggl| {\int_0^1} \{ \sinh(u_1(t)+u_1(t)x^3(t))-\sin(u_1(t)) \}dt \biggr| = \biggl| {\int_0^1} E_1(t, x(t), u_0(t), u(t))dt \biggr|, $ |
implying that condition (ⅲ)(b$ ^\prime $) of Theorem 2.2 holds for any $ {\epsilon > 0} $ and $ \delta = 1 $. By Theorem 2.2, $ (x_0, u_0) $ is a strict weak minimum of (P).
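The verifications of the example can also be checked numerically. The sketch below (a sanity check, not part of the proof) finite-differences the Hamiltonian given above at the candidate process, which, consistently with the computed derivatives $ F_{0xx}({{\tilde x}}_0(t)) = -1 $ and $ F_{0uu} $, we take to be $ x_0\equiv0 $, $ u_{01}\equiv0 $, $ u_{02}\equiv1 $ (an assumption; $ u_{02} = \pm1 $ are the admissible options), with the multipliers $ \rho\equiv0 $, $ \mu_1\equiv1 $, $ \mu_2\equiv0 $, $ \lambda_1 = 0 $ chosen in the text.

```python
import math

# H as displayed in the text; default multipliers are those chosen above.
def H(x, u1, u2, rho=0.0, mu1=1.0, mu2=0.0, lam1=0.0):
    return (rho*u1*u2 + rho*x - math.sinh(u1 + u1*x**3)
            - 0.5*u1**2*math.cos(2*math.pi*u2) + math.cosh(x) - 1.0
            - lam1*(math.sin(u1) - math.sinh(u1 + u1*x**3))
            + mu1*math.sin(u1) - mu2*(u2**2 - 1.0))

h = 1e-5
x0, u01, u02 = 0.0, 0.0, 1.0      # assumed candidate process
# First-order conditions: H_x = H_{u1} = H_{u2} = 0 at the candidate.
Hx  = (H(x0+h, u01, u02) - H(x0-h, u01, u02)) / (2*h)
Hu1 = (H(x0, u01+h, u02) - H(x0, u01-h, u02)) / (2*h)
Hu2 = (H(x0, u01, u02+h) - H(x0, u01, u02-h)) / (2*h)
assert max(abs(Hx), abs(Hu1), abs(Hu2)) < 1e-6

# Second-order condition: H_{uu} = diag(-1, 0), so -H_{uu} >= 0.
h2 = 1e-4
Hu1u1 = (H(x0, u01+h2, u02) - 2*H(x0, u01, u02) + H(x0, u01-h2, u02)) / h2**2
Hu2u2 = (H(x0, u01, u02+h2) - 2*H(x0, u01, u02) + H(x0, u01, u02-h2)) / h2**2
assert abs(Hu1u1 + 1.0) < 1e-3 and abs(Hu2u2) < 1e-3
```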
In this section we shall prove Theorem 2.1. We first state an auxiliary result whose proof is given in Lemmas 2–4 of [31].
In the following lemma we shall assume that we are given $ z_0: = (x_0, u_0)\in {{\cal X}}\times L^1(T; {{\bf R}^m}) $ and a subsequence $ \{z_q: = (x_q, u_q)\} $ in $ {{\cal X}}\times L^1(T; {{\bf R}^m}) $ such that
$ \lim\limits_{q\to\infty}D(z_q-z_0) = 0 \quad\hbox{and}\quad d_q: = [2D(z_q-z_0)]^{1/2} > 0\quad(q\in{{\bf N}}). $ |
For all $ q\in{{\bf N}} $, set
$ y_q: = {{{x_q-x_0}} \over {{d_q}}} \quad\hbox{and}\quad v_q: = {{{u_q-u_0}} \over {{d_q}}}. $ |
For all $ q\in{{\bf N}} $, define
$ W_q: = \max\{W_{1q}, W_{2q}\} $ |
where
$ W_{1q}: = [1+{{ {{1} \over {2}}}} V({{\dot x}}_q-{{\dot x}}_0)]^{1/2} \quad\hbox{and}\quad W_{2q}: = [1+{{ {{1} \over {2}}}} V(u_q-u_0)]^{1/2}. $ |
As mentioned in the introduction, we do not relabel the subsequences of a given sequence, since, as one readily verifies, this convention does not alter our results.
Lemma 3.1
$ \quad {\rm\bf a.} $ For some $ v_0\in L^2(T; {{\bf R}^m}) $ and some subsequence of $ \{z_q\} $, $ v_q \buildrel L^1 \over \rightharpoonup v_0 $ on $ T $. Moreover, $ u_q \buildrel \hbox{au} \over \longrightarrow u_0 $ on $ T $.
$ \quad {\rm\bf b.} $ There exist $ \zeta_0\in L^2(T; {{\bf R}^n}) $, $ \bar y_0\in{{\bf R}^n} $, and some subsequence of $ \{z_q\} $, such that $ {{\dot y}}_q \buildrel L^1 \over \rightharpoonup \zeta_0 $ on $ T $. Moreover, if $ y_0(t): = \bar y_0+\int_{t_0}^t\zeta_0(\tau)d\tau $ $ (t\in T) $, then $ y_q \buildrel \hbox{u} \over \longrightarrow y_0 $ on $ T $.
$ \quad {\rm\bf c.} $ Let $ \Upsilon\subset T $ be measurable and suppose that $ W_q \buildrel \hbox{u} \over \longrightarrow 1 $ on $ \Upsilon $. Let $ R_q, R_0\in L^\infty(\Upsilon; {{\bf R}}^{m\times m}) $, assume that $ R_q \buildrel \hbox{u} \over \longrightarrow R_0 $ on $ \Upsilon $, $ R_0(t)\ge0 $ $ (t\in \Upsilon) $, and let $ v_0 $ be the function considered in condition (a) of Lemma 3.1. Then,
$ \liminf\limits_{q\to\infty}\int_\Upsilon v_q^\ast(t)R_q(t)v_q(t) dt \ge \int_\Upsilon v_0^\ast(t)R_0(t)v_0(t) dt. $ |
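Condition c is a weak-lower-semicontinuity property. A minimal illustration (with hypothetical scalar data $ m = 1 $, $ R_q\equiv R_0\equiv1 $, and $ v_q(t) = v_0(t)+\sin(2\pi qt) $, which converges to $ v_0 $ weakly but not strongly) shows the liminf inequality, here strict because the oscillation contributes $ 1/2 $ in the limit:

```python
import numpy as np

# Weak-lower-semicontinuity demo: v_q = v_0 + sin(2*pi*q*t) converges to v_0
# weakly in L^1, yet \int v_q R_q v_q stays above \int v_0 R_0 v_0 (R = 1 here).
def trap(f, t):
    # composite trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

t = np.linspace(0.0, 1.0, 100001)
v0 = t                                   # a fixed weak limit
base = trap(v0 * v0, t)                  # \int v_0^2 = 1/3
for q in (5, 20, 80):
    vq = v0 + np.sin(2 * np.pi * q * t)  # oscillation of frequency q
    assert trap(vq * vq, t) > base       # each term exceeds the limit bound
```

In the limit the left-hand side tends to $ 1/3+1/2 $, strictly larger than $ \int v_0^2 = 1/3 $, which is exactly the phenomenon the liminf inequality allows.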
Proof. We prove Theorem 2.1 by contraposition; that is, we assume that for all $ \theta_1, \theta_2 > 0 $, there exists an admissible process $ (x, u) $ such that
$ \|x-x_0\| < \theta_1 \quad\hbox{and}\quad I(x, u) < I(x_0, u_0)+\theta_2 D(x-x_0, u-u_0). \quad (1) $ |
Also, we are going to assume that all the hypotheses of Theorem 2.1 are satisfied with the exception of hypothesis (ⅱ) and we will obtain the negation of condition (ⅱ) of Theorem 2.1. First of all, note that since
$ \mu_\alpha(t)\ge0 \ (\alpha\in R, t\in T) \quad\hbox{and}\quad \lambda_i\ge0 \ (i = 1, \ldots, k), $ |
if $ (x, u) $ is admissible, then $ I(x, u)\ge J_0(x, u) $. Also, since
$ \mu_\alpha(t)\varphi_\alpha({{\tilde x}}_0(t)) = 0 \ (\alpha\in R, t\in T) \quad\hbox{and}\quad \lambda_i I_i(x_0, u_0) = 0 \ (i = 1, \ldots, k), $ |
then $ I(x_0, u_0) = J_0(x_0, u_0) $. Thus, (1) implies that for all $ \theta_1, \theta_2 > 0 $, there exists $ (x, u) $ admissible with $ \|x-x_0\| < \theta_1 $ and
$ J_0(x, u) < J_0(x_0, u_0)+\theta_2 D(x-x_0, u-u_0). \quad (2) $ |
Let $ z_0: = (x_0, u_0) $. Note that, for all admissible processes $ z = (x, u) $,
$ J_0(z) = J_0(z_0)+{J^\prime}_0(z_0;z-z_0)+{{\cal K}}_0(z)+{{\cal E}}_0(z) \quad (3) $ |
where
$ {{\cal E}}_0(x, u): = {\int_{t_0}^{t_1}} E_0(t, x(t), u_0(t), u(t))dt, $ |
$ {{\cal K}}_0(x, u): = {\int_{t_0}^{t_1}} \{ M_0(t, x(t)) + [u^\ast(t)-u_0^\ast(t)] N_0(t, x(t)) \}dt, $ |
and the functions $ M_0 $ and $ N_0 $ are given by
$ M_0(t, y): = F_0(t, y, u_0(t)) - F_0({{\tilde x}}_0(t)) - F_{0x}({{\tilde x}}_0(t))(y-x_0(t)), $ |
$ N_0(t, y): = F_{0u}^\ast(t, y, u_0(t)) - F_{0u}^\ast({{\tilde x}}_0(t)). $ |
We have,
$ M_0(t, y) = {{ {{1} \over {2}}}}[y^\ast-x_0^\ast(t)] P_0(t, y)(y-x_0(t)), \quad N_0(t, y) = Q_0(t, y)(y-x_0(t)), $ |
where
$ P_0(t, y): = 2\int_0^1(1-\lambda)F_{0xx}(t, x_0(t)+\lambda[y-x_0(t)], u_0(t))d\lambda, $ |
$ Q_0(t, y): = \int_0^1F_{0ux}(t, x_0(t)+\lambda[y-x_0(t)], u_0(t))d\lambda. $ |
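The representations of $ M_0 $ and $ N_0 $ above are the exact second-order Taylor expansions with integral remainder. The sketch below checks the scalar version of the $ M_0 $/$ P_0 $ identity numerically, with a hypothetical stand-in $ F(x) = \cosh x $ for the map $ y\mapsto F_0(t, y, u_0(t)) $ at a fixed $ t $:

```python
import numpy as np

# Taylor remainder identity: F(y) - F(x0) - F'(x0)(y - x0)
#   = (1/2)(y - x0)^2 * 2*\int_0^1 (1 - lam) F''(x0 + lam*(y - x0)) d lam.
F   = np.cosh           # hypothetical scalar stand-in for F_0(t, ., u_0(t))
dF  = np.sinh           # F'
d2F = np.cosh           # F''

x0, y = 0.3, 1.1
lam = np.linspace(0.0, 1.0, 100001)
integrand = (1.0 - lam) * d2F(x0 + lam * (y - x0))
# P_0 via the composite trapezoidal rule over lam
P0 = 2.0 * float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(lam)) / 2.0)

M0 = F(y) - F(x0) - dF(x0) * (y - x0)        # second-order remainder M_0
assert abs(M0 - 0.5 * (y - x0)**2 * P0) < 1e-8
```

The same formula, applied componentwise, produces the matrix $ P_0 $ and, with one derivative in $ u $ and the remainder in $ x $, the matrix $ Q_0 $.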
Now, as in [28], choose $ \nu > 0 $ such that for all $ z = (x, u) $ admissible with $ \|x-x_0\| < 1 $,
$ |{{\cal K}}_0(x, u)|\le\nu\|x-x_0\|[1+D(z-z_0)]. \quad (4) $ |
Now, by (2), for all $ q\in{{\bf N}} $ there exists $ z_q: = (x_q, u_q) $ admissible such that
$ \|x_q-x_0\| < \epsilon, \quad \|x_q-x_0\| < {{{1}} \over {{q}}}, \quad J_0(z_q)-J_0(z_0) < {{{1}} \over {{q}}}D(z_q-z_0). \quad (5) $ |
The last inequality of (5) implies that $ z_q\not = z_0 $ and so for all $ q\in{{\bf N}} $,
$ d_q: = [2D(z_q-z_0)]^{1/2} > 0. $ |
Since
$ \dot\rho(t) = -H_x^\ast({{\tilde x}}_0(t), \rho(t), \mu(t)) \ ({\hbox{a.e. in}\ T}), \quad H_u^\ast({{\tilde x}}_0(t), \rho(t), \mu(t)) = 0 \ (t\in T), $ |
it follows that $ {J^\prime}_0(z_0;(y, v)) = 0 $ for all $ (y, v)\in{{\cal X}}\times L^2(T; {{\bf R}^m}) $. With this in mind, by (3), condition (ⅲ)(b) of Theorem 2.1, (4) and (5),
$ J_0(z_q)-J_0(z_0) = {{\cal K}}_0(z_q)+{{\cal E}}_0(z_q) \ge -\nu\|x_q-x_0\|+D(z_q-z_0)(\delta-\nu\|x_q-x_0\|). $ |
By (5), for all $ q\in{{\bf N}} $,
$ D(z_q-z_0)\biggl( \delta - {{{1}} \over {{q}}} - {{{\nu}} \over {{q}}} \biggr) < {{{\nu}} \over {{q}}} $ |
and hence
$ \lim\limits_{q\to\infty}D(z_q-z_0) = 0. $ |
For all $ q\in{{\bf N}} $, define
$ y_q: = {{{x_q-x_0}} \over {{d_q}}}\quad\hbox{and}\quad v_q: = {{{u_q-u_0}} \over {{d_q}}}. $ |
By condition (a) of Lemma 3.1, there exist $ v_0\in L^2(T; {{\bf R}^m}) $ and a subsequence of $ \{z_q\} $ such that $ v_q \buildrel L^1 \over \rightharpoonup v_0 $ on $ T $. By condition (b) of Lemma 3.1, there exist $ \zeta_0\in L^2(T; {{\bf R}^n}) $, $ \bar y_0\in{{\bf R}^n} $ and a subsequence of $ \{z_q\} $ such that, if for all $ t\in T $, $ y_0(t): = \bar y_0+\int_{t_0}^t\zeta_0(\tau)d\tau $, then $ y_q \buildrel \hbox{u} \over \longrightarrow y_0 $ on $ T $.
We claim that
ⅰ. $ {J^{\prime\prime}}_0(z_0;(y_0, v_0))\le0 $, $ (y_0, v_0)\not = (0, 0) $.
ⅱ. $ {{\dot y}}_0(t) = f_x({{\tilde x}}_0(t))y_0(t)+f_u({{\tilde x}}_0(t))v_0(t) $ $ ({\hbox{a.e. in}\ T}) $, $ y_0(t_i) = 0 $ $ (i = 0, 1) $.
ⅲ. $ {I^\prime}_i(z_0;(y_0, v_0))\le0 $ $ (i\in i_a(z_0)) $, $ {I^\prime}_j(z_0;(y_0, v_0)) = 0 $ $ (j = k+1, \ldots, K) $.
ⅳ. $ \varphi_{\alpha x}({{\tilde x}}_0(t))y_0(t)+\varphi_{\alpha u}({{\tilde x}}_0(t))v_0(t)\le0 $ $ ({\hbox{a.e. in}\ T}, \, \alpha\in{{\cal I}}_a({{\tilde x}}_0(t))) $.
ⅴ. $ \varphi_{\beta x}({{\tilde x}}_0(t))y_0(t)+\varphi_{\beta u}({{\tilde x}}_0(t))v_0(t) = 0 $ $ ({\hbox{a.e. in}\ T}, \, \beta\in S) $.
Indeed, the equalities $ y_0(t_i) = 0 $ $ (i = 0, 1) $ follow from the definition of $ y_q $, the admissibility of $ z_q $ and the fact that $ y_q \buildrel \hbox{u} \over \longrightarrow y_0 $ on $ T $.
For all $ q\in{{\bf N}} $, we have
$ {{{{{\cal K}}_0(z_q)}} \over {{d_q^2}}} = {\int_{t_0}^{t_1}}\biggl \{ {{{M_0(t, x_q(t))}} \over {{d_q^2}}} + v_q^\ast(t) {{{N_0(t, x_q(t))}} \over {{d_q}}} \biggr \}dt. $ |
By condition (b) of Lemma 3.1,
$ {{{M_0(\cdot, x_q(\cdot))}} \over {{d_q^2}}} \buildrel L^\infty \over \longrightarrow {{ {{1} \over {2}}}} y_0^\ast(\cdot) F_{0xx}({{\tilde x}}_0(\cdot))y_0(\cdot), $ |
$ {{{N_0(\cdot, x_q(\cdot))}} \over {{d_q}}} \buildrel L^\infty \over \longrightarrow F_{0ux}({{\tilde x}}_0(\cdot))y_0(\cdot), $ |
both on $ T $ and, since $ v_q \buildrel L^1 \over \rightharpoonup v_0 $ on $ T $,
$ {{ {{1} \over {2}}}} {J^{\prime\prime}}_0(z_0;(y_0, v_0)) = \lim\limits_{q\to\infty}{{{{{\cal K}}_0(z_q)}} \over {{d_q^2}}}+{{ {{1} \over {2}}}}{\int_{t_0}^{t_1}}v_0^\ast(t)F_{0uu}({{\tilde x}}_0(t))v_0(t)dt. \quad (6) $ |
We have,
$ \liminf\limits_{q\to\infty}{{{{{\cal E}}_0(z_q)}} \over {{d_q^2}}} \ge {{ {{1} \over {2}}}}{\int_{t_0}^{t_1}}v_0^\ast(t)F_{0uu}({{\tilde x}}_0(t))v_0(t)dt. \quad (7) $ |
Indeed, by condition (a) of Lemma 3.1, we may choose $ \Upsilon\subset T $ measurable such that $ u_q \buildrel \hbox{u} \over \longrightarrow u_0 $ on $ \Upsilon $. Since $ z_q $ is admissible, recalling the definition of $ W_q $ given at the beginning of this section, one readily verifies that $ W_q \buildrel \hbox{u} \over \longrightarrow 1 $ on $ \Upsilon $. Moreover, for all $ t\in \Upsilon $ and $ q\in{{\bf N}} $,
$ {{{1}} \over {{d_q^2}}}E_0(t, x_q(t), u_0(t), u_q(t)) = {{ {{1} \over {2}}}} v_q^\ast(t)R_q(t)v_q(t) $ |
where
$ R_q(t): = 2\int_0^1 (1-\lambda)F_{0uu}(t, x_q(t), u_0(t)+\lambda[u_q(t)-u_0(t)])d\lambda. $ |
Clearly,
$ R_q(\cdot) \buildrel \hbox{u} \over \longrightarrow R_0(\cdot): = F_{0uu}({{\tilde x}}_0(\cdot)) \hbox{ on $\Upsilon$}. $ |
By condition (ⅰ) of Theorem 2.1, $ R_0(t)\ge 0 $ $ (t\in \Upsilon) $. Additionally, by condition (ⅲ)(a) of Theorem 2.1, for all $ q\in{{\bf N}} $,
$ E_0(t, x_q(t), u_0(t), u_q(t))\ge0 \quad ({\hbox{a.e. in}\ T}), $ |
and so, by condition (c) of Lemma 3.1,
$ \liminf\limits_{q\to\infty}{{{{{\cal E}}_0(z_q)}} \over {{d_q^2}}} = \liminf\limits_{q\to\infty}{{{1}} \over {{d_q^2}}}{\int_{t_0}^{t_1}}E_0(t, x_q(t), u_0(t), u_q(t))dt \ge \liminf\limits_{q\to\infty}{{{1}} \over {{d_q^2}}}\int_\Upsilon E_0(t, x_q(t), u_0(t), u_q(t))dt = {{ {{1} \over {2}}}}\liminf\limits_{q\to\infty}\int_\Upsilon v_q^\ast(t)R_q(t)v_q(t)dt \ge {{ {{1} \over {2}}}}\int_\Upsilon v_0^\ast(t)R_0(t)v_0(t)dt. $ |
As $ \Upsilon $ can be chosen to differ from $ T $ by a set of arbitrarily small measure and the function
$ t\mapsto v_0^\ast(t)R_0(t)v_0(t) $ |
belongs to $ L^1(T; {{\bf R}}) $, this inequality holds when $ \Upsilon = T $ and this establishes (7). By (3) and (5)–(7),
$ {{ {{1} \over {2}}}} {J^{\prime\prime}}_0(z_0;(y_0, v_0)) \le \lim\limits_{q\to\infty}{{{{{\cal K}}_0(z_q)}} \over {{d_q^2}}} + \liminf\limits_{q\to\infty}{{{{{\cal E}}_0(z_q)}} \over {{d_q^2}}} = \liminf\limits_{q\to\infty}{{{J_0(z_q)-J_0(z_0)}} \over {{d_q^2}}} \le 0. $ |
If $ (y_0, v_0) = (0, 0) $, then
$ \lim\limits_{q\to\infty}{{{{{\cal K}}_0(z_q)}} \over {{d_q^2}}} = 0 $ |
and so, by condition (ⅲ)(b) of Theorem 2.1,
$ {{ {{1} \over {2}}}}\delta\le\liminf\limits_{q\to\infty}{{{{{\cal E}}_0(z_q)}} \over {{d_q^2}}} \le 0, $ |
which contradicts the positivity of $ \delta $.
For all $ q\in{{\bf N}} $, we have
$ {{\dot y}}_q(t) = A_q(t)y_q(t) + B_q(t)v_q(t) \ ({\hbox{a.e. in}\ T}), \quad y_q(t_0) = 0, $ |
where
$ A_q(t) = \int_0^1 f_x(t, x_0(t) + \lambda[x_q(t)-x_0(t)], u_0(t))d\lambda, $ |
$ B_q(t) = \int_0^1 f_u(t, x_q(t), u_0(t) + \lambda[u_q(t)-u_0(t)])d\lambda. $ |
Since
$ A_q(\cdot) \buildrel \hbox{u} \over \longrightarrow A_0(\cdot): = f_x({{\tilde x}}_0(\cdot)), \quad B_q(\cdot) \buildrel \hbox{u} \over \longrightarrow B_0(\cdot): = f_u({{\tilde x}}_0(\cdot)), $ |
$ y_q \buildrel \hbox{u} \over \longrightarrow y_0 $ and $ v_q \buildrel L^1 \over \rightharpoonup v_0 $ all on $ \Upsilon $, it follows that $ {{\dot y}}_q \buildrel L^1 \over \rightharpoonup A_0y_0+B_0v_0 $ on $ \Upsilon $. By condition (b) of Lemma 3.1, $ {{\dot y}}_q \buildrel L^1 \over \rightharpoonup \zeta_0 = {{\dot y}}_0 $ on $ \Upsilon $. Therefore,
$ {{\dot y}}_0(t) = A_0(t)y_0(t) + B_0(t)v_0(t)\quad(t\in \Upsilon). $ |
As $ \Upsilon $ can be chosen to differ from $ T $ by a set of arbitrarily small measure, there cannot exist a subset of $ T $ of positive measure on which the functions $ y_0 $ and $ v_0 $ fail to satisfy the differential equation $ {{\dot y}}_0(t) = A_0(t)y_0(t)+B_0(t)v_0(t) $. Consequently,
$ {{\dot y}}_0(t) = A_0(t)y_0(t) + B_0(t)v_0(t)\quad({\hbox{a.e. in}\ T}) $ |
and (ⅰ) and (ⅱ) of our claim are proved.
Finally, in order to obtain (ⅲ)–(ⅴ) of our claim, it suffices to repeat the proofs of [28] from Eqs (8)–(15).
In this section we shall prove Theorem 2.2. We first state an auxiliary result which is an immediate consequence of Lemmas 3.1 and 3.2 of [30].
In the following lemma we shall assume that we are given $ u_0\in L^1(T; {{\bf R}^m}) $ and a sequence $ \{u_q\} $ in $ L^1(T; {{\bf R}^m}) $ such that
$ \lim\limits_{q\to\infty}D_2(u_q-u_0) = 0 \quad\hbox{and}\quad d_{2q}: = [2D_2(u_q-u_0)]^{1/2} > 0 \quad (q\in{{\bf N}}). $ |
For all $ q\in{{\bf N}} $ define
$ v_{2q}: = {{{u_q-u_0}} \over {{d_{2q}}}}. $ |
Lemma 4.1
$ {\rm\bf a.} $ For some $ v_{02}\in L^2(T; {{\bf R}^m}) $ and a subsequence of $ \{u_q\} $, $ v_{2q} \buildrel L^1 \over \rightharpoonup v_{02} $ on $ T $.
$ {\rm\bf b.} $ Let $ A_q\in L^\infty(T; {{\bf R}}^{n\times n}) $ and $ B_q\in L^\infty(T; {{\bf R}}^{n\times m}) $ be matrix functions for which there exist constants $ m_0, m_1 > 0 $ such that $ \|A_q\|_\infty\le m_0 $, $ \|B_q\|_\infty\le m_1 $ $ (q\in{{\bf N}}) $, and for all $ q\in{{\bf N}} $ denote by $ Y_q $ the solution of the initial value problem
$ {{\dot y}}(t) = A_q(t)y(t)+B_q(t)v_{2q}(t) \ ({{a.e.\; in}\ T}), \quad y(t_0) = 0. $ |
Then there exist $ \sigma_0\in L^2(T; {{\bf R}^n}) $ and a subsequence of $ \{z_q\} $, such that $ {{\dot Y}}_q \buildrel L^1 \over \rightharpoonup \sigma_0 $ on $ T $, and hence if $ Y_0(t): = \int_{t_0}^t\sigma_0(\tau)d\tau $ $ (t\in T) $, then $ Y_q \buildrel \hbox{u} \over \longrightarrow Y_0 $ on $ T $.
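Condition b can be illustrated with scalar data (a hypothetical choice: $ A_q\equiv1 $, $ B_q\equiv1 $, $ v_{2q}(t) = \sin(2\pi qt)\rightharpoonup0 $ weakly in $ L^1 $): the solutions $ Y_q $ of the initial value problem then converge uniformly to $ Y_0\equiv0 $ even though the inputs only converge weakly, which is the compactness exploited in the proof below.

```python
import numpy as np

# Illustration of Lemma 4.1(b): Y' = a*Y + v_q, Y(0) = 0, with oscillating
# inputs v_q(t) = sin(2*pi*q*t). Via variation of constants,
#   Y_q(t) = e^{a t} \int_0^t e^{-a s} v_q(s) ds,
# computed here with a cumulative trapezoidal rule.
a = 1.0
t = np.linspace(0.0, 1.0, 50001)
dt = t[1] - t[0]

def state(q):
    g = np.exp(-a * t) * np.sin(2 * np.pi * q * t)
    cum = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) * dt / 2.0)))
    return np.exp(a * t) * cum

sup5, sup80 = np.max(np.abs(state(5))), np.max(np.abs(state(80)))
assert sup80 < sup5          # faster oscillation -> smaller state deviation
assert sup80 < 0.05          # Y_q is already uniformly small at q = 80
```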
Proof. As in the proof of Theorem 2.1, we prove Theorem 2.2 by contraposition; that is, we assume that for all $ \theta_1, \theta_2 > 0 $, there exists an admissible process $ (x, u) $ such that
$ \|(x, u)-(x_0, u_0)\| < \theta_1 \quad\hbox{and}\quad I(x, u) < I(x_0, u_0)+\theta_2 D_2(u-u_0). \quad (8) $ |
Once again, as in the proof of Theorem 2.1, (8) implies that for all $ \theta_1, \theta_2 > 0 $, there exists $ (x, u) $ admissible with
$ \|(x, u)-(x_0, u_0)\| < \theta_1 \quad\hbox{and}\quad J_0(x, u) < J_0(x_0, u_0)+\theta_2 D_2(u-u_0). \quad (9) $ |
Let $ z_0: = (x_0, u_0) $. As in the proof of Theorem 2.1, for all admissible processes $ z = (x, u) $,
$ J_0(z) = J_0(z_0)+{J^\prime}_0(z_0;z-z_0)+{{\cal K}}_0(z)+{{\cal E}}_0(z) $ |
where $ {{\cal E}}_0 $ and $ {{\cal K}}_0 $ are given as in the proof of Theorem 2.1.
Now, by (9), for all $ q\in{{\bf N}} $ there exists $ z_q: = (x_q, u_q) $ admissible such that
$ \|z_q-z_0\| < {{{1}} \over {{q}}}, \quad J_0(z_q)-J_0(z_0) < {{{1}} \over {{q}}}D_2(u_q-u_0). \quad (10) $ |
Since $ z_q $ is admissible, the last inequality of (10) implies that $ u_q\not = u_0 $ and so
$ d_{2q}: = [2D_2(u_q-u_0)]^{1/2} > 0 \quad (q\in{{\bf N}}). $ |
By the first relation of (10), we have
$ \lim\limits_{q\to\infty}D_2(u_q-u_0) = 0. $ |
For all $ q\in{{\bf N}} $, define $ v_{2q} $ as in Lemma 4.1 and
$ Y_q: = {{{x_q-x_0}} \over {{d_{2q}}}} \quad\hbox{and}\quad W_{2q}: = [1+{{ {{1} \over {2}}}} V(u_q-u_0)]^{1/2}. $ |
By condition (a) of Lemma 4.1, there exist $ v_{02}\in L^2(T; {{\bf R}^m}) $ and a subsequence of $ \{z_q\} $ such that $ v_{2q} \buildrel L^1 \over \rightharpoonup v_{02} $ on $ T $. As in the proof of Theorem 2.1, for all $ q\in{{\bf N}} $,
$ \dot Y_q(t) = A_q(t)Y_q(t) + B_q(t)v_{2q}(t) \ ({\hbox{a.e. in}\ T}), \quad Y_q(t_0) = 0. $ |
We have the existence of $ m_0, m_1 > 0 $ such that $ \|A_q\|_\infty\le m_0 $ and $ \|B_q\|_\infty\le m_1 $ $ (q\in{{\bf N}}) $. By condition (b) of Lemma 4.1, there exist $ \sigma_0\in L^2(T; {{\bf R}^n}) $ and a subsequence of $ \{z_q\} $ such that, if $ Y_0(t): = \int_{t_0}^t \sigma_0(\tau)d\tau $ $ (t\in T) $, then $ Y_q \buildrel \hbox{u} \over \longrightarrow Y_0 $ on $ T $. We claim that
ⅰ. $ {J^{\prime\prime}}_0(z_0;(Y_0, v_{02}))\le0 $, $ (Y_0, v_{02})\not = (0, 0) $.
ⅱ. $ \dot Y_0(t) = f_x({{\tilde x}}_0(t))Y_0(t)+f_u({{\tilde x}}_0(t))v_{02}(t) $ $ ({\hbox{a.e. in}\ T}) $, $ Y_0(t_i) = 0 $ $ (i = 0, 1) $.
ⅲ. $ {I^\prime}_i(z_0;(Y_0, v_{02}))\le0 $ $ (i\in i_a(z_0)) $, $ {I^\prime}_j(z_0;(Y_0, v_{02})) = 0 $ $ (j = k+1, \ldots, K) $.
ⅳ. $ \varphi_{\alpha x}({{\tilde x}}_0(t))Y_0(t)+\varphi_{\alpha u}({{\tilde x}}_0(t))v_{02}(t)\le0 $ $ ({\hbox{a.e. in}\ T}, \, \alpha\in{{\cal I}}_a({{\tilde x}}_0(t))) $.
ⅴ. $ \varphi_{\beta x}({{\tilde x}}_0(t))Y_0(t)+\varphi_{\beta u}({{\tilde x}}_0(t))v_{02}(t) = 0 $ $ ({\hbox{a.e. in}\ T}, \, \beta\in S) $.
Indeed, for all $ q\in{{\bf N}} $, we have
$ {{{{{\cal K}}_0(z_q)}} \over {{d_{2q}^2}}} = {\int_{t_0}^{t_1}}\biggl \{ {{{M_0(t, x_q(t))}} \over {{d_{2q}^2}}} + v_{2q}^\ast(t) {{{N_0(t, x_q(t))}} \over {{d_{2q}}}} \biggr \}dt. $ |
Also, we have
$ {{{M_0(\cdot, x_q(\cdot))}} \over {{d_{2q}^2}}} \buildrel L^\infty \over \longrightarrow {{ {{1} \over {2}}}} Y_0^\ast(\cdot) F_{0xx}({{\tilde x}}_0(\cdot))Y_0(\cdot), $ |
$ {{{N_0(\cdot, x_q(\cdot))}} \over {{d_{2q}}}} \buildrel L^\infty \over \longrightarrow F_{0ux}({{\tilde x}}_0(\cdot))Y_0(\cdot), $ |
both on $ T $ and, since $ v_{2q} \buildrel L^1 \over \rightharpoonup v_{02} $ on $ T $,
$ {{ {{1} \over {2}}}} {J^{\prime\prime}}_0(z_0;(Y_0, v_{02})) = \lim\limits_{q\to\infty}{{{{{\cal K}}_0(z_q)}} \over {{d_{2q}^2}}}+{{ {{1} \over {2}}}}{\int_{t_0}^{t_1}}v_{02}^\ast(t)F_{0uu}({{\tilde x}}_0(t))v_{02}(t)dt. \quad (11) $ |
Now, for all $ t\in T $ and $ q\in{{\bf N}} $,
$ {{{1}} \over {{d_{2q}^2}}}E_0(t, x_q(t), u_0(t), u_q(t)) = {{ {{1} \over {2}}}} v_{2q}^\ast(t)R_q(t)v_{2q}(t) $ |
where
$ R_q(t): = 2\int_0^1 (1-\lambda)F_{0uu}(t, x_q(t), u_0(t)+\lambda[u_q(t)-u_0(t)])d\lambda. $ |
Clearly,
$ R_q(\cdot) \buildrel L^\infty \over \longrightarrow R_0(\cdot): = F_{0uu}({{\tilde x}}_0(\cdot)) \hbox{ on $T$}. $ |
Since $ \|z_q-z_0\|\to0 $ as $ q\to\infty $, it follows that $ W_{2q} \buildrel L^\infty \over \longrightarrow 1 $ on $ T $ and, by condition (ⅰ) of Theorem 2.2, $ R_0(t)\ge 0 $ $ ({\hbox{a.e. in}\ T}) $. Consequently,
$ \liminf\limits_{q\to\infty}{{{{{\cal E}}_0(z_q)}} \over {{d_{2q}^2}}} \ge {{ {{1} \over {2}}}}{\int_{t_0}^{t_1}}v_{02}^\ast(t)R_0(t)v_{02}(t)dt. \quad (12) $ |
On the other hand, since
$ \dot\rho(t) = -H_x^\ast({{\tilde x}}_0(t), \rho(t), \mu(t)) \ ({\hbox{a.e. in}\ T}), \quad H_u^\ast({{\tilde x}}_0(t), \rho(t), \mu(t)) = 0 \ (t\in T), $ |
we have that $ {J^\prime}_0(z_0;(y, v)) = 0 $ for all $ (y, v)\in {{\cal X}}\times L^2(T; {{\bf R}^m}) $. With this in mind, by (10)–(12),
$ {{ {{1} \over {2}}}} {J^{\prime\prime}}_0(z_0;(Y_0, v_{02})) \le \lim\limits_{q\to\infty}{{{{{\cal K}}_0(z_q)}} \over {{d_{2q}^2}}} + \liminf\limits_{q\to\infty}{{{{{\cal E}}_0(z_q)}} \over {{d_{2q}^2}}} = \liminf\limits_{q\to\infty}{{{J_0(z_q)-J_0(z_0)}} \over {{d_{2q}^2}}} \le 0. $ |
If $ (Y_0, v_{02}) = (0, 0) $, then
$ \lim\limits_{q\to\infty}{{{{{\cal K}}_0(z_q)}} \over {{d_{2q}^2}}} = 0 $ |
and so, by condition (ⅲ)$ (\hbox{a}^\prime) $ of Theorem 2.2,
$ {{ {{1} \over {2}}}}\delta\le\liminf\limits_{q\to\infty}{{{{{\cal E}}_0(z_q)}} \over {{d_{2q}^2}}} \le 0, $ |
which contradicts the positivity of $ \delta $ and this proves (ⅰ) of our claim.
Now, we also claim that
$ \dot Y_0(t) = f_x({{\tilde x}}_0(t))Y_0(t)+f_u({{\tilde x}}_0(t))v_{02}(t) \, \, ({\hbox{a.e. in}\ T}), \quad Y_0(t_i) = 0 \, \, (i = 0, 1). $ |
Indeed, the equalities $ Y_0(t_i) = 0 $ $ (i = 0, 1) $ follow from the definition of $ Y_q $, the admissibility of $ z_q $ and the fact that $ Y_q \buildrel \hbox{u} \over \longrightarrow Y_0 $ on $ T $. Also, observe that since $ Y_q \buildrel \hbox{u} \over \longrightarrow Y_0 $,
$ A_q(\cdot) \buildrel L^\infty \over \longrightarrow A_0(\cdot): = f_x({{\tilde x}}_0(\cdot)), $ |
$ B_q(\cdot) \buildrel L^\infty \over \longrightarrow B_0(\cdot): = f_u({{\tilde x}}_0(\cdot)), $ |
and $ v_{2q} \buildrel L^1 \over \rightharpoonup v_{02} $ all on $ T $, then $ \dot Y_q \buildrel L^1 \over \rightharpoonup A_0Y_0+B_0v_{02} $ on $ T $. By condition (b) of Lemma 4.1, $ \dot Y_q \buildrel L^1 \over \rightharpoonup \sigma_0 = \dot Y_0 $ on $ T $, which accordingly implies that
$ \dot Y_0(t) = A_0(t)Y_0(t)+B_0(t)v_{02}(t) \quad ({\hbox{a.e. in}\ T}) $ |
and our claim is proved.
Finally, in order to prove (ⅲ)–(ⅴ) of our claim, it suffices to repeat the proofs given in [28] from Eqs (8)–(15), replacing $ y_0 $ by $ Y_0 $, $ v_0 $ by $ v_{02} $ and $ \Upsilon $ by $ T $.
In this article, we have provided sufficiency theorems for weak and strong minima in an optimal control problem of Lagrange with fixed end-points, nonlinear dynamics, inequality and equality isoperimetric restrictions, and inequality and equality mixed time-state-control constraints. The sufficiency treatment studied in this paper does not require the proposed optimal controls to be continuous, but merely measurable. The sufficiency results not only yield local minima but also measure the deviation between optimal and admissible costs by means of a functional playing a role similar to that of the square of the classical norm of the Banach space $ L^1 $. Additionally, all the crucial sufficiency hypotheses are included in the theorems, in contrast with other necessary and sufficient theories, which depend strongly on preliminary assumptions not embedded in the corresponding optimality theorems. Finally, our sufficiency technique is self-contained, in that it is independent of classical sufficiency approaches involving Hamilton-Jacobi inequalities, matrix-valued Riccati equations, generalizations of Jacobi's theory appealing to extended notions of conjugate points, or embeddings of the original problem in some abstract Banach spaces.
The author is thankful to Dirección General de Asuntos del Personal Académico, Universidad Nacional Autónoma de México, for the support provided by the project PAPIIT-IN102220. Moreover, the author thanks the two anonymous referees for the encouraging suggestions made in their reviews.
The author declares no conflict of interest.