A case study of optimal input-output system with sampled-data control: Ding et al. force and fatigue muscular control model

  • Received: 01 April 2018
  • Mathematics Subject Classification: 49K15, 93B07, 92B05

  • The objective of this article is to analyze the muscular force response and to optimize the electrical pulse train using the Ding et al. force-fatigue model. A geometric analysis of the dynamics is provided and preliminary results are presented in the framework of optimal control using a simplified input-output model. In parallel, to take into account the physical constraints of the problem, namely partial state observation and input restrictions, an optimized pulse train is computed with model predictive control, where a non-linear observer is used to estimate the state variables.

    Citation: Toufik Bakir, Bernard Bonnard, Jérémy Rouot. A case study of optimal input-output system with sampled-data control: Ding et al. force and fatigue muscular control model[J]. Networks and Heterogeneous Media, 2019, 14(1): 79-100. doi: 10.3934/nhm.2019005




    Functional Electrical Stimulation (FES) consists of applying an electrical stimulation to the muscle in order to produce functional movements. It can be used for muscular reinforcement, rehabilitation of the muscle and, in the case of paralysis, to activate the paralyzed muscles. Mathematically, FES leads to a sampled-data control problem which can be analyzed in this framework.

    The simulations of muscular response to electrical stimulation are based on dynamic models. They originate from the Hill-Langmuir equation in the context of biochemistry and pharmacology, see [12]. More recent models in the framework of model identification in non-linear control are due to Bobet and Stein [2] and Law and Shields [15], and a more sophisticated model was proposed by Ding et al. [6,7,8,9], where the force model is coupled to a fatigue model based on experimental results [20]. This leads to a set of five differential equations with a sampled-data control which can be used to describe the force response to a pulse train of electrical stimulations; we shall refer to this model as the Ding et al. force-fatigue model in the sequel.

    Only a limited literature has used this model to design an optimized train of pulses controlling the force level [6,7,8,9,1]. Our aim is to provide a more complete study of the problem in the framework of non-linear optimal control with sampled-data controls.

    A geometric analysis of the model is provided and preliminary results are presented in the framework of optimal control with sampled-data control [3]. It is based on a simplified dynamics exploiting the geometric properties of the force-fatigue model and on a control reduction simplifying the physical control constraints. The complete system is analyzed in detail using a model predictive control strategy (MPC), see [17,19], coupled with a non-linear observer based on [11] to estimate the state variables.

    The article is organized as follows. In Section 2, we give a brief presentation of the Ding et al. force-fatigue model based on [20]. In Section 3, the dynamics of the force model is investigated to describe its input-output properties. In Section 4, the force-fatigue model is analyzed in the framework of geometric optimal sampled-data control systems and preliminary results are presented on a simplified model obtained by a model reduction and an input transformation. Section 5 is devoted to the observer description. In Section 6, the MPC method is presented, using a further discretization of the dynamics, to conclude with the algorithm implemented to compute in practice the optimized pulse trains. Numerical results are presented in the final Section 7.

    We refer to [20] for a complete description and discussion of the model. The Ding et al. model studied in this article is presented next in the framework of model dynamics construction based on the so-called tetania phenomenon in muscular responses. The first part of the model is the force response (output) to electrical stimulation pulses (input). The pulses are normalized Dirac impulses at times $ 0 = t_1<t_2<\ldots< t_n: $

    $ v(t) = \sum\limits_{i = 1}^{n} \delta(t-t_i) $

    where $ I_i = t_i-t_{i-1} $ is the interpulse; convexifying the problem leads to applying the input:

    $ v(t) = \sum\limits_{i = 1}^{n} \eta_i\, \delta(t-t_i) $

    with parameters $ \eta_i\in [0, 1], \; i\in \{1, \ldots, n\} $. Such a pulse train feeds a first-order model to produce an output according to the dynamics

    $ \dot u(t)+\dfrac{u(t)}{\tau_c} = \dfrac{1}{\tau_c}\, \sum\limits_{i = 1}^{n} R_i\, \eta_i\, \delta(t-t_i) $
    (1)

    where $ R_i $ is a scaling factor associated with the phenomenon of tetania; it corresponds to an accumulated effect of successive pulses and is modelled as

    $ R_i = \begin{cases} 1 & \text{for } i = 1\\ 1+(R_0-1)\, \exp\left(-\dfrac{t_i-t_{i-1}}{\tau_c}\right) & \text{for } i>1 \end{cases} $

    where the magnitude is characterized by $ R_0 $, which corresponds to the limit case of high-frequency pulses. The FES response to the input is denoted by $ E_s(t) $ and is given by

    $ E_s(t) = \dfrac{1}{\tau_c}\, \sum\limits_{i = 1}^{n} R_i\, \eta_i\, H(t-t_i)\, \exp\left(-\dfrac{t-t_i}{\tau_c}\right) $
    (2)

    where $ H(t-t_i) = \begin{cases} 0 & \text{if } t<t_i\\ 1 & \text{if } t\ge t_i \end{cases} $ is the Heaviside function.

    The force response to such a train is modelled by the two equations of the so-called force model:

    $ \dfrac{\mathrm{d} C_N}{\mathrm{d} t}(t)+\dfrac{C_N(t)}{\tau_c} = E_s(t) $
    (3)

    which corresponds to a first order (resonant) linear dynamics which can be integrated with $ C_N(0) = 0 $ and the force response $ F(t) $ is described by the equation

    $ \dfrac{\mathrm{d} F}{\mathrm{d} t}(t) = A(t)\, a(t)-F(t)\, b(t) $
    (4)

    where $ A $ is a fatigue variable. Non-linear features of the model are described by the two mappings $ a, b $:

    $ a(t) = \dfrac{C_N(t)}{K_m(t)+C_N(t)}, \qquad b(t) = \dfrac{1}{\tau_1(t)+\tau_2\, a(t)} $
    (5)

    where $ K_m, \tau_1 $ are fatigue variables and $ \tau_2 $ an additional parameter.

    The complete model is obtained by describing the evolutions of the variables associated to fatigue and corresponds to the linear dynamics

    $ \dfrac{\mathrm{d} A}{\mathrm{d} t}(t) = -\dfrac{A(t)-A_{rest}}{\tau_{fat}}+\alpha_A\, F(t) $
    (6)
    $ \dfrac{\mathrm{d} K_m}{\mathrm{d} t}(t) = -\dfrac{K_m(t)-K_{m, rest}}{\tau_{fat}}+\alpha_{K_m}\, F(t) $
    (7)
    $ \dfrac{\mathrm{d} \tau_1}{\mathrm{d} t}(t) = -\dfrac{\tau_1(t)-\tau_{1, rest}}{\tau_{fat}}+\alpha_{\tau_1}\, F(t). $
    (8)

    The full set of equations (3)-(4)-(6)-(7)-(8) forms the force-fatigue model.

    We refer to Table 1 for the definitions and details of the symbols of the force-fatigue model.

    Table 1.

    | Symbol | Unit | Value | Description |
    | --- | --- | --- | --- |
    | $ C_{N} $ | | | Normalized amount of $ Ca^{2+} $-troponin complex |
    | $ F $ | $ N $ | | Force generated by muscle |
    | $ t_{i} $ | $ ms $ | | Time of the $ i^{th} $ pulse |
    | $ n $ | | | Total number of the pulses before time $ t $ |
    | $ i $ | | | Stimulation pulse index |
    | $ \tau_{c} $ | $ ms $ | $ 20 $ | Time constant that commands the rise and the decay of $ C_{N} $ |
    | $ R_{0} $ | | $ 1.143 $ | Term of the enhancement in $ C_{N} $ from successive stimuli |
    | $ A $ | $ \frac{N}{ms} $ | | Scaling factor for the force and the shortening velocity of muscle |
    | $ \tau_{1} $ | $ ms $ | | Force decline time constant when strongly bound cross-bridges absent |
    | $ \tau_{2} $ | $ ms $ | $ 124.4 $ | Force decline time constant due to friction between actin and myosin |
    | $ K_{m} $ | | | Sensitivity of strongly bound cross-bridges to $ C_{N} $ |
    | $ A_{rest} $ | $ \frac{N}{ms} $ | $ 3.009 $ | Value of the variable $ A $ when muscle is not fatigued |
    | $ K_{m, rest} $ | | $ 0.103 $ | Value of the variable $ K_{m} $ when muscle is not fatigued |
    | $ \tau_{1, rest} $ | $ ms $ | $ 50.95 $ | Value of the variable $ \tau_{1} $ when muscle is not fatigued |
    | $ \alpha_{A} $ | $ \frac{1}{ms^{2}} $ | $ -4.0\times 10^{-7} $ | Coefficient for the force-model variable $ A $ in the fatigue model |
    | $ \alpha_{K_{m}} $ | $ \frac{1}{ms\, N} $ | $ 1.9\times 10^{-8} $ | Coefficient for the force-model variable $ K_{m} $ in the fatigue model |
    | $ \alpha_{\tau_{1}} $ | $ \frac{1}{N} $ | $ 2.1\times 10^{-5} $ | Coefficient for the force-model variable $ \tau_{1} $ in the fatigue model |
    | $ \tau_{fat} $ | $ s $ | $ 127 $ | Time constant controlling the recovery of $ (A, K_{m}, \tau_{1}) $ |

    For the purpose of our analysis the force-fatigue model is written as the single-input control system:

    $ \dfrac{\mathrm{d} x}{\mathrm{d} t}(t) = F_0(x(t))+u(t)\, F_1(x(t)) $
    (9)

    with $ x = (x_1, x_2, x_3, x_4, x_5) = (C_N, F, A, K_m, \tau_1) $ where $ u $ denotes the control $ u = E_s(t) $ corresponding to the sampled physical control data:

    $ (I_1, I_2, \ldots, I_n, \eta_1, \ldots, \eta_n) $
    (10)

    with constraints

    $ I_{\min} \le I_i \le I_{\max}, \quad 0\le \eta_i\le 1 $

    corresponding to interpulse bounds and amplitude convexification. This leads to a non-linear model with sampled control data with prescribed convex control constraints $ (\eta, I)\in C;\; \eta = (\eta_1, \ldots, \eta_n), \; I = (I_1, \ldots, I_n) $.
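    To fix ideas, the response of the force-fatigue model to a pulse train can be simulated directly from (2)-(8) with the Table 1 values. The Python sketch below is only illustrative (forward-Euler integration, unit amplitudes $ \eta_i = 1 $, a fixed interpulse of $ 25\,ms $ and resting initial conditions are our assumptions; the function names are ours, not taken from any reference implementation):

    ```python
    import numpy as np

    # Parameters from Table 1 (time unit: ms).
    tau_c, R0, tau_2 = 20.0, 1.143, 124.4
    A_rest, Km_rest, tau1_rest = 3.009, 0.103, 50.95
    alpha_A, alpha_Km, alpha_tau1 = -4.0e-7, 1.9e-8, 2.1e-5
    tau_fat = 127.0e3  # 127 s expressed in ms

    def Es(t, t_pulses, eta):
        """FES input (2): scaled decaying exponentials triggered at the pulse times."""
        out = 0.0
        for i, (ti, ei) in enumerate(zip(t_pulses, eta)):
            if t < ti:
                break
            Ri = 1.0 if i == 0 else 1.0 + (R0 - 1.0) * np.exp(-(ti - t_pulses[i - 1]) / tau_c)
            out += Ri * ei * np.exp(-(t - ti) / tau_c) / tau_c
        return out

    def rhs(t, x, t_pulses, eta):
        """Right-hand side of the force-fatigue model (3)-(8)."""
        CN, F, A, Km, tau1 = x
        a = CN / (Km + CN)                       # (5)
        b = 1.0 / (tau1 + tau_2 * a)             # (5)
        return np.array([
            -CN / tau_c + Es(t, t_pulses, eta),              # (3)
            A * a - F * b,                                    # (4)
            -(A - A_rest) / tau_fat + alpha_A * F,            # (6)
            -(Km - Km_rest) / tau_fat + alpha_Km * F,         # (7)
            -(tau1 - tau1_rest) / tau_fat + alpha_tau1 * F,   # (8)
        ])

    # Pulse train with interpulse I = 25 ms, unit amplitudes; forward-Euler integration.
    t_pulses = np.arange(0.0, 1000.0, 25.0)
    eta = np.ones_like(t_pulses)
    dt, T = 0.1, 1000.0
    x = np.array([0.0, 0.0, A_rest, Km_rest, tau1_rest])
    for k in range(int(T / dt)):
        x = x + dt * rhs(k * dt, x, t_pulses, eta)
    print("Force after 1 s of stimulation:", x[1])
    ```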

    The force model can be briefly investigated to derive preliminary results.

    Observe that in the force model the static non-linearities described by the mappings $ a $ and $ b $ can be absorbed by time reparameterization to provide an explicit form for the force responses.

    Proposition 1. The force model can be integrated by quadratures using a time reparameterization.

    Proof. Integrating the linear system (3), the equation (4) can be written as

    $ \dfrac{\mathrm{d} F}{\mathrm{d} s}(s) = c(s)-F(s) $
    (11)

    with $ \mathrm{d} s = b(t)\, \mathrm{d} t $ and $ c(s) = A(s)\, a(s)/b(s) $. It can be integrated using the Lagrange (variation of constants) formula.
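    Explicitly, since (11) is linear in $ F $, the Lagrange formula with $ F(0) = 0 $ gives

    $ F(s) = \int_0^{s} e^{-(s-\sigma)}\, c(\sigma)\, \mathrm{d} \sigma, $

    which is the quadrature announced in Proposition 1.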

    Thus this gives an explicit force response in the time parameter $ s $

    $ (I, \eta, s)\mapsto F(I, \eta, s). $
    (12)

    Clearly we have

    Lemma 3.1. The above mapping is smooth with respect to $ I, \eta $ and piecewise smooth with respect to $ s $.

    For the sake of providing a smooth response for the observer, it is sufficient to smooth the physical sampled control data as follows: use a bump function to smooth each Heaviside mapping $ H(t-t_i) $ at the sampling time $ t_i $.

    For the sake of the geometric analysis of the dynamics and to simplify the control constraints on $ (I, \eta) $ the FES signal $ E_s(t) $ is taken as the input $ u(\cdot) $ of the control system. Using (2) one can write

    $ u(t) = E_s(t) = e^{-t/\tau_c}\, v(t) $

    with

    $ v(t) = \dfrac{1}{\tau_c}\, \sum\limits_{i = 1}^{n} H(t-t_i)\, R_i\, \eta_i\, e^{t_i/\tau_c}. $
    (13)

    Once $ 0 = t_1<t_2<\ldots<t_n $ are given, $ v(t) $ is a piecewise constant control depending upon the parameters $ \eta_1, \ldots, \eta_n $ and the dynamics of the force model can be analyzed in the frame of geometric control.

    First of all the control system (9) is analyzed in the framework of geometric control.

    Consider a (smooth) single-input control system with a single observation:

    $ \begin{cases} \dfrac{\mathrm{d} x}{\mathrm{d} t} = X(x)+u\, Y(x)\\[1mm] y = h(x) \end{cases} $
    (14)

    where $ x\in \mathbb{R}^n, \, y = h(x)\in \mathbb{R} $ corresponds to a (smooth) single-observation mapping. The following concepts rely on seminal results of geometric control, see [18,13].

    We denote $ [U, V] $ the Lie bracket of two (smooth) vector fields of $ \mathbb{R}^n $:

    $ [U, V](x) = \frac{\partial U}{\partial x}(x)\, V(x) -\frac{\partial V}{\partial x}(x)\, U(x) $

    and a vector field $ U $ acts on (smooth) mappings $ f $ with the Lie derivative:

    $ \mathcal L_U f = \frac{\partial f}{\partial x}\, U(x). $

    Let $ D^1 = \mathrm{span}\{X, Y\} $ and define recursively: $ D^k = \mathrm{span} \{D^{k-1}\cup [D^1, D^{k-1}] \}, \; k>1 $. The length of $ [X_{i_1}, \ldots, [X_{i_{k-1}}, X_{i_k}]\ldots] $ is $ k $. Hence $ D^k $ represents Lie brackets of $ X, Y $ of length smaller than $ k+1 $; denote $ D_{L.A.} = \mathrm{span} \cup_{k\ge 1} D^k $ the Lie algebra generated by $ \{X, Y\} $. The system is called weakly controllable if for each $ x\in \mathbb{R}^n, \; D_{L.A.}(x) = \mathbb{R}^n $.
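    For readers who wish to experiment with these objects, the bracket defined above can be computed symbolically. The short sketch below uses two illustrative planar vector fields, not the vector fields of the model:

    ```python
    import sympy as sp

    # Sketch: the Lie bracket [U, V](x) = (dU/dx) V - (dV/dx) U, computed symbolically.
    x1, x2 = sp.symbols('x1 x2')
    x = sp.Matrix([x1, x2])
    U = sp.Matrix([x2, -x1])      # illustrative vector field
    V = sp.Matrix([0, x1**2])     # illustrative vector field

    def lie_bracket(U, V, x):
        return U.jacobian(x) * V - V.jacobian(x) * U

    print(lie_bracket(U, V, x))   # -> Matrix([[x1**2], [-2*x1*x2]])
    ```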

    The observation space is the set of mappings: $ \Theta = \{\mathcal{L}_{G}\, h;\; G\in D_{L.A.}\} $ and the system is called observable if for each $ x_1, x_2\in \mathbb{R}^n $, $ x_1\neq x_2 $ there exists $ \theta\in \Theta $ such that $ \theta(x_1)\neq \theta(x_2) $.

    Taking $ x_0\in \mathbb{R}^n $, a frame at $ x_0 $ is a set of elements $ X_1, \ldots, X_n $ of $ D_{L.A.} $ such that: $ X_1, \ldots, X_n $ are linearly independent at $ x_0 $ and $ \sum_{i = 1}^n \text{length}(X_i) $ is minimal.

    The system is called feedback linearizable in an open set $ \mathcal{V}\subset \mathbb{R}^n $ if $ \dot x = X(x)+u\, Y(x) $ is feedback equivalent to the linear system $ \dot x = Ax+ub $, that is there exists a diffeomorphism $ \varphi $ on $ \mathcal{V} $, $ \varphi(0) = 0 $ and a feedback $ u = \alpha(x)+\beta(x)\, v, \; \beta(x)\neq 0 $ such that $ g\cdot (X, Y) = (A, b) $ with $ g = (\varphi, \alpha, \beta) $ acting by change of coordinates and (affine) feedback.

    Fix $ x(0) = x_0 $ and $ T>0 $ and consider the extremity mapping: $ E^{x_0, T}: u(\cdot)\in L^\infty([0, T]) \mapsto x(T, x_0, u) $ where $ x(\cdot) $ is the response of $ \dot x = X(x)+u\, Y(x) $ to $ u $ defined on $ [0, T] $.

    The control $ u(\cdot) $ is called singular on $ [0, T] $ if the extremity mapping $ E^{x_0, T} $ is not of maximal rank $ n $ when evaluated at $ u(\cdot) $.

    Geometric analysis of an observed system of the form (14) amounts to computing $ D_{L.A.} $, the observation space, the singular controls and the feedback equivalence properties. The achievements of geometric optimal control amount to synthesizing the optimal control in relation with the Lie bracket properties of a frame.

    The control in the force-fatigue model (1) falls into the framework of sampled-data optimal control problems and we refer to [3] for more details. We use the following terminology.

    Consider a control system $ \dot x = f(x, u) $. When the state $ x(\cdot) $ and the control $ u(\cdot) $ evolve continuously in time, we speak of a continuous-time control problem and the control is said permanent. The sampled-data control case is when $ u(\cdot) $ is piecewise constant.

    For simplicity consider the reduced model in dimension $ 3 $ related to the force-fatigue model.

    Let the minimization problem be defined by

    $ \int_0^{T} \left( u(s)^2 + (F-F_{ref})^2\right)\, \mathrm{d} s \; \rightarrow \min\limits_{ |u|\le 1} $

    subject to the dynamics in $ \tilde x = (C_N, F, A) $ given by

    $ \begin{aligned} \dfrac{\mathrm{d} C_N}{\mathrm{d} t} &= -\dfrac{C_N}{\tau_c}+u\\ \dfrac{\mathrm{d} F}{\mathrm{d} t} &= a\, A-b\, F\\ \dfrac{\mathrm{d} A}{\mathrm{d} t} &= -\dfrac{A-A_{rest}}{\tau_{fat}}+\alpha_A\, F \end{aligned} $
    (15)

    and with the initial conditions

    $ F(0) = A(0) = C_N(0) = 0. $

    The parameters $ T, \tau_2, \tau_{fat}, \alpha_A, A_{rest} $ and the variables $ K_m, \tau_1 $ are constant and fixed to some prescribed values.

    The system (15) can be written into the form

    $ \dot{\tilde x} = \tilde F_0(\tilde x) + u\, \tilde F_1(\tilde x). $

    Remark 1. The model (15) is a simplification of the force-fatigue model. In particular, without loss of generality, we don't take into account the factor $ \exp(-t/\tau_c)/\tau_c $ appearing in (2). Control constraints $ |u|\le 1 $ are not the physical constraints, see Section 3.3. Also the fatigue dynamics is reduced to a single equation, motivated by the controllability properties of the system (6)-(7)-(8).

    The permanent control case. The problem can be summarized as a permanent optimal control problem as follows

    $ \begin{cases} \min\limits_{u(\cdot)} \displaystyle\int_0^{T} \left( u(s)^2 + (F-F_{ref})^2\right)\, \mathrm{d} s\\ \dot{\tilde x} = \tilde F_0(\tilde x) + u\, \tilde F_1(\tilde x)\\ u(t)\in [-1, 1]\\ (C_N(0), F(0), A(0)) = (0, 0, 0). \end{cases} $
    (16)

    The pseudo-Hamiltonian of the system is

    $ H(\tilde x, p, p^0, u) = p\cdot (\tilde F_0(\tilde x)+u\, \tilde F_1(\tilde x))+p^0\, (u^2+(F-F_{{ref}})^2) $

    where $ (p, p^0):[0, T]\mapsto \mathbb{R}^4 $ is the adjoint vector.

    We denote by $ H_i = p\cdot \tilde F_i, \; i = 0, 1 $ the Hamiltonian lifts of the vector fields $ \tilde F_i, \; i = 0, 1 $.

    Normal case: $ p^0 = -1/2 $. Applying the Pontryagin maximum principle (PMP), the optimal (permanent) control is given in the normal case by

    $ u(t) = \begin{cases} 1 & \text{if } H_1(\tilde x(t), p(t))\ge 1,\\ -1 & \text{if } H_1(\tilde x(t), p(t))\le -1,\\ H_1(\tilde x(t), p(t)) & \text{otherwise.} \end{cases} $
    (17)

    Abnormal case: $ p^0 = 0 $. Abnormal controls are characterized by the equation $ p\cdot \tilde F_1 = 0 $. Differentiating twice with respect to $ t $ leads to

    $ p\cdot [\tilde F_0, \tilde F_1] = 0, \quad p\cdot \left([[\tilde F_0, \tilde F_1], \tilde F_0] + u\, [[\tilde F_0, \tilde F_1], \tilde F_1]\right) = 0. $

    Eliminating $ p $ we obtain $ D + u\, D' = 0 $ where

    $ \begin{aligned} D &= \det\left(\tilde F_1, [\tilde F_1, \tilde F_0], [[\tilde F_1, \tilde F_0], \tilde F_0]\right)\\ D' &= \det\left(\tilde F_1, [\tilde F_1, \tilde F_0], [[\tilde F_1, \tilde F_0], \tilde F_1]\right). \end{aligned} $

    Computing we have

    $ D = -\alpha_A \left( a'(C_N)\, A -b'(C_N)\, F\right)^2, \quad D' = 0 $

    and $ D = 0 $ is equivalent to $ A (C_N (\tau_1+\tau_2)+K_m\tau_1)^2+F \tau_2 (C_N+K_m)^2 = 0. $ This implies $ F = A = 0 $ and the following lemma.

    Lemma 4.1. There are no physically admissible singular trajectories for the problem (15).

    The sampled-data control case. The corresponding optimal sampled-data control problem is given by

    $ \begin{cases} \min \displaystyle\int_0^{T} \left( u(kT_s)^2 + (F-F_{ref})^2\right)\, \mathrm{d} t, \quad \text{with } k = \lfloor t/T_s\rfloor\\ \dot{\tilde x} = \tilde F_0(\tilde x) + u\, \tilde F_1(\tilde x)\\ u(kT_s)\in [-1, 1]\\ (C_N(0), F(0), A(0)) = (0, 0, 0). \end{cases} $
    (18)

    where $ T_s>0 $ is a fixed sampling period such that $ T = j\, T_s $ for some $ j\in \mathbb{N} $.

    Normal case: $ p^0 = -1/2 $. Following [3], the optimal (sampled-data) control is

    $ u(kT_s) \in \mathop{\mathrm{argmax}}\limits_{y\in [-1, 1]}\; \dfrac{1}{T}\, \int_{kT_s}^{(k+1)T_s} H(s, \tilde x(s), p(s), p^0, y)\, \mathrm{d} s $
    (19)

    for all $ k\in \{0, \ldots, j-1\} $.

    Write

    $ \bar{H}_1 = \frac{1}{T}\, \int_{kT_s}^{(k+1)T_s} p_1(s)\, \mathrm{d} s $

    then, the optimal sampled-data control is

    $ u(kT_s) = \begin{cases} 1 & \text{if } \bar{H}_1>1,\\ -1 & \text{if } \bar{H}_1<-1,\\ \bar{H}_1 & \text{otherwise.} \end{cases} $
    (20)

    Numerical results. In Fig. 1, we represent numerical results for several values of $ T/T_s $. The optimal permanent control is represented with a thin continuous line. A Gaussian quadrature rule is used to compute $ \bar{H}_1 $. We observe the convergence of the optimal sampled-data control to the permanent control as $ T_s $ tends to $ 0 $.
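    As an indication of how such a computation can be organized, the following sketch evaluates the averaged switching function $ \bar H_1 $ of (20) on one sampling interval with a Gauss-Legendre rule and applies the saturation; the adjoint component $ p_1 $ along the current extremal is represented by a placeholder function, so the numbers are purely illustrative:

    ```python
    import numpy as np

    # Sketch of the projection step (20): average the switching function over one
    # sampling interval [k*Ts, (k+1)*Ts] with Gauss-Legendre quadrature, then saturate.
    def sampled_control(p1_of_t, k, Ts, T, n_nodes=5):
        nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
        a, b = k * Ts, (k + 1) * Ts
        s = 0.5 * (b - a) * nodes + 0.5 * (a + b)        # map [-1, 1] -> [a, b]
        H1_bar = (0.5 * (b - a) / T) * np.dot(weights, p1_of_t(s))
        return np.clip(H1_bar, -1.0, 1.0)                # saturation rule (20)

    # Example with a made-up adjoint profile (placeholder, not a computed extremal):
    u_k = sampled_control(lambda s: np.sin(s / 50.0), k=3, Ts=10.0, T=200.0)
    ```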

    Figure 1. Time evolution of the permanent control (thin continuous line) and of the sampled-data control for several values of the sampling period $ T_s\in \{T/20, T/40, T/200\} $.

    The force evolution is compared for $ K_{m, rest} $ and different values $ K_{m, rest}^{\prime} $ ($ \pm30\% $ error) for $ I = 10\,ms $, $ 50\,ms $ and $ 100\,ms $ (see Fig. 2 for $ I = 10\,ms $). In the case of $ I = 10\,ms $, the maximum force error (see Fig. 3) is $ -0.3\% $. Depending on the interpulse value, the largest force error is obtained for $ I = 100\,ms $ ($ -1.3\% $), which means that a tolerance of $ \pm30\% $ still gives the force evolution with a good accuracy.

    Figure 2. Evolution of $ K_m $ for different initial conditions (case of $ I = 10\,ms $).

    Figure 3. Relative error of the force for a well-known and an erroneous $ K_m $ initial condition (case of $ I = 10\,ms $).

    In this section, we design a modified version of the standard high-gain observer given in [11] taking into account the specific structure of the problem. The system defined by the force equation (4) and the fatigue model (6) and (8) can be rewritten as the single input-output system:

    $ \begin{cases} \dot{x}(t) = a^m(t, E_s(t))\, f(x(t), E_s(t))\\ y(t) = h(x(t)) = F(t), \end{cases} $
    (21)

    with $ x = (F, A, \tau_1) \in \mathbb{R}^{3} $, $ y \in \mathbb{R} $, $ E_s \in \mathbb{R} $, $ 0<a<1 $ given by (5) and $ m $ a positive integer. Note that in (21), $ K_m $ is taken as the solution of (7), thanks to the weak sensitivity of the solution with respect to this variable (see Section 5.1 for the sensitivity study of the force versus $ K_m $). We introduce the change of variables $ \phi $:

    $ \begin{cases} \phi: \mathbb{R}^3 \to \mathbb{R}^3\\ x \mapsto \phi(x) = \left[h(x), L_f h(x), L^2_f h(x)\right]. \end{cases} $
    (22)

    We have:

    $ \dfrac{\partial\phi}{\partial x} = \begin{pmatrix} 1 & 0 & 0\\[1mm] -\dfrac{1}{\tau_1+\tau_2\, a} & a & \dfrac{F}{(\tau_1+\tau_2\, a)^2}\\[2mm] \dfrac{\partial L^2_f h(x)}{\partial x_1} & \dfrac{\partial L^2_f h(x)}{\partial x_2} & \dfrac{\partial L^2_f h(x)}{\partial x_3} \end{pmatrix} $

    where

    $ \begin{aligned} \dfrac{\partial L^2_f h(x)}{\partial x_1} &= a\,\alpha_A+\dfrac{1}{(x_3+\tau_2 a)^2}\left(1-\dfrac{x_3-\tau_{1, rest}}{\tau_{fat}}+2\,\alpha_{\tau_1} x_1\right),\\ \dfrac{\partial L^2_f h(x)}{\partial x_2} &= -a\,\dfrac{\tau_{fat}+x_3+\tau_2 a}{\tau_{fat}\, (x_3+\tau_2 a)},\\ \dfrac{\partial L^2_f h(x)}{\partial x_3} &= \dfrac{1}{(x_3+\tau_2 a)^2}\left(x_2\, a-\dfrac{x_1}{\tau_{fat}}\right)-\dfrac{2\, x_1}{(x_3+\tau_2 a)^3}\left(1-\dfrac{x_3-\tau_{1, rest}}{\tau_{fat}}+\alpha_{\tau_1} x_1\right) \end{aligned} $

    and

    $ \det\left(\dfrac{\partial\phi}{\partial x}\right) = \dfrac{a\left(x_2\, a\, \tau_{fat}\, (x_3+\tau_2 a)-\tau_{fat}\, x_1+2\, x_1\left(x_3-\tau_{1, rest}-\alpha_{\tau_1}\tau_{fat}\, x_1\right)\right)}{\tau_{fat}\, (x_3+\tau_2 a)^3} $
    (23)

    A sensitivity study in (6) and (8) concerning respectively the terms $ -(A-A_{rest})/\tau_{fat} $ and $ -(\tau_1-\tau_{1, rest})/\tau_{fat} $ shows that neglecting these two terms induces a maximum error of $ 7\% $ for $ A $ and $ \tau_1 $, and of $ 6\% $ for the force value. Hence we use the simplified model:

    $ \dfrac{\mathrm{d} A}{\mathrm{d} t} = \alpha_A\, F $
    (24)
    $ \dfrac{\mathrm{d} \tau_1}{\mathrm{d} t} = \alpha_{\tau_1}\, F. $
    (25)

    Hence, for the simplified model (4), (24), (25), the determinant (23) reduces to:

    $ \det\left(\dfrac{\partial\phi}{\partial x}\right) = \dfrac{a\left(x_2\, a\, (x_3+\tau_2 a)-x_1-2\,\alpha_{\tau_1}\, x_1^2\right)}{(x_3+\tau_2 a)^3}. $
    (26)

    Since $ E_s(t) \geq 0 $, we have:

    $ \forall t \in [0, T]: (C_N, x_1) \geq 0, (x_2, x_3, K_m) > 0 \Rightarrow (x_3+\tau_2 a)^3 > 0. $

    The equation $ \det \left(\frac{\partial\phi}{\partial x}\right) = 0 $ yields

    $ a = 0 \Leftrightarrow C_N = 0 \quad (C_N = 0 \Rightarrow x_1 = 0) \quad \text{(period of rest)} $
    (27)
    $ x_2\, a\, (x_3+\tau_2 a)-x_1-2\,\alpha_{\tau_1}\, x_1^2 = 0 \Leftrightarrow x_1 = \dfrac{-1\pm\sqrt{1+8\,\alpha_{\tau_1}\, x_2\, a\, (x_3+\tau_2 a)}}{4\,\alpha_{\tau_1}}. $
    (28)

    Since $ \alpha_{\tau_1} \sim 10^{-5} $, $ 0 \leq a \leq 1 $ and $ (x_3+\tau_2 a) \sim 10^2 $, we have from (28)

    $ 8\alpha_{\tau_1} x_2 a (x_3+\tau_2 a)\ll 1 \Rightarrow \left(x_1 = 0 \; (C_N = 0), x_1 = \frac{1}{2\alpha_{\tau_1}}\right). $

    Thus $ \det\left(\frac{\partial\phi}{\partial x}(\hat{x}(t))\right) = 0, \; \hat x = (\hat F, \hat A, \hat \tau_1) $, for:

    $ (C_N, x_1) = 0 $ which corresponds to the period of rest,

    $ x_1 = 1/(2\alpha_{\tau_1}) $, $ x_1 \sim 10^{5} $ which is greater than the maximum force value.

    In the simplified model (4), (24) and (25), $ \frac{\partial\phi}{\partial x}(\hat{x}(t)) $ is invertible during stimulation period $ (C_N, x_1>0) $, and the matrix is well-conditioned.

    Based on the simplified model, the modified high-gain observer is defined as:

    $ \dot{\hat{x}}(t) = a^m\, f_1(\hat{x}(t), E_s(t))-a^m\, \left(\dfrac{\partial\phi}{\partial x}(\hat{x}(t))\right)^{-1} S_{\theta}^{-1}\, C^T\, (C\hat{x}(t)-y(t)). $
    (29)

    The experimental choice of $ m $ depends on the pulse frequency. $ S_{\theta} $ is a symmetric positive definite matrix given by the following Lyapunov equation:

    $ \theta\, S_{\theta}(t)+A^T S_{\theta}(t)+S_{\theta}(t)\, A = C^T C $
    (30)

    where $ \theta $ is a high-gain tuning parameter introduced in [11], and

    $ A = \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}. $

    Hence the components of $ S_{\theta} = [S_{\theta}(l, k)]_{1 \leq l, k \leq 3} $ have the following form:

    $ S_{\theta}(l, k) = \dfrac{(-1)^{l+k}}{\theta^{\,l+k-1}}\, \binom{l+k-2}{k-1}, \qquad \binom{n}{k} = \dfrac{n!}{(n-k)!\, k!}. $
    (31)
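    The closed form (31) can be checked numerically against the Lyapunov equation (36); the sketch below does this for $ n = 3 $ (the value $ \theta = 10 $ is arbitrary):

    ```python
    import numpy as np
    from math import comb

    # Build S_theta from the closed form (31) for n = 3 and check that it solves
    # the Lyapunov equation: theta*S + A^T S + S A - C^T C = 0.
    n, theta = 3, 10.0
    A = np.diag(np.ones(n - 1), k=1)          # chain-of-integrators matrix
    C = np.zeros((1, n)); C[0, 0] = 1.0

    S = np.array([[(-1) ** (l + k) * comb(l + k - 2, k - 1) / theta ** (l + k - 1)
                   for k in range(1, n + 1)] for l in range(1, n + 1)])

    residual = theta * S + A.T @ S + S @ A - C.T @ C
    print(np.abs(residual).max())             # ~ 0 up to round-off
    ```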

    Let

    $ \begin{aligned} \dot{x}(t) &= a^m(t)\, f_1(x(t), E_s(t)) = f(x(t), E_s(t))\\ y(t) &= h(x(t)) \end{aligned} $
    (32)

    and

    $ \begin{aligned} z_1 &= h(x),\\ \dot z_1 &= L_f h(x) = a^m\, L_{f_1} h(x) = a^m\, z_2,\\ \dot z_2 &= L_f(L_{f_1} h(x)) = a^m\, L_{f_1}(L_{f_1} h(x)) = a^m\, z_3,\\ &\;\;\vdots\\ \dot z_{n-1} &= a^m\, z_n,\\ \dot z_n &= a^m\, L^n_{f_1} h(x) = \varphi_n(u, z). \end{aligned} $

    Then

    $ \begin{pmatrix} \dot z_1\\ \dot z_2\\ \vdots\\ \dot z_n \end{pmatrix} = a^m \begin{pmatrix} 0 & 1 & & 0\\ & \ddots & \ddots & \\ & & & 1\\ 0 & & & 0 \end{pmatrix} \begin{pmatrix} z_1\\ z_2\\ \vdots\\ z_n \end{pmatrix}+\begin{pmatrix} 0\\ \vdots\\ 0\\ \varphi_n(u, z) \end{pmatrix}. $
    (33)

    Therefore

    $ \begin{cases} \dot z = a^m\, A\, z+\varphi(u, z)\\ y = C\, z. \end{cases} $
    (34)

    Take $ C = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix} $ and consider the system (33). We make the following assumptions.

    ● H1) $ \varphi_n(u, z) $ is globally Lipschitz with respect to $ z $ and uniformly with respect to $ u $,

    ● H2) Let $ U $ be the control domain; there exist a compact $ K\subset \mathbb{R}^n $ and two positive constants $ a_{\min} $ and $ a_{\max} $ such that, for all $ u $ valued in $ U $ and initial conditions $ z(0) \in K $, we have $ a_{\min}\leq a(t)\leq a_{\max} $,

    ● H3) $ a(\cdot) $ is $ C^1 $ (see Remark 2 below),

    ● H4) $ m\, \dot{a}(t)/ a(t)<1 $.

    The observer has the following expression:

    $ \dot{\hat{z}}(t) = a(t)^m\, A\, \hat{z}(t)-a(t)^m\, S_{\theta}^{-1} C^T\, (C\hat{z}(t)-y(t)) $
    (35)

    where $ S_{\theta} $ is the solution of the Lyapunov equation:

    $ \theta\, S_{\theta}+A^T S_{\theta}+S_{\theta}\, A-C^T C = 0. $
    (36)

    The solution of (36) can be rewritten as

    $ S_{\theta} = \dfrac{1}{\theta}\, D_{\theta}\, S_1\, D_{\theta} $
    (37)

    and

    $ D_{\theta} = \mathrm{diag}\left(1, \dfrac{1}{\theta}, \ldots, \dfrac{1}{\theta^{n-1}}\right) $
    (38)

    where $ S_1 $ is the solution of (36) for $ \theta = 1 $. Let $ \rho = a^{m} $ and $ e = \hat{z}-z $, so that $ \dot{e} = \dot{\hat{z}}-\dot{z} $; then

    $ \begin{aligned} \dot{e} &= \rho\, A\, \hat z+\hat{\varphi}(u, \hat z)-\rho\, S_{\theta}^{-1} C^T\, (C\hat z(t)-y(t))-\rho\, A\, z-\varphi(u, z)\\ &= \rho\, (A-S_{\theta}^{-1} C^T C)\, e+\hat{\varphi}-\varphi \end{aligned} $
    (39)

    Write $ \bar{e} = \rho D_{\theta}e $, then

    $ \begin{aligned} \dot{\bar{e}} &= \dot{\rho}\, D_{\theta}\, e+\rho\, D_{\theta}\, \dot{e}\\ &= \dot{\rho}\, D_{\theta}\, e+\rho\, D_{\theta}\left(\rho\, (A-S_{\theta}^{-1} C^T C)\, e+\hat{\varphi}-\varphi\right)\\ &= \dot{\rho}\, D_{\theta}\, e+\rho\, D_{\theta}\left(\rho\, (A-\theta\, D_{\theta}^{-1} S_1^{-1} D_{\theta}^{-1} C^T C)\, e+\hat{\varphi}-\varphi\right), \qquad D_{\theta}\, C^T C\, D_{\theta} = C^T C\\ &= \dfrac{\dot{\rho}}{\rho}\, \bar{e}+\rho\,\theta\, A\, \bar{e}-\theta\, S_1^{-1} C^T C\, \bar{e}+\rho\, D_{\theta}\, (\hat{\varphi}-\varphi), \qquad D_{\theta}\, A\, D_{\theta}^{-1} = \theta\, A \end{aligned} $
    (40)

    Consider the Lyapunov function: $ V = \bar{e}^TS_1\bar{e} $. Then

    $ \dot{V} = 2\, \bar{e}^T S_1\, \dot{\bar{e}} = 2\, \bar{e}^T S_1\left(\dfrac{\dot{\rho}}{\rho}\, \bar{e}+\rho\,\theta\, A\, \bar{e}-\theta\, S_1^{-1} C^T C\, \bar{e}+\rho\, D_{\theta}\, (\hat{\varphi}-\varphi)\right) $
    (41)

    $ 2\bar{e}^T S_1 A \bar{e} = \bar{e}^T(S_1 A+A^T S_1)\bar{e} $ and $ S_1 A+A^T S_1 = -S_1+C^T C $.

    Then, we have

    $ \begin{aligned} \dot{V} &= -\theta\,\rho\, V+\theta\,\rho\, \bar{e}^T C^T C\, \bar{e}-2\, \bar{e}^T C^T C\, \bar{e}+2\, \bar{e}^T S_1\, \dfrac{\dot{\rho}}{\rho}\, \bar{e}+2\, \bar{e}^T S_1\, \rho\, D_{\theta}\, (\hat{\varphi}-\varphi)\\ &= -\theta\,\rho\, V+(\theta\,\rho-2)\, \|C\bar{e}\|^2+2\,\dfrac{\dot{\rho}}{\rho}\, V+2\, \bar{e}^T S_1\, \rho\, D_{\theta}\, (\hat{\varphi}-\varphi)\\ &= -\left(\theta\,\rho-2\,\dfrac{\dot{\rho}}{\rho}\right)\, V+(\theta\,\rho-2)\, \|C\bar{e}\|^2+2\, \bar{e}^T S_1\, \rho\, D_{\theta}\, (\hat{\varphi}-\varphi) \end{aligned} $
    (42)

    We deduce that

    $ \theta\,\rho-2\,\dfrac{\dot{\rho}}{\rho}>0 \Leftrightarrow \theta>\dfrac{2\,\dot{\rho}}{\rho^2}, \qquad \theta\,\rho-2<0 \Leftrightarrow \theta<\dfrac{2}{\rho} \quad\Rightarrow\quad \dfrac{2\,\dot{\rho}}{\rho^2}<\theta<\dfrac{2}{\rho} $
    $ \text{with } \left(\dfrac{\dot{\rho}}{\rho} = m\,\dfrac{\dot{a}}{a}\right)<1. $
    (43)

    Using (43) and assumption H1:

    $ \begin{aligned} \dot{V} &\le -\gamma\, V+2\, \bar{e}^T S_1\, \rho\, D_{\theta}\, \|\hat{\varphi}-\varphi\|, \qquad \gamma>0\\ &\le -\gamma\, V+2\, \bar{e}^T S_1\, \dfrac{\rho}{\theta^n}\, \|\hat{\varphi}_n-\varphi_n\|\\ &\le -\gamma\, V+2\, \bar{e}^T S_1\, \dfrac{\rho}{\theta^n}\, K_1\, \|\bar{e}\|\\ &\le -\left(\gamma-\dfrac{2\, a^m}{\theta^n}\, K_1\right)\, V. \end{aligned} $
    (44)

    For $ m $ and $ \theta $ sufficiently large: $ \gamma>2\, a^m/\theta^n\, K_1 \Rightarrow \dot{V}\leq -\gamma_1\, V, \; \; \; \gamma_1>0 $.

    Remark 2. In the force-fatigue model, $ a(t) $ is piecewise smooth; the lack of regularity is numerically bypassed by the choice of the integer $ m $. For example, for $ I = 10\,ms $, $ m = 3 $ is sufficient to estimate all the state variables. However, for $ I = 25\,ms $, $ m $ must be at least equal to $ 5 $ (see the observer simulations in Section 7).

    MPC computes a sequence of predicted controls to optimize plant behavior (see Fig. 4).

    Figure 4. General MPC strategy diagram.

    Model Predictive Heuristic Control (MPHC) was introduced by Richalet et al. [17] using an impulse-response-type dynamic model. Dynamic Matrix Control (DMC) followed in 1980 (Cutler and Ramaker [5]) using a step-response-type dynamic model. The state-space formulation of MPC was introduced by Li et al. [16].

    Consider a system written as

    $ \dot{x} = f(x, u), \qquad x(0) = x_0 $
    (45)

    and a general cost function to be minimized

    $ J_{\infty}(u(.), x_0) = \int_0^{\infty} q(x_u(\tau, x_0), u(\tau))\, \mathrm{d}\tau $
    (46)

    with

    $ x_u(t, x_0) = x_0+\int_0^{t} f(x_u(\tau, x_0), u(\tau))\, \mathrm{d}\tau $
    (47)

    and

    $ \begin{aligned} &q(0, 0) = 0, \qquad q(\cdot)\in C^2\\ &q(x, u)\ge c_q\, (\|x\|^2+\|u\|^2), \qquad c_q>0\\ &u\mapsto q(x, u) \text{ is convex for all } x. \end{aligned} $
    (48)

    Without constraints, Bellman's principle of optimality (1950) gives the solution. In the constrained case, instead of $ J_\infty $, we minimize the following cost:

    $ J(u(.), x_0, T) = \int_0^{T} q(x_u(\tau, x_0), u(\tau))\, \mathrm{d}\tau+v(x_u(T, x_0)) $
    (49)

    with

    $ J^*(x_0, T) = \min\limits_{u(.)} J(u(.), x_0, T) $
    (50)

    and

    $ u^*(t, x_0, T) = \mathop{\mathrm{argmin}}\limits_{u(.)} J(u(.), x_0, T). $
    (51)

    This optimization over a finite horizon follows the receding-horizon algorithm below (a code sketch of the loop is given after the list):

    1. Solve $ \min \; J(u(.), x_0, T) $ and find $ u^*(., x_0, T) $.

    2. Apply $ u^*(., x_0, T) $ for $ \tau \in [0, T_s[ $, $ 0\leq T_s \leq T $, ($ T_s $: sampling period).

    3. Repeat using $ x(T_s) $ instead of $ x_0 $.
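    A minimal sketch of this receding-horizon loop is given below; the one-step model, the cost and the tracking target are placeholders (a scalar toy plant, not the force-fatigue dynamics), so only the structure of steps 1-3 should be read from it:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def plant_step(x, u, dt):
        # Placeholder one-step model x(k+1) = g(x(k), u(k)); a real implementation
        # would integrate the force-fatigue dynamics here.
        return x + dt * (-x + u)

    def finite_horizon_cost(u_seq, x0, x_ref, dt):
        # Discrete analogue of the finite-horizon cost (49) along a predicted trajectory.
        x, J = x0, 0.0
        for u in u_seq:
            x = plant_step(x, u, dt)
            J += (x - x_ref) ** 2 + 1e-3 * u ** 2
        return J

    def mpc_loop(x0, x_ref, Nr=10, dt=0.1, n_iters=50):
        x, u_seq = x0, np.zeros(Nr)
        applied = []
        for _ in range(n_iters):
            res = minimize(finite_horizon_cost, u_seq, args=(x, x_ref, dt),
                           bounds=[(0.0, 1.0)] * Nr)       # step 1: solve over the horizon
            u_seq = res.x
            applied.append(u_seq[0])                        # step 2: apply only the first control
            x = plant_step(x, u_seq[0], dt)                 # step 3: restart from the new state
        return np.array(applied)

    u_applied = mpc_loop(x0=0.0, x_ref=0.8)
    ```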

    To explain the method, we consider the discrete linear system:

    $ \begin{cases} x(k+1) = A\, x(k)+B\, u(k)\\ y(k) = C\, x(k) \end{cases} $
    (52)

    where $ x(t) \in \mathbb{R}^n $, $ u(t) \in \mathbb{R}^m $, $ y(t) \in \mathbb{R}^p $ are respectively the state, input and output vectors, and $ t = kT_s $. The system (52) results from a discrete system modelling or from a continuous-discrete transformation. Practical considerations of implementation make this form more appropriate in the framework of MPC.

    We suppose that the system is both controllable and observable, and we denote by $ x(k+i/k), \, i\geq 0 $, the predicted state vector $ x(k+i) $ with initial condition $ x(kT_s) $.

    Consider the unconstrained problem and the quadratic cost:

    $ J(k) = \sum\limits_{j = k}^{\infty}\left[(y(j+1/k)-y_{ref})^T Q\, (y(j+1/k)-y_{ref})+(u(j/k)-u_{ref})^T R\, (u(j/k)-u_{ref})+\Delta u(j/k)^T S\, \Delta u(j/k)\right] $
    (53)

    with $ Q = Q^T $ $ \succ $ $ 0 $, $ R = R^T $ $ \succeq $ $ 0 $ and $ S = S^T $ $ \succeq $ $ 0 $. The couple ($ y_{ref} $, $ u_{ref} $) is solution of (52) and $ \Delta u(k) = u(k)-u(k-1) $. The solution of this problem is obtained using the LQ controller.

    In the constrained case, the LQ controller can no longer be used to solve the control problem. System (52) allows one to calculate $ x(k+i/k) $, $ i = 1, ..., N_r $, at time $ kT_s $ ($ N_r $ being the receding horizon length):

    $ x(k+i/k) = A^i\, x(k/k)+\sum\limits_{j = 0}^{i-1} A^{i-j-1} B\, u(k+j/k). $
    (54)

    The input-output relation along the receding horizon is then:

    $ \begin{pmatrix} y(k+1/k)\\ y(k+2/k)\\ \vdots\\ y(k+N_r/k) \end{pmatrix} = \begin{pmatrix} CA\\ CA^2\\ \vdots\\ CA^{N_r} \end{pmatrix} x(k/k)+\begin{pmatrix} CB & 0 & \cdots & 0\\ CAB & CB & \cdots & 0\\ \vdots & & \ddots & \vdots\\ CA^{N_r-1}B & \cdots & CAB & CB \end{pmatrix} \begin{pmatrix} u(k/k)\\ u(k+1/k)\\ \vdots\\ u(k+N_r-1/k) \end{pmatrix}. $
    (55)

    Write (55) as

    $ \bar{y} = \Psi\, x(k/k)+\Gamma\, \bar{u} $
    (56)

    where $ \bar{y}_k $ $ \in $ $ \mathbb{R}^{p N_r} $, $ \bar{u}_k $ $ \in $ $ \mathbb{R}^{m N_r} $, $ \Psi $ $ \in $ $ \mathbb{R}^{p N_r \times n} $, $ \Gamma $ $ \in $ $ \mathbb{R}^{p N_r \times m N_r} $.
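    For illustration, the stacked matrices $ \Psi $ and $ \Gamma $ of (55)-(56) can be assembled for any triple $ (A, B, C) $; the sketch below (with an arbitrary two-state example) follows the block structure of (55):

    ```python
    import numpy as np

    def prediction_matrices(A, B, C, Nr):
        """Build Psi and Gamma of (56): y_bar = Psi x(k/k) + Gamma u_bar."""
        n, m, p = A.shape[0], B.shape[1], C.shape[0]
        Psi = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, Nr + 1)])
        Gamma = np.zeros((p * Nr, m * Nr))
        for i in range(1, Nr + 1):              # block row i predicts y(k+i/k)
            for j in range(i):                  # contribution of u(k+j/k)
                Gamma[(i - 1) * p:i * p, j * m:(j + 1) * m] = \
                    C @ np.linalg.matrix_power(A, i - j - 1) @ B
        return Psi, Gamma

    # Arbitrary two-state, single-input, single-output example:
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Psi, Gamma = prediction_matrices(A, B, C, Nr=5)
    ```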

    Constraints can take the form:

    ● Control variable:

    $ u_{\min}\le u(k)\le u_{\max}. $
    (57)

    ● Change rate of the control variables:

    $ \Delta u_{\min}\le \Delta u(k)\le \Delta u_{\max} $
    (58)

    with

    $ \Delta u(k) = u(k)-u(k-1). $
    (59)

    ● Soft output variables constraints (relaxed output constraints using large slack variable $ s_v $ to avoid constraints conflicts when solving control problem):

    $ y_{\min}-s_{v_1}\le y(k)\le y_{\max}+s_{v_1}. $
    (60)

    ● Soft state variables constraints (for the same reason as output constraints):

    $ x_{\min}-s_{v_2}\le x(k)\le x_{\max}+s_{v_2} $
    (61)

    The cost function (53) becomes:

    $ J(k) = \sum\limits_{j = k}^{k+N_r-1}\left[(y(j+1/k)-y_{ref})^T Q\, (y(j+1/k)-y_{ref})+(u(j/k)-u_{ref})^T R\, (u(j/k)-u_{ref})+\Delta u(j/k)^T S\, \Delta u(j/k)\right] $
    (62)

    subject to (56) and a set of constraints among (57)-(61).

    This minimization problem takes the form:

    $ \begin{aligned} &\min\; \tfrac{1}{2}\, X^T E\, X+X^T F\\ &\text{subject to } \bar{y} = \Psi\, x(k/k)+\Gamma\, \bar{u}\\ &\phantom{\text{subject to }} M\, X\le \gamma \end{aligned} $
    (63)

    and $ \bar{y} = C\bar{x} $ with $ X = [\bar{x}, \bar{u}]^T $. Optimization problem (63) is a convex Quadratic Programming (QP) problem. Denote $ X^* = [\bar{x}^*, \bar{u}^*]^T $ the global minimizer at each iteration. Once $ \bar{u}^* $ is calculated, only $ u^*(k/k) $ is applied at time $ t = kT_s $ and we iterate the algorithm.
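    A small dense instance of the QP (63) can be solved with any general-purpose convex solver; in the sketch below the data $ E, F, M, \gamma $ are randomly generated (with $ E $ positive definite) only to show the call structure, and scipy's SLSQP is used in place of a dedicated QP solver:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Sketch: solve min 1/2 X^T E X + X^T F subject to M X <= gamma, as in (63).
    rng = np.random.default_rng(0)
    n = 4
    L = rng.standard_normal((n, n))
    E = L @ L.T + n * np.eye(n)                 # symmetric positive definite Hessian
    F = rng.standard_normal(n)
    M = np.vstack([np.eye(n), -np.eye(n)])      # box constraints |X_i| <= 1
    gamma = np.ones(2 * n)

    objective = lambda X: 0.5 * X @ E @ X + F @ X
    ineq = {'type': 'ineq', 'fun': lambda X: gamma - M @ X}   # encodes M X <= gamma
    res = minimize(objective, np.zeros(n), constraints=[ineq], method='SLSQP')
    X_star = res.x                              # minimizer of this convex QP instance
    ```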

    In the case of NMPC (non-linear Model Predictive Control), the dynamics is defined using a discretization of equation (45). Using (62), the optimization problem becomes:

    $ \begin{aligned} &\min\; \tfrac{1}{2}\, X^T E\, X+X^T F\\ &\text{subject to } x(k+j+1/k) = g(x(k+j/k), u(k+j/k)), \quad j = 0, \ldots, N_r-1\\ &\phantom{\text{subject to }} M\, X\le \gamma \end{aligned} $
    (64)

    with $ X = [\bar{x}, \bar{u}]^T $. The equality constraint in the optimization problem (64) is non-linear. Convexity of (64) is therefore no longer guaranteed, so $ X^* $ may only be a local minimizer. A non-linear model predictive controller (NMPC) is used to solve this problem.

    Numerical solutions are computed using Active Set, Primal-Dual or SQP methods (Wang [19], Fletcher [10] and Boyd and Vandenberghe [4]).

    In the case of the force-fatigue model, one must consider the more general case of a varying $ T_s $ ($ T_s \rightarrow I(i) $); then $ u(i) = [I(i) \; \eta(i)]^T $, where $ I(i) = t(i)-t(i-1) $ is the interpulse between two successive pulses and $ \eta(i) $ is the pulse amplitude applied at time $ t_i $. In this case, $ \bar{u}(k) = [u(k/k) \ldots u((k+N_r-1)/k)] $ is the discrete receding-horizon control vector (of length $ N_r $) to be calculated by the NMPC at time $ t $ corresponding to iteration $ k $. Then:

    $ \bar{u}(k) = \begin{pmatrix} I(k/k) & I(k+1/k) & \cdots & I(t(N_r)/k)\\ \eta(t_k/k) & \eta((t_k+I(k/k))/k) & \cdots & \eta\Big(\big(t_k+\sum\limits_{j = 0}^{N_r-1} I(k+j)\big)/k\Big) \end{pmatrix} $
    (65)

    $ t(N_r) $ being the final time of the optimization horizon which is unknown a priori. For instance, using single move strategy, we have $ u(k/k) = u(k+1/k) = \ldots = u((k+N_r-1)/k) $, (see Fig. 5).

    Figure 5. (Left half plane) $ E_s $ and force profile for the applied amplitude and interpulse stimulation; (right half plane) predicted $ E_s $ and force, using the single move strategy, to be optimized.

    The force-fatigue model being non-linear, an NMPC is used to solve the problem with a Non-linear Programming Algorithm (NPA). In this case, the criterion to be minimized is:

    $ J(k) = \sum\limits_{j = k}^{k+N_r-1}\left(F(j+1/k)-F_{ref}\right)^2 $
    (66)

    subject to: $ 0 \leq \eta(i) \leq 1 $ and $ 10\,ms \leq I(i) \leq 100\,ms $.

    The estimation of the state variables vector (by the high-gain observer) is used as an initial variables vector to perform the NMPC over the horizon $ N_r $.

    To illustrate the interest of the power $ m $ in the non-linear observer, consider an MPC based on the non-linear observer with only the amplitude as control variable. We also consider the worst case of $ +30\% $ error on $ K_m $.

    The following simulation results follow the protocol used in practice (a set of periods with stimulation and rest time slots in each period). During the stimulation time slot, the control is calculated to bring the force to $ F_{ref} $ and in the rest time slot, the stimulation amplitude is set to $ 0 $. In this section, only two stimulation periods are considered.

    For $ I = 10\,ms $ ($ 100\,Hz $) and $ I = 25\,ms $ ($ 40\,Hz $), we take $ m = 3 $ and $ m = 5 $ respectively. Fig. 6 and Fig. 7 represent $ \hat A $ for $ I = 10\,ms $ and $ I = 25\,ms $, respectively. Fig. 8 and Fig. 9 show $ \hat \tau_1 $ for $ I = 10\,ms $ and $ I = 25\,ms $. $ \hat{A} $ converges after $ 50\,ms $ when $ I = 10\,ms $ and after $ 100\,ms $ when $ I = 25\,ms $. Concerning $ \hat{\tau_1} $, it converges after $ 75\,ms $ when $ I = 10\,ms $ and after $ 200\,ms $ when $ I = 25\,ms $. A large $ I $ thus seems to delay the convergence of the observer.

    Figure 6. Evolution of $ A $ and $ \hat{A} $ for $ I = 10\,ms $, $ 30\% $ error on $ K_m $.

    Figure 7. Evolution of $ A $ and $ \hat{A} $ for $ I = 25\,ms $, $ 30\% $ error on $ K_m $.

    Figure 8. Evolution of $ \tau_1 $ and $ \hat{\tau_1} $ for $ I = 10\,ms $, $ 30\% $ error on $ K_m $.

    Figure 9. Evolution of $ \tau_1 $ and $ \hat{\tau_1} $ for $ I = 25\,ms $, $ 30\% $ error on $ K_m $.

    Fig. 10 represents the force response for the amplitude control strategy (with a receding horizon $ N_r = 10 $) based on the proposed observer, for $ I = 25\,ms $ and a force reference of $ 250\,N $. The force mean value converges to the force reference after $ 200\,ms $.

    Figure 10. Evolution of $ F $, $ \hat{F} $ and the mean value of $ F $ over $ I $, for $ I = 25\,ms $, $ 30\% $ error on $ K_m $, $ F_{ref} = 250\,N $.

    In this section, five stimulation periods are presented (following the experimental protocol). Figures 11, 12 and 13 show, respectively, the force response in the case of $ F_{ref} = 425\,N $ for $ N_r = 3, 5, 10 $, and the interpulse and amplitude controls for $ N_r = 10 $. As expected, $ N_r = 10 $ gives the best regulation performance (response time and overshoot).

    Figure 11. Evolution of the force for a reference force of $ 425\,N $ and different receding horizons ($ 3, 5 $ and $ 10 $).

    Figure 12. Evolution of the interpulse (control) for a reference force of $ 425\,N $ and a predictive horizon of $ 10 $.

    Figure 13. Evolution of the amplitude (control) for a reference force of $ 425\,N $ and a predictive horizon of $ 10 $.

    Before $ t = 6000\,ms $ with $ N_r = 10 $, the force is correctly maintained at $ F_{ref} = 425\,N $. For $ t \geq 6000\,ms $ (starting from the fourth period), the fatigue level is so high that the maximum values of the amplitude control (see Fig. 13) and of the interpulse control (see Fig. 12) cannot maintain $ F $ at $ F_{ref} $. The maximum interpulse frequency is not used at the beginning of the fourth and fifth periods. A higher value of $ N_r $ could correct this problem (assuming that the global minimizer of the MPC algorithm is reached at each iteration). However, increasing $ N_r $ makes the MPC algorithm more time consuming and problematic for real-time implementation.

    Figure 14. Evolution of the interpulse (control) for a reference force of $ 425\,N $ and a predictive horizon of $ 10 $.

    Figure 15. Evolution of the amplitude (control) for a reference force of $ 425\,N $ and a predictive horizon of $ 3 $.

    This work deals with the control and the estimation of the state variables of the Ding et al. force-fatigue model, where the control is the interpulse and/or the amplitude of the electrical stimulation. A preliminary geometric analysis of the control of the force level is presented, with the aim of a future PMP-based control strategy. In the case of a fixed interpulse, the proposed high-gain observer using the force measurements exhibits the relation between the interpulse and the parameter $ m $ needed to obtain an accurate estimation of the variables. A Model Predictive Control (MPC) strategy is presented in the case where both the stimulation interpulse and the amplitude are control variables. Simulation results are presented concerning the high-gain observer and the MPC force control. These simulations show the effect of the receding horizon on the control efficiency. A reasonable value of $ N_r $ is however advisable to guarantee a short computation time for real-time implementation.



    [1] T. Bakir, B. Bonnard and S. Othman, Predictive control based on non-linear observer for muscular force and fatigue model, Annual American Control Conference (ACC), Milwaukee (2018), 2157-2162.
    [2] A simple model of force generation by skeletal muscle during dynamic isometric contractions, IEEE Transactions on Biomedical Engineering (1998) 45: 1010-1016.
    [3] Optimal sampled-data control, and generalizations on time scales, Math. Cont. Related Fields (2016) 6: 53-94.
    [4] Convex Optimization, Cambridge University Press, 2004.
    [5] C. R. Cutler and B. L. Ramaker, Dynamic Matrix Control: A Computer Control Algorithm, In Joint Automatic Control Conference, San Francisco, 1981.
    [6] Two-step, predictive, isometric force model tested on data from human and rat muscles, J. Appl. Physiol. (1998) 85: 2176-2189.
    [7] Development of a mathematical model that predicts optimal muscle activation patterns by using brief trains, J. Appl. Physiol. (2000) 88: 917-925.
    [8] A predictive model of fatigue in human skeletal muscles, J. Appl. Physiol. (2000) 89: 1322-1332.
    [9] Mathematical models for fatigue minimization during functional electrical stimulation, J. Electromyogr. Kinesiol. (2003) 13: 575-588.
    [10] R. Fletcher, Practical Methods of Optimization, Second edition, A Wiley-Interscience Publication, John Wiley & Sons, Ltd., Chichester, 1987.
    [11] A simple observer for non-linear systems. Application to bioreactors, IEEE Trans. Automat. Control (1992) 37: 875-880.
    [12] The Hill equation and the origin of quantitative pharmacology, Arch. Hist. Exact Sci. (2012) 66: 427-438.
    [13] Non-linear controllability and observability, IEEE Transactions on Automatic Control (1977) AC-22: 728-740.
    [14] A. Isidori, Non-linear Control Systems, 3rd ed., Springer-Verlag, Berlin, Germany, 1995.
    [15] Mathematical models of human paralyzed muscle after long-term training, Journal of Biomechanics (2007) 40: 2587-2595.
    [16] S. Li, K. Y. Lim and D. G. Fisher, A state space formulation for model predictive control, Springer, New York, 35 (1989), 241-249.
    [17] J. Richalet, A. Rault, J. L. Testud and J. Papon, Model algorithmic control of industrial processes, In IFAC Proceedings, 10 (1977), 103-120.
    [18] Controllability of non-linear systems, J. Differential Equations (1972) 12: 95-116.
    [19] L. Wang, Model Predictive Control System Design and Implementation Using MATLAB, Springer, London, 2009.
    [20] E. Wilson, Force Response of Locust Skeletal Muscle, Ph.D. thesis, Southampton University, 2011.

  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)