    The training of artificial neural networks (ANNs) with rectified linear unit (ReLU) activation via gradient descent (GD) type optimization schemes is nowadays a common and industrially relevant procedure which appears, for instance, in the context of natural language processing, face recognition, fraud detection, and game intelligence. Although there exist a large number of numerical simulations in which GD type optimization schemes are effectively used to train ANNs with ReLU activation, to this day the scientific literature contains, in general, no mathematical convergence analysis which explains the success of GD type optimization schemes in the training of such ANNs.

    GD type optimization schemes can be regarded as temporal discretization methods for the gradient flow (GF) differential equations associated to the considered optimization problem and, in view of this, it seems to be a natural direction of research to first aim to develop a mathematical convergence theory for time-continuous GF differential equations and, thereafter, to aim to extend such a time-continuous convergence theory to implementable time-discrete GD type optimization methods.
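    To make this connection concrete (a minimal sketch under the assumption of a generic differentiable objective function; this is not code from this article), the plain GD recursion is precisely the forward Euler discretization of the GF differential equation $ \Theta'(t) = - (\nabla \mathcal{L})(\Theta(t)) $:

        import numpy as np

        # Minimal sketch: gradient descent as the forward Euler discretization of the
        # gradient flow Theta'(t) = -grad L(Theta(t)) for a generic differentiable L.
        def gradient_descent(grad, theta0, step_size, num_steps):
            theta = np.asarray(theta0, dtype=float)
            for _ in range(num_steps):
                theta = theta - step_size * grad(theta)   # one Euler step of the GF equation
            return theta

        # Example with L(theta) = ||theta||^2 / 2, whose GF converges to the minimizer 0.
        print(gradient_descent(lambda theta: theta, np.array([1.0, -2.0]), 0.1, 200))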

    Although the literature contains in general no theoretical analysis which explains the success of GD type optimization schemes in the training of ANNs, it does contain several auspicious analysis approaches as well as several promising partial error analyses regarding the training of ANNs via GD type optimization schemes and GFs, respectively. For convex objective functions, the convergence of GF and GD processes to the global minimum in different settings has been proved, e.g., in [1,2,3,4,5]. For general non-convex objective functions, even under smoothness assumptions GF and GD processes can show wild oscillations and admit infinitely many limit points, cf., e.g., [6]. A standard condition which excludes this undesirable behavior is the Kurdyka-Łojasiewicz inequality and we point to [7,8,9,10,11,12,13,14,15,16] for convergence results for GF and GD processes under Łojasiewicz type assumptions. It is in fact one of the main contributions of this work to demonstrate that the objective functions occurring in the training of ANNs with ReLU activation satisfy an appropriate Kurdyka-Łojasiewicz inequality, provided that both the target function and the density of the probability distribution of the input data are piecewise polynomial. For further abstract convergence results for GF and GD processes in the non-convex setting we refer, e.g., to [17,18,19,20,21] and the references mentioned therein.
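    In its prototypical form (a standard formulation recalled here only for orientation; it is not the precise inequality established in this work), a Łojasiewicz type inequality asserts that for every critical point $ \vartheta $ of a sufficiently regular objective function $ \mathcal{L} $ there exist $ \varepsilon, c \in (0, \infty) $ and an exponent $ \nu \in [1/2, 1) $ such that for all $ \theta $ with $ \| \theta - \vartheta \| < \varepsilon $ it holds that $ | \mathcal{L}(\theta) - \mathcal{L}(\vartheta) |^{\nu} \le c \, \| (\nabla \mathcal{L})(\theta) \| $. Roughly speaking, such an inequality prevents the gradient from becoming arbitrarily small relative to the remaining excess objective value near a critical point and thereby excludes the oscillatory behavior described above.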

    In the overparametrized regime, where the number of training parameters is much larger than the number of training data points, GF and GD processes can be shown to converge to global minima in the training of ANNs with high probability, cf., e.g., [22,23,24,25,26,27,28]. As the number of neurons increases to infinity, the corresponding GF processes converge (with appropriate rescaling) to a measure-valued process which is known in the scientific literature as Wasserstein GF. For results on the convergence behavior of Wasserstein GFs in the training of ANNs we point, e.g., to [29,30,31], [32, Section 5.1], and the references mentioned therein.

    A different approach is to consider only very special target functions and we refer, in particular, to [33,34] for a convergence analysis for GF and GD processes in the case of constant target functions and to [35] for a convergence analysis for GF and GD processes in the training of ANNs with piecewise linear target functions. In the case of linear target functions, a complete characterization of the non-global local minima and the saddle points of the risk function has been obtained in [36].

    In this article we establish two basic results for GF differential equations in the training of fully-connected feedforward ANNs with one hidden layer and ReLU activation. Specifically, in the first main result of this article, see Theorem 1.1 below, we establish in the training of such ANNs under the assumption that the probability distribution of the input data of the considered supervised learning problem is absolutely continuous with a bounded density function that every GF differential equation possesses for every initial value a solution which is also unique among a suitable class of solutions (see (1.6) in Theorem 1.1 for details). In the second main result of this article, see Theorem 1.2 below, we prove in the training of such ANNs under the assumption that the target function and the density function are piecewise polynomial (see (1.8) below for details) that every non-divergent GF trajectory converges with an appropriate speed of convergence (see (1.11) below) to a critical point.

    In Theorems 1.1 and 1.2 we consider ANNs with $ d \in \mathbb{N} = \{ 1, 2, 3, \dots \} $ neurons on the input layer ($ d $-dimensional input), $ H \in \mathbb{N} $ neurons on the hidden layer ($ H $-dimensional hidden layer), and $ 1 $ neuron on the output layer ($ 1 $-dimensional output). There are thus $ H d $ scalar real weight parameters and $ H $ scalar real bias parameters to describe the affine linear transformation between the $ d $-dimensional input layer and the $ H $-dimensional hidden layer, and there are $ H $ scalar real weight parameters and $ 1 $ scalar real bias parameter to describe the affine linear transformation between the $ H $-dimensional hidden layer and the $ 1 $-dimensional output layer. Altogether there are thus

    $ \mathfrak{d} = Hd + H + H + 1 = Hd + 2H + 1 $ (1.1)

    real numbers to describe the ANNs in Theorems 1.1 and 1.2. We also refer to Figure 1 for a graphical illustration of the architecture of an example ANN with $ d = 4 $ neurons on the input layer and $ H = 5 $ neurons on the hidden layer.
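    As a quick sanity check of (1.1) (a minimal sketch, not code from this article), the count for the example of Figure 1 can be reproduced as follows:

        # Parameter count (1.1) for a one-hidden-layer ReLU ANN with d inputs and H hidden neurons.
        d, H = 4, 5                          # the example of Figure 1
        num_params = H * d + H + H + 1       # inner weights + inner biases + outer weights + outer bias
        assert num_params == H * d + 2 * H + 1 == 31
        print(num_params)                    # prints 31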

    Figure 1.  Graphical illustration of the architecture of an example fully-connected feedforward ANN with one hidden layer with $ 4 $ neurons on the input layer, $ 5 $ neurons on the hidden layer, and $ 1 $ neuron on the output layer corresponding to $ d = 4 $ and $ H = 5 $ in Theorems 1.1 and 1.2. In this example there are $ H d = 20 $ arrows from the input layer to the hidden layer corresponding to $ H d = 20 $ weight parameters to describe the affine linear transformation from the input layer to the hidden layer, there are $ H = 5 $ bias parameters to describe the affine linear transformation from the input layer to the hidden layer, there are $ H = 5 $ arrows from the hidden layer to the output layer corresponding to $ H = 5 $ weight parameters to describe the affine linear transformation from the hidden layer to the output layer, and there is $ 1 $ bias parameter to describe the affine linear transformation from the hidden layer to the output layer. The overall number $ \mathfrak{d} \in \mathbb{N} $ of ANN parameters thus satisfies $ \mathfrak{d} = H d + H + H + 1 = Hd + 2 H + 1 = 20 + 10 + 1 = 31 $ (cf. (1.1) and Theorems 1.1 and 1.2).

    The real numbers $ \mathscr{a} \in \mathbb{R} $, $ \mathscr{b} \in (\mathscr{a}, \infty) $ in Theorems 1.1 and 1.2 are used to specify the set $ [\mathscr{a}, \mathscr{b}]^d $ in which the input data of the considered supervised learning problem takes values and the function $ f \colon [\mathscr{a}, \mathscr{b}]^d \to \mathbb{R} $ in Theorem 1.1 specifies the target function of the considered supervised learning problem.

    In Theorem 1.1 we assume that the target function is an element of the set $ C([\mathscr{a}, \mathscr{b}]^d, \mathbb{R}) $ of continuous functions from $ [\mathscr{a}, \mathscr{b}]^d $ to $ \mathbb{R} $ but besides this continuity hypothesis we do not impose further regularity assumptions on the target function.

    The function $ \mathfrak{p} \colon [\mathscr{a}, \mathscr{b}]^d \to [0, \infty) $ in Theorems 1.1 and 1.2 is an unnormalized density function of the probability distribution of the input data of the considered supervised learning problem and in Theorem 1.1 we impose that this unnormalized density function is bounded and measurable.

    In Theorems 1.1 and 1.2 we consider ANNs with the ReLU activation function

    $ \mathbb{R} \ni x \mapsto \max\{ x, 0 \} \in \mathbb{R}. $ (1.2)

    The ReLU activation function fails to be differentiable and this lack of regularity also transfers to the risk function of the considered supervised learning problem; cf. (1.5) below. We thus need to employ appropriately generalized gradients of the risk function to specify the dynamics of the GFs. As in [34, Setting 2.1 and Proposition 2.3] (cf. also [33,37]), we accomplish this, first, by approximating the ReLU activation function through continuously differentiable functions which converge pointwise to the ReLU activation function and whose derivatives converge pointwise to the left derivative of the ReLU activation function and, thereafter, by specifying the generalized gradient function as the limit of the gradients of the approximated risk functions; see (1.3) and (1.5) in Theorem 1.1 and (1.9) and (1.10) in Theorem 1.2 for details.
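    One admissible choice of such an approximating sequence (a hypothetical illustration; the theorems only require the properties in (1.3) and do not single out a specific sequence) is $ \mathfrak{R}_r(x) = 0 $ for $ x \le 0 $, $ \mathfrak{R}_r(x) = \frac{r x^2}{2} $ for $ 0 \le x \le \frac{1}{r} $, and $ \mathfrak{R}_r(x) = x - \frac{1}{2r} $ for $ x \ge \frac{1}{r} $. The following snippet checks the two convergences in (1.3) numerically for this choice:

        import numpy as np

        def R(r, x):        # C^1 approximation of the ReLU function (illustrative choice above)
            return np.where(x <= 0.0, 0.0, np.where(x <= 1.0 / r, 0.5 * r * x**2, x - 0.5 / r))

        def R_prime(r, x):  # its derivative
            return np.where(x <= 0.0, 0.0, np.where(x <= 1.0 / r, r * x, 1.0))

        x = np.array([-1.0, 0.0, 0.3, 2.0])
        relu, left_der = np.maximum(x, 0.0), (x > 0).astype(float)
        for r in [1, 10, 100, 1000]:
            print(r, np.abs(R(r, x) - relu).max(), np.abs(R_prime(r, x) - left_der).max())
        # Both error columns tend to 0 as r grows and |R_r'| <= 1, as required in (1.3).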

    We now present the precise statement of Theorem 1.1 and, thereafter, provide further comments regarding Theorem 1.2.

    Theorem 1.1 (Existence and uniqueness of solutions of GFs in the training of ANNs). Let $ d, H, \mathfrak{d} \in \mathbb{N} $, $ \mathscr{a} \in \mathbb{R} $, $ \mathscr{b} \in (\mathscr{a}, \infty) $, $ f \in C ([\mathscr{a}, \mathscr{b}]^d, \mathbb{R}) $ satisfy $ \mathfrak{d} = d H + 2 H + 1 $, let $ \mathfrak{p} \colon [\mathscr{a}, \mathscr{b}]^d \to [0, \infty) $ be bounded and measurable, let $ \mathfrak{R}_r \in C (\mathbb{R}, \mathbb{R}) $, $ r \in \mathbb{N} \cup \left\{ {{ \infty }} \right\} $, satisfy for all $ x \in \mathbb{R} $ that $ (\cup_{r \in \mathbb{N}} \left\{ {{ \mathfrak{R}_r }} \right\}) \subseteq C^1(\mathbb{R}, \mathbb{R}) $, $ \mathfrak{R}_\infty (x) = \max \left\{ {{ x, 0 }} \right\} $, $ \sup_{r \in \mathbb{N}} \sup_{y \in [-|x|, |x|] } | (\mathfrak{R}_r)'(y)| < \infty $, and

    $ \limsup_{r \to \infty} \bigl( | \mathfrak{R}_r(x) - \mathfrak{R}_\infty(x) | + | (\mathfrak{R}_r)'(x) - \mathbf{1}_{(0, \infty)}(x) | \bigr) = 0, $ (1.3)

    for every $ \theta = (\theta_1, \ldots, \theta_ \mathfrak{d}) \in \mathbb{R}^ \mathfrak{d} $ let $ \mathbf{D} ^\theta \subseteq \mathbb{N} $ satisfy

    $ \mathbf{D}^\theta = \bigl\{ i \in \{ 1, 2, \ldots, H \} \colon | \theta_{Hd + i} | + \sum_{j = 1}^d | \theta_{(i-1) d + j} | = 0 \bigr\}, $ (1.4)

    for every $ r \in \mathbb{N} \cup \left\{ {{ \infty }} \right\} $ let $ \mathcal{L}_r \colon \mathbb{R}^ \mathfrak{d} \to \mathbb{R} $ satisfy for all $ \theta = (\theta_1, \ldots, \theta_ \mathfrak{d}) \in \mathbb{R}^{ \mathfrak{d}} $ that

    $ \mathcal{L}_r(\theta) = \int_{[\mathscr{a}, \mathscr{b}]^d} \Bigl( f(x_1, \ldots, x_d) - \theta_{\mathfrak{d}} - \sum_{i = 1}^H \theta_{H(d+1) + i} \bigl[ \mathfrak{R}_r \bigl( \theta_{Hd + i} + \sum_{j = 1}^d \theta_{(i-1) d + j} x_j \bigr) \bigr] \Bigr)^{\!2} \mathfrak{p}(x) \, \mathrm{d}(x_1, \ldots, x_d), $ (1.5)

    let $ \theta \in \mathbb{R}^ \mathfrak{d} $, and let $ \mathcal{G} \colon \mathbb{R}^ \mathfrak{d} \to \mathbb{R}^ \mathfrak{d} $ satisfy for all $ \vartheta \in \left\{ {{ v \in \mathbb{R}^ \mathfrak{d} \colon ((\nabla \mathcal{L}_r) (v)) _{r \in \mathbb{N}}\; \mathit{\text{is convergent}} }} \right\} $ that $ \mathcal{G} (\vartheta) = \lim_{r \to \infty} (\nabla \mathcal{L}_r) (\vartheta) $. Then

    (i) it holds that $ \mathcal{G} $ is locally bounded and measurable and

    (ii) there exists a unique $ \Theta \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ which satisfies for all $ t \in [0, \infty) $, $ s \in [t, \infty) $ that $ \mathbf{D}^{\Theta_t} \subseteq \mathbf{D}^{\Theta_s } $ and

    $ \Theta_t = \theta - \int_0^t \mathcal{G}(\Theta_u) \, \mathrm{d}u. $ (1.6)

    Theorem 1.1 is a direct consequence of Theorem 3.3 below. In Theorem 1.2 we also assume that the target function $ f \colon [\mathscr{a}, \mathscr{b}]^d \to \mathbb{R} $ is continuous but additionally assume that, roughly speaking, both the target function $ f \colon [\mathscr{a}, \mathscr{b}]^d \to \mathbb{R} $ and the unnormalized density function $ \mathfrak{p} \colon [\mathscr{a}, \mathscr{b}]^d \to [0, \infty) $ coincide with polynomial functions on suitable subsets of their domain of definition $ [\mathscr{a}, \mathscr{b}]^d $. In Theorem 1.2 the $ (n \times d) $-matrices $ \alpha^k_i \in \mathbb{R}^{ n \times d } $, $ i \in \left\{ {{ 1, 2, \ldots, n }} \right\} $, $ k \in \left\{ {{0, 1}} \right\} $, and the $ n $-dimensional vectors $ \beta^k_i \in \mathbb{R}^n $, $ i \in \left\{ {{ 1, 2, \ldots, n }} \right\} $, $ k \in \left\{ {{0, 1}} \right\} $, are used to describe these subsets and the functions $ P^k_i \colon \mathbb{R}^d \to \mathbb{R} $, $ i \in \left\{ {{ 1, 2, \ldots, n }} \right\} $, $ k \in \left\{ {{0, 1}} \right\} $, constitute the polynomials with which the target function and the unnormalized density function should partially coincide. More formally, in (1.8) in Theorem 1.2 we assume that for every $ x \in [\mathscr{a}, \mathscr{b}] ^d $ we have that

    $ \mathfrak{p}(x) = \sum_{i \in \{ 1, 2, \ldots, n \}, \, \alpha^0_i x + \beta^0_i \in [0, \infty)^n} P^0_i(x) \qquad \text{and} \qquad f(x) = \sum_{i \in \{ 1, 2, \ldots, n \}, \, \alpha^1_i x + \beta^1_i \in [0, \infty)^n} P^1_i(x). $ (1.7)
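    For instance (a hypothetical illustration, not taken from the theorems), with $ d = 1 $ and $ n = 2 $ the target function $ f(x) = |x| $ satisfies this condition with $ P^1_1(x) = x $, $ P^1_2(x) = -x $, $ \alpha^1_1 = (1, 0)^{\top} $, $ \alpha^1_2 = (-1, 0)^{\top} $, and $ \beta^1_1 = \beta^1_2 = 0 $; the following snippet checks this numerically:

        import numpy as np

        # Hypothetical instance of the piecewise polynomial condition (1.7)/(1.8) with d = 1, n = 2:
        # the target function f(x) = |x|.
        alpha = [np.array([[1.0], [0.0]]), np.array([[-1.0], [0.0]])]   # alpha^1_i in R^{n x d}
        beta = [np.zeros(2), np.zeros(2)]                               # beta^1_i in R^n
        P = [lambda x: x, lambda x: -x]                                 # polynomials P^1_i

        def f(x):
            x_vec = np.array([x], dtype=float)                          # d = 1
            return sum(P[i](x) * float(np.all(alpha[i] @ x_vec + beta[i] >= 0.0)) for i in range(2))

        for x in [-0.7, 0.0, 1.3]:
            assert np.isclose(f(x), abs(x))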

    In (1.11) in Theorem 1.2 we prove that there exists a strictly positive real number $ \beta \in (0, \infty) $ such that for every GF trajectory $ \Theta \colon [0, \infty) \to \mathbb{R}^{ \mathfrak{d} } $ which does not diverge to infinity in the sense* that $ \liminf_{t \to \infty} ||\Theta_t || < \infty $ we have that $ \Theta_t \in \mathbb{R}^{ \mathfrak{d} } $, $ t \in [0, \infty) $, converges with order $ \beta $ to a critical point $ \vartheta \in \mathcal{G}^{ - 1 }(\left\{ {{ 0 }} \right\}) = \left\{ {{ \theta \in \mathbb{R}^{ \mathfrak{d} } \colon \mathcal{G} (\theta) = 0 }} \right\} $ and we have that the risk $ \mathcal{L}_\infty (\Theta_t) \in \mathbb{R} $, $ t \in [0, \infty) $, converges with order $ 1 $ to the risk $ \mathcal{L}_\infty (\vartheta) $ of the critical point $ \vartheta $. We now present the precise statement of Theorem 1.2.

    *Note that the functions $ || \cdot || \colon (\cup_{n \in \mathbb{N}} \mathbb{R} ^n) \to \mathbb{R} $ and $ \left\langle {{ \cdot, \cdot }} \right\rangle \colon (\cup_{n \in \mathbb{N}} (\mathbb{R}^n \times \mathbb{R}^n)) \to \mathbb{R} $ satisfy for all $ n \in \mathbb{N} $, $ x = (x_1, \ldots, x_n) $, $ y = (y_1, \ldots, y_n) \in \mathbb{R}^n $ that $ || x || = [\sum _{i = 1}^n \left| x_i \right| ^2] ^{1/2} $ and $ \left\langle {{ x, y }} \right\rangle = \sum _{i = 1}^n x_i y_i $.

    Theorem 1.2 (Convergence rates for GFs trajectories in the training of ANNs). Let $ d, H, \mathfrak{d}, n \in \mathbb{N} $, $ \mathscr{a} \in \mathbb{R} $, $ \mathscr{b} \in (\mathscr{a}, \infty) $, $ f \in C([\mathscr{a}, \mathscr{b}] ^d, \mathbb{R}) $ satisfy $ \mathfrak{d} = d H + 2 H + 1 $, for every $ i \in \left\{ {{1, 2, \ldots, n}} \right\} $, $ k \in \left\{ {{0, 1}} \right\} $ let $ \alpha_{i}^k \in \mathbb{R}^{n \times d} $, let $ \beta_{i }^k \in \mathbb{R}^n $, and let $ P_i^k \colon \mathbb{R}^d \to \mathbb{R} $ be a polynomial, let $ \mathfrak{p} \colon [\mathscr{a}, \mathscr{b}] ^d \to [0, \infty) $ satisfy for all $ k \in \left\{ {{0, 1}} \right\} $, $ x \in [\mathscr{a}, \mathscr{b}] ^d $ that

    $ k f(x) + (1 - k) \mathfrak{p}(x) = \sum_{i = 1}^n \bigl[ P^k_i(x) \, \mathbf{1}_{[0, \infty)^n}\bigl( \alpha^k_i x + \beta^k_i \bigr) \bigr], $ (1.8)

    let $ \mathfrak{R}_r \in C (\mathbb{R}, \mathbb{R}) $, $ r \in \mathbb{N} \cup \left\{ {{ \infty }} \right\} $, satisfy for all $ x \in \mathbb{R} $ that $ (\cup_{r \in \mathbb{N}} \left\{ {{ \mathfrak{R}_r }} \right\}) \subseteq C^1(\mathbb{R}, \mathbb{R}) $, $ \mathfrak{R}_\infty (x) = \max \left\{ {{ x, 0 }} \right\} $, $ \sup_{r \in \mathbb{N}} \sup_{y \in [-|x|, |x|] } | (\mathfrak{R}_r)'(y)| < \infty $, and

    $ \limsup_{r \to \infty} \bigl( | \mathfrak{R}_r(x) - \mathfrak{R}_\infty(x) | + | (\mathfrak{R}_r)'(x) - \mathbf{1}_{(0, \infty)}(x) | \bigr) = 0, $ (1.9)

    for every $ r \in \mathbb{N} \cup \left\{ {{ \infty }} \right\} $ let $ \mathcal{L}_r \colon \mathbb{R}^ \mathfrak{d} \to \mathbb{R} $ satisfy for all $ \theta = (\theta_1, \ldots, \theta_ \mathfrak{d}) \in \mathbb{R}^{ \mathfrak{d}} $ that

    $ \mathcal{L}_r(\theta) = \int_{[\mathscr{a}, \mathscr{b}]^d} \Bigl( f(x_1, \ldots, x_d) - \theta_{\mathfrak{d}} - \sum_{i = 1}^H \theta_{H(d+1) + i} \bigl[ \mathfrak{R}_r \bigl( \theta_{Hd + i} + \sum_{j = 1}^d \theta_{(i-1) d + j} x_j \bigr) \bigr] \Bigr)^{\!2} \mathfrak{p}(x) \, \mathrm{d}(x_1, \ldots, x_d), $ (1.10)

    let $ \mathcal{G} \colon \mathbb{R}^ \mathfrak{d} \to \mathbb{R}^ \mathfrak{d} $ satisfy for all $ \theta \in \left\{ {{ \vartheta \in \mathbb{R}^ \mathfrak{d} \colon ((\nabla \mathcal{L}_r) (\vartheta)) _{r \in \mathbb{N}}\; \mathit{\text{is convergent}} }} \right\} $ that $ \mathcal{G} (\theta) = \lim_{r \to \infty} (\nabla \mathcal{L}_r) (\theta) $, and let $ \Theta \in C ([0, \infty), \mathbb{R}^ \mathfrak{d}) $ satisfy $ \liminf_{t \to \infty } ||\Theta_t || < \infty $ and $ \forall \, t \in [0, \infty) \colon \Theta_t = \Theta_0 - \int_0^t \mathcal{G} (\Theta_s) \, \mathrm{d} s $. Then there exist $ \vartheta \in \mathcal{G}^{ - 1 } (\left\{ {{ 0 }} \right\}) $, $ \mathfrak{C}, \beta \in (0, \infty) $ which satisfy for all $ t \in [0, \infty) $ that

    $ || \Theta_t - \vartheta || \le \mathfrak{C} (1 + t)^{-\beta} \qquad \text{and} \qquad | \mathcal{L}_\infty(\Theta_t) - \mathcal{L}_\infty(\vartheta) | \le \mathfrak{C} (1 + t)^{-1}. $ (1.11)

    Theorem 1.2 above is an immediate consequence of Theorem 5.4 in Subsection 5.3 below. Theorem 1.2 is related to Theorem 1.1 in our previous article [37]. In particular, [37, Theorem 1.1] uses weaker assumptions than Theorem 1.2 above but Theorem 1.2 above establishes a stronger statement when compared to [37, Theorem 1.1]. Specifically, on the one hand in [37, Theorem 1.1] the target function is only assumed to be a continuous function and the unnormalized density is only assumed to be measurable and integrable while in Theorem 1.2 it is additionally assumed that both the target function and the unnormalized density are piecewise polynomial in the sense of (1.8) above. On the other hand [37, Theorem 1.1] only asserts that the risk of every bounded GF trajectory converges to the risk of a critical point while Theorem 1.2 assures that every non-divergent GF trajectory converges with a strictly positive rate of convergence to a critical point (the rate of convergence is given through the strictly positive real number $ \beta \in (0, \infty) $ appearing in the exponent in the left inequality in (1.11) in Theorem 1.2) and also assures that the risk of the non-divergent GF trajectory converges with rate $ 1 $ to the risk of the critical point (the convergence rate $ 1 $ is ensured through the exponent $ 1 $ appearing in the right inequality in (1.11) in Theorem 1.2).

    We also point out that Theorem 1.2 assumes that the GF trajectory is non-divergent in the sense that $ \liminf_{ t \to \infty } ||\Theta_t || < \infty $. In general, it remains an open problem to establish sufficient conditions which ensure that the GF trajectory has this non-divergence property. In this aspect we also refer to Gallon et al. [38] for counterexamples for which it has been proved that every GF trajectory with sufficiently small initial risk does in the training of ANNs diverge to $ \infty $ in the sense that $ \liminf_{ t \to \infty } ||\Theta_t || = \infty $.

    The remainder of this article is organized in the following way. In Section 2 we establish several regularity properties for the risk function of the considered supervised learning problem and its generalized gradient function. In Section 3 we employ the findings from Section 2 to establish existence and uniqueness properties for solutions of GF differential equations. In particular, in Section 3 we present the proof of Theorem 1.1 above. In Section 4 we establish under the assumption that both the target function $ f \colon [\mathscr{a}, \mathscr{b}]^d \to \mathbb{R} $ and the unnormalized density function $ \mathfrak{p} \colon [\mathscr{a}, \mathscr{b}]^d \to [0, \infty) $ are piecewise polynomial that the risk function is semialgebraic in the sense of Definition 4.3 below (see Corollary 4.10 for details). In Section 5 we engage the results from Sections 2 and 4 to establish several convergence rate results for solutions of GF differential equations and, thereby, we also prove Theorem 1.2 above.

    In this section we establish several regularity properties for the risk function $ \mathcal{L} \colon \mathbb{R}^ \mathfrak{d} \to \mathbb{R} $ and its generalized gradient function $ \mathcal{G} \colon \mathbb{R}^ \mathfrak{d} \to \mathbb{R}^ \mathfrak{d} $. In particular, in Proposition 2.12 in Subsection 2.5 below we prove for every parameter vector $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ in the ANN parameter space $ \mathbb{R}^{ \mathfrak{d} } = \mathbb{R}^{ d H + 2 H + 1 } $ that the generalized gradient $ \mathcal{G} (\theta) $ is an element of the limiting subdifferential of the risk function $ \mathcal{L} \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $ at $ \theta $. In Definition 2.8 in Subsection 2.5 we recall the notion of subdifferentials (which are sometimes also referred to as Fréchet subdifferentials in the scientific literature) and in Definition 2.9 in Subsection 2.5 we recall the notion of limiting subdifferentials. In the scientific literature Definitions 2.8 and 2.9 can in a slightly different presentational form, e.g., be found in Rockafellar & Wets [39, Definition 8.3] and Bolte et al. [9, Definition 2.10], respectively.

    Our proof of Proposition 2.12 uses the continuous differentiability result for the risk function in Proposition 2.3 in Subsection 2.2 and the local Lipschitz continuity result for the generalized gradient function in Corollary 2.7 in Subsection 2.4. Corollary 2.7 will also be employed in Section 3 below to establish existence and uniqueness results for solutions of GF differential equations. Proposition 2.3 follows directly from [37, Proposition 2.10, Lemmas 2.11 and 2.12]. Our proof of Corollary 2.7, in turn, employs the known representation result for the generalized gradient function in Proposition 2.2 in Subsection 2.2 below and the local Lipschitz continuity result for certain parameter integrals in Corollary 2.6 in Subsection 2.4. Statements related to Proposition 2.2 can, e.g., be found in [37, Proposition 2.2], [33, Proposition 2.3], and [34, Proposition 2.3].

    Our proof of Corollary 2.6 uses the elementary abstract local Lipschitz continuity result for certain parameter integrals in Lemma 2.5 in Subsection 2.4 and the local Lipschitz continuity result for active neuron regions in Lemma 2.4 in Subsection 2.3 below. Lemma 2.4 is a generalization of [35, Lemma 7], Lemma 2.5 is a slight generalization of [35, Lemma 6], and Corollary 2.6 is a generalization of [37, Lemma 2.12] and [35, Corollary 9]. The proof of Lemma 2.5 is therefore omitted.

    In Setting 2.1 in Subsection 2.1 below we present the mathematical setup to describe ANNs with ReLU activation, the risk function $ \mathcal{L} \colon \mathbb{R}^ \mathfrak{d} \to \mathbb{R} $, and its generalized gradient function $ \mathcal{G} \colon \mathbb{R}^ \mathfrak{d} \to \mathbb{R}^ \mathfrak{d} $. Moreover, in (2.6) in Setting 2.1 we define for a given parameter vector $ \theta \in \mathbb{R}^ \mathfrak{d} $ the set of hidden neurons which have all input parameters equal to zero. Such neurons are sometimes called degenerate (cf. Cheridito et al. [36]) and can cause problems with the differentiability of the risk function, which is why we exclude degenerate neurons in Proposition 2.3 and Corollary 2.7 below.

    In this subsection we present in Setting 2.1 below the mathematical setup that we employ to state most of the mathematical results of this work. We also refer to Figure 2 below for a table in which we briefly list the mathematical objects introduced in Setting 2.1.

    Figure 2.  List of the mathematical objects introduced in Setting 2.1.

    Setting 2.1. Let $ d, H, \mathfrak{d} \in \mathbb{N} $, $ \mathscr{a} \in \mathbb{R} $, $ \mathscr{b} \in (\mathscr{a}, \infty) $, $ f \in C ([\mathscr{a}, \mathscr{b}]^d, \mathbb{R}) $ satisfy $ \mathfrak{d} = d H + 2 H + 1 $, let $ \mathfrak{w} = ((\mathfrak{w}^{{ \theta }}_{ i, j })_{ (i, j) \in \left\{ {{ 1, \ldots, H }} \right\} \times \left\{ {{ 1, \ldots, d }} \right\} })_{ \theta \in \mathbb{R}^{ \mathfrak{d} } } \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ H \times d } $, $ \mathfrak{b} = ((\mathfrak{b}^{{ \theta }}_1, \dots, \mathfrak{b}^{{ \theta }}_{ H }))_{ \theta \in \mathbb{R}^{ \mathfrak{d} } } \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ H } $, $ \mathfrak{v} = ((\mathfrak{v}^{{\theta}}_1, \dots, \mathfrak{v}^{{\theta}}_{ H }))_{ \theta \in \mathbb{R}^{ \mathfrak{d} } } \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ H } $, and $ \mathfrak{c} = (\mathfrak{c}^{{ \theta }})_{ \theta \in \mathbb{R}^{ \mathfrak{d} } } \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $ satisfy for all $ \theta = (\theta_1, \ldots, \theta_{ \mathfrak{d}}) \in \mathbb{R}^{ \mathfrak{d}} $, $ i \in \left\{ {{ 1, 2, \ldots, H }} \right\} $, $ j \in \left\{ {{ 1, 2, \ldots, d }} \right\} $ that

    $ \mathfrak{w}^{\theta}_{i, j} = \theta_{(i-1) d + j}, \quad \mathfrak{b}^{\theta}_i = \theta_{Hd + i}, \quad \mathfrak{v}^{\theta}_i = \theta_{H(d+1) + i}, \quad \text{and} \quad \mathfrak{c}^{\theta} = \theta_{\mathfrak{d}}, $ (2.1)

    let $ \mathfrak{R}_r \in C^1(\mathbb{R}, \mathbb{R}) $, $ r \in \mathbb{N} $, satisfy for all $ x \in \mathbb{R} $ that

    $ \limsup_{r \to \infty} \bigl( | \mathfrak{R}_r(x) - \max\{ x, 0 \} | + | (\mathfrak{R}_r)'(x) - \mathbf{1}_{(0, \infty)}(x) | \bigr) = 0 $ (2.2)

    and $ \sup_{r \in \mathbb{N}} \sup_{y \in [-|x|, |x|] } |(\mathfrak{R}_r)'(y)| < \infty $, let $ \lambda \colon \mathcal{B} (\mathbb{R}^d) \to [0, \infty] $ be the Lebesgue–Borel measure on $ \mathbb{R}^d $, let $ \mathfrak{p} \colon [\mathscr{a}, \mathscr{b}] ^d \to [0, \infty) $ be bounded and measurable, let $ \mathscr{N} = (\mathscr{N} ^{ { \theta } })_{ \theta \in \mathbb{R}^{ \mathfrak{d} } } \colon \mathbb{R}^{ \mathfrak{d} } \to C(\mathbb{R}^d, \mathbb{R}) $ and $ \mathcal{L} \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $ satisfy for all $ \theta \in \mathbb{R}^{ \mathfrak{d} } $, $ x = (x_1, \dots, x_d) \in \mathbb{R}^d $ that

    $ \mathscr{N}^{\theta}(x) = \mathfrak{c}^{\theta} + \sum_{i = 1}^H \mathfrak{v}^{\theta}_i \max\bigl\{ \mathfrak{b}^{\theta}_i + \sum_{j = 1}^d \mathfrak{w}^{\theta}_{i, j} x_j, 0 \bigr\} $ (2.3)

    and $ \mathcal{L} (\theta) = \int_{[\mathscr{a}, \mathscr{b}] ^d} (f (y) - \mathscr{N} ^{ {\theta} } (y))^2 \mathfrak{p} (y) \, \lambda (\mathrm{d} y) $, for every $ r \in \mathbb{N} $ let $ \mathfrak{L}_r \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $ satisfy for all $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ that

    $ \mathfrak{L}_r(\theta) = \int_{[\mathscr{a}, \mathscr{b}]^d} \Bigl( f(y) - \mathfrak{c}^{\theta} - \sum_{i = 1}^H \mathfrak{v}^{\theta}_i \bigl[ \mathfrak{R}_r \bigl( \mathfrak{b}^{\theta}_i + \sum_{j = 1}^d \mathfrak{w}^{\theta}_{i, j} y_j \bigr) \bigr] \Bigr)^{\!2} \mathfrak{p}(y) \, \lambda(\mathrm{d}y), $ (2.4)

    for every $ \varepsilon \in (0, \infty) $, $ \theta \in \mathbb{R}^ \mathfrak{d} $ let $ B_\varepsilon (\theta) \subseteq \mathbb{R}^ \mathfrak{d} $ satisfy $ B_\varepsilon (\theta) = \left\{ {{\vartheta \in \mathbb{R}^ \mathfrak{d} \colon ||\theta - \vartheta || < \varepsilon }} \right\} $, for every $ \theta \in \mathbb{R}^{ \mathfrak{d} } $, $ i \in \left\{ {{ 1, 2, \ldots, H }} \right\} $ let $ I_i^\theta \subseteq \mathbb{R}^d $ satisfy

    $ I^{\theta}_i = \bigl\{ x = (x_1, \ldots, x_d) \in [\mathscr{a}, \mathscr{b}]^d \colon \mathfrak{b}^{\theta}_i + \sum_{j = 1}^d \mathfrak{w}^{\theta}_{i, j} x_j > 0 \bigr\}, $ (2.5)

    for every $ \theta \in \mathbb{R}^ \mathfrak{d} $ let $ \mathbf{D} ^\theta \subseteq \mathbb{N} $ satisfy

    $ \mathbf{D}^{\theta} = \bigl\{ i \in \{ 1, 2, \ldots, H \} \colon | \mathfrak{b}^{\theta}_i | + \sum_{j = 1}^d | \mathfrak{w}^{\theta}_{i, j} | = 0 \bigr\}, $ (2.6)

    and let $ \mathcal{G} = (\mathcal{G}_1, \dots, \mathcal{G}_ \mathfrak{d}) \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ \mathfrak{d} } $ satisfy for all $ \theta \in \left\{ {{ \vartheta \in \mathbb{R}^ \mathfrak{d} \colon ((\nabla \mathfrak{L}_r)(\vartheta))_{r \in \mathbb{N}}\; \mathit{\text{is convergent}} }} \right\} $ that $ \mathcal{G}(\theta) = \lim_{r \to \infty} (\nabla \mathfrak{L}_r) (\theta) $.

    Next we add some explanations regarding the mathematical framework presented in Setting 2.1 above. In Setting 2.1

    ● the natural number $ d \in \mathbb{N} $ represents the number of neurons on the input layer of the considered ANNs,

    ● the natural number $ H \in \mathbb{N} $ represents the number of neurons on the hidden layer of the considered ANNs, and

    ● the natural number $ \mathfrak{d} \in \mathbb{N} $ measures the overall number of parameters of the considered ANNs

    (cf. (1.1) and Figure 1 above). The real numbers $ \mathscr{a} \in \mathbb{R} $, $ \mathscr{b} \in (\mathscr{a}, \infty) $ in Setting 2.1 are employed to specify the $ d $-dimensional set $ [\mathscr{a}, \mathscr{b}]^d \subseteq \mathbb{R}^d $ in which the input data of the supervised learning problem considered in Setting 2.1 takes values and which, thereby, also serves as the domain of definition of the target function of the considered supervised learning problem.

    In Setting 2.1 the function $ f \colon [\mathscr{a}, \mathscr{b}]^d \to \mathbb{R} $ represents the target function of the considered supervised learning problem. In Setting 2.1 the target function $ f $ is assumed to be an element of the set $ C([\mathscr{a}, \mathscr{b}]^d, \mathbb{R}) $ of continuous functions from the $ d $-dimensional set $ [\mathscr{a}, \mathscr{b}]^d $ to the reals $ \mathbb{R} $ (first line in Setting 2.1).

    The matrix valued function $ \mathfrak{w} = ((\mathfrak{w}^{{ \theta }}_{ i, j })_{ (i, j) \in \left\{ {{ 1, \ldots, H }} \right\} \times \left\{ {{ 1, \ldots, d }} \right\} })_{ \theta \in \mathbb{R}^{ \mathfrak{d} } } \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ H \times d } $ in Setting 2.1 is used to represent the inner weight parameters of the ANNs considered in Setting 2.1. In particular, in Setting 2.1 we have for every $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ that the $ H \times d $-matrix $ \mathfrak{w}^{{ \theta }} = (\mathfrak{w}^{{ \theta }}_{ i, j })_{ (i, j) \in \left\{ {{ 1, \ldots, H }} \right\} \times \left\{ {{ 1, \ldots, d }} \right\} } \in \mathbb{R}^{ H \times d } $ represents the weight parameter matrix for the affine linear transformation from the $ d $-dimensional input layer to the $ H $-dimensional hidden layer of the ANN associated to the ANN parameter vector $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ (cf. (2.1), (2.3), and Figure 1).

    The vector valued function $ \mathfrak{b} = ((\mathfrak{b}^{{ \theta }}_1, \dots, \mathfrak{b}^{{ \theta }}_{ H }))_{ \theta \in \mathbb{R}^{ \mathfrak{d} } } \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ H } $ in Setting 2.1 is used to represent the inner bias parameters of the ANNs considered in Setting 2.1. In particular, in Setting 2.1 we have for every $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ that the $ H $-dimensional vector $ \mathfrak{b}^{{ \theta }} = (\mathfrak{b}^{{ \theta }}_1, \dots, \mathfrak{b}^{{ \theta }}_{ H }) \in \mathbb{R}^{ H } $ represents the bias parameter vector for the affine linear transformation from the $ d $-dimensional input layer to the $ H $-dimensional hidden layer of the ANN associated to the ANN parameter vector $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ (cf. (2.1), (2.3), and Figure 1).

    The vector valued function $ \mathfrak{v} = ((\mathfrak{v}^{{\theta}}_1, \dots, \mathfrak{v}^{{\theta}}_{ H }))_{ \theta \in \mathbb{R}^{ \mathfrak{d} } } \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ H } $ in Setting 2.1 is used to describe the outer weight parameters of the ANNs considered in Setting 2.1. In particular, in Setting 2.1 we have for every $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ that the transpose of the $ H $-dimensional vector $ \mathfrak{v}^{{ \theta }} = (\mathfrak{v}^{{\theta}}_1, \dots, \mathfrak{v}^{{\theta}}_{ H }) \in \mathbb{R}^{ H } $ represents the weight parameter matrix for the affine linear transformation from the $ H $-dimensional hidden layer to the $ 1 $-dimensional output layer of the ANN associated to the ANN parameter vector $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ (cf. (2.1), (2.3), and Figure 1).

    The real valued function $ \mathfrak{c} = (\mathfrak{c}^{{ \theta }})_{ \theta \in \mathbb{R}^{ \mathfrak{d} } } \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $ in Setting 2.1 is used to represent the outer bias parameters of the ANNs considered in Setting 2.1. In particular, in Setting 2.1 we have for every $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ that the real number $ \mathfrak{c}^{{ \theta }} \in \mathbb{R} $ describes the bias parameter for the affine linear transformation from the $ H $-dimensional hidden layer to the $ 1 $-dimensional output layer of the ANN associated to the ANN parameter vector $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ (cf. (2.1), (2.3), and Figure 1).
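    The following snippet (a minimal sketch in the spirit of (2.1) and (2.3); the function names are ours and not part of Setting 2.1) makes this indexing explicit by unpacking a parameter vector $ \theta \in \mathbb{R}^{\mathfrak{d}} $ into $ \mathfrak{w}^{\theta} $, $ \mathfrak{b}^{\theta} $, $ \mathfrak{v}^{\theta} $, $ \mathfrak{c}^{\theta} $ and evaluating the realization $ \mathscr{N}^{\theta} $:

        import numpy as np

        def unpack(theta, d, H):
            w = theta[: H * d].reshape(H, d)        # inner weights  w^theta_{i,j} = theta_{(i-1)d+j}
            b = theta[H * d : H * d + H]            # inner biases   b^theta_i     = theta_{Hd+i}
            v = theta[H * d + H : H * d + 2 * H]    # outer weights  v^theta_i     = theta_{H(d+1)+i}
            c = theta[-1]                           # outer bias: the last entry of theta
            return w, b, v, c

        def realization(theta, x, d, H):            # the function N^theta from (2.3)
            w, b, v, c = unpack(theta, d, H)
            return c + np.dot(v, np.maximum(w @ x + b, 0.0))

        d, H = 4, 5
        theta = np.random.default_rng(0).standard_normal(d * H + 2 * H + 1)
        x = np.random.default_rng(1).uniform(-1.0, 1.0, size=d)
        print(realization(theta, x, d, H))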

    In Setting 2.1 we consider ANNs with the ReLU activation function $ \mathbb{R} \ni x \mapsto \max\{ x, 0 \} \in \mathbb{R} $ (cf. (1.2)). The ReLU activation function fails to be differentiable and this lack of differentiability typically transfers from the activation function to the realization functions $ \mathscr{N} ^{ { \theta } } \colon \mathbb{R}^d \to \mathbb{R} $, $ \theta \in \mathbb{R}^{ \mathfrak{d} } $, of the considered ANNs and the risk function $ \mathcal{L} \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $ of the considered supervised learning problem, both, introduced in (2.3) in Setting 2.1. In general, there thus do not exist standard derivatives and standard gradients of the risk function and, in view of this, we need to introduce suitably generalized gradients of the risk function to specify the GF dynamics. As in [34, Setting 2.1 and Proposition 2.3] (cf. also [33,37]), we accomplish this,

    ● first, by approximating the ReLU activation function through appropriate continuously differentiable functions which converge pointwise to the ReLU activation function and whose derivatives converge pointwise to the left derivative of the ReLU activation function,

    ● then, by using these continuously differentiable approximations of the ReLU activation function to specify approximated risk functions, and,

    ● finally, by specifying the generalized gradient function as the pointwise limit of the standard gradients of the approximated risk functions.

    In Setting 2.1 the functions $ \mathfrak{R}_r \colon \mathbb{R} \to \mathbb{R} $, $ r \in \mathbb{N} $, serve as such appropriate continuously differentiable approximations of the ReLU activation function and the hypothesis in (2.2) ensures that these functions converge pointwise to the ReLU activation function and that the derivatives of these functions converge pointwise to the left derivative of the ReLU activation function (cf. also (1.3) in Theorem 1.1 and (1.9) in Theorem 1.2). These continuously differentiable approximations of the ReLU activation function are then used in (2.4) in Setting 2.1 (cf. also (1.5) in Theorem 1.1 and (1.10) in Theorem 1.2) to introduce continuously differentiable approximated risk functions $ \mathfrak{L}_r \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $, $ r \in \mathbb{N} $, which converge pointwise to the risk function $ \mathcal{L} \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $ (cf., e.g., [37, Proposition 2.2]). Finally, the standard gradients of the approximated risk functions $ \mathfrak{L}_r \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $, $ r \in \mathbb{N} $, are then used to introduce the generalized gradient function $ \mathcal{G} = (\mathcal{G}_1, \dots, \mathcal{G}_ \mathfrak{d}) \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ \mathfrak{d} } $ in Setting 2.1. In this regard we also note that Proposition 2.2 in Subsection 2.2 below, in particular, ensures that the function $ \mathcal{G} = (\mathcal{G}_1, \dots, \mathcal{G}_ \mathfrak{d}) \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ \mathfrak{d} } $ in Setting 2.1 is indeed uniquely defined.

    Proposition 2.2. Assume Setting 2.1. Then it holds for all $ \theta \in \mathbb{R}^{ \mathfrak{d}} $, $ i \in \left\{ {{ 1, 2, \ldots, H }} \right\} $, $ j \in \left\{ {{ 1, 2, \ldots, d }} \right\} $ that

    $ \begin{aligned} \mathcal{G}_{(i-1) d + j}(\theta) & = 2 \mathfrak{v}^{\theta}_i \int_{I^{\theta}_i} x_j \bigl( \mathscr{N}^{\theta}(x) - f(x) \bigr) \mathfrak{p}(x) \, \lambda(\mathrm{d}x), \\ \mathcal{G}_{Hd + i}(\theta) & = 2 \mathfrak{v}^{\theta}_i \int_{I^{\theta}_i} \bigl( \mathscr{N}^{\theta}(x) - f(x) \bigr) \mathfrak{p}(x) \, \lambda(\mathrm{d}x), \\ \mathcal{G}_{H(d+1) + i}(\theta) & = 2 \int_{[\mathscr{a}, \mathscr{b}]^d} \Bigl[ \max\Bigl\{ \mathfrak{b}^{\theta}_i + \sum_{j = 1}^d \mathfrak{w}^{\theta}_{i, j} x_j, 0 \Bigr\} \Bigr] \bigl( \mathscr{N}^{\theta}(x) - f(x) \bigr) \mathfrak{p}(x) \, \lambda(\mathrm{d}x), \\ \text{and} \quad \mathcal{G}_{\mathfrak{d}}(\theta) & = 2 \int_{[\mathscr{a}, \mathscr{b}]^d} \bigl( \mathscr{N}^{\theta}(x) - f(x) \bigr) \mathfrak{p}(x) \, \lambda(\mathrm{d}x). \end{aligned} $ (2.7)

    Proof of Proposition 2.2. Observe that, e.g., [37, Proposition 2.2] establishes (2.7). The proof of Proposition 2.2 is thus complete.
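    To make (2.7) more tangible (our illustration under the simplifying assumption $ \mathfrak{p} \equiv 1 $ on $ [\mathscr{a}, \mathscr{b}]^d $; this is not code from this article), the generalized gradient can be approximated by replacing the integrals in (2.7) with Monte Carlo averages over uniform samples:

        import numpy as np

        def generalized_gradient(theta, d, H, f, a=0.0, b=1.0, num_samples=100000, seed=0):
            # Monte Carlo approximation of (2.7) for the unnormalized density p = 1 on [a, b]^d.
            w = theta[: H * d].reshape(H, d)
            bias = theta[H * d : H * d + H]
            v = theta[H * d + H : H * d + 2 * H]
            c = theta[-1]
            x = np.random.default_rng(seed).uniform(a, b, size=(num_samples, d))
            pre = x @ w.T + bias                                   # pre-activations, shape (N, H)
            residual = c + np.maximum(pre, 0.0) @ v - f(x)         # N^theta(x) - f(x), shape (N,)
            active = (pre > 0.0).astype(float)                     # indicator of the active regions I^theta_i
            vol = (b - a) ** d
            grad = np.empty_like(theta)
            grad[: H * d] = (2.0 * vol * v[:, None] *
                np.mean(active[:, :, None] * x[:, None, :] * residual[:, None, None], axis=0)).reshape(-1)
            grad[H * d : H * d + H] = 2.0 * vol * v * np.mean(active * residual[:, None], axis=0)
            grad[H * d + H : H * d + 2 * H] = 2.0 * vol * np.mean(np.maximum(pre, 0.0) * residual[:, None], axis=0)
            grad[-1] = 2.0 * vol * np.mean(residual)
            return grad

        # Example usage with d = 2, H = 3, and target f(x) = x_1 + x_2:
        d, H = 2, 3
        theta = np.random.default_rng(1).standard_normal(d * H + 2 * H + 1)
        print(generalized_gradient(theta, d, H, lambda x: x[:, 0] + x[:, 1]))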

    Proposition 2.3. Assume Setting 2.1 and let $ U \subseteq \mathbb{R}^ \mathfrak{d} $ satisfy $ U = \left\{ {{\theta \in \mathbb{R}^ \mathfrak{d} \colon \mathbf{D}^\theta = \varnothing }} \right\} $. Then

    (i) it holds that $ U \subseteq \mathbb{R}^ \mathfrak{d} $ is open,

    (ii) it holds that $ \mathcal{L} | _U \in C^1 (U, \mathbb{R}) $, and

    (iii) it holds that $ \nabla (\mathcal{L} |_U) = \mathcal{G} |_U $.

    Proof of Proposition 2.3. Note that [37, Proposition 2.10, Lemmas 2.11 and 2.12] establish items (i), (ii), and (iii). The proof of Proposition 2.3 is thus complete.

    Lemma 2.4. Let $ d \in \mathbb{N} $, $ \mathscr{a} \in \mathbb{R} $, $ \mathscr{b} \in (\mathscr{a}, \infty) $, for every $ v = (v_1, \ldots, v_{d+1}) \in \mathbb{R}^{d+1} $ let $ I^v \subseteq [\mathscr{a}, \mathscr{b}]^d $ satisfy $ I^v = \left\{ {{ x \in [\mathscr{a}, \mathscr{b}] ^d \colon v_{d+1} + \sum _{i = 1}^d v_i x_i > 0 }} \right\} $, for every $ n \in \mathbb{N} $ let $ \lambda_n \colon \mathcal{B} (\mathbb{R}^n) \to [0, \infty] $ be the Lebesgue–Borel measure on $ \mathbb{R}^n $, let $ \mathfrak{p} \colon [\mathscr{a}, \mathscr{b}] ^d \to [0, \infty) $ be bounded and measurable, and let $ u \in \mathbb{R}^{d+1} \backslash \left\{ {{0}} \right\} $. Then there exist $ \varepsilon, \mathfrak{C} \in (0, \infty) $ such that for all $ v, w \in \mathbb{R}^{d+1} $ with $ \max \left\{ {{ ||u - v||, ||u - w || }} \right\} \le \varepsilon $ it holds that

    $ \int_{I^v \Delta I^w} \mathfrak{p}(x) \, \lambda_d(\mathrm{d}x) \le \mathfrak{C} \, || v - w ||. $ (2.8)

    Proof of Lemma 2.4. Observe that for all $ v, w \in \mathbb{R}^{d+1} $ we have that

    $ \int_{I^v \Delta I^w} \mathfrak{p}(x) \, \lambda_d(\mathrm{d}x) \le \bigl( \sup_{x \in [\mathscr{a}, \mathscr{b}]^d} \mathfrak{p}(x) \bigr) \lambda_d(I^v \Delta I^w). $ (2.9)

    Moreover, note that the fact that for all $ y \in \mathbb{R} $ it holds that $ y \geq - |y| $ ensures that for all $ v = (v_1, \ldots, v_{d+1}) \in \mathbb{R}^{d+1} $, $ i \in \left\{ {{1, 2, \ldots, d+1 }} \right\} $ with $ ||u-v|| < |u_i| $ it holds that

    $ u_i v_i = (u_i)^2 + (v_i - u_i) u_i \ge | u_i |^2 - | u_i - v_i | \, | u_i | \ge | u_i |^2 - || u - v || \, | u_i | > 0. $ (2.10)

    Next observe that for all $ v_1, v_2, w_1, w_2 \in \mathbb{R} $ with $ \min \left\{ {{|v_1|, |w_1|}} \right\} > 0 $ it holds that

    $ \Bigl| \frac{v_2}{v_1} - \frac{w_2}{w_1} \Bigr| = \frac{| v_2 w_1 - w_2 v_1 |}{| v_1 w_1 |} = \frac{| v_2 (w_1 - v_1) + v_1 (v_2 - w_2) |}{| v_1 w_1 |} \le \Bigl[ \frac{| v_2 | + | v_1 |}{| v_1 w_1 |} \Bigr] \bigl[ | v_1 - w_1 | + | v_2 - w_2 | \bigr]. $ (2.11)

    Combining this and (2.10) demonstrates for all $ v = (v_1, \ldots, v_{d+1}) $, $ w = (w_1, \ldots, w_{d+1}) \in \mathbb{R}^{d+1} $, $ i \in \left\{ {{1, 2, \ldots, d}} \right\} $ with $ \max \left\{ {{||v-u||, ||w-u||}} \right\} < |u_1| $ that $ v_1 w_1 > 0 $ and

    $ \Bigl| \frac{v_i}{v_1} - \frac{w_i}{w_1} \Bigr| \le \Bigl[ \frac{2 || v ||}{| v_1 w_1 |} \Bigr] \bigl[ 2 || v - w || \bigr] \le \Bigl[ \frac{4 || v - u || + 4 || u ||}{| v_1 w_1 |} \Bigr] || v - w ||. $ (2.12)

    Hence, we obtain for all $ v = (v_1, \ldots, v_{d+1}) $, $ w = (w_1, \ldots, w_{d+1}) \in \mathbb{R}^{d+1} $, $ i \in \left\{ {{1, 2, \ldots, d}} \right\} $ with $ \max \left\{ {{||v-u||, ||w-u||}} \right\} \le \frac{|u_1|}{2} $ and $ |u_1| > 0 $ that $ v_1 w_1 > 0 $ and

    $ \Bigl| \frac{v_i}{v_1} - \frac{w_i}{w_1} \Bigr| \le \frac{(2 | u_1 | + 4 || u ||) \, || v - w ||}{| u_1 + (v_1 - u_1) | \, | u_1 + (w_1 - u_1) |} \le \frac{6 || u || \, || v - w ||}{(| u_1 | - || v - u ||)(| u_1 | - || w - u ||)} \le \Bigl[ \frac{24 || u ||}{| u_1 |^2} \Bigr] || v - w ||. $ (2.13)

    In the following we distinguish between the case $ \max_{i \in \left\{ {{1, 2, \ldots, d }} \right\} } |u_i| = 0 $, the case $ (\max_{i \in \left\{ {{1, 2, \ldots, d }} \right\} } |u_i|, d) \in (0, \infty) \times [2, \infty) $, and the case $ (\max_{i \in \left\{ {{1, 2, \ldots, d }} \right\} } |u_i|, d) \in (0, \infty) \times \left\{ {{1}} \right\} $. We first prove (2.8) in the case

    $ \max_{i \in \{ 1, 2, \ldots, d \}} | u_i | = 0. $ (2.14)

    Note that (2.14) and the assumption that $ u \in \mathbb{R}^{d+1} \backslash \left\{ {{ 0 }} \right\} $ imply that $ | u_{d+1} | > 0 $. Moreover, observe that (2.14) shows that for all $ v = (v_1, \ldots, v_{d+1}) \in \mathbb{R}^{d+1} $, $ x = (x_1, \ldots, x_d) \in I^u \Delta I^v $ we have that

    $ \Bigl| \Bigl( \Bigl[ \sum_{i = 1}^d v_i x_i \Bigr] + v_{d+1} \Bigr) - \Bigl( \Bigl[ \sum_{i = 1}^d u_i x_i \Bigr] + u_{d+1} \Bigr) \Bigr| = \Bigl| \Bigl[ \sum_{i = 1}^d v_i x_i \Bigr] + v_{d+1} \Bigr| + \Bigl| \Bigl[ \sum_{i = 1}^d u_i x_i \Bigr] + u_{d+1} \Bigr| \ge \Bigl| \Bigl[ \sum_{i = 1}^d u_i x_i \Bigr] + u_{d+1} \Bigr| = | u_{d+1} |. $ (2.15)

    In addition, note that for all $ v = (v_1, \ldots, v_{d+1}) \in \mathbb{R}^{d+1} $, $ x = (x_1, \ldots, x_d) \in [\mathscr{a}, \mathscr{b}] ^d $ it holds that

    $ \Bigl| \Bigl( \Bigl[ \sum_{i = 1}^d v_i x_i \Bigr] + v_{d+1} \Bigr) - \Bigl( \Bigl[ \sum_{i = 1}^d u_i x_i \Bigr] + u_{d+1} \Bigr) \Bigr| \le \Bigl[ \sum_{i = 1}^d | v_i - u_i | \, | x_i | \Bigr] + | v_{d+1} - u_{d+1} | \le \max\{ | \mathscr{a} |, | \mathscr{b} | \} \Bigl[ \sum_{i = 1}^d | v_i - u_i | \Bigr] + | v_{d+1} - u_{d+1} | \le \bigl( 1 + d \max\{ | \mathscr{a} |, | \mathscr{b} | \} \bigr) || v - u ||. $ (2.16)

    This and (2.15) prove that for all $ v \in \mathbb{R}^{d+1} $ with $ || u - v || \le \frac{|u_{d+1}|}{2 + d \max \left\{ {{ | \mathscr{a} |, | \mathscr{b} | }} \right\}} $ we have that $ I^u \Delta I^v = \varnothing $, i.e., $ I^u = I^v $. Therefore, we get for all $ v, w \in \mathbb{R}^{d+1} $ with $ \max \left\{ {{ || u - v ||, ||u - w|| }} \right\} \le \frac{|u_{d+1}|}{2 + d \max \left\{ {{ | \mathscr{a} |, | \mathscr{b} | }} \right\}} $ that $ I^v = I^w = I^u $. Hence, we obtain for all $ v, w \in \mathbb{R}^{d+1} $ with $ \max \left\{ {{ || u - v ||, ||u - w|| }} \right\} \le \frac{|u_{d+1}|}{2 + d \max \left\{ {{ | \mathscr{a} |, | \mathscr{b} | }} \right\}} $ that $ \lambda_d(I^v \Delta I^w) = 0 $. This establishes (2.8) in the case $ \max_{i \in \left\{ {{1, 2, \ldots, d }} \right\} } |u_i| = 0 $. In the next step we prove (2.8) in the case

    $ \bigl( \max_{i \in \{ 1, 2, \ldots, d \}} | u_i |, d \bigr) \in (0, \infty) \times [2, \infty). $ (2.17)

    For this we assume without loss of generality that $ | u_1 | > 0 $. In the following let $ J_x^{v, w} \subseteq \mathbb{R} $, $ x \in [\mathscr{a}, \mathscr{b}]^{d-1} $, $ v, w \in \mathbb{R}^{d+1} $, satisfy for all $ x = (x_2, \ldots, x_d) \in [\mathscr{a}, \mathscr{b}] ^{d-1} $, $ v, w \in \mathbb{R}^{d+1} $ that $ J_x^{v, w} = \left\{ {{ y \in [\mathscr{a}, \mathscr{b}] \colon (y, x_2, \ldots, x_d) \in I^v \backslash I^w }} \right\} $. Next observe that Fubini's theorem and the fact that for all $ v \in \mathbb{R}^{d+1} $ it holds that $ I^v $ is measurable show that for all $ v, w \in \mathbb{R}^{d+1} $ we have that

    $ \begin{aligned} \lambda_d(I^v \Delta I^w) & = \int_{[\mathscr{a}, \mathscr{b}]^d} \mathbf{1}_{I^v \Delta I^w}(x) \, \lambda_d(\mathrm{d}x) = \int_{[\mathscr{a}, \mathscr{b}]^d} \bigl( \mathbf{1}_{I^v \backslash I^w}(x) + \mathbf{1}_{I^w \backslash I^v}(x) \bigr) \lambda_d(\mathrm{d}x) \\ & = \int_{[\mathscr{a}, \mathscr{b}]^{d-1}} \int_{[\mathscr{a}, \mathscr{b}]} \bigl( \mathbf{1}_{I^v \backslash I^w}(y, x_2, \ldots, x_d) + \mathbf{1}_{I^w \backslash I^v}(y, x_2, \ldots, x_d) \bigr) \lambda_1(\mathrm{d}y) \, \lambda_{d-1}(\mathrm{d}(x_2, \ldots, x_d)) \\ & = \int_{[\mathscr{a}, \mathscr{b}]^{d-1}} \int_{[\mathscr{a}, \mathscr{b}]} \bigl( \mathbf{1}_{J^{v, w}_x}(y) + \mathbf{1}_{J^{w, v}_x}(y) \bigr) \lambda_1(\mathrm{d}y) \, \lambda_{d-1}(\mathrm{d}x) = \int_{[\mathscr{a}, \mathscr{b}]^{d-1}} \bigl( \lambda_1(J^{v, w}_x) + \lambda_1(J^{w, v}_x) \bigr) \lambda_{d-1}(\mathrm{d}x). \end{aligned} $ (2.18)

    Furthermore, note that for all $ x = (x_2, \ldots, x_d) \in [\mathscr{a}, \mathscr{b}] ^{d-1} $, $ v = (v_1, \ldots, v_{d+1}) $, $ w = (w_1, \ldots, w_{d+1}) \in \mathbb{R}^{d+1} $, $ \mathfrak{s} \in \left\{ {{ - 1, 1 }} \right\} $ with $ \min \left\{ {{ \mathfrak{s} v_1, \mathfrak{s} w_1 }} \right\} > 0 $ it holds that

    $ \begin{aligned} J^{v, w}_x & = \bigl\{ y \in [\mathscr{a}, \mathscr{b}] \colon (y, x_2, \ldots, x_d) \in I^v \backslash I^w \bigr\} \\ & = \bigl\{ y \in [\mathscr{a}, \mathscr{b}] \colon v_1 y + \bigl[ \textstyle\sum_{i = 2}^d v_i x_i \bigr] + v_{d+1} > 0 \ge w_1 y + \bigl[ \textstyle\sum_{i = 2}^d w_i x_i \bigr] + w_{d+1} \bigr\} \\ & = \bigl\{ y \in [\mathscr{a}, \mathscr{b}] \colon - \tfrac{1}{\mathfrak{s} v_1} \bigl( \bigl[ \textstyle\sum_{i = 2}^d v_i x_i \bigr] + v_{d+1} \bigr) < \mathfrak{s} y \le - \tfrac{1}{\mathfrak{s} w_1} \bigl( \bigl[ \textstyle\sum_{i = 2}^d w_i x_i \bigr] + w_{d+1} \bigr) \bigr\}. \end{aligned} $ (2.19)

    Hence, we obtain for all $ x = (x_2, \ldots, x_d) \in [\mathscr{a}, \mathscr{b}] ^{d-1} $, $ v = (v_1, \ldots, v_{d+1}) $, $ w = (w_1, \ldots, w_{d+1}) \in \mathbb{R}^{d+1} $, $ \mathfrak{s} \in \left\{ {{ - 1, 1 }} \right\} $ with $ \min \left\{ {{ \mathfrak{s} v_1, \mathfrak{s} w_1 }} \right\} > 0 $ that

    $ \lambda_1(J^{v, w}_x) \le \Bigl| \tfrac{1}{\mathfrak{s} v_1} \bigl( \bigl[ \textstyle\sum_{i = 2}^d v_i x_i \bigr] + v_{d+1} \bigr) - \tfrac{1}{\mathfrak{s} w_1} \bigl( \bigl[ \textstyle\sum_{i = 2}^d w_i x_i \bigr] + w_{d+1} \bigr) \Bigr| \le \Bigl[ \textstyle\sum_{i = 2}^d \bigl| \tfrac{v_i}{v_1} - \tfrac{w_i}{w_1} \bigr| \, | x_i | \Bigr] + \bigl| \tfrac{v_{d+1}}{v_1} - \tfrac{w_{d+1}}{w_1} \bigr| \le \max\{ | \mathscr{a} |, | \mathscr{b} | \} \Bigl[ \textstyle\sum_{i = 2}^d \bigl| \tfrac{v_i}{v_1} - \tfrac{w_i}{w_1} \bigr| \Bigr] + \bigl| \tfrac{v_{d+1}}{v_1} - \tfrac{w_{d+1}}{w_1} \bigr|. $ (2.20)

    Furthermore, observe that (2.10) demonstrates for all $ v = (v_1, \ldots, v_{d+1}) \in \mathbb{R}^{d+1} $ with $ ||u - v|| < |u_1| $ that $ u_1 v_1 > 0 $. This implies that for all $ v = (v_1, \ldots, v_{d+1}) $, $ w = (w_1, \ldots, w_{d+1}) \in \mathbb{R}^{d+1} $ with $ \max \left\{ {{||u - v||, ||u - w|| }} \right\} < |u_1| $ there exists $ \mathfrak{s} \in \left\{ {{-1, 1 }} \right\} $ such that $ \min \left\{ {{ \mathfrak{s} v_1, \mathfrak{s} w_1 }} \right\} > 0 $. Combining this and (2.13) with (2.20) proves that there exists $ \mathfrak{C} \in \mathbb{R} $ such that for all $ x \in [\mathscr{a}, \mathscr{b}]^{d-1} $, $ v, w \in \mathbb{R}^{d+1} $ with $ \max \left\{ {{ || u - v ||, ||u - w || }} \right\} \le \frac{ | u_1 | }{2} $ we have that $ \lambda_1(J_x^{v, w}) + \lambda_1 (J_x^{w, v}) \leq \mathfrak{C} || v - w || $. This, (2.18), and (2.9) establish (2.8) in the case $ (\max_{i \in \left\{ {{1, 2, \ldots, d }} \right\} } |u_i|, d) \in (0, \infty) \times [2, \infty) $. Finally, we prove (2.8) in the case

    $ \bigl( \max_{i \in \{ 1, 2, \ldots, d \}} | u_i |, d \bigr) \in (0, \infty) \times \{ 1 \}. $ (2.21)

    Note that (2.21) demonstrates that $ |u_1| > 0 $. In addition, observe that for all $ v = (v_1, v_2) $, $ w = (w_1, w_2) \in \mathbb{R}^{2} $, $ \mathfrak{s} \in \left\{ {{ - 1, 1 }} \right\} $ with $ \min \left\{ {{ \mathfrak{s} v_1, \mathfrak{s} w_1 }} \right\} > 0 $ it holds that

    $ I^v \backslash I^w = \bigl\{ y \in [\mathscr{a}, \mathscr{b}] \colon v_1 y + v_2 > 0 \ge w_1 y + w_2 \bigr\} = \bigl\{ y \in [\mathscr{a}, \mathscr{b}] \colon - \tfrac{\mathfrak{s} v_2}{v_1} < \mathfrak{s} y \le - \tfrac{\mathfrak{s} w_2}{w_1} \bigr\} \subseteq \bigl\{ y \in \mathbb{R} \colon - \tfrac{\mathfrak{s} v_2}{v_1} < \mathfrak{s} y \le - \tfrac{\mathfrak{s} w_2}{w_1} \bigr\}. $ (2.22)

    Therefore, we get for all $ v = (v_1, v_2) $, $ w = (w_1, w_2) \in \mathbb{R}^{2} $, $ \mathfrak{s} \in \left\{ {{ - 1, 1 }} \right\} $ with $ \min \left\{ {{ \mathfrak{s} v_1, \mathfrak{s} w_1 }} \right\} > 0 $ that

    $ \lambda_1(I^v \backslash I^w) \le \bigl| \bigl( - \tfrac{\mathfrak{s} v_2}{v_1} \bigr) - \bigl( - \tfrac{\mathfrak{s} w_2}{w_1} \bigr) \bigr| = \bigl| \tfrac{v_2}{v_1} - \tfrac{w_2}{w_1} \bigr|. $ (2.23)

    Furthermore, note that (2.10) ensures for all $ v = (v_1, v_2) \in \mathbb{R}^2 $ with $ ||u - v || < |u_1| $ that $ u_1 v_1 > 0 $. This proves that for all $ v = (v_1, v_2) $, $ w = (w_1, w_2) \in \mathbb{R}^2 $ with $ \max \left\{ {{||u - v ||, ||u - w|| }} \right\} < |u_1| $ there exists $ \mathfrak{s} \in \left\{ {{-1, 1 }} \right\} $ such that $ \min \left\{ {{ \mathfrak{s} v_1, \mathfrak{s} w_1 }} \right\} > 0 $. Combining this with (2.23) demonstrates for all $ v = (v_1, v_2) $, $ w = (w_1, w_2) \in \mathbb{R}^2 $ with $ \max \left\{ {{||u - v ||, ||u - w || }} \right\} < |u_1| $ that $ \min \left\{ {{ |v_1|, |w_1|}} \right\} > 0 $ and

    $ \lambda_1(I^v \Delta I^w) = \lambda_1(I^v \backslash I^w) + \lambda_1(I^w \backslash I^v) \le 2 \bigl| \tfrac{v_2}{v_1} - \tfrac{w_2}{w_1} \bigr|. $ (2.24)

    This, (2.13), and (2.9) establish (2.8) in the case $ (\max_{i \in \left\{ {{1, 2, \ldots, d }} \right\} } |u_i|, d) \in (0, \infty) \times \left\{ {{ 1 }} \right\} $. The proof of Lemma 2.4 is thus complete.

    Lemma 2.5. Let $ d, n \in \mathbb{N} $, $ \mathscr{a} \in \mathbb{R} $, $ \mathscr{b} \in (\mathscr{a}, \infty) $, $ x \in \mathbb{R}^n $, $ \mathfrak{C}, \varepsilon \in (0, \infty) $, let $ \phi \colon \mathbb{R}^n \times [\mathscr{a}, \mathscr{b}]^d \to \mathbb{R} $ be locally bounded and measurable, assume for all $ r \in (0, \infty) $ that

    $ \sup_{y, z \in \mathbb{R}^n, \, || y || + || z || \le r, \, y \neq z} \; \sup_{s \in [\mathscr{a}, \mathscr{b}]^d} \frac{| \phi(y, s) - \phi(z, s) |}{|| y - z ||} < \infty, $ (2.25)

    let $ \mu \colon \mathcal{B}([\mathscr{a}, \mathscr{b}]^d) \to [0, \infty) $ be a finite measure, let $ I^y \in \mathcal{B} ([\mathscr{a}, \mathscr{b}]^d) $, $ y \in \mathbb{R}^n $, satisfy for all $ y, z \in \left\{ {{ v \in \mathbb{R}^n \colon ||x-v|| \le \varepsilon }} \right\} $ that $ \mu (I^y \Delta I^z) \leq \mathfrak{C} ||y - z || $, and let $ \Phi \colon \mathbb{R}^n \to \mathbb{R} $ satisfy for all $ y \in \mathbb{R}^n $ that

    $ \Phi(y) = \int_{I^y} \phi(y, s) \, \mu(\mathrm{d}s). $ (2.26)

    Then there exists $ \mathscr{C} \in \mathbb{R} $ such that for all $ y, z \in \left\{ {{ v \in \mathbb{R}^n \colon ||x - v|| \le \varepsilon }} \right\} $ it holds that $ |\Phi (y) - \Phi (z) | \leq \mathscr{C} ||y - z|| $.

    Proof of Lemma 2.5. The proof is analogous to the proof of [35, Lemma 6].

    Corollary 2.6. Assume Setting 2.1, let $ \phi \colon \mathbb{R}^ \mathfrak{d} \times [\mathscr{a}, \mathscr{b}]^d \to \mathbb{R} $ be locally bounded and measurable, and assume for all $ r \in (0, \infty) $ that

    $ \sup_{\theta, \vartheta \in \mathbb{R}^{\mathfrak{d}}, \, || \theta || + || \vartheta || \le r, \, \theta \neq \vartheta} \; \sup_{x \in [\mathscr{a}, \mathscr{b}]^d} \frac{| \phi(\theta, x) - \phi(\vartheta, x) |}{|| \theta - \vartheta ||} < \infty. $ (2.27)

    Then

    (i) it holds that

    $ \mathbb{R}^{\mathfrak{d}} \ni \theta \mapsto \int_{[\mathscr{a}, \mathscr{b}]^d} \phi(\theta, x) \, \mathfrak{p}(x) \, \lambda(\mathrm{d}x) \in \mathbb{R} $ (2.28)

    is locally Lipschitz continuous and

    (ii) it holds for all $ i \in \left\{ {{ 1, 2, \ldots, H}} \right\} $ that

    $ \{ \vartheta \in \mathbb{R}^{\mathfrak{d}} \colon i \notin \mathbf{D}^{\vartheta} \} \ni \theta \mapsto \int_{I^{\theta}_i} \phi(\theta, x) \, \mathfrak{p}(x) \, \lambda(\mathrm{d}x) \in \mathbb{R} $ (2.29)

    is locally Lipschitz continuous.

    Proof of Corollary 2.6. First observe that Lemma 2.5 (applied for every $ \theta \in \mathbb{R}^ \mathfrak{d} $ with $ n \curvearrowleft \mathfrak{d} $, $ x \curvearrowleft \theta $, $ \mu \curvearrowleft (\mathcal{B} ([\mathscr{a}, \mathscr{b}] ^d) \ni A \mapsto \int_A \mathfrak{p} (x) \, \lambda (\, \mathrm{d} x) \in [0, \infty)) $, $ (I^y)_{y \in \mathbb{R}^n} \curvearrowleft ([\mathscr{a}, \mathscr{b}]^d)_{y \in \mathbb{R}^ \mathfrak{d}} $ in the notation of Lemma 2.5) establishes item (i). In the following let $ i \in \left\{ {{1, 2, \ldots, H}} \right\} $, $ \theta \in \left\{ {{\vartheta \in \mathbb{R}^ \mathfrak{d} \colon i \notin \mathbf{D}^\vartheta }} \right\} $. Note that Lemma 2.4 shows that there exist $ \varepsilon, \mathfrak{C} \in (0, \infty) $ which satisfy for all $ \vartheta_1, \vartheta_2 \in \mathbb{R}^ \mathfrak{d} $ with $ \max \left\{ {{ ||\theta - \vartheta_1||, ||\theta - \vartheta_2|| }} \right\} \le \varepsilon $ that

    $ \int_{I^{\vartheta_1}_i \Delta I^{\vartheta_2}_i} \mathfrak{p}(x) \, \lambda(\mathrm{d}x) \le \mathfrak{C} \, || \vartheta_1 - \vartheta_2 ||. $ (2.30)

    Combining this with Lemma 2.5 (applied for every $ \theta \in \mathbb{R}^ \mathfrak{d} $ with $ n \curvearrowleft \mathfrak{d} $, $ x \curvearrowleft \theta $, $ \mu \curvearrowleft (\mathcal{B} ([\mathscr{a}, \mathscr{b}] ^d) \ni A \mapsto \int_A \mathfrak{p} (x) \, \lambda (\, \mathrm{d} x) \in [0, \infty)) $, $ (I^y)_{y \in \mathbb{R}^n} \curvearrowleft (I_i^y)_{ y \in \mathbb{R}^ \mathfrak{d}} $ in the notation of Lemma 2.5) demonstrates that there exists $ \mathscr{C} \in \mathbb{R} $ such that for all $ \vartheta_1, \vartheta_2 \in \mathbb{R}^ \mathfrak{d} $ with $ \max \left\{ {{ ||\theta - \vartheta_1||, ||\theta - \vartheta_2|| }} \right\} \le \varepsilon $ it holds that

    $ \Bigl| \int_{I^{\vartheta_1}_i} \phi(\vartheta_1, x) \, \mathfrak{p}(x) \, \lambda(\mathrm{d}x) - \int_{I^{\vartheta_2}_i} \phi(\vartheta_2, x) \, \mathfrak{p}(x) \, \lambda(\mathrm{d}x) \Bigr| \le \mathscr{C} \, || \vartheta_1 - \vartheta_2 ||. $ (2.31)

    This establishes item (ii). The proof of Corollary 2.6 is thus complete.

    Corollary 2.7. Assume Setting 2.1. Then

    (i) it holds for all $ k \in \mathbb{N} \cap (H d + H, \mathfrak{d}] $ that

    $ \mathbb{R}^{\mathfrak{d}} \ni \theta \mapsto \mathcal{G}_k(\theta) \in \mathbb{R} $ (2.32)

    is locally Lipschitz continuous,

    (ii) it holds for all $ i \in \left\{ {{ 1, 2, \ldots, H}} \right\} $, $ j \in \left\{ {{ 1, 2, \ldots, d}} \right\} $ that

    $ \{ \vartheta \in \mathbb{R}^{\mathfrak{d}} \colon i \notin \mathbf{D}^{\vartheta} \} \ni \theta \mapsto \mathcal{G}_{(i-1) d + j}(\theta) \in \mathbb{R} $ (2.33)

    is locally Lipschitz continuous, and

    (iii) it holds for all $ i \in \left\{ {{1, 2, \ldots, H}} \right\} $ that

    $ \{ \vartheta \in \mathbb{R}^{\mathfrak{d}} \colon i \notin \mathbf{D}^{\vartheta} \} \ni \theta \mapsto \mathcal{G}_{Hd + i}(\theta) \in \mathbb{R} $ (2.34)

    is locally Lipschitz continuous.

    Proof of Corollary 2.7. Observe that (2.7) and Corollary 2.6 establish items (i), (ii), and (iii). The proof of Corollary 2.7 is thus complete.

    Definition 2.8 (Subdifferential). Let $ n \in \mathbb{N} $, $ f \in C(\mathbb{R}^n, \mathbb{R}) $, $ x \in \mathbb{R}^n $. Then we denote by $ \hat{\partial} f(x) \subseteq \mathbb{R}^n $ the set given by

    $ \hat{\partial} f(x) = \Bigl\{ y \in \mathbb{R}^n \colon \liminf_{\mathbb{R}^n \backslash \{ 0 \} \ni h \to 0} \Bigl( \frac{f(x + h) - f(x) - \langle y, h \rangle}{|| h ||} \Bigr) \ge 0 \Bigr\}. $ (2.35)

    Definition 2.9 (Limiting subdifferential). Let $ n \in \mathbb{N} $, $ f \in C(\mathbb{R}^n, \mathbb{R}) $, $ x \in \mathbb{R}^n $. Then we denote by $ \partial f(x) \subseteq \mathbb{R}^n $ the set given by

    $ \partial f(x) = \bigcap_{\varepsilon \in (0, \infty)} \overline{ \Bigl[ \bigcup_{y \in \{ z \in \mathbb{R}^n \colon || x - z || < \varepsilon \}} \hat{\partial} f(y) \Bigr] } $ (2.36)

    (cf. Definition 2.8).
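    For example (a standard illustration, not part of the results of this article), for $ n = 1 $ and $ f(x) = |x| $ one has $ \hat{\partial} f(0) = \partial f(0) = [-1, 1] $, whereas for $ f(x) = -|x| $ one has $ \hat{\partial} f(0) = \varnothing $ but $ \partial f(0) = \left\{ {{ -1, 1 }} \right\} $, since the limiting subdifferential collects all limits of subgradients at nearby points (cf. Lemma 2.10 below).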

    Lemma 2.10. Let $ n \in \mathbb{N} $, $ f \in C(\mathbb{R}^n, \mathbb{R}) $, $ x \in \mathbb{R}^n $. Then

    $ \partial f(x) = \Bigl\{ y \in \mathbb{R}^n \colon \exists \, z = (z_1, z_2) \colon \mathbb{N} \to \mathbb{R}^n \times \mathbb{R}^n \colon \Bigl( \bigl[ \forall \, k \in \mathbb{N} \colon z_2(k) \in \hat{\partial} f(z_1(k)) \bigr], \; \bigl[ \limsup_{k \to \infty} \bigl( || z_1(k) - x || + || z_2(k) - y || \bigr) = 0 \bigr] \Bigr) \Bigr\} $ (2.37)

    (cf. Definitions 2.8 and 2.9).

    Proof of Lemma 2.10. Note that (2.36) establishes (2.37). The proof of Lemma 2.10 is thus complete.

    Lemma 2.11. Let $ n \in \mathbb{N} $, $ f \in C (\mathbb{R}^n, \mathbb{R}) $, let $ U \subseteq \mathbb{R}^n $ be open, assume $ f | _U \in C^1 (U, \mathbb{R}) $, and let $ x \in U $. Then $ \hat{\partial} f(x) = \partial f(x) = \left\{ {{ (\nabla f) (x) }} \right\} $ (cf. Definitions 2.8 and 2.9).

    Proof of Lemma 2.11. This is a direct consequence of, e.g., Rockafellar & Wets [39, Exercise 8.8]. The proof of Lemma 2.11 is thus complete.

    Proposition 2.12. Assume Setting 2.1 and let $ \theta \in \mathbb{R}^ \mathfrak{d} $. Then $ \mathcal{G} (\theta) \in \partial \mathcal{L} (\theta) $ (cf. Definition 2.9).

    Proof of Proposition 2.12. Throughout this proof let $ \vartheta = (\vartheta_n)_{n \in \mathbb{N}} \colon \mathbb{N} \to \mathbb{R}^ \mathfrak{d} $ satisfy for all $ n \in \mathbb{N} $, $ i \in \left\{ {{ 1, 2, \ldots, H}} \right\} $, $ j \in \left\{ {{ 1, 2, \ldots, d}} \right\} $ that $ \mathfrak{w}^{{\vartheta_n}}_{i, j} = \mathfrak{w}^{{\theta}}_{i, j} $, $ \mathfrak{b}^{{\vartheta_n}}_i = \mathfrak{b}^{{\theta}}_i - \frac{1}{n} \mathbf{1}_{\smash{{ \mathbf{D}^\theta }}} (i) $, $ \mathfrak{v}^{{\vartheta_n}} _ i = \mathfrak{v}^{{\theta}}_i $, and $ \mathfrak{c}^{{\vartheta_n}} = \mathfrak{c}^{{\theta}} $. We prove Proposition 2.12 through an application of Lemma 2.10. Observe that for all $ n \in \mathbb{N} $, $ i \in \left\{ {{ 1, 2, \ldots, H }} \right\} \backslash \mathbf{D}^\theta $ it holds that $ \mathfrak{b}^{{\vartheta_n}}_i = \mathfrak{b}^{{\theta}}_i $. This implies for all $ n \in \mathbb{N} $, $ i \in \left\{ {{ 1, 2, \ldots, H }} \right\} \backslash \mathbf{D}^\theta $ that

    $ i \notin \mathbf{D}^{\vartheta_n}. $ (2.38)

    In addition, note that for all $ n \in \mathbb{N} $, $ i \in \mathbf{D}^\theta $ it holds that $ \mathfrak{b}^{{\vartheta_n}}_i = - \frac{1}{n} < 0 $. This shows for all $ n \in \mathbb{N} $, $ i \in \mathbf{D}^\theta $ that

    $ i \notin \mathbf{D}^{\vartheta_n}. $ (2.39)

    Hence, we obtain for all $ n \in \mathbb{N} $ that $ \mathbf{D}^{\vartheta_n} = \varnothing $. Combining this with Proposition 2.3 and Lemma 2.11 demonstrates that for all $ n \in \mathbb{N} $ it holds that $ \hat{\partial} \mathcal{L} (\vartheta_n) = \left\{ {{ (\nabla \mathcal{L}) (\vartheta_n) }} \right\} = \left\{ {{ \mathcal{G} (\vartheta_n) }} \right\} $ (cf. Definition 2.8). Moreover, observe that $ \lim_{n \to \infty} \vartheta_n = \theta $. It thus remains to show that $ \mathcal{G} (\vartheta _n) $, $ n \in \mathbb{N} $, converges to $ \mathcal{G} (\theta) $. Note that Corollary 2.7 ensures that for all $ k \in \mathbb{N} \cap (H d + H, \mathfrak{d}] $ it holds that

    $ \lim_{n \to \infty} \mathcal{G}_k(\vartheta_n) = \mathcal{G}_k(\theta). $ (2.40)

    Furthermore, observe that Corollary 2.7, (2.38) and (2.39) assure that for all $ i \in \left\{ {{ 1, 2, \ldots, H}} \right\} \backslash \mathbf{D}^\theta $, $ j \in \left\{ {{ 1, 2, \ldots, d }} \right\} $ it holds that

    $ \lim_{n \to \infty} \mathcal{G}_{(i-1)d+j}(\vartheta_n) = \mathcal{G}_{(i-1)d+j}(\theta) \qquad \text{and} \qquad \lim_{n \to \infty} \mathcal{G}_{Hd+i}(\vartheta_n) = \mathcal{G}_{Hd+i}(\theta). $ (2.41)

    In addition, note that for all $ n \in \mathbb{N} $, $ i \in \mathbf{D}^\theta $ we have that $ I_i^{\vartheta_n } = I_i^\theta = \varnothing $. Hence, we obtain for all $ i \in \mathbf{D}^\theta $, $ j \in \left\{ {{ 1, 2, \ldots, d}} \right\} $ that

    $ \lim_{n \to \infty} \mathcal{G}_{(i-1)d+j}(\vartheta_n) = 0 = \mathcal{G}_{(i-1)d+j}(\theta) \qquad \text{and} \qquad \lim_{n \to \infty} \mathcal{G}_{Hd+i}(\vartheta_n) = 0 = \mathcal{G}_{Hd+i}(\theta). $ (2.42)

    Combining this, (2.40) and (2.41) demonstrates that $ \lim_{n \to \infty} \mathcal{G} (\vartheta _ n) = \mathcal{G} (\theta) $. This and Lemma 2.10 assure that $ \mathcal{G} (\theta) \in \partial \mathcal{L} (\theta) $. The proof of Proposition 2.12 is thus complete.

    In this section we employ the local Lipschitz continuity result for the generalized gradient function in Corollary 2.7 from Section 2 to establish existence and uniqueness results for solutions of GF differential equations. Specifically, in Proposition 3.1 in Subsection 3.1 below we prove the existence of solutions of GF differential equations, in Lemma 3.2 in Subsection 3.2 below we establish the uniqueness of solutions of GF differential equations among a suitable class of GF solutions, and in Theorem 3.3 in Subsection 3.3 below we combine Proposition 3.1 and Lemma 3.2 to establish the unique existence of solutions of GF differential equations among a suitable class of GF solutions. Theorem 1.1 in the introduction is an immediate consequence of Theorem 3.3.

    Roughly speaking, we show in Theorem 3.3 the unique existence of solutions of GF differential equations among the class of GF solutions which satisfy that the set of all degenerate neurons of the GF solution at time $ t \in [0, \infty) $ is non-decreasing in the time variable $ t \in [0, \infty) $. In other words, in Theorem 3.3 we prove the unique existence of GF solutions with the property that once a neuron has become degenerate it will remain degenerate for subsequent times.

    Our strategy of the proof of Theorem 3.3 and Proposition 3.1, respectively, can, loosely speaking, be described as follows. Corollary 2.7 above implies that the components of the generalized gradient function $ \mathcal{G} \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ \mathfrak{d} } $ corresponding to non-degenerate neurons are locally Lipschitz continuous so that the classical Picard-Lindelöf local existence and uniqueness theorem for ordinary differential equations can be brought into play for those components. On the other hand, if at some time $ t \in [0, \infty) $ the $ i $-th neuron is degenerate, then Proposition 2.2 above shows that the corresponding components of the generalized gradient function $ \mathcal{G} \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ \mathfrak{d} } $ vanish. The GF differential equation is thus satisfied if the neuron remains degenerate at all subsequent times $ s \in [t, \infty) $. Using these arguments we prove in Proposition 3.1 the existence of GF solutions by induction on the number of non-degenerate neurons of the initial value.
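
    The following short numerical sketch (an illustration added here, not part of the formal development; the discretization, target function, and step size are ad hoc choices) mimics this mechanism for $ d = 1 $: it integrates the GF differential equation with an explicit Euler scheme, approximates the risk integral by a uniform grid on $ [0, 1] $ with $ \mathfrak{p} \equiv 1 $, and sets the gradient components of degenerate neurons to zero in analogy with the generalized gradient function, so that a neuron which is degenerate at initialization remains degenerate along the whole trajectory.

```python
import numpy as np

# Illustrative sketch only (ad hoc choices, not the paper's algorithm): explicit Euler
# discretization of the GF differential equation for a shallow ReLU network with input
# dimension d = 1 and H hidden neurons.  The risk int_0^1 (N_theta(x) - f(x))^2 dx
# (i.e. [a, b] = [0, 1] and p identically 1) is approximated on a uniform grid.

H = 3
xs = np.linspace(0.0, 1.0, 200)              # quadrature grid on [a, b] = [0, 1]
target = lambda x: np.abs(x - 0.5)           # hypothetical target function f

def risk_and_generalized_gradient(w, b, v, c):
    pre = np.outer(w, xs) + b[:, None]       # pre-activations, shape (H, len(xs))
    act = np.maximum(pre, 0.0)               # ReLU activations
    res = c + v @ act - target(xs)           # residual N_theta(x) - f(x)
    ind = (pre > 0.0).astype(float)          # indicator of the active region of each neuron
    gw = 2.0 * np.mean(v[:, None] * ind * xs * res, axis=1)
    gb = 2.0 * np.mean(v[:, None] * ind * res, axis=1)
    gv = 2.0 * np.mean(act * res, axis=1)
    gc = 2.0 * np.mean(res)
    # gradient components of degenerate neurons (w_i = b_i = 0) vanish; with the strict
    # inequality in `ind` this is automatic, the explicit step only mirrors the convention
    degenerate = (w == 0.0) & (b == 0.0)
    gw[degenerate] = 0.0
    gb[degenerate] = 0.0
    return np.mean(res ** 2), gw, gb, gv, gc

rng = np.random.default_rng(0)
w, b, v, c = rng.normal(size=H), rng.normal(size=H), rng.normal(size=H), 0.0
w[0] = b[0] = 0.0                            # neuron 0 is degenerate at initialization
dt = 1e-2
for _ in range(2000):                        # explicit Euler step Theta <- Theta - dt * G(Theta)
    risk, gw, gb, gv, gc = risk_and_generalized_gradient(w, b, v, c)
    w, b, v, c = w - dt * gw, b - dt * gb, v - dt * gv, c - dt * gc

print(f"final risk {risk:.5f}; neuron 0 stays degenerate: w[0]={w[0]}, b[0]={b[0]}")
```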

    Proposition 3.1. Assume Setting 2.1 and let $ \theta \in \mathbb{R}^ \mathfrak{d} $. Then there exists $ \Theta \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ which satisfies for all $ t \in [0, \infty) $, $ s \in [t, \infty) $ that

    $ \Theta_t = \theta - \int_0^t \mathcal{G}(\Theta_u) \, \mathrm{d}u \qquad \text{and} \qquad \mathbf{D}^{\Theta_t} \subseteq \mathbf{D}^{\Theta_s}. $ (3.1)

    Proof of Proposition 3.1. We prove the statement by induction on the quantity $ H - \# (\mathbf{D}^\theta) \in \mathbb{N} \cap [0, H] $. Assume first that $ H - \# (\mathbf{D}^\theta) = 0 $, i.e., $ \mathbf{D}^\theta = \left\{ {{1, 2, \ldots, H}} \right\} $. Observe that this implies that $ \mathfrak{w}^{{\theta}} = 0 $ and $ \mathfrak{b}^{{\theta}} = 0 $. In the following let $ \kappa \in \mathbb{R} $ satisfy

    $ \kappa = \int_{[\mathscr{a}, \mathscr{b}]^d} f(x) \, \mathfrak{p}(x) \, \lambda(\mathrm{d}x). $ (3.2)

    Note that the Picard–Lindelöf Theorem shows that there exists a unique $ c \in C([0, \infty), \mathbb{R}) $ which satisfies for all $ t \in [0, \infty) $ that

    $ c(0) = \mathfrak{c}^{\theta} \qquad \text{and} \qquad c(t) = c(0) + 2 \kappa t - 2 \left( \int_{[\mathscr{a}, \mathscr{b}]^d} \mathfrak{p}(x) \, \lambda(\mathrm{d}x) \right) \left( \int_0^t c(s) \, \mathrm{d}s \right). $ (3.3)
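
    As a side remark (under the additional assumption that $ \mu := \int_{[\mathscr{a}, \mathscr{b}]^d} \mathfrak{p}(x) \, \lambda(\mathrm{d}x) > 0 $; this assumption is not needed in the proof), the linear integral equation (3.3) can be solved in closed form: differentiating (3.3) yields $ c'(t) = 2 \kappa - 2 \mu \, c(t) $ and therefore $ c(t) = \tfrac{\kappa}{\mu} + \bigl( \mathfrak{c}^{\theta} - \tfrac{\kappa}{\mu} \bigr) e^{-2 \mu t} $, i.e., in this base case the last component of the flow converges exponentially fast to the weighted mean $ \kappa / \mu $ of the target function $ f $.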

    Next let $ \Theta \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ satisfy for all $ t \in [0, \infty) $, $ i \in \left\{ {{ 1, 2, \ldots, H }} \right\} $, $ j \in \left\{ {{1, 2, \ldots, d}} \right\} $ that

    $ \mathfrak{w}^{\Theta_t}_{i,j} = \mathfrak{w}^{\theta}_{i,j} = \mathfrak{b}^{\Theta_t}_i = \mathfrak{b}^{\theta}_i = 0, \qquad \mathfrak{v}^{\Theta_t}_i = \mathfrak{v}^{\theta}_i, \qquad \text{and} \qquad \mathfrak{c}^{\Theta_t} = c(t). $ (3.4)

    Observe that (2.7), (3.3), and (3.4) ensure for all $ t \in [0, \infty) $ that

    $ \begin{aligned} \mathfrak{c}^{\Theta_t} & = \mathfrak{c}^{\theta} + 2 \kappa t - 2 \left( \int_{[\mathscr{a}, \mathscr{b}]^d} \mathfrak{p}(x) \, \lambda(\mathrm{d}x) \right) \left( \int_0^t \mathfrak{c}^{\Theta_s} \, \mathrm{d}s \right) = \mathfrak{c}^{\theta} - 2 \int_0^t \left( - \kappa + \int_{[\mathscr{a}, \mathscr{b}]^d} \mathfrak{c}^{\Theta_s} \, \mathfrak{p}(x) \, \lambda(\mathrm{d}x) \right) \mathrm{d}s \\ & = \mathfrak{c}^{\theta} - 2 \int_0^t \int_{[\mathscr{a}, \mathscr{b}]^d} \left( \mathfrak{c}^{\Theta_s} + \sum_{i=1}^H \left[ \mathfrak{v}^{\Theta_s}_i \max\left\{ \mathfrak{b}^{\Theta_s}_i + \sum_{j=1}^d \mathfrak{w}^{\Theta_s}_{i,j} x_j, 0 \right\} \right] - f(x) \right) \mathfrak{p}(x) \, \lambda(\mathrm{d}x) \, \mathrm{d}s \\ & = \mathfrak{c}^{\theta} - 2 \int_0^t \int_{[\mathscr{a}, \mathscr{b}]^d} \bigl( \mathcal{N}^{\Theta_s}(x) - f(x) \bigr) \, \mathfrak{p}(x) \, \lambda(\mathrm{d}x) \, \mathrm{d}s = \mathfrak{c}^{\theta} - \int_0^t \mathcal{G}_{\mathfrak{d}}(\Theta_s) \, \mathrm{d}s. \end{aligned} $ (3.5)

    Next note that (3.4) and (2.7) show for all $ t \in [0, \infty) $, $ i \in \mathbb{N} \cap [1, \mathfrak{d}) $ that $ \mathbf{D}^{\Theta_t} = \left\{ {{1, 2, \ldots, H}} \right\} $ and $ \mathcal{G}_i (\Theta_t) = 0 $. Combining this with (3.4) and (3.5) proves that $ \Theta $ satisfies (3.1). This establishes the claim in the case $ \# (\mathbf{D}^\theta) = H $.

    For the induction step assume that $ \# (\mathbf{D}^\theta) < H $ and assume that for all $ \vartheta \in \mathbb{R}^ \mathfrak{d} $ with $ \# (\mathbf{D}^\vartheta) > \# (\mathbf{D}^\theta) $ there exists $ \Theta \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ which satisfies for all $ t \in [0, \infty) $, $ s \in [t, \infty) $ that $ \Theta_t = \vartheta - \int_0^t \mathcal{G} (\Theta_u) \, \mathrm{d} u $ and $ \mathbf{D}^{\Theta_t} \subseteq \mathbf{D}^{\Theta_s} $. In the following let $ U \subseteq \mathbb{R}^ \mathfrak{d} $ satisfy

    $ U = \{ \vartheta \in \mathbb{R}^{\mathfrak{d}} \colon \mathbf{D}^{\vartheta} \subseteq \mathbf{D}^{\theta} \} $ (3.6)

    and let $ \mathfrak{G} \colon U \to \mathbb{R}^ \mathfrak{d} $ satisfy for all $ \vartheta \in U $, $ i \in \left\{ {{ 1, 2, \ldots, \mathfrak{d} }} \right\} $ that

    $ \mathfrak{G}_i(\vartheta) = \begin{cases} 0 & \colon i \in \{ (\ell - 1) d + j \colon \ell \in \mathbf{D}^{\theta}, \, j \in \mathbb{N} \cap [1, d] \} \cup \{ H d + \ell \colon \ell \in \mathbf{D}^{\theta} \} \\ \mathcal{G}_i(\vartheta) & \colon \text{else}. \end{cases} $ (3.7)

    Observe that (3.6) assures that $ U \subseteq \mathbb{R}^ \mathfrak{d} $ is open. In addition, note that Corollary 2.7 implies that $ \mathfrak{G} $ is locally Lipschitz continuous. Combining this with the Picard–Lindelöf Theorem demonstrates that there exist a unique maximal $ \tau \in (0, \infty] $ and $ \Psi \in C([0, \tau), U) $ which satisfy for all $ t \in [0, \tau) $ that

    $ \Psi_t = \theta - \int_0^t \mathfrak{G}(\Psi_u) \, \mathrm{d}u. $ (3.8)

    Next observe that (3.7) ensures that for all $ t \in [0, \tau) $, $ i \in \mathbf{D}^\theta $, $ j \in \left\{ {{1, 2, \ldots, d}} \right\} $ we have that

    $ \mathfrak{w}^{\Psi_t}_{i,j} = \mathfrak{w}^{\theta}_{i,j} = \mathfrak{b}^{\Psi_t}_i = \mathfrak{b}^{\theta}_i = 0 \qquad \text{and} \qquad \mathfrak{v}^{\Psi_t}_i = \mathfrak{v}^{\theta}_i. $ (3.9)

    This, (3.7), and (2.7) demonstrate for all $ t \in [0, \tau) $ that $ \mathcal{G} (\Psi_t) = \mathfrak{G} (\Psi_t) $. In addition, note that (3.6) and (3.9) imply for all $ t \in [0, \tau) $ that $ \mathbf{D}^{\Psi_t} = \mathbf{D}^\theta $. Hence, if $ \tau = \infty $ then $ \Psi $ satisfies (3.1). Next assume that $ \tau < \infty $. Observe that the Cauchy-Schwarz inequality and [37, Lemma 3.1] prove for all $ s, t \in [0, \tau) $ with $ s \leq t $ that

    $ \|\Psi_t - \Psi_s\| \le \int_s^t \|\mathcal{G}(\Psi_u)\| \, \mathrm{d}u \le (t - s)^{1/2} \left[ \int_s^t \|\mathcal{G}(\Psi_u)\|^2 \, \mathrm{d}u \right]^{1/2} \le (t - s)^{1/2} \left[ \int_0^t \|\mathcal{G}(\Psi_u)\|^2 \, \mathrm{d}u \right]^{1/2} = (t - s)^{1/2} \bigl( \mathcal{L}(\Psi_0) - \mathcal{L}(\Psi_t) \bigr)^{1/2} \le (t - s)^{1/2} \bigl( \mathcal{L}(\Psi_0) \bigr)^{1/2}. $ (3.10)

    Hence, we obtain for all $ (t_n) _{n \in \mathbb{N}} \subseteq [0, \tau) $ with $ \liminf_{n \to \infty} t_n = \tau $ that $ (\Psi_{t_n}) $ is a Cauchy sequence. This implies that $ \vartheta : = \lim_{t \uparrow \tau} \Psi_t \in \mathbb{R}^ \mathfrak{d} $ exists. Furthermore, note that the fact that $ \tau $ is maximal proves that $ \vartheta \notin U $. Therefore, we have that $ \mathbf{D}^\vartheta \backslash \mathbf{D}^\theta \not = \varnothing $. Moreover, observe that (3.9) shows that for all $ i \in \mathbf{D}^\theta $, $ j \in \left\{ {{1, 2, \ldots, d}} \right\} $ it holds that $ \mathfrak{w}^{{\vartheta}}_{i, j} = \mathfrak{b}^{{\vartheta}}_i = 0 $ and, therefore, $ i \in \mathbf{D}^\vartheta $. This demonstrates that $ \# (\mathbf{D}^\vartheta) > \# (\mathbf{D}^\theta) $. Combining this with the induction hypothesis ensures that there exists $ \Phi \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ which satisfies for all $ t \in [0, \infty) $, $ s \in [t, \infty) $ that

    $ \Phi_t = \vartheta - \int_0^t \mathcal{G}(\Phi_u) \, \mathrm{d}u \qquad \text{and} \qquad \mathbf{D}^{\Phi_t} \subseteq \mathbf{D}^{\Phi_s}. $ (3.11)

    In the following let $ \Theta \colon [0, \infty) \to \mathbb{R}^ \mathfrak{d} $ satisfy for all $ t \in [0, \infty) $ that

    $ \Theta_t = \begin{cases} \Psi_t & \colon t \in [0, \tau) \\ \Phi_{t - \tau} & \colon t \in [\tau, \infty). \end{cases} $ (3.12)

    Note that the fact that $ \vartheta = \lim_{t \uparrow \tau} \Psi_t $ and the fact that $ \Phi_0 = \vartheta $ imply that $ \Theta $ is continuous. Furthermore, observe that the fact that $ \mathcal{G} $ is locally bounded and (3.8) ensure that

    $ \Theta_\tau = \vartheta = \lim_{t \uparrow \tau} \Psi_t = \lim_{t \uparrow \tau} \left[ \theta - \int_0^t \mathcal{G}(\Psi_s) \, \mathrm{d}s \right] = \theta - \int_0^\tau \mathcal{G}(\Psi_s) \, \mathrm{d}s = \theta - \int_0^\tau \mathcal{G}(\Theta_s) \, \mathrm{d}s. $ (3.13)

    Hence, we obtain for all $ t \in [\tau, \infty) $ that

    $ \Theta_t = (\Theta_t - \Theta_\tau) + \Theta_\tau = (\Phi_{t - \tau} - \Phi_0) + \Theta_\tau = - \int_0^{t - \tau} \mathcal{G}(\Phi_s) \, \mathrm{d}s + \theta - \int_0^\tau \mathcal{G}(\Theta_s) \, \mathrm{d}s = - \int_\tau^t \mathcal{G}(\Theta_s) \, \mathrm{d}s + \theta - \int_0^\tau \mathcal{G}(\Theta_s) \, \mathrm{d}s = \theta - \int_0^t \mathcal{G}(\Theta_s) \, \mathrm{d}s. $ (3.14)

    This shows that $ \Theta $ satisfies (3.1). The proof of Proposition 3.1 is thus complete.

    Lemma 3.2. Assume Setting 2.1 and let $ \theta \in \mathbb{R}^ \mathfrak{d} $, $ \Theta^1, \Theta^2 \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ satisfy for all $ t \in [0, \infty) $, $ s \in [t, \infty) $, $ k \in \left\{ {{ 1, 2}} \right\} $ that

    $ \Theta^k_t = \theta - \int_0^t \mathcal{G}(\Theta^k_u) \, \mathrm{d}u \qquad \text{and} \qquad \mathbf{D}^{\Theta^k_t} \subseteq \mathbf{D}^{\Theta^k_s}. $ (3.15)

    Then it holds for all $ t \in [0, \infty) $ that $ \Theta_t^1 = \Theta_t^2 $.

    Proof of Lemma 3.2. Assume for the sake of contradiction that there exists $ t \in [0, \infty) $ such that $ \Theta_t^1 \not = \Theta_t^2 $. By translating the variable $ t $ if necessary, we may assume without loss of generality that $ \inf \left\{ {{t \in [0, \infty) \colon \Theta_t^1 \not = \Theta_t^2}} \right\} = 0 $. Next note that the fact that $ \Theta^1 $ and $ \Theta^2 $ are continuous implies that there exists $ \delta \in (0, \infty) $ which satisfies for all $ t \in [0, \delta] $, $ k \in \left\{ {{ 1, 2}} \right\} $ that $ \mathbf{D}^{\Theta_t^k} \subseteq \mathbf{D}^\theta $. Furthermore, observe that (3.15) ensures for all $ t \in [0, \infty) $, $ i \in \mathbf{D}^\theta $, $ k \in \left\{ {{ 1, 2}} \right\} $ that $ i \in \mathbf{D}^{\Theta_t^k} $. Hence, we obtain for all $ t \in [0, \infty) $, $ i \in \mathbf{D}^\theta $, $ j \in \left\{ {{1, 2, \ldots, d}} \right\} $, $ k \in \left\{ {{ 1, 2}} \right\} $ that

    $ \mathcal{G}_{(i-1)d+j}(\Theta^k_t) = \mathcal{G}_{Hd+i}(\Theta^k_t) = \mathcal{G}_{H(d+1)+i}(\Theta^k_t) = 0. $ (3.16)

    In addition, note that the fact that $ \Theta^1 $ and $ \Theta^2 $ are continuous implies that there exists a compact $ K \subseteq \left\{ {{\vartheta \in \mathbb{R}^ \mathfrak{d} \colon \mathbf{D}^\vartheta \subseteq \mathbf{D}^\theta }} \right\} $ which satisfies for all $ t \in [0, \delta] $, $ k \in \left\{ {{1, 2}} \right\} $ that $ \Theta_t^k \in K $. Moreover, observe that Corollary 2.7 proves that for all $ i \in \left\{ {{1, 2, \ldots, H}} \right\} \backslash \mathbf{D}^\theta $, $ j \in \left\{ {{1, 2, \ldots, d}} \right\} $ it holds that $ \mathcal{G}_{(i - 1) d + j }, \mathcal{G}_{ H d + i }, \mathcal{G}_{ H (d+1) + i }, \mathcal{G} _ \mathfrak{d} \colon K \to \mathbb{R} $ are Lipschitz continuous. This and (3.16) show that there exists $ L \in (0, \infty) $ such that for all $ t \in [0, \delta] $ we have that

    $ \|\mathcal{G}(\Theta^1_t) - \mathcal{G}(\Theta^2_t)\| \le L \, \|\Theta^1_t - \Theta^2_t\|. $ (3.17)

    In the following let $ M \colon [0, \infty) \to [0, \infty) $ satisfy for all $ t \in [0, \infty) $ that $ M_t = \sup_{s \in (0, t] } ||\Theta_s^1 - \Theta_s^2|| $. Note that the fact that $ \inf \left\{ {{t \in [0, \infty) \colon \Theta_t^1 \not = \Theta_t^2}} \right\} = 0 $ proves for all $ t \in (0, \infty) $ that $ M_t > 0 $. Moreover, observe that (3.17) ensures for all $ t \in (0, \delta) $ that

    $ \|\Theta^1_t - \Theta^2_t\| = \left\| \int_0^t \mathcal{G}(\Theta^1_u) \, \mathrm{d}u - \int_0^t \mathcal{G}(\Theta^2_u) \, \mathrm{d}u \right\| \le \int_0^t \|\mathcal{G}(\Theta^1_u) - \mathcal{G}(\Theta^2_u)\| \, \mathrm{d}u \le L \int_0^t \|\Theta^1_u - \Theta^2_u\| \, \mathrm{d}u \le L t M_t. $ (3.18)

    Combining this with the fact that $ M $ is non-decreasing shows for all $ t \in (0, \delta) $, $ s \in (0, t] $ that

    $ \|\Theta^1_s - \Theta^2_s\| \le L s M_s \le L t M_t. $ (3.19)

    This demonstrates for all $ t \in (0, \min \left\{ {{L^{-1}, \delta }} \right\}) $ that

    $ 0 < M_t \le L t M_t < M_t, $ (3.20)

    which is a contradiction. The proof of Lemma 3.2 is thus complete.

    Theorem 3.3. Assume Setting 2.1 and let $ \theta \in \mathbb{R}^ \mathfrak{d} $. Then there exists a unique $ \Theta \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ which satisfies for all $ t \in [0, \infty) $, $ s \in [t, \infty) $ that

    $ \Theta_t = \theta - \int_0^t \mathcal{G}(\Theta_u) \, \mathrm{d}u \qquad \text{and} \qquad \mathbf{D}^{\Theta_t} \subseteq \mathbf{D}^{\Theta_s}. $ (3.21)

    Proof of Theorem 3.3. Proposition 3.1 establishes the existence and Lemma 3.2 establishes the uniqueness. The proof of Theorem 3.3 is thus complete.

    In this section we establish in Corollary 4.10 in Subsection 4.3 below that under the assumption that both the target function $ f \colon [\mathscr{a}, \mathscr{b}] ^d \to \mathbb{R} $ and the unnormalized density function $ \mathfrak{p} \colon [\mathscr{a}, \mathscr{b}]^d \to [0, \infty) $ are piecewise polynomial in the sense of Definition 4.9 in Subsection 4.3, the risk function $ \mathcal{L} \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $ is a semialgebraic function in the sense of Definition 4.3 in Subsection 4.1. In Definition 4.9 we specify precisely what we mean by a piecewise polynomial function, in Definition 4.2 in Subsection 4.1 we recall the notion of a semialgebraic set, and in Definition 4.3 we recall the notion of a semialgebraic function. In the scientific literature, Definitions 4.2 and 4.3 can be found, in a slightly different presentational form, e.g., in Bierstone & Milman [40, Definitions 1.1 and 1.2] and Attouch et al. [8, Definition 2.1].

    Note that the risk function $ \mathcal{L} \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R} $ is given through a parametric integral in the sense that for all $ \theta \in \mathbb{R}^{ \mathfrak{d} } $ we have that

    $ \mathcal{L}(\theta) = \int_{[\mathscr{a}, \mathscr{b}]^d} \bigl( f(y) - \mathcal{N}^{\theta}(y) \bigr)^2 \, \mathfrak{p}(y) \, \lambda(\mathrm{d}y). $ (4.1)

    In general, parametric integrals of semialgebraic functions are no longer semialgebraic functions and the characterization of functions that can occur as such integrals is quite involved (cf. Kaiser [41]). This is the reason why we introduce in Definition 4.6 in Subsection 4.2 below a suitable subclass of the class of semialgebraic functions which is rich enough to contain the realization functions of ANNs with ReLU activation (cf. (4.30) in Subsection 4.2 below) and which can be shown to be closed under integration (cf. Proposition 4.8 in Subsection 4.2 below for the precise statement).

    Definition 4.1 (Set of polynomials). Let $ n \in \mathbb{N}_0 $. Then we denote by $ \mathscr{P}_n \subseteq C(\mathbb{R}^n, \mathbb{R}) $ the set of all polynomials from $ \mathbb{R}^n $ to $ \mathbb{R} $.

    Note that $ \mathbb{R}^0 = \{ 0 \} $ and that $ C(\mathbb{R}^0, \mathbb{R}) = C(\{ 0 \}, \mathbb{R}) $ can be identified with $ \mathbb{R} $; in particular, $ \mathscr{P}_0 $ consists precisely of the constant functions on $ \mathbb{R}^0 $.

    Definition 4.2 (Semialgebraic sets). Let $ n \in \mathbb{N} $ and let $ A \subseteq \mathbb{R}^n $ be a set. Then we say that $ A $ is a semialgebraic set if and only if there exist $ k \in \mathbb{N} $ and $ (P_{i, j, \ell })_{ (i, j, \ell) \in \left\{ {{1, 2, \ldots, k}} \right\} ^2 \times \left\{ {{0, 1}} \right\}} \subseteq \mathscr{P}_n $ such that

    $ A = \bigcup_{i=1}^k \bigcap_{j=1}^k \{ x \in \mathbb{R}^n \colon P_{i,j,0}(x) = 0 < P_{i,j,1}(x) \} $ (4.2)

    (cf. Definition 4.1).

    Definition 4.3 (Semialgebraic functions). Let $ m, n \in \mathbb{N} $ and let $ f \colon \mathbb{R}^n \to \mathbb{R}^m $ be a function. Then we say that $ f $ is a semialgebraic function if and only if it holds that $ \left\{ {{ (x, f (x)) \colon x \in \mathbb{R}^n }} \right\} \subseteq \mathbb{R}^{m+n} $ is a semialgebraic set (cf. Definition 4.2).
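
    A standard example (included here only for orientation and anticipating (4.30) below): the ReLU function $ \mathbb{R} \ni x \mapsto \max\{ x, 0 \} \in \mathbb{R} $ is semialgebraic, since its graph $ \{ (x, y) \in \mathbb{R}^2 \colon x \le 0, \, y = 0 \} \cup \{ (x, y) \in \mathbb{R}^2 \colon x \ge 0, \, y - x = 0 \} $ is a finite union of sets described by polynomial equalities and inequalities and is therefore a semialgebraic set.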

    Lemma 4.4. Let $ n \in \mathbb{N} $ and let $ f, g \colon \mathbb{R}^n \to \mathbb{R} $ be semialgebraic functions (cf. Definition 4.3). Then

    (i) it holds that $ \mathbb{R}^n \ni x \mapsto f(x) + g(x) \in \mathbb{R} $ is semialgebraic and

    (ii) it holds that $ \mathbb{R}^n \ni x \mapsto f(x) g(x) \in \mathbb{R} $ is semialgebraic.

    Proof of Lemma 4.4. Note that, e.g., Coste [42, Corollary 2.9] (see, e.g., also Bierstone & Milman [40, Section 1]) establishes items (i) and (ii). The proof of Lemma 4.4 is thus complete.

    Definition 4.5 (Set of rational functions). Let $ n \in \mathbb{N} $. Then we denote by $ \mathscr{R}_n $ the set given by

    $ \mathscr{R}_n = \left\{ R \colon \mathbb{R}^n \to \mathbb{R} \colon \left[ \exists \, P, Q \in \mathscr{P}_n \colon \forall \, x \in \mathbb{R}^n \colon R(x) = \begin{cases} \frac{P(x)}{Q(x)} & \colon Q(x) \ne 0 \\ 0 & \colon Q(x) = 0 \end{cases} \right] \right\} $ (4.3)

    (cf. Definition 4.1).

    Definition 4.6. Let $ m \in \mathbb{N} $, $ n \in \mathbb{N}_0 $. Then we denote by $ \mathscr{A}_{m, n} $ the $ \mathbb{R} $-vector space given by

    $ \mathscr{A}_{m,n} = \operatorname{span}_{\mathbb{R}} \left( \left\{ f \colon \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R} \colon \left[ \exists \, r \in \mathbb{N}, \, A_1, A_2, \ldots, A_r \in \{ \{0\}, [0, \infty), (0, \infty) \}, \, R \in \mathscr{R}_m, \, Q \in \mathscr{P}_n, \, P = (P_{i,j})_{(i,j) \in \{1, 2, \ldots, r\} \times \{0, 1, \ldots, n\}} \subseteq \mathscr{P}_m \colon \forall \, \theta \in \mathbb{R}^m, \, x = (x_1, \ldots, x_n) \in \mathbb{R}^n \colon f(\theta, x) = R(\theta) \, Q(x) \left[ \prod_{i=1}^r \mathbf{1}_{A_i}\!\left( P_{i,0}(\theta) + \sum_{j=1}^n P_{i,j}(\theta) x_j \right) \right] \right] \right\} \right) $ (4.4)

    (cf. Definitions 4.1 and 4.5).
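
    For instance (an illustrative special case, not needed later): for $ m = 2 $, $ n = 1 $ the function $ f(\theta, x) = \max\{ \theta_1 x + \theta_2, 0 \} = \theta_2 \, \mathbf{1}_{[0, \infty)}(\theta_2 + \theta_1 x) + \theta_1 \, x \, \mathbf{1}_{[0, \infty)}(\theta_2 + \theta_1 x) $ belongs to $ \mathscr{A}_{2, 1} $, since each summand is of the form $ R(\theta) \, Q(x) \, \mathbf{1}_{A_1}( P_{1,0}(\theta) + P_{1,1}(\theta) x ) $ with polynomial $ R \in \mathscr{R}_2 $, $ Q \in \mathscr{P}_1 $, and $ A_1 = [0, \infty) $. This is exactly the structure of a single ReLU neuron and indicates why the realization functions in (4.30) below lie in $ \mathscr{A}_{\mathfrak{d}, d} $.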

    Lemma 4.7. Let $ m \in \mathbb{N} $, $ f \in \mathscr{A}_{m, 0 } $ (cf. Definition 4.6). Then $ f $ is semialgebraic (cf. Definition 4.3).

    Proof of Lemma 4.7. Throughout this proof let $ r \in \mathbb{N} $, $ A_1, A_2, \ldots, A_r \in \left\{ {{ \left\{ {{0}} \right\}, [0, \infty), (0, \infty)}} \right\} $, $ R \in \mathscr{R}_m $, $ P = (P_i)_{ i \in \left\{ {{1, 2, \ldots, r }} \right\}} \subseteq \mathscr{P}_m $, and let $ g \colon \mathbb{R}^m \to \mathbb{R} $ satisfy for all $ \theta \in \mathbb{R}^m $ that

    $ g(\theta) = R(\theta) \prod_{i=1}^r \mathbf{1}_{A_i}\bigl( P_i(\theta) \bigr) $ (4.5)

    (cf. Definitions 4.1 and 4.5). Due to the fact that sums of semialgebraic functions are again semialgebraic (cf. Lemma 4.4), it suffices to show that $ g $ is semialgebraic. Furthermore, observe that for all $ y \in \mathbb{R} $ it holds that $ \mathbf{1}_{\smash{{(0, \infty)}}} (y) = 1 - \mathbf{1}_{\smash{{[0, \infty) }}} (- y) $ and $ \mathbf{1}_{\smash{{\left\{ {{0}} \right\}}}} (y) = \mathbf{1}_{\smash{{[0, \infty) }}} (y) \mathbf{1}_{\smash{{[0, \infty) }}} (- y) $. Hence, by linearity we may assume for all $ i \in \left\{ {{1, 2, \ldots, r }} \right\} $ that $ A_i = [0, \infty) $. Next let $ Q_1, Q_2 \in \mathscr{P}_m $ satisfy for all $ x \in \mathbb{R}^m $ that

    $ R(x) = \begin{cases} \frac{Q_1(x)}{Q_2(x)} & \colon Q_2(x) \ne 0 \\ 0 & \colon Q_2(x) = 0. \end{cases} $ (4.6)

    Note that the graph of $ \mathbb{R}^m \ni \theta \mapsto R(\theta) \in \mathbb{R} $ is given by

    $ \{ (\theta, y) \in \mathbb{R}^m \times \mathbb{R} \colon Q_2(\theta) = 0, \, y = 0 \} \cup \{ (\theta, y) \in \mathbb{R}^m \times \mathbb{R} \colon Q_2(\theta) \ne 0, \, Q_2(\theta) y - Q_1(\theta) = 0 \}. $ (4.7)

    Since both of these sets are described by polynomial equations and inequalities, it follows that $ \mathbb{R}^m \ni \theta \mapsto R(\theta) \in \mathbb{R} $ is semialgebraic. In addition, observe that for all $ i \in \left\{ {{1, 2, \ldots, r}} \right\} $ the graph of $ \mathbb{R}^m \ni \theta \mapsto \mathbf{1}_{\smash{{[0, \infty) }}} \left({{ P_i (\theta) }} \right) \in \mathbb{R} $ is given by

    $ \{ (\theta, y) \in \mathbb{R}^m \times \mathbb{R} \colon P_i(\theta) < 0, \, y = 0 \} \cup \{ (\theta, y) \in \mathbb{R}^m \times \mathbb{R} \colon P_i(\theta) \ge 0, \, y = 1 \}. $ (4.8)

    This demonstrates for all $ i \in \left\{ {{1, 2, \ldots, r}} \right\} $ that $ \mathbb{R}^m \ni \theta \mapsto \mathbf{1}_{\smash{{[0, \infty) }}} \left({{ P_i (\theta) }} \right) \in \mathbb{R} $ is semialgebraic. Combining this and (4.5) with Lemma 4.4 demonstrates that $ g $ is semialgebraic. The proof of Lemma 4.7 is thus complete.

    Proposition 4.8. Let $ m, n \in \mathbb{N} $, $ \mathscr{a} \in \mathbb{R} $, $ \mathscr{b} \in (\mathscr{a}, \infty) $, $ f \in \mathscr{A}_{m, n} $ (cf. Definition 4.6). Then

    $ \left[ \mathbb{R}^m \times \mathbb{R}^{n-1} \ni (\theta, x_1, \ldots, x_{n-1}) \mapsto \int_{\mathscr{a}}^{\mathscr{b}} f(\theta, x_1, \ldots, x_n) \, \mathrm{d}x_n \in \mathbb{R} \right] \in \mathscr{A}_{m, n-1}. $ (4.9)

    Proof of Proposition 4.8. By linearity of the integral it suffices to consider a function $ f $ of the form

    $ f(\theta, x) = R(\theta) \, Q(x) \prod_{i=1}^r \mathbf{1}_{A_i}\!\left( P_{i,0}(\theta) + \sum_{j=1}^n P_{i,j}(\theta) x_j \right) $ (4.10)

    where $ r \in \mathbb{N} $, $ \left({{P_{i, j}}} \right)_{(i, j) \in \left\{ {{1, 2, \ldots, r}} \right\} \times \left\{ {{0, 1, \ldots, n }} \right\} } \subseteq \mathscr{P}_m $, $ A_1, A_2, \ldots, A_r \in \left\{ {{ \left\{ {{0}} \right\}, (0, \infty), [0, \infty) }} \right\} $, $ Q \in \mathscr{P}_n $, and $ R \in \mathscr{R}_m $ (cf. Definitions 4.1 and 4.5). Moreover, note that for all $ y \in \mathbb{R} $ it holds that $ \mathbf{1}_{\smash{{(0, \infty)}}} (y) = 1 - \mathbf{1}_{\smash{{[0, \infty) }}} (- y) $ and $ \mathbf{1}_{\smash{{\left\{ {{0}} \right\}}}} (y) = \mathbf{1}_{\smash{{[0, \infty) }}} (y) \mathbf{1}_{\smash{{[0, \infty) }}} (- y) $. Hence, by linearity we may assume that $ A_i = [0, \infty) $ for all $ i \in \left\{ {{1, 2, \ldots, r }} \right\} $. Furthermore, by linearity we may assume that $ Q $ is of the form

    $ Q(x_1, \ldots, x_n) = \prod_{\ell=1}^n (x_\ell)^{i_\ell} $ (4.11)

    with $ i_1, i_2, \ldots, i_n \in \mathbb{N}_0 $. In the following let $ \mathfrak{s} \colon \mathbb{R} \to \mathbb{R} $ satisfy for all $ x \in \mathbb{R} $ that $ \mathfrak{s} (x) = \mathbf{1}_{\smash{{(0, \infty)}}} (x) - \mathbf{1}_{\smash{{(0, \infty) }}} (- x) $, for every $ \theta \in \mathbb{R}^m $, $ k \in \left\{ {{-1, 0, 1}} \right\} $ let $ \mathcal{S}_k^\theta \subseteq \left\{ {{1, 2, \ldots, r}} \right\} $ satisfy $ \mathcal{S}_k^\theta = \left\{ {{ i \in \left\{ {{1, 2, \ldots, r }} \right\} \colon \mathfrak{s} (P_{i, n} (\theta)) = k }} \right\} $, and for every $ i \in \left\{ {{1, 2, \ldots, r}} \right\} $ let $ Z_i \colon \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R} $ satisfy for all $ (\theta, x) \in \mathbb{R}^m \times \mathbb{R}^n $ that

    $ Z_i(\theta, x) = - P_{i,0}(\theta) - \sum_{j=1}^{n-1} P_{i,j}(\theta) x_j. $ (4.12)

    Observe that (4.10), (4.11), and (4.12) imply for all $ \theta \in \mathbb{R}^m $, $ x = (x_1, \ldots, x_n) \in \mathbb{R}^n $ that

    $ f(\theta, x) = R(\theta) \left( \prod_{\ell=1}^n (x_\ell)^{i_\ell} \right) \left( \prod_{i=1}^r \mathbf{1}_{[0, \infty)}\bigl( P_{i,n}(\theta) x_n - Z_i(\theta, x) \bigr) \right). $ (4.13)

    This shows that $ f(\theta, x) $ can only be nonzero if

    $ \forall \, i \in \mathcal{S}^\theta_1 \colon x_n \ge \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)}, \qquad \forall \, i \in \mathcal{S}^\theta_{-1} \colon x_n \le \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)}, \qquad \forall \, i \in \mathcal{S}^\theta_0 \colon - Z_i(\theta, x) \ge 0. $ (4.14)

    Hence, if for given $ \theta \in \mathbb{R}^m $, $ (x_1, \ldots, x_{n-1}) \in \mathbb{R}^{n-1} $ there exists $ x_n \in [\mathscr{a}, \mathscr{b}] $ which satisfies these conditions then (4.13) and the fact that $ \int y ^{i_ n } \, \mathrm{d} y = \frac{1}{i_n + 1 } y^{i_n + 1 } $ imply that

    $ \int_{\mathscr{a}}^{\mathscr{b}} f(\theta, x_1, \ldots, x_n) \, \mathrm{d}x_n = \tfrac{R(\theta)}{i_n + 1} \left( \prod_{\ell=1}^{n-1} x_\ell^{i_\ell} \right) \left[ \left( \min\left\{ \mathscr{b}, \min_{j \in \mathcal{S}^\theta_{-1}} \tfrac{Z_j(\theta, x)}{P_{j,n}(\theta)} \right\} \right)^{i_n + 1} - \left( \max\left\{ \mathscr{a}, \max_{j \in \mathcal{S}^\theta_{1}} \tfrac{Z_j(\theta, x)}{P_{j,n}(\theta)} \right\} \right)^{i_n + 1} \right]. $ (4.15)

    Otherwise, we have that $ \int_ \mathscr{a}^ \mathscr{b} f (\theta, x_1, \ldots, x_n) \, \mathrm{d} x_n = 0 $. It remains to write these expressions in the different cases as a sum of functions of the required form in Definition 4.6 by introducing suitable indicator functions. Note that there are four possible cases where the integral is nonzero:

    ● It holds that $ \mathscr{a} < \max_{j \in \mathcal{S}_1^\theta} \frac{ Z_j (\theta, x) }{ P_{j, n} (\theta) } < \min_{j \in \mathcal{S}_{-1}^\theta} \frac{Z_j (\theta, x) }{ P_{j, n} (\theta) } < \mathscr{b} $. In this case, we have

    $ \int_{\mathscr{a}}^{\mathscr{b}} f(\theta, x_1, \ldots, x_n) \, \mathrm{d}x_n = \tfrac{R(\theta)}{i_n + 1} \left( \prod_{\ell=1}^{n-1} x_\ell^{i_\ell} \right) \left[ \left( \min_{j \in \mathcal{S}^\theta_{-1}} \tfrac{Z_j(\theta, x)}{P_{j,n}(\theta)} \right)^{i_n + 1} - \left( \max_{j \in \mathcal{S}^\theta_{1}} \tfrac{Z_j(\theta, x)}{P_{j,n}(\theta)} \right)^{i_n + 1} \right]. $ (4.16)

    ● It holds that $ \mathscr{a} < \max_{j \in \mathcal{S}_1^\theta} \frac{ Z_j (\theta, x) }{ P_{j, n} (\theta) } < \mathscr{b} \le \min_{j \in \mathcal{S}_{-1}^\theta} \frac{Z_j (\theta, x) }{ P_{j, n} (\theta) } $. In this case, we have

    $ \int_{\mathscr{a}}^{\mathscr{b}} f(\theta, x_1, \ldots, x_n) \, \mathrm{d}x_n = \tfrac{R(\theta)}{i_n + 1} \left( \prod_{\ell=1}^{n-1} x_\ell^{i_\ell} \right) \left[ \mathscr{b}^{i_n + 1} - \left( \max_{j \in \mathcal{S}^\theta_{1}} \tfrac{Z_j(\theta, x)}{P_{j,n}(\theta)} \right)^{i_n + 1} \right]. $ (4.17)

    ● It holds that $ \max_{j \in \mathcal{S}_1^\theta} \frac{ Z_j (\theta, x) }{ P_{j, n} (\theta) } \le \mathscr{a} < \min_{j \in \mathcal{S}_{-1}^\theta} \frac{Z_j (\theta, x) }{ P_{j, n} (\theta) } < \mathscr{b} $. In this case, we have

    $ \int_{\mathscr{a}}^{\mathscr{b}} f(\theta, x_1, \ldots, x_n) \, \mathrm{d}x_n = \tfrac{R(\theta)}{i_n + 1} \left( \prod_{\ell=1}^{n-1} x_\ell^{i_\ell} \right) \left[ \left( \min_{j \in \mathcal{S}^\theta_{-1}} \tfrac{Z_j(\theta, x)}{P_{j,n}(\theta)} \right)^{i_n + 1} - \mathscr{a}^{i_n + 1} \right]. $ (4.18)

    ● It holds that $ \max_{j \in \mathcal{S}_1^\theta} \frac{ Z_j (\theta, x) }{ P_{j, n} (\theta) } \le \mathscr{a} < \mathscr{b} \le \min_{j \in \mathcal{S}_{-1}^\theta} \frac{Z_j (\theta, x) }{ P_{j, n} (\theta) } $. In this case, we have

    $ \int_{\mathscr{a}}^{\mathscr{b}} f(\theta, x_1, \ldots, x_n) \, \mathrm{d}x_n = \tfrac{R(\theta)}{i_n + 1} \left( \prod_{\ell=1}^{n-1} x_\ell^{i_\ell} \right) \left[ \mathscr{b}^{i_n + 1} - \mathscr{a}^{i_n + 1} \right]. $ (4.19)

    Since these four cases are disjoint, by summing over all possible choices $ A, B, C \subseteq \left\{ {{1, 2, \ldots, r}} \right\} $ of the sets $ \mathcal{S}^\theta_k $, $ k \in \left\{ {{-1, 0, 1}} \right\} $, and all choices of (non-empty) subsets $ \mathcal{I}, \mathcal{J} $ of $ \mathcal{S}^\theta_1 $, $ \mathcal{S}_{-1}^\theta $ where the maximal/minimal values are achieved, we can write

    $ \int_{\mathscr{a}}^{\mathscr{b}} f(\theta, x_1, \ldots, x_n) \, \mathrm{d}x_n = \tfrac{R(\theta)}{i_n + 1} \left( \prod_{\ell=1}^{n-1} x_\ell^{i_\ell} \right) \bigl[ (I) + (II) + (III) + (IV) \bigr], $ (4.20)

    where $ (I), (II), (III), (IV) $ denote the functions of $ \theta \in \mathbb{R}^m $ and $ (x_1, \ldots, x_{n-1}) \in \mathbb{R}^{ n - 1 } $ given by

    $ (I) = \sum_{A \,\dot{\cup}\, B \,\dot{\cup}\, C = \{1, \ldots, r\}} \left[ \prod_{j \in A} \mathbf{1}_{(0, \infty)}\bigl( P_{j,n}(\theta) \bigr) \prod_{j \in B} \mathbf{1}_{(0, \infty)}\bigl( - P_{j,n}(\theta) \bigr) \prod_{j \in C} \Bigl( \mathbf{1}_{\{0\}}\bigl( P_{j,n}(\theta) \bigr) \, \mathbf{1}_{[0, \infty)}\bigl( - Z_j(\theta, x) \bigr) \Bigr) \right] \sum_{\varnothing \ne \mathcal{I} \subseteq A} \sum_{\varnothing \ne \mathcal{J} \subseteq B} \left[ \left[ \prod_{i \in \mathcal{I}} \Bigl( \mathbf{1}_{(\mathscr{a}, \mathscr{b})}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \Bigr) \, \mathbf{1}_{\{0\}}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} - \tfrac{Z_{\min \mathcal{I}}(\theta, x)}{P_{\min \mathcal{I}, n}(\theta)} \Bigr) \Bigr) \prod_{j \in A \setminus \mathcal{I}} \mathbf{1}_{(0, \infty)}\Bigl( \tfrac{Z_{\min \mathcal{I}}(\theta, x)}{P_{\min \mathcal{I}, n}(\theta)} - \tfrac{Z_j(\theta, x)}{P_{j,n}(\theta)} \Bigr) \prod_{i \in \mathcal{J}} \Bigl( \mathbf{1}_{(\mathscr{a}, \mathscr{b})}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \Bigr) \, \mathbf{1}_{\{0\}}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} - \tfrac{Z_{\min \mathcal{J}}(\theta, x)}{P_{\min \mathcal{J}, n}(\theta)} \Bigr) \Bigr) \prod_{j \in B \setminus \mathcal{J}} \mathbf{1}_{(0, \infty)}\Bigl( \tfrac{Z_j(\theta, x)}{P_{j,n}(\theta)} - \tfrac{Z_{\min \mathcal{J}}(\theta, x)}{P_{\min \mathcal{J}, n}(\theta)} \Bigr) \, \mathbf{1}_{(0, \infty)}\Bigl( \tfrac{Z_{\min \mathcal{J}}(\theta, x)}{P_{\min \mathcal{J}, n}(\theta)} - \tfrac{Z_{\min \mathcal{I}}(\theta, x)}{P_{\min \mathcal{I}, n}(\theta)} \Bigr) \right] \left[ \Bigl( \tfrac{Z_{\min \mathcal{J}}(\theta, x)}{P_{\min \mathcal{J}, n}(\theta)} \Bigr)^{i_n + 1} - \Bigl( \tfrac{Z_{\min \mathcal{I}}(\theta, x)}{P_{\min \mathcal{I}, n}(\theta)} \Bigr)^{i_n + 1} \right] \right], $ (4.21)
    $ (II) = \sum_{A \,\dot{\cup}\, B \,\dot{\cup}\, C = \{1, \ldots, r\}} \left[ \prod_{j \in A} \mathbf{1}_{(0, \infty)}\bigl( P_{j,n}(\theta) \bigr) \prod_{j \in B} \mathbf{1}_{(0, \infty)}\bigl( - P_{j,n}(\theta) \bigr) \prod_{j \in C} \Bigl( \mathbf{1}_{\{0\}}\bigl( P_{j,n}(\theta) \bigr) \, \mathbf{1}_{[0, \infty)}\bigl( - Z_j(\theta, x) \bigr) \Bigr) \right] \sum_{\varnothing \ne \mathcal{I} \subseteq A} \left[ \left[ \prod_{i \in \mathcal{I}} \Bigl( \mathbf{1}_{(\mathscr{a}, \mathscr{b})}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \Bigr) \, \mathbf{1}_{\{0\}}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} - \tfrac{Z_{\min \mathcal{I}}(\theta, x)}{P_{\min \mathcal{I}, n}(\theta)} \Bigr) \Bigr) \prod_{j \in A \setminus \mathcal{I}} \mathbf{1}_{(0, \infty)}\Bigl( \tfrac{Z_{\min \mathcal{I}}(\theta, x)}{P_{\min \mathcal{I}, n}(\theta)} - \tfrac{Z_j(\theta, x)}{P_{j,n}(\theta)} \Bigr) \prod_{i \in B} \mathbf{1}_{[\mathscr{b}, \infty)}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \Bigr) \right] \left[ \mathscr{b}^{i_n + 1} - \Bigl( \tfrac{Z_{\min \mathcal{I}}(\theta, x)}{P_{\min \mathcal{I}, n}(\theta)} \Bigr)^{i_n + 1} \right] \right], $ (4.22)
    $ (III) = \sum_{A \,\dot{\cup}\, B \,\dot{\cup}\, C = \{1, \ldots, r\}} \left[ \prod_{j \in A} \mathbf{1}_{(0, \infty)}\bigl( P_{j,n}(\theta) \bigr) \prod_{j \in B} \mathbf{1}_{(0, \infty)}\bigl( - P_{j,n}(\theta) \bigr) \prod_{j \in C} \Bigl( \mathbf{1}_{\{0\}}\bigl( P_{j,n}(\theta) \bigr) \, \mathbf{1}_{[0, \infty)}\bigl( - Z_j(\theta, x) \bigr) \Bigr) \right] \sum_{\varnothing \ne \mathcal{J} \subseteq B} \left[ \left[ \prod_{i \in A} \mathbf{1}_{(-\infty, \mathscr{a}]}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \Bigr) \prod_{i \in \mathcal{J}} \Bigl( \mathbf{1}_{(\mathscr{a}, \mathscr{b})}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \Bigr) \, \mathbf{1}_{\{0\}}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} - \tfrac{Z_{\min \mathcal{J}}(\theta, x)}{P_{\min \mathcal{J}, n}(\theta)} \Bigr) \Bigr) \prod_{j \in B \setminus \mathcal{J}} \mathbf{1}_{(0, \infty)}\Bigl( \tfrac{Z_j(\theta, x)}{P_{j,n}(\theta)} - \tfrac{Z_{\min \mathcal{J}}(\theta, x)}{P_{\min \mathcal{J}, n}(\theta)} \Bigr) \right] \left[ \Bigl( \tfrac{Z_{\min \mathcal{J}}(\theta, x)}{P_{\min \mathcal{J}, n}(\theta)} \Bigr)^{i_n + 1} - \mathscr{a}^{i_n + 1} \right] \right], $ (4.23)

    and

    $ (IV) = \sum_{A \,\dot{\cup}\, B \,\dot{\cup}\, C = \{1, \ldots, r\}} \left[ \prod_{j \in A} \mathbf{1}_{(0, \infty)}\bigl( P_{j,n}(\theta) \bigr) \prod_{j \in B} \mathbf{1}_{(0, \infty)}\bigl( - P_{j,n}(\theta) \bigr) \prod_{j \in C} \Bigl( \mathbf{1}_{\{0\}}\bigl( P_{j,n}(\theta) \bigr) \, \mathbf{1}_{[0, \infty)}\bigl( - Z_j(\theta, x) \bigr) \Bigr) \right] \left( \prod_{i \in A} \mathbf{1}_{(-\infty, \mathscr{a}]}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \Bigr) \prod_{i \in B} \mathbf{1}_{[\mathscr{b}, \infty)}\Bigl( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \Bigr) \right) \left[ \mathscr{b}^{i_n + 1} - \mathscr{a}^{i_n + 1} \right]. $ (4.24)

    Note that the first products over all elements of $ A, B, C $ precisely describe the conditions that $ \mathcal{S}_1^\theta = A $, $ \mathcal{S}_{ - 1 }^\theta = B $, $ \mathcal{S}_0^\theta = C $, and $ \forall \, j \in \mathcal{S}_0^\theta \colon - Z_j (\theta, x) \ge 0 $. Furthermore, observe that, e.g., in $ (I) $ we must have for all $ i \in \mathcal{I} $, $ j \in A \backslash \mathcal{I} $ that $ \frac{Z_j (\theta, x)}{ P_{j, n } (\theta) } < \frac{ Z_{\min \mathcal{I}} (\theta, x) }{P_{ \min \mathcal{I}, n} (\theta) } = \frac{Z_i (\theta, x)}{ P_{i, n } (\theta) } \in (\mathscr{a}, \mathscr{b}) $ in order to obtain a non-zero value. In other words, the maximal value of $ \frac{Z_i (\theta, x)}{ P_{i, n } (\theta) } $, $ i \in A $, is achieved exactly for $ i \in \mathcal{I} $, and similarly the minimal value of $ \frac{Z_j (\theta, x)}{ P_{j, n } (\theta) } $, $ j \in B $, is achieved exactly for $ j \in \mathcal{J} $ (and analogously in $ (II), (III) $). Moreover, note that we have for all $ i \in \mathcal{I} \subseteq A $ that

    $ \mathbf{1}_{(\mathscr{a}, \mathscr{b})}\!\left( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \right) = \mathbf{1}_{(\mathscr{a}, \infty)}\!\left( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \right) \mathbf{1}_{(-\infty, \mathscr{b})}\!\left( \tfrac{Z_i(\theta, x)}{P_{i,n}(\theta)} \right) = \mathbf{1}_{(0, \infty)}\bigl( Z_i(\theta, x) - \mathscr{a} P_{i,n}(\theta) \bigr) \, \mathbf{1}_{(0, \infty)}\bigl( \mathscr{b} P_{i,n}(\theta) - Z_i(\theta, x) \bigr). $ (4.25)

    Here $ Z_i (\theta, x) $ is polynomial in $ \theta $ and linear in $ x_1, \ldots, x_{n-1} $, and thus of the form required by Definition 4.6. Similarly, the other indicator functions can be brought into the correct form, taking into account the different signs of $ P_{j, n} (\theta) $ for $ j \in A $ and $ j \in B $. Moreover, observe that the remaining terms can be written as linear combinations of rational functions in $ \theta $ and polynomials in $ x $. Hence, we obtain that the functions defined by $ (I), (II), (III), (IV) $ are elements of $ \mathscr{A}_{m, n-1} $. The proof of Proposition 4.8 is thus complete.
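
    In the simplest instance the case distinction above reads as follows (a hypothetical example with $ m = n = 1 $ and $ [\mathscr{a}, \mathscr{b}] = [0, 1] $, not taken from the formal development): for $ f(\theta, x) = \mathbf{1}_{[0, \infty)}(\theta - x) \in \mathscr{A}_{1, 1} $ one obtains $ \int_0^1 f(\theta, x) \, \mathrm{d}x = \theta \, \mathbf{1}_{(0, \infty)}(\theta) \, \mathbf{1}_{(0, \infty)}(1 - \theta) + \mathbf{1}_{[0, \infty)}(\theta - 1) $, which is again a sum of products of rational functions of $ \theta $ and indicator functions of polynomials in $ \theta $ and hence an element of $ \mathscr{A}_{1, 0} $; the three regimes $ \theta \le 0 $, $ 0 < \theta < 1 $, and $ \theta \ge 1 $ correspond to the cases in which the effective upper integration limit $ \min\{ 1, \theta \} $ lies below $ 0 $, equals $ \theta $, or equals $ 1 $.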

    Definition 4.9. Let $ d \in \mathbb{N} $, let $ A \subseteq \mathbb{R}^d $ be a set, and let $ f \colon A \to \mathbb{R} $ be a function. Then we say that $ f $ is piecewise polynomial if and only if there exist $ n \in \mathbb{N} $, $ \alpha_1, \alpha_2, \ldots, \alpha_n \in \mathbb{R}^{n \times d} $, $ \beta_{1}, \beta_2, \ldots, \beta_n \in \mathbb{R}^n $, $ P_1, P_2, \ldots, P_n \in \mathscr{P}_d $ such that for all $ x \in A $ it holds that

    $ f(x) = \sum_{i=1}^n \left[ P_i(x) \, \mathbf{1}_{[0, \infty)^n}\bigl( \alpha_i x + \beta_i \bigr) \right] $ (4.26)

    (cf. Definition 4.1).
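
    For example (a simple illustration, not part of the formal development): for $ d = 1 $ the ReLU target $ f(x) = \max\{ x, 0 \} = x \, \mathbf{1}_{[0, \infty)}(x) $ on $ [\mathscr{a}, \mathscr{b}] $ is piecewise polynomial in the sense of Definition 4.9 (take $ n = 1 $, $ P_1(x) = x $, $ \alpha_1 = 1 $, $ \beta_1 = 0 $), and every polynomial target is covered by choosing $ \alpha_1 = 0 $ and $ \beta_1 = 0 $, in which case the indicator in (4.26) is identically one.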

    Corollary 4.10. Assume Setting 2.1 and assume that $ f $ and $ \mathfrak{p} $ are piecewise polynomial (cf. Definition 4.9). Then $ \mathcal{L} $ is semialgebraic (cf. Definition 4.3).

    Proof of Corollary 4.10. Throughout this proof let $ F \colon \mathbb{R}^d \to \mathbb{R} $ and $ \mathfrak{P} \colon \mathbb{R}^d \to \mathbb{R} $ satisfy for all $ x \in \mathbb{R}^d $ that

    $ F(x) = \begin{cases} f(x) & \colon x \in [\mathscr{a}, \mathscr{b}]^d \\ 0 & \colon x \notin [\mathscr{a}, \mathscr{b}]^d \end{cases} \qquad \text{and} \qquad \mathfrak{P}(x) = \begin{cases} \mathfrak{p}(x) & \colon x \in [\mathscr{a}, \mathscr{b}]^d \\ 0 & \colon x \notin [\mathscr{a}, \mathscr{b}]^d. \end{cases} $ (4.27)

    Note that (4.27) and the assumption that $ f $ and $ \mathfrak{p} $ are piecewise polynomial assure that

    $ \left[ \mathbb{R}^{\mathfrak{d}} \times \mathbb{R}^d \ni (\theta, x) \mapsto F(x) \in \mathbb{R} \right] \in \mathscr{A}_{\mathfrak{d}, d} \qquad \text{and} \qquad \left[ \mathbb{R}^{\mathfrak{d}} \times \mathbb{R}^d \ni (\theta, x) \mapsto \mathfrak{P}(x) \in \mathbb{R} \right] \in \mathscr{A}_{\mathfrak{d}, d} $ (4.28)

    (cf. Definition 4.6). In addition, observe that the fact that for all $ \theta \in \mathbb{R}^ \mathfrak{d} $, $ x \in \mathbb{R}^d $ we have that

    $ \mathcal{N}^{\theta}(x) = \mathfrak{c}^{\theta} + \sum_{i=1}^H \mathfrak{v}^{\theta}_i \max\!\left\{ \sum_{\ell=1}^d \mathfrak{w}^{\theta}_{i,\ell} x_\ell + \mathfrak{b}^{\theta}_i, 0 \right\} = \mathfrak{c}^{\theta} + \sum_{i=1}^H \mathfrak{v}^{\theta}_i \left( \sum_{\ell=1}^d \mathfrak{w}^{\theta}_{i,\ell} x_\ell + \mathfrak{b}^{\theta}_i \right) \mathbf{1}_{[0, \infty)}\!\left( \sum_{\ell=1}^d \mathfrak{w}^{\theta}_{i,\ell} x_\ell + \mathfrak{b}^{\theta}_i \right) $ (4.29)

    demonstrates that

    $ \left[ \mathbb{R}^{\mathfrak{d}} \times \mathbb{R}^d \ni (\theta, x) \mapsto \mathcal{N}^{\theta}(x) \in \mathbb{R} \right] \in \mathscr{A}_{\mathfrak{d}, d}. $ (4.30)

    Combining this with (4.28) and the fact that $ \mathscr{A}_{ \mathfrak{d}, d} $ is an algebra proves that

    $ \left[ \mathbb{R}^{\mathfrak{d}} \times \mathbb{R}^d \ni (\theta, x) \mapsto \bigl( \mathcal{N}^{\theta}(x) - F(x) \bigr)^2 \, \mathfrak{P}(x) \in \mathbb{R} \right] \in \mathscr{A}_{\mathfrak{d}, d}. $ (4.31)

    This, Proposition 4.8, and induction demonstrate that

    $ \left[ \mathbb{R}^{\mathfrak{d}} \ni \theta \mapsto \int_{\mathscr{a}}^{\mathscr{b}} \int_{\mathscr{a}}^{\mathscr{b}} \cdots \int_{\mathscr{a}}^{\mathscr{b}} \bigl( \mathcal{N}^{\theta}(x) - F(x) \bigr)^2 \, \mathfrak{P}(x) \, \mathrm{d}x_d \cdots \mathrm{d}x_2 \, \mathrm{d}x_1 \in \mathbb{R} \right] \in \mathscr{A}_{\mathfrak{d}, 0}. $ (4.32)

    Fubini's theorem hence implies that $ \mathcal{L} \in \mathscr{A}_{ \mathfrak{d}, 0 } $. Combining this and Lemma 4.7 shows that $ \mathcal{L} $ is semialgebraic. The proof of Corollary 4.10 is thus complete.

    In this section we employ the findings from Sections 2 and 4 to establish in Proposition 5.2 in Subsection 5.2 below, in Proposition 5.3 in Subsection 5.2, and in Theorem 5.4 in Subsection 5.3 below several convergence rate results for solutions of GF differential equations. Theorem 1.2 in the introduction is a direct consequence of Theorem 5.4. Our proof of Theorem 5.4 is based on an application of Proposition 5.3 and our proof of Proposition 5.3 uses Proposition 5.2. Our proof of Proposition 5.2, in turn, employs Proposition 5.1 in Subsection 5.1 below. In Proposition 5.1 we establish that under the assumption that the target function $ f \colon [\mathscr{a}, \mathscr{b}] ^d \to \mathbb{R} $ and the unnormalized density function $ \mathfrak{p} \colon [\mathscr{a}, \mathscr{b}] ^d \to [0, \infty) $ are piecewise polynomial (see Definition 4.9 in Subsection 4.3) we have that the risk function $ \mathcal{L} \colon \mathbb{R}^ \mathfrak{d} \to \mathbb{R} $ satisfies an appropriately generalized Kurdyka-Łojasiewicz inequality.

    In the proof of Proposition 5.1 the classical Łojasiewicz inequality for semialgebraic or subanalytic functions (cf., e.g., Bierstone & Milman [40]) is not directly applicable since the generalized gradient function $ \mathcal{G} \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ \mathfrak{d} } $ is not continuous. We will employ the more general results from Bolte et al. [9] which also apply to not necessarily continuously differentiable functions.

    The arguments used in the proof of Proposition 5.2 are slight adaptations of well-known arguments in the literature; see, e.g., Kurdyka et al. [12, Section 1], Bolte et al. [9, Theorem 4.5], or Absil et al. [6, Theorem 2.2]. On the one hand, in Kurdyka et al. [12, Section 1] and Absil et al. [6, Theorem 2.2] it is assumed that the objective function of the considered optimization problem is analytic, and in Bolte et al. [9, Theorem 4.5] it is assumed that the objective function of the considered optimization problem is convex or lower $ C^2 $; Proposition 5.2 does not require these assumptions. On the other hand, Bolte et al. [9, Theorem 4.5] consider more general differential dynamics and the considered gradients are allowed to be more general than the specific generalized gradient function $ \mathcal{G} \colon \mathbb{R}^{ \mathfrak{d} } \to \mathbb{R}^{ \mathfrak{d} } $ which is considered in Proposition 5.2.

    Proposition 5.1 (Generalized Kurdyka-Łojasiewicz inequality). Assume Setting 2.1, assume that $ \mathfrak{p} $ and $ f $ are piecewise polynomial, and let $ \vartheta \in \mathbb{R}^ \mathfrak{d} $ (cf. Definition 4.9). Then there exist $ \varepsilon, \mathfrak{D} \in (0, \infty) $, $ \alpha \in (0, 1) $ such that for all $ \theta \in B_\varepsilon (\vartheta) $ it holds that

    $ | \mathcal{L}(\theta) - \mathcal{L}(\vartheta) |^{\alpha} \le \mathfrak{D} \, \| \mathcal{G}(\theta) \|. $ (5.1)

    Proof of Proposition 5.1. Throughout this proof let $ \mathbf{M} \colon \mathbb{R}^ \mathfrak{d} \to [0, \infty] $ satisfy for all $ \theta \in \mathbb{R}^ \mathfrak{d} $ that

    $ \mathbf{M}(\theta) = \inf\bigl( \{ \|h\| \colon h \in \partial \mathcal{L}(\theta) \} \cup \{ \infty \} \bigr). $ (5.2)

    Note that Proposition 2.12 implies for all $ \theta \in \mathbb{R}^ \mathfrak{d} $ that

    $ \mathbf{M}(\theta) \le \| \mathcal{G}(\theta) \|. $ (5.3)

    Furthermore, observe that Corollary 4.10, the fact that semialgebraic functions are subanalytic, and Bolte et al. [9, Theorem 3.1 and Remark 3.2] ensure that there exist $ \varepsilon, \mathfrak{D} \in (0, \infty) $, $ \mathfrak{a} \in [0, 1) $ which satisfy for all $ \theta \in B_\varepsilon (\vartheta) $ that

    $ | \mathcal{L}(\theta) - \mathcal{L}(\vartheta) |^{\mathfrak{a}} \le \mathfrak{D} \, \mathbf{M}(\theta). $ (5.4)

    Combining this and (5.3) with the fact that $ \sup_{\theta \in B_\varepsilon (\vartheta) } | \mathcal{L} (\theta) - \mathcal{L} (\vartheta) | < \infty $ demonstrates that for all $ \theta \in B_\varepsilon (\vartheta) $, $ \alpha \in (\mathfrak{a}, 1) $ we have that

    $ | \mathcal{L}(\theta) - \mathcal{L}(\vartheta) |^{\alpha} \le | \mathcal{L}(\theta) - \mathcal{L}(\vartheta) |^{\mathfrak{a}} \left( \sup_{\psi \in B_\varepsilon(\vartheta)} | \mathcal{L}(\psi) - \mathcal{L}(\vartheta) |^{\alpha - \mathfrak{a}} \right) \le \left( \mathfrak{D} \sup_{\psi \in B_\varepsilon(\vartheta)} | \mathcal{L}(\psi) - \mathcal{L}(\vartheta) |^{\alpha - \mathfrak{a}} \right) \| \mathcal{G}(\theta) \|. $ (5.5)

    This completes the proof of Proposition 5.1.
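
    As a toy illustration of an inequality of the form (5.1) (a smooth one-dimensional example, not the risk from Setting 2.1): for $ \mathcal{L}(\theta) = \theta^2 $ and $ \vartheta = 0 $ one may take $ \alpha = \tfrac{1}{2} $ and $ \mathfrak{D} = \tfrac{1}{2} $, since $ | \theta^2 - 0 |^{1/2} = |\theta| = \tfrac{1}{2} |2 \theta| = \tfrac{1}{2} | \mathcal{L}'(\theta) | $ for all $ \theta \in \mathbb{R} $. The content of Proposition 5.1 is that an inequality of this type, with some exponent $ \alpha \in (0, 1) $ and the generalized gradient $ \mathcal{G} $ in place of the derivative, persists for the nonsmooth risk function $ \mathcal{L} $ whenever $ f $ and $ \mathfrak{p} $ are piecewise polynomial.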

    Proposition 5.2. Assume Setting 2.1 and let $ \vartheta \in \mathbb{R}^ \mathfrak{d} $, $ \varepsilon, \mathfrak{D} \in (0, \infty) $, $ \alpha \in (0, 1) $ satisfy for all $ \theta \in B_\varepsilon (\vartheta) $ that

    $ | \mathcal{L}(\theta) - \mathcal{L}(\vartheta) |^{\alpha} \le \mathfrak{D} \, \| \mathcal{G}(\theta) \|. $ (5.6)

    Then there exists $ \delta \in (0, \varepsilon) $ such that for all $ \Theta \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ with $ \Theta_0 \in B_\delta (\vartheta) $, $ \forall \, t \in [0, \infty) \colon \Theta_t = \Theta_0 - \int_0^t \mathcal{G} (\Theta_s) \, \mathrm{d} s $, and $ \inf_{t \in \left\{ {{ s \in [0, \infty) \colon \Theta_s \in B_\varepsilon (\vartheta) }} \right\} } \mathcal{L} (\Theta_t) \geq \mathcal{L} (\vartheta) $ there exists $ \psi \in \mathcal{L}^{ - 1 }(\left\{ {{ \mathcal{L}(\vartheta) }} \right\}) $ such that for all $ t \in [0, \infty) $ it holds that

    $ \Theta_t \in B_\varepsilon(\vartheta), \qquad \int_0^\infty \| \mathcal{G}(\Theta_s) \| \, \mathrm{d}s \le \varepsilon, \qquad | \mathcal{L}(\Theta_t) - \mathcal{L}(\psi) | \le (1 + \mathfrak{D}^{-2} t)^{-1}, $ (5.7)
    $ \text{and} \qquad \| \Theta_t - \psi \| \le \left[ 1 + \bigl( \mathfrak{D}^{-1/\alpha} (1 - \alpha) \bigr)^{\frac{\alpha}{1 - \alpha}} t \right]^{-\min\{ 1, \frac{1 - \alpha}{\alpha} \}}. $ (5.8)

    Proof of Proposition 5.2. Note that the fact that $ \mathcal{L} $ is continuous implies that there exists $ \delta \in (0, \varepsilon / 3) $ which satisfies for all $ \theta \in B_\delta (\vartheta) $ that

    $ | \mathcal{L}(\theta) - \mathcal{L}(\vartheta) |^{1 - \alpha} \le \min\left\{ \tfrac{\varepsilon (1 - \alpha)}{3 \mathfrak{D}}, \tfrac{1 - \alpha}{\mathfrak{D}}, 1 \right\}. $ (5.9)

    In the following let $ \Theta \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ satisfy $ \forall \, t \in [0, \infty) \colon \Theta_t = \Theta_0 - \int_0^t \mathcal{G} (\Theta_s) \, \mathrm{d} s $, $ \Theta_0 \in B_\delta (\vartheta) $, and

    $ \inf_{t \in \{ s \in [0, \infty) \colon \Theta_s \in B_\varepsilon(\vartheta) \}} \mathcal{L}(\Theta_t) \ge \mathcal{L}(\vartheta). $ (5.10)

    In the first step we show that for all $ t \in [0, \infty) $ it holds that

    $ \Theta_t \in B_\varepsilon(\vartheta). $ (5.11)

    Observe that, e.g., [37, Lemma 3.1] ensures for all $ t \in [0, \infty) $ that

    $ \mathcal{L}(\Theta_t) = \mathcal{L}(\Theta_0) - \int_0^t \| \mathcal{G}(\Theta_s) \|^2 \, \mathrm{d}s. $ (5.12)

    This implies that $ [0, \infty) \ni t \mapsto \mathcal{L} (\Theta_t) \in [0, \infty) $ is non-increasing. Next let $ L \colon [0, \infty) \to \mathbb{R} $ satisfy for all $ t \in [0, \infty) $ that

    $ L(t) = \mathcal{L}(\Theta_t) - \mathcal{L}(\vartheta) $ (5.13)

    and let $ T \in [0, \infty] $ satisfy

    $ T = \inf\bigl( \{ t \in [0, \infty) \colon \| \Theta_t - \vartheta \| \ge \varepsilon \} \cup \{ \infty \} \bigr). $ (5.14)

    We intend to show that $ T = \infty $. Note that (5.10) assures for all $ t \in [0, T) $ that $ L(t) \geq 0 $. Moreover, observe that (5.12) and (5.13) ensure that for almost all $ t \in [0, T) $ it holds that $ L $ is differentiable at $ t $ and satisfies $ L ' (t) = \frac{ \mathrm{d}}{ \mathrm{d} t} (\mathcal{L} (\Theta_t)) = - || \mathcal{G} (\Theta_t) || ^2 $. In the following let $ \tau \in [0, T] $ satisfy

    $ \tau = \inf\bigl( \{ t \in [0, T) \colon L(t) = 0 \} \cup \{ T \} \bigr). $ (5.15)

    Note that the fact that $ L $ is non-increasing implies that for all $ s \in [\tau, T) $ it holds that $ L(s) = 0 $. Combining this with (5.12) demonstrates for almost all $ s \in (\tau, T) $ that $ \mathcal{G} (\Theta_s) = 0 $. This proves for all $ s \in [\tau, T) $ that $ \Theta_s = \Theta_\tau $. Next observe that (5.6) ensures that for all $ t \in [0, \tau) $ it holds that

    $ 0 < [L(t)]^{\alpha} = | \mathcal{L}(\Theta_t) - \mathcal{L}(\vartheta) |^{\alpha} \le \mathfrak{D} \, \| \mathcal{G}(\Theta_t) \|. $ (5.16)

    Combining this with the chain rule proves for almost all $ t \in [0, \tau) $ that

    $ \tfrac{\mathrm{d}}{\mathrm{d}t} \bigl( [L(t)]^{1 - \alpha} \bigr) = (1 - \alpha) [L(t)]^{-\alpha} \bigl( - \| \mathcal{G}(\Theta_t) \|^2 \bigr) \le - (1 - \alpha) \mathfrak{D}^{-1} \| \mathcal{G}(\Theta_t) \|^{-1} \| \mathcal{G}(\Theta_t) \|^2 = - \mathfrak{D}^{-1} (1 - \alpha) \| \mathcal{G}(\Theta_t) \|. $ (5.17)

    In addition, note that the fact that $ [0, \infty) \ni t \mapsto L(t) \in \mathbb{R} $ is absolutely continuous and the fact that for all $ r \in (0, \infty) $ it holds that $ (r, \infty) \ni y \mapsto y^{1 - \alpha } \in \mathbb{R} $ is Lipschitz continuous demonstrate for all $ t \in [0, \tau) $ that $ [0, t] \ni s \mapsto [L (s)]^{ 1 - \alpha } \in \mathbb{R} $ is absolutely continuous. Integrating (5.17) hence shows for all $ s, t \in [0, \tau) $ with $ t \le s $ that

    $ \int_t^s \| \mathcal{G}(\Theta_u) \| \, \mathrm{d}u \le \mathfrak{D} (1 - \alpha)^{-1} \bigl( [L(t)]^{1 - \alpha} - [L(s)]^{1 - \alpha} \bigr) \le \mathfrak{D} (1 - \alpha)^{-1} [L(t)]^{1 - \alpha}. $ (5.18)

    This and the fact that for almost all $ s \in (\tau, T) $ it holds that $ \mathcal{G} (\Theta_s) = 0 $ ensure that for all $ s, t \in [0, T) $ with $ t \le s $ we have that

    $ \int_t^s \| \mathcal{G}(\Theta_u) \| \, \mathrm{d}u \le \mathfrak{D} (1 - \alpha)^{-1} [L(t)]^{1 - \alpha}. $ (5.19)

    Combining this with (5.9) demonstrates for all $ t \in [0, T) $ that

    $ \| \Theta_t - \Theta_0 \| = \left\| \int_0^t \mathcal{G}(\Theta_s) \, \mathrm{d}s \right\| \le \int_0^t \| \mathcal{G}(\Theta_s) \| \, \mathrm{d}s \le \frac{\mathfrak{D} \, | \mathcal{L}(\Theta_0) - \mathcal{L}(\vartheta) |^{1 - \alpha}}{1 - \alpha} \le \min\left\{ \tfrac{\varepsilon}{3}, 1 \right\}. $ (5.20)

    This, the fact that $ \delta < \varepsilon / 3 $, and the triangle inequality assure for all $ t \in [0, T) $ that

    $ \| \Theta_t - \vartheta \| \le \| \Theta_t - \Theta_0 \| + \| \Theta_0 - \vartheta \| \le \tfrac{\varepsilon}{3} + \delta \le \tfrac{\varepsilon}{3} + \tfrac{\varepsilon}{3} = \tfrac{2 \varepsilon}{3}. $ (5.21)

    Combining this with (5.14) proves that $ T = \infty $. This establishes (5.11).

    Next observe that the fact that $ T = \infty $ and (5.20) prove that

    $ \int_0^\infty \| \mathcal{G}(\Theta_s) \| \, \mathrm{d}s \le \min\left\{ \tfrac{\varepsilon}{3}, 1 \right\} \le \varepsilon < \infty. $ (5.22)

    In the following let $ \sigma \colon [0, \infty) \to [0, \infty) $ satisfy for all $ t \in [0, \infty) $ that

    $ \sigma(t) = \int_t^\infty \| \mathcal{G}(\Theta_s) \| \, \mathrm{d}s. $ (5.23)

    Note that (5.22) proves that $ \limsup_{t \to \infty} \sigma (t) = 0 $. In addition, observe that (5.22) assures that there exists $ \psi \in \mathbb{R}^ \mathfrak{d} $ such that

    $ \limsup_{t \to \infty} \| \Theta_t - \psi \| = 0. $ (5.24)

    In the next step we combine the weak chain rule for the risk function in (5.12) with (5.11) and (5.6) to obtain that for almost all $ t \in [0, \infty) $ we have that

    $ L'(t) = - \| \mathcal{G}(\Theta_t) \|^2 \le - \mathfrak{D}^{-2} [L(t)]^{2 \alpha}. $ (5.25)

    In addition, note that the fact that $ L $ is non-increasing and (5.9) ensure that for all $ t \in [0, \infty) $ it holds that $ L (t) \leq L (0) \leq 1 $. Therefore, we get for almost all $ t \in [0, \infty) $ that

    $ L'(t) \le - \mathfrak{D}^{-2} [L(t)]^2. $ (5.26)

    Combining this with the fact that for all $ t \in [0, \tau) $ it holds that $ L (t) > 0 $ establishes for almost all $ t \in [0, \tau) $ that

    $ \tfrac{\mathrm{d}}{\mathrm{d}t} \left( \tfrac{\mathfrak{D}^2}{L(t)} \right) = - \tfrac{\mathfrak{D}^2 L'(t)}{[L(t)]^2} \ge 1. $ (5.27)

    The fact that for all $ t \in [0, \tau) $ it holds that $ [0, t] \ni s \mapsto L (s) \in (0, \infty) $ is absolutely continuous hence demonstrates for all $ t \in [0, \tau) $ that

    $ \tfrac{\mathfrak{D}^2}{L(t)} \ge \tfrac{\mathfrak{D}^2}{L(0)} + t \ge \mathfrak{D}^2 + t. $ (5.28)

    Therefore, we infer for all $ t \in [0, \tau) $ that

    $ L(t) \le \mathfrak{D}^2 (\mathfrak{D}^2 + t)^{-1} = (1 + \mathfrak{D}^{-2} t)^{-1}. $ (5.29)

    This and the fact that for all $ t \in [\tau, \infty) $ it holds that $ L(t) = 0 $ prove that for all $ t \in [0, \infty) $ we have that

    $ | \mathcal{L}(\Theta_t) - \mathcal{L}(\vartheta) | = L(t) \le (1 + \mathfrak{D}^{-2} t)^{-1}. $ (5.30)

    Furthermore, observe that (5.24) and the fact that $ \mathcal{L} $ is continuous imply that $ \limsup_{t \to \infty} | \mathcal{L} (\Theta_t) - \mathcal{L} (\psi) | = 0 $. Hence, we obtain that $ \mathcal{L} (\psi) = \mathcal{L} (\vartheta) $. This shows for all $ t \in [0, \infty) $ that

    $ | \mathcal{L}(\Theta_t) - \mathcal{L}(\psi) | \le (1 + \mathfrak{D}^{-2} t)^{-1}. $ (5.31)

    In the next step we establish a convergence rate for the quantity $ ||\Theta_t - \psi|| $, $ t \in [0, \infty) $. We accomplish this by employing an upper bound for the tail length of the curve $ \Theta_t \in \mathbb{R}^ \mathfrak{d} $, $ t \in [0, \infty) $. More formally, note that (5.19), (5.11), and (5.6) demonstrate for all $ t \in [0, \infty) $ that

    $ \sigma(t) = \int_t^\infty \| \mathcal{G}(\Theta_u) \| \, \mathrm{d}u = \lim_{s \to \infty} \left[ \int_t^s \| \mathcal{G}(\Theta_u) \| \, \mathrm{d}u \right] \le \mathfrak{D} (1 - \alpha)^{-1} \bigl( L(t) \bigr)^{1 - \alpha} \le \mathfrak{D} (1 - \alpha)^{-1} \bigl( \mathfrak{D} \, \| \mathcal{G}(\Theta_t) \| \bigr)^{\frac{1 - \alpha}{\alpha}}. $ (5.32)

    Next observe that the fact that for all $ t \in [0, \infty) $ it holds that $ \sigma (t) = \int_0^\infty || \mathcal{G} (\Theta_s)|| \, \mathrm{d} s - \int_0^t || \mathcal{G} (\Theta_s) || \, \mathrm{d} s $ shows that for almost all $ t \in [0, \infty) $ we have that $ \sigma' (t) = - || \mathcal{G} (\Theta_t) || $. This and (5.32) yield for almost all $ t \in [0, \infty) $ that $ \sigma(t) \leq \mathfrak{D}^{1 / \alpha} \left({{1-\alpha}} \right)^{-1} \left[{{- \sigma ' (t) }} \right]^{\frac{1-\alpha}{\alpha}} $. Therefore, we obtain for almost all $ t \in [0, \infty) $ that

    $ \sigma'(t) \le - \bigl[ (1 - \alpha) \mathfrak{D}^{-1/\alpha} \sigma(t) \bigr]^{\frac{\alpha}{1 - \alpha}}. $ (5.33)

    Combining this with the fact that $ \sigma $ is absolutely continuous implies for all $ t \in [0, \infty) $ that

    $ \sigma(t) \le \sigma(0) - \bigl[ (1 - \alpha) \mathfrak{D}^{-1/\alpha} \bigr]^{\frac{\alpha}{1 - \alpha}} \int_0^t [\sigma(s)]^{\frac{\alpha}{1 - \alpha}} \, \mathrm{d}s. $ (5.34)

    In the following let $ \beta, \mathfrak{C} \in (0, \infty) $ satisfy $ \beta = \max \left\{ {{ 1, \frac{\alpha}{1-\alpha} }} \right\} $ and $ \mathfrak{C} = \left({{ (1-\alpha) \mathfrak{D} ^{ -1 / \alpha} }} \right)^{\frac{\alpha}{1-\alpha}} $. Note that (5.34) and the fact that for all $ t \in [0, \infty) $ it holds that $ \sigma (t) \leq \sigma (0) \leq 1 $ ensure that for all $ t \in [0, \infty) $ it holds that

    $ \sigma(t) \le \sigma(0) - \mathfrak{C} \int_0^t [\sigma(s)]^{\beta} \, \mathrm{d}s. $ (5.35)

    This, the fact that $ \sigma $ is non-increasing, and the fact that for all $ t \in [0, \infty) $ it holds that $ 0 \leq \sigma(t) \leq 1 $ prove that for all $ t \in [0, \infty) $ we have that

    $ (\sigma(t))^{\beta} \le \sigma(t) \le \sigma(0) - \mathfrak{C} [\sigma(t)]^{\beta} t \le 1 - \mathfrak{C} t [\sigma(t)]^{\beta}. $ (5.36)

    Hence, we obtain for all $ t \in [0, \infty) $ that $ \sigma(t) \leq \left({{ 1 + \mathfrak{C} t }} \right)^{-\frac{1}{\beta}} $. Combining this with the fact that for all $ t \in [0, \infty) $ it holds that

    $ \| \Theta_t - \psi \| \le \limsup_{s \to \infty} \| \Theta_t - \Theta_s \| = \limsup_{s \to \infty} \left\| \int_t^s \mathcal{G}(\Theta_u) \, \mathrm{d}u \right\| \le \limsup_{s \to \infty} \left[ \int_t^s \| \mathcal{G}(\Theta_u) \| \, \mathrm{d}u \right] = \int_t^\infty \| \mathcal{G}(\Theta_u) \| \, \mathrm{d}u = \sigma(t) $ (5.37)

    shows that for all $ t \in [0, \infty) $ we have that $ ||\Theta_t - \psi || \le (1 + \mathfrak{C} t) ^{- 1 / \beta} $. This, (5.11), (5.22), and (5.31) establish (5.8). The proof of Proposition 5.2 is thus complete.

    Proposition 5.3. Assume Setting 2.1, assume that $ \mathfrak{p} $ and $ f $ are piecewise polynomial, and let $ \Theta \in C ([0, \infty), \mathbb{R}^ \mathfrak{d}) $ satisfy

    $ \liminf_{t \to \infty} \| \Theta_t \| < \infty \qquad \text{and} \qquad \forall \, t \in [0, \infty) \colon \Theta_t = \Theta_0 - \int_0^t \mathcal{G}(\Theta_s) \, \mathrm{d}s $ (5.38)

    (cf. Definition 4.9). Then there exist $ \vartheta \in \mathcal{G}^{ - 1 } (\left\{ {{ 0 }} \right\}) $, $ \mathfrak{C}, \tau, \beta \in (0, \infty) $ which satisfy for all $ t \in [\tau, \infty) $ that

    $ \| \Theta_t - \vartheta \| \le (1 + \mathfrak{C} (t - \tau))^{-\beta} \qquad \text{and} \qquad | \mathcal{L}(\Theta_t) - \mathcal{L}(\vartheta) | \le (1 + \mathfrak{C} (t - \tau))^{-1}. $ (5.39)

    Proof of Proposition 5.3. First observe that [37, Lemma 3.1] ensures that for all $ t \in [0, \infty) $ it holds that

    $ \mathcal{L}(\Theta_t) = \mathcal{L}(\Theta_0) - \int_0^t \| \mathcal{G}(\Theta_s) \|^2 \, \mathrm{d}s. $ (5.40)

    This implies that $ [0, \infty) \ni t \mapsto \mathcal{L} (\Theta_t) \in [0, \infty) $ is non-increasing. Hence, we obtain that there exists $ \mathbf{m} \in [0, \infty) $ which satisfies that

    $ \mathbf{m} = \limsup_{t \to \infty} \mathcal{L}(\Theta_t) = \liminf_{t \to \infty} \mathcal{L}(\Theta_t) = \inf_{t \in [0, \infty)} \mathcal{L}(\Theta_t). $ (5.41)

    Moreover, note that the assumption that $ \liminf_{t \to \infty } ||\Theta_t || < \infty $ ensures that there exist $ \vartheta \in \mathbb{R}^ \mathfrak{d} $ and $ \tau = (\tau_n)_{n \in \mathbb{N}} \colon \mathbb{N} \to [0, \infty) $ which satisfy $ \liminf_{n \to \infty} \tau_n = \infty $ and

    $ \limsup_{n \to \infty} \| \Theta_{\tau_n} - \vartheta \| = 0. $ (5.42)

    Combining this with (5.41) and the fact that $ \mathcal{L} $ is continuous shows that

    $ \mathcal{L}(\vartheta) = \mathbf{m} \qquad \text{and} \qquad \forall \, t \in [0, \infty) \colon \mathcal{L}(\Theta_t) \ge \mathcal{L}(\vartheta). $ (5.43)

    Next observe that Proposition 5.1 demonstrates that there exist $ \varepsilon, \mathfrak{D} \in (0, \infty) $, $ \alpha \in (0, 1) $ such that for all $ \theta \in B_\varepsilon (\vartheta) $ we have that

    $ | \mathcal{L}(\theta) - \mathcal{L}(\vartheta) |^{\alpha} \le \mathfrak{D} \, \| \mathcal{G}(\theta) \|. $ (5.44)

    Combining this and (5.42) with Proposition 5.2 demonstrates that there exists $ \delta \in (0, \varepsilon) $ which satisfies for all $ \Phi \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ with $ \Phi_0 \in B_\delta (\vartheta) $, $ \forall \, t \in [0, \infty) \colon \Phi_t = \Phi_0 - \int_0^t \mathcal{G} (\Phi_s) \, \mathrm{d} s $, and $ \inf_{ t \in \left\{ {{ s \in [0, \infty) \colon \Phi_s \in B_{ \varepsilon }(\vartheta) }} \right\} } \mathcal{L}(\Phi_t) \ge \mathcal{L}(\vartheta) $ that it holds for all $ t \in [0, \infty) $ that

    $ \Phi_t \in B_\varepsilon(\vartheta), \qquad | \mathcal{L}(\Phi_t) - \mathcal{L}(\vartheta) | \le (1 + \mathfrak{D}^{-2} t)^{-1}, $ (5.45)
    $ \text{and} \qquad \| \Phi_t - \vartheta \| \le \left[ 1 + \bigl( \mathfrak{D}^{-1/\alpha} (1 - \alpha) \bigr)^{\frac{\alpha}{1 - \alpha}} t \right]^{-\min\{ 1, \frac{1 - \alpha}{\alpha} \}}. $ (5.46)

    Moreover, note that (5.42) ensures that there exists $ n \in \mathbb{N} $ which satisfies $ \Theta_{\tau_n } \in B_\delta (\vartheta) $. Next let $ \Phi \in C([0, \infty), \mathbb{R}^ \mathfrak{d}) $ satisfy for all $ t \in [0, \infty) $ that

    $ \Phi_t = \Theta_{t + \tau_n}. $ (5.47)

    Observe that (5.43) and (5.47) assure that

    $ \Phi_0 \in B_\delta(\vartheta), \qquad \inf_{t \in [0, \infty)} \mathcal{L}(\Phi_t) \ge \mathcal{L}(\vartheta), \qquad \text{and} \qquad \forall \, t \in [0, \infty) \colon \Phi_t = \Phi_0 - \int_0^t \mathcal{G}(\Phi_s) \, \mathrm{d}s. $ (5.48)

    Combining this with (5.46) proves for all $ t \in [\tau_n, \infty) $ that

    $ | \mathcal{L}(\Theta_t) - \mathcal{L}(\vartheta) | \le (1 + \mathfrak{D}^{-2} (t - \tau_n))^{-1} $ (5.49)

    and

    $ \| \Theta_t - \vartheta \| \le \left[ 1 + \bigl( \mathfrak{D}^{-1/\alpha} (1 - \alpha) \bigr)^{\frac{\alpha}{1 - \alpha}} (t - \tau_n) \right]^{-\min\{ 1, \frac{1 - \alpha}{\alpha} \}}. $ (5.50)

    Next note that [37, Corollary 2.15] shows that $ \mathbb{R}^ \mathfrak{d} \ni \theta \mapsto || \mathcal{G} (\theta) || \in [0, \infty) $ is lower semicontinuous. Moreover, observe that (5.40) and the fact that $ \mathcal{L} $ is non-negative ensure that $ \int_0^\infty || \mathcal{G} (\Theta_s) ||^2 \, \mathrm{d}s \le \mathcal{L}(\Theta_0) < \infty $ and hence that $ \liminf_{s \to \infty} || \mathcal{G} (\Theta_s) || = 0 $. This, the lower semicontinuity, and the fact that $ \limsup_{t \to \infty} ||\Theta_t - \vartheta || = 0 $ imply that $ \mathcal{G} (\vartheta) = 0 $. Combining this with (5.49) and (5.50) establishes (5.39). The proof of Proposition 5.3 is thus complete.

    By choosing a sufficiently large $ \mathscr{C} \in (0, \infty) $ we can conclude a simplified version of Proposition 5.3. This is precisely the subject of the next result, Theorem 5.4 below. Theorem 1.2 in the introduction is a direct consequence of Theorem 5.4.

    Theorem 5.4. Assume Setting 2.1, assume that $ \mathfrak{p} $ and $ f $ are piecewise polynomial, and let $ \Theta \in C ([0, \infty), \mathbb{R}^ \mathfrak{d}) $ satisfy $ \liminf_{t \to \infty } ||\Theta_t || < \infty $ and $ \forall \, t \in [0, \infty) \colon \Theta_t = \Theta_0 - \int_0^t \mathcal{G} (\Theta_s) \, \mathrm{d} s $ (cf. Definition 4.9). Then there exist $ \vartheta \in \mathcal{G}^{ - 1 } (\left\{ {{ 0 }} \right\}) $, $ \mathscr{C}, \beta \in (0, \infty) $ which satisfy for all $ t \in [0, \infty) $ that

    $ \| \Theta_t - \vartheta \| \le \mathscr{C} (1 + t)^{-\beta} \qquad \text{and} \qquad | \mathcal{L}(\Theta_t) - \mathcal{L}(\vartheta) | \le \mathscr{C} (1 + t)^{-1}. $ (5.51)

    Proof of Theorem 5.4. Observe that Proposition 5.3 assures that there exist $ \vartheta \in \mathcal{G}^{ - 1 } (\left\{ {{ 0 }} \right\}) $, $ \mathfrak{C}, \tau, \beta \in (0, \infty) $ which satisfy for all $ t \in [\tau, \infty) $ that

    $ \| \Theta_t - \vartheta \| \le (1 + \mathfrak{C} (t - \tau))^{-\beta} $ (5.52)

    and

    $ | \mathcal{L}(\Theta_t) - \mathcal{L}(\vartheta) | \le (1 + \mathfrak{C} (t - \tau))^{-1}. $ (5.53)

    In the following let $ \mathscr{C} \in (0, \infty) $ satisfy

    $ \mathscr{C} = \max\left\{ \mathfrak{C}^{-1}, \, 1 + \tau, \, \mathfrak{C}^{-\beta}, \, (1 + \tau)^{\beta}, \, (1 + \tau)^{\beta} \left[ \sup_{s \in [0, \tau]} \| \Theta_s - \vartheta \| \right], \, (1 + \tau) \mathcal{L}(\Theta_0) \right\}. $ (5.54)

    Note that (5.53), (5.54), and the fact that $ [0, \infty) \ni t \mapsto \mathcal{L} (\Theta_t) \in [0, \infty) $ is non-increasing show for all $ t \in [0, \tau] $ that

    $ \| \Theta_t - \vartheta \| \le \sup_{s \in [0, \tau]} \| \Theta_s - \vartheta \| \le \mathscr{C} (1 + \tau)^{-\beta} \le \mathscr{C} (1 + t)^{-\beta} $ (5.55)

    and

    $ | \mathcal{L}(\Theta_t) - \mathcal{L}(\vartheta) | = \mathcal{L}(\Theta_t) - \mathcal{L}(\vartheta) \le \mathcal{L}(\Theta_t) \le \mathcal{L}(\Theta_0) \le \mathscr{C} (1 + \tau)^{-1} \le \mathscr{C} (1 + t)^{-1}. $ (5.56)

    Moreover, observe that (5.52) and (5.54) imply for all $ t \in [\tau, \infty) $ that

    $ \| \Theta_t - \vartheta \| \le \mathscr{C} \bigl( \mathscr{C}^{1/\beta} + \mathfrak{C} \, \mathscr{C}^{1/\beta} (t - \tau) \bigr)^{-\beta} \le \mathscr{C} \bigl( \mathscr{C}^{1/\beta} - \tau + t \bigr)^{-\beta} \le \mathscr{C} (1 + t)^{-\beta}. $ (5.57)

    In addition, note that (5.53) and (5.54) demonstrate for all $ t \in [\tau, \infty) $ that

    $ | \mathcal{L}(\Theta_t) - \mathcal{L}(\vartheta) | \le \mathscr{C} \bigl( \mathscr{C} + \mathfrak{C} \, \mathscr{C} (t - \tau) \bigr)^{-1} \le \mathscr{C} (\mathscr{C} - \tau + t)^{-1} \le \mathscr{C} (1 + t)^{-1}. $ (5.58)

    This completes the proof of Theorem 5.4.
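
    To indicate what the rates in (5.51) assert in the simplest conceivable situation (a toy quadratic example, not the ANN risk from Setting 2.1): for $ \mathcal{L}(\theta) = \|\theta\|^2 $ the GF trajectory is $ \Theta_t = e^{-2t} \Theta_0 $, so that $ \|\Theta_t - 0\| = e^{-2t} \|\Theta_0\| $ and $ \mathcal{L}(\Theta_t) = e^{-4t} \mathcal{L}(\Theta_0) $, and (5.51) holds with $ \vartheta = 0 $, $ \beta = 1 $, and $ \mathscr{C} = \max\{ \sup_{t \ge 0} (1 + t) e^{-2t} \|\Theta_0\|, \, \sup_{t \ge 0} (1 + t) e^{-4t} \mathcal{L}(\Theta_0), \, 1 \} $. Theorem 5.4 guarantees at least such algebraic rates for the nonsmooth ANN risk whenever the flow remains bounded, but in general without an explicit value for the exponent $ \beta $.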

    This project has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure.

    The authors declare that there are no conflicts of interest.
