Research article

Using computational techniques of fixed point theory for studying the stationary infinite horizon problem from the financial field

  • In this article, we solve optimization problems from the financial and economic field with constraints, using infinite-horizon iterative techniques and elements of fixed point theory. We resort to Ćirić contractions in Banach space, and the main result consists of the existence of a fixed point to solve an important class of infinite-horizon iterative schemes for optimization problems with state constraints for which the value function is merely lower semi-continuous. The developed tools allowed us to solve stationary infinite-horizon optimization problems, especially the maximization of the utility of households. We present some fixed point results that are fundamental for the development of our contributions: notably, existence, monotonicity, attainability and results on the Ćirić contraction. We show the convergence in norm, with probability one, of an iterative procedure defined for our problem under the stated assumptions. By using the Ćirić operator and the Reich-Rus type ψF-contraction, we prove existence results for the optimal cost function of an infinite horizon problem in a complete metric space. For a particular case, we realize a numerical simulation in C++. The conclusions are that the convergence, existence and uniqueness results for an optimal cost function of an infinite horizon problem in a Banach space can be treated by resorting to the Ćirić operator.

    Citation: Abdelkader Belhenniche, Amelia Bucur, Liliana Guran, Adrian Nicolae Branga. Using computational techniques of fixed point theory for studying the stationary infinite horizon problem from the financial field[J]. AIMS Mathematics, 2024, 9(1): 2369-2388. doi: 10.3934/math.2024117




    The applicability of dynamic programming results for optimal control and optimal growth problems [1,2,3,4], in which fixed-point-theory-based methods were used, is well known. In this article we investigate the application of the Ćirić fixed-point theorem to prove existence and uniqueness results for the solution of Bellman's dynamic programming equation under assumptions that are significantly weaker than the ones generally considered in the specialty literature [5].

    Our work allowed us to further extend an already relevant class of methods for solving optimization problems (notably optimal control and dynamic programming), which have a long history among mathematical techniques.

    Discrete Markov decision models for the financial field may be studied using the dynamic programming principles developed by Richard Bellman (1956). Dynamic programming is based on the Principle of Optimality, which was explained by Bellman in the following text: "An optimal policy has the property that, whatever the initial state and decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision" [6].

    For the simulation of the mathematical model, Ricard Torres published in 2014 a MATLAB script implementing policy iteration [7]. Torres' result provides insight into how we can programmatically find optimal policies, by developing a way to organize the dynamic programming problem into a cost network.

    Dynamic programming and fixed point theory are important mathematical tools in the financial and economic fields (for modern financial and econometric modeling). The book by Stokey and Lucas [8], published in 1989, is an important reference in this direction. DP models that can be solved by numerical methods include models with sequential decision making, the optimal inventory model (Arrow et al. [9]), the optimal investment model (Lucas and Prescott [10]), the optimal growth model under uncertainty (Brock and Mirman [11]), the asset pricing models (Lucas [12] and Brock [13]), the business cycle model (Kydland and Prescott [14]) and DP formulated as a Markov decision process (MDP) (Androulakis [15]).

    Blackwell (1965) and Denardo (1967) showed that the Bellman operator is a contraction mapping. Kamihigashi [16] showed that the Bellman operator has a unique fixed point, and this fixed point is a value function. The consistent Bellman operator (Bellemare et al. [17]) is a modified operator that addresses the problem of inconsistency of the optimal action-value functions for suboptimal actions. The distributional Bellman operator (Bellemare et al. [18]) enables operation of the whole return distribution, instead of its expectation, i.e., the value function (Bellemare et al. [19], Bellman [20]). The logistic Bellman operator uses a logistic loss to solve a convex linear programming problem to find optimal value functions (Bas-Serrano et al. [21]).

    The Markov decision process (MDP) is a fundamental framework for stochastic games, control design in stochastic environments, reinforcement learning, etc. (Filar and Vrieze [22]). Relying on the fact that the optimal value function is the fixed point of the Bellman operator, dynamic programming methods iteratively use different types of the Bellman operator to converge to the optimal value function (Puterman [23], Alden and Smith [24]).

    The literature on infinite horizon optimization and its applications is rich. For example, in 1989, Schochetman and Smith [25] considered the general problem of choosing a discounted cost. They solved examples for equipment replacement and production planning, by applying the study of the minimizing infinite sequence of decisions from a closed subset of the product space formed by a sequence of arbitrary compact metric spaces.

    Modeling for optimizing production was analyzed in 1955 by Charnes et al. [26] and by Modigliani and Hohn [27], and sequential production planning was described in an article from the year 1957 (Johnson [28]).

    The existence of forecast horizons in undiscounted discrete-time lot size models was investigated in 1990 by Chand et al. [29].

    The literature on infinite horizon optimization is vast and encompasses diverse fields, including finance, electrical engineering (especially control systems), economics, operations research, management science, statistics and mathematics (Sethi and Thompson [30], Denardo [31], Cheevaprawatdomrong [32], Bes and Sethi [33]).

    The planning of the horizon model of cash management was investigated by Sethi in 1971 (Sethi [34]), and optimal backlogging over an infinite horizon under time-varying convex production and inventory costs was studied by Ghate and Smith in an article from the year 2009 (Ghate and Smith [35]).

    The use of mathematics within the field of finance has been increasing, for analyzing and solving problems such as development [1], control theory, differential game theory, the capital asset pricing model from stochastic optimization [37], risk management, derivative security pricing and valuation, portfolio creation and structuring [38], efficiency quantification of capital markets [39], quantitative investing strategies, models of growth, exchange economies and the effects of mergers on consumers on both sides of the market [40], etc.

    Fixed point theory plays an important role in topology, nonlinear analysis, dynamic optimization and obtaining results in the theory of differential and integral equations, notably in the existence of differential and integral equations or inclusions. These results are essential in many branches of science, economics, management and finance; thus, its increasing use in control and optimization is not surprising [1,41,42].

    In [43], Richard Bellman introduced dynamic programming, which is typically useful for investigating problems that involve choices made over an infinite number of periods, such as the problem of a periodic wage offer for a worker: acceptance implies receiving this wage in all future periods, while rejection implies receiving a new wage offer in the next period. Another increasingly important example concerns choosing how to allocate output between consumption and investment: consumption yields utility in the current period, while investment increases future production.

    Based on the work of Torres, in this article we develop tools that allow us to solve stationary, infinite-horizon optimization problems, namely, the maximization of the utility of households under weaker assumptions than those considered so far.

    Let us formulate our problem as follows:

    $$\max_{\{k_t\}\in K_t} \sum_{t=0}^{\infty}\beta^{t}\,U[f(k_t)-k_{t+1}], \tag{1.1}$$

    subject to $0\le k_{t+1}\le f(k_t)$ for all $t$, and given $k_0>0$.

    We can solve (1.1) by defining and solving the following associated functional equation:

    $$Tv(k)=\max_{0\le y\le f(k)}\{U[f(k)-y]+\beta v(y)\}, \tag{1.2}$$

    where $K_t=\{k_t\}_{t=0}^{\infty}$, $k_t$ is the capital at the start of period $t$, and $f(k_t)$ is the production. $c_t$ is the consumption, which means

    $$k_{t+1}=f(k_t)-c_t.$$

    $U(\cdot)$ is the current-period utility function, and $\beta\in(0,1)$.

    The well-known Banach contraction principle states that, if $(X,d)$ is a complete metric space and $T:X\to X$ is a mapping satisfying

    $$d(T(x),T(y))\le \sigma\, d(x,y), \tag{1.3}$$

    for some $\sigma\in(0,1)$ and for all $x,y\in X$, then $T$ has a unique fixed point $x^{*}$, and the sequence $\{x_n\}$ generated by the iterative process $x_{n+1}=Tx_n$ converges to $x^{*}$ for any $x_0\in X$.

    The Banach contraction principle has been widely generalized in several settings. In [44], Ćirić introduced the quasi-contraction map and showed that it retains the fixed-point conclusions of Banach's contraction principle under a strictly weaker condition.

    In this paper we will consider Ćirić contractions in Banach space, and the main result consists of the existence of a fixed point. The practical relevance of our result is that it can be applied even to discontinuous operators. An important class of problems consists of, for example, infinite-horizon iterative schemes for optimization problems with state constraints for which the value function is merely lower semi-continuous.

    In this paper we investigate the Ćirić operator to prove the properties of infinite-horizon problems. A self map $T:X\to X$ on a metric space $(X,d)$ is said to be a Ćirić mapping if, for all $x$ and $y$ in $X$,

    $$d(T(x),T(y))\le\sigma\max\left\{d(x,y),\,d(x,Tx),\,d(y,Ty),\,\tfrac{1}{2}[d(x,Ty)+d(y,Tx)]\right\}. \tag{1.4}$$

    We write

    $$M(x,y)=\max\left\{d(x,y),\,d(x,Tx),\,d(y,Ty),\,\tfrac{1}{2}[d(x,Ty)+d(y,Tx)]\right\}.$$

    To see the relevance of this extension, consider the following very simple example of a Ćirić contractive mapping that is not a contraction. Let $X=[0,2]$, $X_1=[0,1]$, $X_2=(1,2]$, and let $T:X\to X$ be defined by

    $$Tx=\begin{cases}\dfrac{x}{4} & \text{for } x\in[0,1],\\[2mm] \dfrac{x}{5} & \text{for } x\in(1,2].\end{cases} \tag{1.5}$$

    The mapping $T$ is of Ćirić type with $\sigma=\tfrac{3}{5}$. Indeed, if both $x$ and $y$ are in $X_1$ or both in $X_2$, then $d(Tx,Ty)\le\tfrac{1}{4}d(x,y)$ directly; if $x\in X_1$ and $y\in X_2$, then

    $$d(Tx,Ty)=\tfrac{1}{4}\left|x-\tfrac{4}{5}y\right|\le\tfrac{1}{4}\left|x-\tfrac{1}{5}y\right|=\tfrac{1}{4}\,d(x,Ty);$$
    $$d(Tx,Ty)=\tfrac{1}{5}\left|\tfrac{5}{4}x-y\right|\le\tfrac{1}{5}\left|y-Tx\right|\le\tfrac{1}{4}\,d(y,Tx).$$

    Therefore, $T$ satisfies the condition

    $$d(Tx,Ty)\le\tfrac{1}{4}\max\{d(x,y),\,d(x,Ty),\,d(y,Tx)\}$$

    and hence the inequality (1.4).

    To show that $T$ is not a Banach contraction on $X$, let $x=\tfrac{999}{1000}$ and $y=\tfrac{1001}{1000}$.

    Then, we have $d(x,y)=\left|\tfrac{999}{1000}-\tfrac{1001}{1000}\right|=\tfrac{40}{20000}$

    and, on the other side, $d(Tx,Ty)=\left|\tfrac{999}{4000}-\tfrac{1001}{5000}\right|=\tfrac{991}{20000}$.

    Then, we get $d(Tx,Ty)=\tfrac{991}{20000}>\sigma\,\tfrac{40}{20000}=\sigma\, d(x,y)$ for any $\sigma\in(0,1)$. Therefore, the Banach contraction condition is not satisfied.

    A Ćirić contractive map does not have to be continuous in general, but its Picard iterates always converge to a fixed point.

    This article is organized as follows. In the next section, we formulate the infinite horizon problem that we are going to investigate in two cases: implicit and fully explicit. Then, we provide the basic definitions, as well as the assumptions to be satisfied by its data. In the ensuing section, Section 3, we present several fixed point results that are fundamental for the development of our contributions: notably, existence, monotonicity, and attainability. Also pertinent to our results is the Ćirić contraction [44], which will also be introduced in this section. In Section 5, we show the convergence in norm, with probability one, of the iterative procedure defined for our problem under the stated assumptions, together with an application to dynamic programming. Section 6, numerical simulation, contains an algorithm in C++, which was applied to a particular case. Finally, some conclusions and prospective future work are briefly addressed in Section 7.

    This section is organized into two parts. In the first part, we introduce the operator that is defined via an explicit function in a complete metric space, while in the second part, we study the one that defines an implicit function.

    The optimal value function is the unique solution of the Bellman equation given by

    $$Tv(x)=\max_{0\le y\le f(x)}\{U[f(x)-y]+\beta v(y)\}\quad\text{for all } x\in X.$$

    Given some positive function $\nu:X\to\mathbb{R}$, we denote by $C(X)$ the set of functions $v$ such that $\|v\|<\infty$, where the norm on $C(X)$ is defined by

    $$\|v\|=\sup_{x\in X}|v(x)|.$$

    Hence, $C(X)$ is complete with the metric induced by the norm. To proceed further, and in order to get the optimal cost function, we need to consider for each policy $\mu\in M$ the mapping $T:C(X)\to C(X)$ defined by

    $$Tv(x)=\max_{0\le y\le f(x)}\{U[f(x)-y]+\beta v(y)\}\quad\text{for all } x\in X.$$

    Thus, in order to solve Bellman's equation, we only need to introduce the material that leads us to prove the existence and uniqueness of the fixed point, that is, $Tv=v$.

    Example 2.1. One sector sustainable growth

    Further, we need the following notations related to the economic concepts.

    - Capital at the start of period $t$ is denoted by $k_t$.

    - Production (including depreciated capital) is $f(k_t)$ and consumption is $c_t$,

    so $k_{t+1}=f(k_t)-c_t$.

    Given $k_0\ge 0$ and $0\le k_{t+1}\le f(k_t)$, the planner maximizes

    $$\sum_{t=0}^{\infty}\delta^{t}U(c_t)=\sum_{t=0}^{\infty}\delta^{t}U(f(k_t)-k_{t+1}).$$

    Let $U:\mathbb{R}_{+}\to\mathbb{R}$ be an increasing and bounded map and let $\delta\in(0,1)$. Suppose the problem has a solution for all $k_0\ge 0$. Let us define the value function $v:\mathbb{R}_{+}\to\mathbb{R}$, where $v(k_0)$ is the value of the maximized objective function given $k_0$.

    Then, the planner's problem in period 0 is $\max_{c_0,k_1}\{U(c_0)+\delta v(k_1)\}$, subject to $c_0+k_1=f(k_0)$, $c_0,k_1\ge 0$ and $k_0\ge 0$ given.

    The value function must satisfy the following functional equation:

    $$Tv(k)=\max_{0\le y\le f(k)}\{U(f(k)-y)+\delta v(y)\}.$$

    Note $v\in B(X)$, where $X=\mathbb{R}_{+}$.

    We define $T:C(X)\to C(X)$ as follows:

    $$Tv(k)=\max_{0\le y\le f(k)}\{U(f(k)-y)+\delta v(y)\}.$$

    A solution is a function $v$ satisfying $v=Tv$.

    Note, however, that evaluating an optimal policy requires not only availability of the optimal value function v but also the dynamics function f and stage cost function U. If f and U are unknown, then knowing v is not sufficient for evaluating an optimal policy. To overcome this issue, we will use the optimal H-function to compute an optimal policy without knowing f and U explicitly.

    We consider a set $X$ of states, a set $N$ of controls and, for each $x\in X$, a nonempty control constraint set $N(x)\subseteq N$. We denote by $M$ the set of all functions $\mu:X\to N$ with $\mu(x)\in N(x)$ for all $x\in X$, which will be referred to as policies (current-period choices).

    We denote by $V(X)$ the set of functions $v:X\to\mathbb{R}$ and by $\bar{V}(X)$ the set of functions $v:X\to\bar{\mathbb{R}}$, where $\bar{\mathbb{R}}=\mathbb{R}\cup\{-\infty,\infty\}$. We study the operator of the form

    $$H:X\times N\times V(X)\to\mathbb{R},$$

    and, for each policy $\mu\in M$, we consider the mapping $T_{\mu}:V(X)\to\bar{V}(X)$ defined by $T_{\mu}v(x)=H(x,\mu(x),v)$ for all $x\in X$, together with the mapping $T$ defined by

    $$Tv(x)=\max_{\mu\in M}H(x,\mu(x),v),\quad\forall x\in X.$$

    In view of the definitions of $M$, $T$ and $T_{\mu}$, we have the following relations:

    $$Tv(x)=\max_{\mu\in M}\{H(x,\mu(x),v)\}=\max_{u\in N(x)}\{H(x,u,v)\}.$$

    Example 2.2. Let $T:C(X)\to C(X)$ be defined as follows:

    $$Tv(x)=\begin{cases}\dfrac{v(x)}{4} & \text{for } v\in S_1,\\[2mm] \dfrac{v(x)}{5} & \text{for } v\in S_2,\end{cases} \tag{2.1}$$

    where

    $$S_1=\{v\in C(X): 0\le v(x)\le 1\},$$

    and

    $$S_2=\{v\in C(X): 1< v(x)\le 2\}.$$

    Then, it is clear that $T$ is a Ćirić contractive map, but it does not satisfy the Banach contraction condition.

    In this section, many results that will be instrumental in the proof of the main result of this article are presented.

    Definition 3.1. (Ćirić map) A self map $T$ of $B(X)$ is called a Ćirić contractive map if

    $$\|Tv-Tv'\|\le\sigma\max\left\{\|v-v'\|,\,\|v-Tv\|,\,\|v'-Tv'\|,\,\tfrac{1}{2}[\|v-Tv'\|+\|v'-Tv\|]\right\},$$

    for all $v,v'\in C(X)$, where $\sigma\in(0,1)$.

    Assumption 3.1. The self map Tμ is a Ćirić contractive map.

    Theorem 3.1. (Existence) Let the operators $T,T_{\mu}:C(X)\to C(X)$ be Ćirić contractive. Then, $T$ and $T_{\mu}$ have, respectively, $v^{*}$ and $v_{\mu}$ as fixed points.

    Lemma 3.1. The following hold:

    i) For an arbitrary $v_0\in C(X)$, the sequence $\{v_k\}$ defined by $v_{k+1}=T_{\mu}v_k$ converges in norm to $v_{\mu}$.

    ii) For an arbitrary $v_0\in C(X)$, the sequence $\{v_k\}$ defined by $v_{k+1}=Tv_k$ converges in norm to $v^{*}$.

    Lemma 3.2. $C(X)$ is complete with respect to the topology induced by $\|\cdot\|$.

    It is not difficult to observe that $C(X)$ is closed and convex. Thus, given $\{v_k\}_{k=1}^{\infty}\subset C(X)$ and $v\in C(X)$, if $v_k\to v$ in the sense that $\lim_{k\to\infty}\|v_k-v\|=0$, then $\lim_{k\to\infty}v_k(x)=v(x)$ for all $x\in X$.
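The completeness claim of Lemma 3.2 rests on the standard sup-norm argument, which can be sketched as follows (a routine reconstruction, not reproduced verbatim from the article):

```latex
\begin{aligned}
&\text{Let } \{v_k\}\subset C(X) \text{ be Cauchy in } \|\cdot\|. \text{ For each } x\in X,\\
&\qquad |v_k(x)-v_m(x)|\le\|v_k-v_m\|,\\
&\text{so } \{v_k(x)\} \text{ is Cauchy in } \mathbb{R};\ \text{let } v(x):=\lim_{k}v_k(x).\\
&\text{Given } \varepsilon>0, \text{ choose } K \text{ with } \|v_k-v_m\|<\varepsilon \text{ for } k,m\ge K;\\
&\text{letting } m\to\infty \text{ pointwise gives } \|v_k-v\|\le\varepsilon \text{ for } k\ge K.\\
&\text{Hence } v_k\to v \text{ in norm, and } \|v\|\le\|v_K\|+\varepsilon<\infty, \text{ so } v\in C(X).
\end{aligned}
```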

    Now, we introduce the following standard assumptions:

    Assumption 3.2. (Well-posedness) For all $v\in C(X)$ and for all $\mu\in M$, we have that $T_{\mu}v\in C(X)$ and $Tv\in C(X)$.

    From Definition 3.1, we conclude that every contraction T is also a Ćirić contractive map. However, the inverse is not always true.

    We will require the following properties to hold.

    Assumption 3.3. (Monotonicity) For all $v,v'\in C(X)$, we have that $v\le v'$ implies

    $$H(x,u,v)\le H(x,u,v'),\quad\forall x\in X,\ u\in U(x),$$

    where "$\le$" is defined in a pointwise sense on $X$.

    Assumption 3.4. (Attainability) For all $v\in C(X)$, there exists $\mu\in M$ such that $T_{\mu}v=Tv$.

    Blackwell's sufficient condition for a Ćirić contractive map

    Theorem 4.1. (Blackwell [7]) Let $T$ be an operator defined on $C(X)$ satisfying the following properties:

    (a) [monotonicity] For all $v,v'\in C(X)$, $v(x)\le v'(x)$ for all $x\in X$ implies that $Tv(x)\le Tv'(x)$ for all $x\in X$.

    (b) [discounting] There exists some $\beta\in(0,1)$ such that

    $$[T(v+a)](x)\le(Tv)(x)+\beta a,\quad\text{for all } v\in C(X),\ a\ge 0,\ x\in X.$$

    Then, $T$ is a Ćirić map with modulus $\beta$.

    Proof. For any $v$ and $v'\in C(X)$ and any $x\in X$,

    $$v(x)-v'(x)\le\|v-v'\|\le M(v,v').$$

    Hence, $v(x)\le M(v,v')+v'(x)$ for all $x\in X$, so

    $$v\le M(v,v')+v'.$$

    Suppose $T:C(X)\to C(X)$ satisfies Blackwell's conditions. Then, by monotonicity and discounting, there exists $\beta\in(0,1)$ such that

    $$T(v)\le T(v'+M(v,v'))\le Tv'+\beta M(v,v').$$

    By going through the exact same reasoning, with the roles of $v$ and $v'$ reversed, we end up with

    $$T(v')\le T(v+M(v,v'))\le Tv+\beta M(v,v').$$

    Hence, for all $x\in X$, $|Tv(x)-Tv'(x)|\le\beta M(v,v')$, and then

    $$\|Tv-Tv'\|\le\beta M(v,v').$$

    Thus, $T$ is a Ćirić contractive map.

    Theorem 5.1. Let T:C(X)C(X) be a Ćirić contractive map. Then, we have the following.

    1) There exists a unique solution to the Bellman equation Tv=v.

    2) The solution can be found by iterating the relation vk+1=Tvk.

    Remark 5.1. The unique solution of the Bellman equation here is the optimal cost function, which is the maximum of the utility.

    Remark 5.2. Unlike the Banach contraction operator, the Ćirić operator is not necessarily continuous.

    Stochastic shortest path problem

    Next, we present an example illustrating our results.

    Let

    $$Tv(k)=\begin{cases}\max\limits_{0\le y\le f(k)}\left\{U(f(k)-y)+\frac{1}{4}v(y)\right\} & \text{for } v\in S_1,\\[2mm] \max\limits_{0\le y\le f(k)}\left\{U(f(k)-y)+\frac{1}{5}v(y)\right\} & \text{for } v\in S_2,\end{cases} \tag{5.1}$$

    where

    $$S_1=\{v\in C(X): v(x)\in[0,1]\},\qquad S_2=\{v\in C(X): v(x)\in(1,2]\}.$$

    A cost $g(x,u)$, with $g:X\times U\to\mathbb{R}$, is incurred when control $u\in U(x)$ is selected at state $x$.

    Let $\nu(x)=1$ for all $x\in X$. Given an arbitrary $v_0\in B(X)$, since $H(\cdot,\cdot,\cdot)$ is a Ćirić contractive map, it is clear that the Bellman operator defined by

    $$Tv(k)=\begin{cases}\max\limits_{0\le y\le f(k)}\left\{U(f(k)-y)+\frac{1}{4}v(y)\right\} & \text{for } v\in S_1,\\[2mm] \max\limits_{0\le y\le f(k)}\left\{U(f(k)-y)+\frac{1}{5}v(y)\right\} & \text{for } v\in S_2\end{cases} \tag{5.2}$$

    is not a Banach contraction, but it is a Ćirić contractive map.

    If, as in [37], we have

    $$Tv(k)=\max_{0\le y\le f(k)}\{U[f(k)-y]+\beta v(y)\}, \tag{5.3}$$

    where $B_d(K_t)$ is the set of all real-valued bounded functions on $K_t$, we define a norm on $B_d(K_t)$ by $\|f\|=\sup_{t\in[0,\infty)}|f(k_t)|$ for all $f\in B_d(K_t)$.

    Then, $(B_d(K_t),\|\cdot\|)$ forms a Banach space equipped with the metric defined by $d(f_1,f_2):=\sup_{t\in[0,\infty)}|f_1(k_t)-f_2(k_t)|$ for all $f_1,f_2\in B_d(K_t)$.

    We are going to analyze the existence of the solution of the functional equation [46] by using the concept of the extended interpolative Reich-Rus type $\psi F$-contraction. For this, we use an operator $T:B_d(K_t)\to B_d(K_t)$ defined by

    $$Tv(k)=\max_{0\le y\le f(k)}\{U[f(k)-y]+\beta v(y)\}, \tag{5.4}$$

    for all $v\in B_d(K_t)$.

    Obviously, in the cases in which $U$ and $v$ are bounded, $T$ becomes well-defined.

    Theorem 5.2. Let $T:B_d(K_t)\to B_d(K_t)$ be the operator defined by (5.4), and suppose the following properties hold:

    1) $U$ and $v$ are continuous and bounded.

    2) For all $f_1,f_2\in B_d(K_t)\setminus Fix(T)$,

    $$|U[f_1(k)-y]-U[f_2(k)-y]|\le\frac{1}{e}\sqrt[3]{d(f_1,f_2)\,d(f_1,Tf_1)\,d(f_2,Tf_2)}. \tag{5.5}$$

    Then, Eq (5.3) has a bounded solution.

    Proof. Let $\epsilon>0$ and $f_1,f_2\in B_d(K_t)\setminus Fix(T)$. Then, there exist $k_1,k_2$ such that

    $$U[f_1(k_1)-y]+\beta v(y)>Tf_1(k)-\epsilon, \tag{5.6}$$
    $$U[f_2(k_2)-y]+\beta v(y)>Tf_2(k)-\epsilon, \tag{5.7}$$
    $$Tf_1(k)\ge U[f_2(k_2)-y]+\beta v(y), \tag{5.8}$$
    $$Tf_2(k)\ge U[f_1(k_1)-y]+\beta v(y). \tag{5.9}$$

    From (5.6) and (5.7) it results that

    $$Tf_1(k)-Tf_2(k)<U[f_1(k_1)-y]+\beta v(y)-U[f_2(k_2)-y]-\beta v(y)+\varepsilon\le|U[f_1(k_1)-y]-U[f_2(k_2)-y]|+\varepsilon\le\frac{1}{e}\sqrt[3]{d(f_1,f_2)\,d(f_1,Tf_1)\,d(f_2,Tf_2)}+\varepsilon,\quad\text{for all } k. \tag{5.10}$$

    Analogously, (5.8) and (5.9) imply

    $$Tf_2(k)-Tf_1(k)<\frac{1}{e}\sqrt[3]{d(f_1,f_2)\,d(f_1,Tf_1)\,d(f_2,Tf_2)}+\varepsilon,\quad\text{for all } k. \tag{5.11}$$

    From (5.10) and (5.11) it results that

    $$|Tf_1(k)-Tf_2(k)|<\frac{1}{e}\sqrt[3]{d(f_1,f_2)\,d(f_1,Tf_1)\,d(f_2,Tf_2)}+\varepsilon,\quad\text{for all } k. \tag{5.12}$$

    Since $\varepsilon>0$ is arbitrary, we have

    $$|Tf_1(k)-Tf_2(k)|\le\frac{1}{e}\sqrt[3]{d(f_1,f_2)\,d(f_1,Tf_1)\,d(f_2,Tf_2)},$$

    for all $k$, and thus,

    $$d(Tf_1,Tf_2)\le\frac{1}{e}\sqrt[3]{d(f_1,f_2)\,d(f_1,Tf_1)\,d(f_2,Tf_2)}.$$

    We apply the logarithm function, and we obtain

    $$\ln d(Tf_1,Tf_2)\le -1+\tfrac{1}{3}\ln d(f_1,f_2)+\tfrac{1}{3}\ln d(f_1,Tf_1)+\tfrac{1}{3}\ln d(f_2,Tf_2)=\tfrac{1}{3}[\ln d(f_1,f_2)-1]+\tfrac{1}{3}[\ln d(f_1,Tf_1)-1]+\tfrac{1}{3}[\ln d(f_2,Tf_2)-1]. \tag{5.13}$$

    The relation (5.13) is nothing but the extended interpolative Reich-Rus type $\psi F$-contraction for the operator $T$, with $F(x)=\ln x$, $\psi(x)=x-1$, $a=b=c=\tfrac{1}{3}$ and $s=1$, because: A mapping $T:B_d(K_t)\to B_d(K_t)$ ($B_d(K_t)$ being a $b$-metric space) is an extended interpolative Reich-Rus type $\psi F$-contraction if there exist $F\in\mathcal{F}$ and $\psi\in\Psi$ such that, for all $f_1,f_2\in B_d(K_t)\setminus Fix(T)$ with $Tf_1\ne Tf_2$,

    $$F(d(Tf_1,Tf_2))\le a\,\psi(F(d(f_1,f_2)))+b\,\psi(F(d(f_1,Tf_1)))+c\,\psi(F(d(f_2,Tf_2))),$$

    for some constants $a,b,c\in[0,1]$ with $0<a+b+c\le 1$,

    where $\mathcal{F}$ denotes the class of all functions $F:(0,\infty)\to\mathbb{R}$ for which the following hold:

    (F1) $\lim_{n\to\infty}x_n=0\Leftrightarrow\lim_{n\to\infty}F(x_n)=-\infty$, for all sequences $\{x_n\}\subset(0,\infty)$.

    (F2) There exists $k\in(0,1)$ such that $x^{k}F(x)\to 0$ as $x\to 0^{+}$.

    $\Psi=\{\psi:\mathbb{R}\to\mathbb{R}\mid\psi$ is monotone increasing and $\psi^{n}(t)\to-\infty$ as $n\to\infty$, for all $t\in\mathbb{R}\}$; $\psi^{n}$ is the $n$-th composition of $\psi$ [45].

    We have $\psi^{n}(x)=x-n$, and for $f\in B_d(K_t)\setminus Fix(T)$, $\beta=d(f,Tf)=\sup_{t\in[0,\infty)}|f(t)-Tf(t)|\in\mathbb{R}$. Then, for all $p\in(0,1)$, the series $\sum_{n}\frac{1}{|\psi^{n}(F(\beta))|^{1/p}}$ is convergent. Hence, by Theorem 3.1 from [46], $T$ has a fixed point in $B_d(K_t)$, which implies that Eq (5.3) has a bounded solution.

    Example 6.1. We consider the functions $f:\mathbb{R}\to\mathbb{R}_{+}$, $U:\mathbb{R}_{+}\to\mathbb{R}$, $U(t)=\frac{t+1}{t+2}$, $v:\mathbb{R}\to\mathbb{R}$, $v(y)=\frac{2y+1}{y+1}$, the parameter $\beta\in(0,1)$ and a point $x\in\mathbb{R}$. Define the function $h_{\beta,x}:[0,f(x)]\to\mathbb{R}$,

    $$h_{\beta,x}(y)=U[f(x)-y]+\beta v(y)=\frac{f(x)-y+1}{f(x)-y+2}+\beta\,\frac{2y+1}{y+1}=1-\frac{1}{f(x)-y+2}+\beta\left(2-\frac{1}{y+1}\right).$$

    After a simple calculation we find

    $$h'_{\beta,x}(y)=-\frac{1}{(f(x)-y+2)^{2}}+\frac{\beta}{(y+1)^{2}}.$$

    It follows that $h'_{\beta,x}(y)=0$ if and only if

    $$y=y_0=\frac{f(x)+2-\frac{1}{\sqrt{\beta}}}{\frac{1}{\sqrt{\beta}}+1}.$$

    If $y_0\in[0,f(x)]$, then $h'_{\beta,x}(y)\ge 0$ for $y\in[0,y_0]$, and $h'_{\beta,x}(y)\le 0$ for $y\in[y_0,f(x)]$. Consequently, $y_0$ is the unique maximum point of the function $h_{\beta,x}$ on the interval $[0,f(x)]$. Moreover, the maximum value of the function $h_{\beta,x}$ on the interval $[0,f(x)]$ is given by

    $$h_0=h_{\beta,x}(y_0)=1-\frac{1}{f(x)-y_0+2}+\beta\left(2-\frac{1}{y_0+1}\right)=1+2\beta-\frac{(1+\sqrt{\beta})^{2}}{f(x)+3}.$$

    Considering the particular case $\beta=\frac{1}{4}$, we find that $y_0=\frac{f(x)}{3}\in[0,f(x)]$ is the unique maximum point of the function $h_{\frac{1}{4},x}$ on the interval $[0,f(x)]$. The maximum value of the function $h_{\frac{1}{4},x}$ on the interval $[0,f(x)]$ is

    $$h_0=h_{\frac{1}{4},x}(y_0)=\frac{3}{4}\left(1+\frac{f(x)}{f(x)+3}\right).$$

    ALGORITHM

    Input parameters

    - The analytical expression of the function $f:\mathbb{R}\to\mathbb{R}_{+}$;

    - The analytical expression of the function $U:\mathbb{R}_{+}\to\mathbb{R}$;

    - The analytical expression of the function $v:\mathbb{R}\to\mathbb{R}$;

    - The parameter $\beta\in(0,1)$;

    - The point $x\in\mathbb{R}$ for which we calculate the solution of the optimization problem

    $$\max_{y\in[0,f(x)]}\{U[f(x)-y]+\beta v(y)\}=-\min_{y\in[0,f(x)]}\{-U[f(x)-y]-\beta v(y)\};$$

    - The error for the computation of the maximum point of the optimization problem, $\varepsilon>0$.

    Output parameters

    xmax - the maximum point of the optimization problem, calculated with the error ε;

    gmax - the maximum value of the function to be optimized;

    n - the number of iterations.

    Pseudocode

    Define the function $g_{\beta,x}:[0,f(x)]\to\mathbb{R}$,

    $$g_{\beta,x}(y)=-U[f(x)-y]-\beta v(y)$$

    A = 0
    B = f(x)
    F_0 = 1
    F_1 = 1
    n = 1
    While F_n <= (B - A)/ε do
        n = n + 1
        F_n = F_{n-1} + F_{n-2}
    End While
    a_1 = A
    b_1 = B
    λ_1 = a_1 + (F_{n-2}/F_n)(b_1 - a_1)
    μ_1 = a_1 + (F_{n-1}/F_n)(b_1 - a_1)
    For k = 1, 2, ..., n-1 do
        If g_{β,x}(λ_k) < g_{β,x}(μ_k) then
            a_{k+1} = a_k
            b_{k+1} = μ_k
            λ_{k+1} = a_{k+1} + (F_{n-k-2}/F_{n-k})(b_{k+1} - a_{k+1})
            μ_{k+1} = λ_k
        else
            a_{k+1} = λ_k
            b_{k+1} = b_k
            λ_{k+1} = μ_k
            μ_{k+1} = a_{k+1} + (F_{n-k-1}/F_{n-k})(b_{k+1} - a_{k+1})
        End If
    End For
    xmax = (a_n + b_n)/2
    gmax = -g_{β,x}(xmax)

    PROCEDURE IN C++

    #include <iostream>
    #include <cmath>

    using namespace std;

    double f(double x)
    {
        return pow(2, -fabs(x));
    }

    double U(double t)
    {
        return (t+1)/(t+2);
    }

    double v(double y)
    {
        return (2*y+1)/(y+1);
    }

    // Negated objective: minimizing g is equivalent to maximizing U(f(x)-y)+beta*v(y).
    double g(double beta, double x, double y)
    {
        return -U(f(x)-y)-beta*v(y);
    }

    void InputParameters(double & beta, double & x, double & eps)
    {
        cout << "beta="; cin >> beta;
        cout << "x=";    cin >> x;
        cout << "eps=";  cin >> eps;
    }

    void OptimizationMethod(double beta, double x, double eps, int & n, double F[100], double a[100], double b[100], double lambda[100], double miu[100], double & ymax, double & gmax)
    {
        double A, B;
        int k;
        A = 0;
        B = f(x);
        F[0] = 1;
        F[1] = 1;
        n = 1;
        // Find the smallest n with F[n] > (B-A)/eps (Fibonacci search).
        while (F[n] <= (B-A)/eps)
        {
            n++;
            F[n] = F[n-1] + F[n-2];
        }
        a[1] = A;
        b[1] = B;
        lambda[1] = a[1] + F[n-2]*(b[1]-a[1])/F[n];
        miu[1]    = a[1] + F[n-1]*(b[1]-a[1])/F[n];
        for (k = 1; k <= n-1; k++)
            if (g(beta, x, lambda[k]) < g(beta, x, miu[k]))
            {
                a[k+1] = a[k];
                b[k+1] = miu[k];
                lambda[k+1] = a[k+1] + F[n-k-2]*(b[k+1]-a[k+1])/F[n-k];
                miu[k+1] = lambda[k];
            }
            else
            {
                a[k+1] = lambda[k];
                b[k+1] = b[k];
                lambda[k+1] = miu[k];
                miu[k+1] = a[k+1] + F[n-k-1]*(b[k+1]-a[k+1])/F[n-k];
            }
        ymax = (a[n]+b[n])/2;
        gmax = -g(beta, x, ymax);
    }

    void OutputParameters(double beta, double x, int n, double F[100], double a[100], double b[100], double lambda[100], double miu[100], double ymax, double gmax)
    {
        int k;
        cout << "The number of iterations is n=" << n << endl;
        cout << "k g(beta, x, lambda[k]) g(beta, x, miu[k]) a[k] b[k] lambda[k] miu[k]" << endl;
        cout.precision(7);
        for (k = 1; k <= n; k++)
            cout << k << " " << g(beta, x, lambda[k]) << " " << g(beta, x, miu[k]) << " " << a[k] << " "
                 << b[k] << " " << lambda[k] << " " << miu[k] << endl;
        cout << "The maximum point of the function is ymax=" << ymax << endl;
        cout << "The maximum value of the function is gmax=" << gmax << endl;
    }

    int main()
    {
        int n;
        double beta, x, eps, F[100], a[100], b[100], lambda[100], miu[100], ymax, gmax;
        InputParameters(beta, x, eps);
        OptimizationMethod(beta, x, eps, n, F, a, b, lambda, miu, ymax, gmax);
        OutputParameters(beta, x, n, F, a, b, lambda, miu, ymax, gmax);
        return 0;
    }

    TEST DATA

    $f:\mathbb{R}\to\mathbb{R}_{+}$, $f(x)=2^{-|x|}$

    $U:\mathbb{R}_{+}\to\mathbb{R}$, $U(t)=\dfrac{t+1}{t+2}$

    $v:\mathbb{R}\to\mathbb{R}$, $v(y)=\dfrac{2y+1}{y+1}$

    beta = 0.25

    x = -0.25

    eps = 0.00001

    RESULTS

    The number of iterations is n = 25

    The maximum point of the function is ymax = 0.2803

    The maximum value of the function is gmax = 0.9141993

    The outputs and outcomes are presented in Table 1 and also in a graphical representation in Figure 1:

    Table 1.  Outputs and outcomes.
    k g(beta,x,lambda[k]) g(beta,x,miu[k]) a[k] b[k] lambda[k] miu[k]
    1 -1.249433 -1.249433 -0.6 -0.4 -0.5238095 -0.4761905
    2 -0.9129618 -0.913905 0 0.5197026 0.1985087 0.3211938
    3 -0.913905 -0.9118618 0.1985087 0.5197026 0.3211938 0.3970174
    4 -0.9141929 -0.913905 0.1985087 0.3970174 0.2743323 0.3211938
    5 -0.9139782 -0.9141929 0.1985087 0.3211938 0.2453703 0.2743323
    6 -0.9141929 -0.9141739 0.2453703 0.3211938 0.2743323 0.2922318
    7 -0.9141471 -0.9141929 0.2453703 0.2922318 0.2632698 0.2743323
    8 -0.9141929 -0.9141991 0.2632698 0.2922318 0.2743323 0.2811693
    9 -0.9141991 -0.9141946 0.2743323 0.2922318 0.2811693 0.2853948
    10 -0.9141987 -0.9141991 0.2743323 0.2853948 0.2785578 0.2811693
    11 -0.9141991 -0.9141981 0.2785578 0.2853948 0.2811693 0.2827833
    12 -0.9141992 -0.9141991 0.2785578 0.2827833 0.2801718 0.2811693
    13 -0.9141992 -0.9141992 0.2785578 0.2811693 0.2795553 0.2801718
    14 -0.9141992 -0.9141992 0.2795553 0.2811693 0.2801718 0.2805528
    15 -0.9141992 -0.9141992 0.2795553 0.2805528 0.2799363 0.2801718
    16 -0.9141992 -0.9141993 0.2799363 0.2805528 0.2801718 0.2803173
    17 -0.9141993 -0.9141992 0.2801718 0.2805528 0.2803173 0.2804073
    18 -0.9141992 -0.9141993 0.2801718 0.2804073 0.2802619 0.2803173
    19 -0.9141993 -0.9141992 0.2802619 0.2804073 0.2803173 0.2803519
    20 -0.9141993 -0.9141993 0.2802619 0.2803519 0.2802965 0.2803173
    21 -0.9141993 -0.9141993 0.2802619 0.2803173 0.2802826 0.2802965
    22 -0.9141993 -0.9141993 0.2802826 0.2803173 0.2802965 0.2803034
    23 -0.9141993 -0.9141993 0.2802826 0.2803034 0.2802896 0.2802965
    24 -0.9141993 -0.9141993 0.2802896 0.2803034 0.2802965 0.2802965
    25 -0.9141993 -0.9141993 0.2802965 0.2803034 0.2802965 0.2803034

    Figure 1.  A graphical representation in Microsoft Excel.

    In this article, we approached an economic/financial problem, the stationary infinite horizon problem, under a much wider class of contractive operators by applying methods of fixed point theory. By resorting to the Ćirić operator and the Reich-Rus type ψF-contraction, we proved the convergence, existence, and uniqueness of the optimal cost function of the infinite horizon problem in a Banach space. The numerical simulation showed that the stationary infinite horizon problem can be solved in particular cases with applications in the financial/economic field.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    Project financed by Lucian Blaga University of Sibiu through the research grant LBUS-IRG-2023.

    The first author acknowledges the support of FCT for the grant 2021.07608.BD, the ARISE Associated Laboratory LA/P/0112/2020, and the R&D Unit SYSTEC - Base - UIDB/00147/2020 and Programmatic - UIDP/00147/2020 funds.

    The authors declare that they have no conflict of interest. The authors contributed equally to the writing of this manuscript. All authors have approved the final version of the manuscript.



    [1] D. P. Bertsekas, Dynamic programming and optimal control, Belmont: Athena Scientific, 1995.
    [2] D. Acemoglu, Introduction to modern economic growth, 2007. Available from: https://www.theigc.org/sites/default/files/2016/06/acemoglu-2007.pdf.
    [3] Y. Guo, X. B. Shu, F. Xu, C. Yang, HJB equation for optimal control system with random impulses, Optimization, 2022. https://doi.org/10.1080/02331934.2022.2154607
    [4] F. Xu, Y. Lai, X. B. Shu, Chaos in integer order and fractional order financial systems and their synchronization, Chaos Soliton. Fract., 117 (2018), 125–136. https://doi.org/10.1016/j.chaos.2018.10.005 doi: 10.1016/j.chaos.2018.10.005
    [5] D. P. Bertsekas, Dynamic programming and optimal control, Belmont: Athena Scientific, 2005. Available from: http://athenasc.com/dpbook.html.
    [6] M. J. Miranda, P. L. Fackler, Applied computational economics and finance, The MIT Press, 2004. Available from: https://mitpress.mit.edu/9780262633093/applied-computational-economics-and-finance/.
    [7] R. Torres, Dynamic programming (ECO 10401 - 001), 2014. Available from: http://ciep.itam.mx/rtorres/progdin/.
    [8] N. L. Stokey, R. E. Lucas Jr., E. C. Prescott, Recursive methods in economic dynamics, Cambridge: Harvard University Press, 1989. https://doi.org/10.2307/j.ctvjnrt76
    [9] K. J. Arrow, T. Harris, J. Marschak, Optimal inventory policy, Econometrica, 19 (1951), 250–272. Available from: https://www.or.mist.i.u-tokyo.ac.jp/takeda/FreshmanCourse/ArrowHarrisMarschak.pdf.
    [10] R. E. Lucas Jr., E. C. Prescott, Investment under uncertainty, Econometrica, 39 (1971), 659–681. https://doi.org/10.2307/1909571 doi: 10.2307/1909571
    [11] W. A. Brock, L. J. Mirman, Optimal economic growth and uncertainty: The discounted case, J. Econ. Theory, 4 (1972), 479–513. https://doi.org/10.1016/0022-0531(72)90135-4 doi: 10.1016/0022-0531(72)90135-4
    [12] R. E. Lucas, Asset prices in an exchange economy, Econometrica, 46 (1978), 1429–1445. Available from: http://www.jstor.org/stable/1913837.
    [13] W. A. Brock, The economics of information and uncertainty, Chapter title: Asset prices in a production economy, Chicago: University of Chicago Press, 1982. Available from: https://www.nber.org/system/files/chapters/c4431/c4431.pdf.
    [14] F. E. Kydland, E. C. Prescott, Time to build and aggregate fluctuations, Econometrica, 50 (1982), 1345–1370. Available from: http://www.jstor.org/stable/1913386?origin=JSTOR-pdf.
    [15] I. P. Androulakis, Dynamic programming: Infinite horizon problems overview, In: C. Floudas, P. Pardalos (eds), Encyclopedia of optimization, Boston: Springer, 2008. https://doi.org/10.1007/978-0-387-74759-0-148
    [16] T. Kamihigashi, Existence and uniqueness of a fixed point for the Bellman operator in deterministic dynamic programming, RIEB Discussion Paper, 2011.
    [17] M. G. Bellemare, G. Ostrovski, A. Guez, P. Thomas, R. Munos, Increasing the action gap: New operators for reinforcement learning, Proc. AAAI Conf. Artif. Intell., 30 (2016). https://doi.org/10.48550/arXiv.1512.04860
    [18] M. G. Bellemare, W. Dabney, R. Munos, A distributional perspective on reinforcement learning, Int. Conf. Machine Learn., 2017,449–458. https://doi.org/10.48550/arXiv.1707.06887
    [19] M. G. Bellemare, W. Dabney, M. Rowland, Distributional reinforcement learning, MIT Press, 2023. Available from: http://www.distributional-rl.org.
    [20] R. Bellman, Dynamic programming, Science, 153 (1966), 34–37.
    [21] J. B. Serrano, S. Curi, A. Krause, G. Neu, Logistic q-learning, Int. Conf. Artif. Intell. Statis., 2021, 3610–3618. Available from: https://proceedings.mlr.press/v130/bas-serrano21a.html.
    [22] J. Filar, K. Vrieze, Competitive Markov decision processes, Springer Science & Business Media, 2012. Available from: https://link.springer.com/book/10.1007/978-1-4612-4054-9.
    [23] M. L. Puterman, Markov decision processes: Discrete stochastic dynamic programming, John Wiley & Sons, 2014.
    [24] J. M. Alden, R. L. Smith, Rolling horizon procedures in nonhomogeneous Markov decision processes, Oper. Res., 40 (1984), 183–194. http://dx.doi.org/10.1287/opre.40.3.S183 doi: 10.1287/opre.40.3.S183
    [25] I. E. Schochetman, R. L. Smith, Infinite horizon optimization, Math. Oper. Res., 14 (1989), 559–574. http://dx.doi.org/10.1287/moor.14.3.559 doi: 10.1287/moor.14.3.559
    [26] A. Charnes, W. W. Cooper, B. Mellon, A model for optimizing production by reference to cost surrogates, Econometrica, 23 (1955), 307–323. http://dx.doi.org/10.2307/1910387 doi: 10.2307/1910387
    [27] F. Modigliani, F. E. Hohn, Production planning over time and the nature of the expectation and planning horizon, Econometrica, 23 (1955), 46–66. http://dx.doi.org/10.2307/1905580 doi: 10.2307/1905580
    [28] S. M. Johnson, Sequential production planning over time at minimum cost, Manag. Sci., 3 (1957), 435–437. http://dx.doi.org/10.1287/mnsc.3.4.435 doi: 10.1287/mnsc.3.4.435
    [29] S. Chand, S. P. Sethi, J. M. Proth, Existence of forecast horizons in undiscounted discrete-time lot size models, Oper. Res., 38 (1990), 884–892. http://dx.doi.org/10.1287/opre.38.5.884 doi: 10.1287/opre.38.5.884
    [30] S. P. Sethi, G. L. Thompson, Optimal control theory, New York: Springer, 2005. http://dx.doi.org/10.1007/978-1-4757-6097-2
    [31] E. V. Denardo, Dynamic programming: Models and applications, Mineola: Dover, 2003.
    [32] T. Cheevaprawatdomrong, R. L. Smith, Infinite horizon production scheduling in time varying systems under stochastic demand, Oper. Res., 52 (2004), 105–115. http://dx.doi.org/10.1287/opre.1030.0080 doi: 10.1287/opre.1030.0080
    [33] C. Bes, S. P. Sethi, Concepts of forecast and decision horizons: Applications to dynamic stochastic optimization problems, Math. Oper. Res., 13 (1988), 295–310. http://dx.doi.org/10.1287/moor.13.2.295 doi: 10.1287/moor.13.2.295
    [34] S. P. Sethi, A note on a planning horizon model of cash management, J. Financ. Quant. Anal., 6 (1971), 659–665. http://dx.doi.org/10.2307/2330136 doi: 10.2307/2330136
    [35] A. Ghate, R. L. Smith, Optimal backlogging over an infinite horizon under time-varying convex production and inventory costs, Manuf. Serv. Op. Manag., 11 (2009), 362–368. http://dx.doi.org/10.1287/msom.1080.0218 doi: 10.1287/msom.1080.0218
    [36] N. L. Stokey, Recursive methods in economic dynamics, Harvard University Press, 1989. Available from: https://www.hup.harvard.edu/catalog.php?isbn=9780674750968.
    [37] X. Yang, Three important applications of mathematics in financial mathematics, Am. J. Indust. Bus. Manage., 7 (2017), 1096–1100. https://doi.org/10.4236/ajibm.2017.79077 doi: 10.4236/ajibm.2017.79077
    [38] V. Brătian, A. Bucur, C. Oprean, Finanţe cantitative: Evaluarea valorilor mobiliare şi gestiunea portofoliului, Sibiu: Lucian Blaga Publishing House, 2016. Available from: http://biblioteca.ulbsibiu.ro.
    [39] C. O. Stan, C. Tănăsescu, A. Bucur, A new proposal for efficiency quantification of capital markets in the context of complex nonlinear dynamics and chaos, Econ. Res.-Ekon. Istraž., 30 (2017), 1669–1693. http://dx.doi.org/10.1080/1331677X.2017.1383172
    [40] V. Brătian, A. Bucur, C. Oprean, C. Tănăsescu, Discretionary vs nondiscretionary in fiscal mechanism-nonautomatic fiscal stabilities, Econ. Res.-Ekon. Istraž., 29 (2017), 1–17. http://dx.doi.org/10.1080/1331677X.2015.1106330
    [41] D. E. Kirk, Optimal control theory: An introduction, New Jersey: Prentice-Hall, Englewood Cliffs, 1998. Available from: https://books.google.ro.
    [42] M. Sniedovich, Dynamic programming: Foundations and principles, CRC Press, 2010. https://doi.org/10.1201/EBK0824740993
    [43] R. Bellman, The theory of dynamic programming, Santa Monica: Rand Corporation Report, 1954. Available from: https://www.rand.org/content/dam/rand/pubs/papers/2008/P550.pdf.
    [44] L. B. Ćirić, A generalization of Banach's contraction principle, P. Am. Math. Soc., 45 (1974), 267–273. Available from: https://www.ams.org/journals/proc/1974-045-02/S0002-9939-1974-0356011-2/S0002-9939-1974-0356011-2.pdf.
    [45] N. A. Secelean, D. Wardowski, ψF-contraction: Not necessarily nonexpansive Picard operators, Results Math., 70 (2016), 415–431. http://dx.doi.org/10.1007/s00025-016-0570-7 doi: 10.1007/s00025-016-0570-7
    [46] S. Panya, K. Roy, M. Soha, Fixed point for a class of extended interpolative ψF-contraction maps over a b-metric space and its applications to dynamical programming, U.P.B. Sci. Bull., Series A, 83 (2021), 59–70. Available from: https://www.scientificbulletin.upb.ro.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)