
This paper aims to give generating functions for a new family of polynomials, called parametric types of the Apostol Bernoulli-Fibonacci, Apostol Euler-Fibonacci, and Apostol Genocchi-Fibonacci polynomials, by using Golden calculus. Numerous properties of these polynomials and their generating functions are investigated. These generating functions generalize some well-known generating functions for special polynomials, such as the Apostol Bernoulli-Fibonacci, Apostol Euler-Fibonacci, and Apostol Genocchi-Fibonacci polynomials. Using the Golden differential operator technique and the functional equation method for generating functions, we present some properties of these newly established polynomials.
Citation: Can Kızılateş, Halit Öztürk. On parametric types of Apostol Bernoulli-Fibonacci, Apostol Euler-Fibonacci, and Apostol Genocchi-Fibonacci polynomials via Golden calculus[J]. AIMS Mathematics, 2023, 8(4): 8386-8402. doi: 10.3934/math.2023423
Reinsurance is an effective risk management tool for an insurer to mitigate underwriting risk by transferring part of the risk exposure to a reinsurer. Starting from [1,2], the study of optimal reinsurance has remained a fascinating topic in actuarial science. Most of the existing literature on optimal reinsurance takes an insurer's point of view. For example, by maximizing the expected concave utility function of an insurer's wealth, Arrow [3] showed that the optimal reinsurance for an insurer is a stop-loss reinsurance. The result has been extended to different settings (see, e.g., [4,5] and references therein). It is well known that the optimal reinsurance for an insurer, which minimizes the variance of the insurer's loss, is also a stop-loss reinsurance (see [6]). However, Vajda [7] showed that the optimal reinsurance for a reinsurer, which minimizes the variance of the reinsurer's loss with a fixed net reinsurance premium, is a quota-share reinsurance among a class of ceded loss functions that includes stop-loss reinsurance. Kaluszka and Okolewski [8] showed that if an insurer wants to maximize his expected utility under the maximal possible claim premium principle, the optimal form of reinsurance for the insurer is a limited stop-loss reinsurance. In recent years, Cai et al. [9,10] introduced two classes of optimal reinsurance models by minimizing the value-at-risk (VaR) and the conditional tail expectation (CTE) of the insurer's total risk exposure. Cai et al. [10] proved that, depending on the confidence level of the risk measure, the optimal reinsurance for an insurer, which minimizes the VaR and CTE of the total risk of the insurer, can be a stop-loss reinsurance, a quota-share reinsurance, or a change-loss reinsurance under the expected value principle and among the increasing convex ceded loss functions. Recent references on VaR-minimization and CTE-minimization reinsurance models can be found in [11,12,13,14,15,16,17] and references therein.
However, a reinsurance contract involves two parties, an insurer and a reinsurer, whose interests conflict. An optimal reinsurance contract for an insurer may not be optimal for a reinsurer, and it might even be unacceptable to a reinsurer, as pointed out by Borch [18]. Therefore, an interesting question in optimal reinsurance is to design a reinsurance contract that considers the interests of both the insurer and the reinsurer. Borch [1] first discussed the optimal quota-share retention and stop-loss retention that maximize the product of the expected utility functions of the two parties' wealth. Cai et al. derived the optimal reinsurance contracts that maximize the joint survival probability and joint profitable probability of the two parties, and gave sufficient conditions for optimal reinsurance contracts within a wide class of reinsurance policies and under a general reinsurance premium principle; see [19,20]. Cai et al. [21] studied the optimal reinsurance strategy based on minimizing a convex combination of the VaRs of the insurer and the reinsurer under two types of constraints. Lo [22] discussed generalized problems of [21] by using the Neyman-Pearson approach. Based on the optimal reinsurance strategy of [21], Jiang et al. [23] proved that the optimal reinsurance strategy is a Pareto-optimal reinsurance policy and gave optimal reinsurance strategies using a geometric method. Cai et al. [24] studied Pareto optimality of reinsurance arrangements under general model settings and obtained explicit forms of the Pareto-optimal reinsurance contracts under the TVaR risk measure and the expected value premium principle. By a geometric approach, Fang et al. [25] studied Pareto-optimal reinsurance policies under general premium principles and gave explicit parameters of the optimal ceded loss functions under the Dutch premium principle and Wang's premium principle.
Lo and Tang [26] characterized the set of Pareto-optimal reinsurance policies analytically and visualized the insurer-reinsurer trade-off structure geometrically. Huang and Yin [27] studied two classes of optimal reinsurance models from perspectives of both insurers and reinsurers by minimizing their convex combination where the risk is measured by a distortion risk measure and the premium is given by a distortion premium principle.
In this paper, we study optimal reinsurance models by minimizing the insurer's and the reinsurer's total costs under a loss-function criterion, assuming that the reinsurance premium principle satisfies risk loading and preserves the stop-loss order. The loss function is defined through the joint VaR based on the binary lower-orthant value-at-risk and the binary upper-orthant value-at-risk proposed by Embrechts and Puccetti [28]. Methodologically, we determine the optimal reinsurance forms using the geometric approach of [11] over three ceded loss function sets: the class of increasing convex ceded loss functions, the class of ceded loss functions for which both the ceded and retained loss functions are increasing, and the class of increasing concave ceded loss functions*.
* Throughout this paper, the terms "increasing function" and "decreasing function" mean "non-decreasing function" and "non-increasing function", respectively.
The rest of the paper is organized as follows. In Section 2, we give definitions and propose an optimal reinsurance problem that takes into consideration the interests of both an insurer and a reinsurer. In Section 3, we derive optimal reinsurance forms over three ceded loss function sets by the geometric approach of [11], assuming that the reinsurance premium principles satisfy risk loading and stop-loss ordering preserving. In Section 4 and Section 5, we determine the corresponding optimal parameters under expectation premium principle and Dutch premium principle respectively. In Section 6, we provide four numerical examples. Conclusions are given in Section 7.
Let X be the loss or claim initially assumed by an insurer in a fixed time period. We assume that X is a nonnegative random variable with distribution function F(x)=P{X≤x}, survival function S(x)=P{X≥x} and mean μ=E(X) (0<μ<∞). Under a reinsurance contract, a reinsurer covers part of the loss, say f(X) with 0≤f(X)≤X, and the insurer retains the rest of the loss, denoted by If(X)=X−f(X). The losses If(X) and f(X) are called the retained loss and the ceded loss, respectively. Since the reinsurer shares the risk X, the insurer pays an additional cost in the form of a reinsurance premium to the reinsurer. We denote by Πf(X) the reinsurance premium corresponding to a ceded loss function f(X). The total cost TfI of the insurer is composed of two components, the retained loss If(X) and the reinsurance premium Πf(X), that is
TfI=If(X)+Πf(X), | (2.1) |
and the total cost of the reinsurer is
TfR=f(X). | (2.2) |
For an individual company, an important issue is to determine the maximum aggregate loss that can occur with some given probability; value-at-risk (VaR) serves this purpose.
Definition 2.1. For 0<α<1, the VaR of a non-negative random variable X with distribution function F(x)=P{X≤x} at confidence level α is defined as
VaRX(α)=inf{x∈R:F(x)≥α}=F−1(α), | (2.3) |
where F−1 is the generalized inverse function of the distribution function F(x).
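As a quick illustration of Definition 2.1, the generalized inverse can be computed for an empirical distribution function. The following is a minimal Python sketch; the sample and confidence levels are illustrative, not from the paper.

```python
import math

def var_empirical(sample, alpha):
    """Empirical VaR at level alpha: the generalized inverse
    inf{x : F_n(x) >= alpha} of the empirical distribution function F_n."""
    xs = sorted(sample)
    n = len(xs)
    # smallest index k with F_n(xs[k]) = (k + 1) / n >= alpha
    k = max(math.ceil(alpha * n) - 1, 0)
    return xs[k]

sample = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
print(var_empirical(sample, 0.90))  # -> 9.0
```

Here F_n(9.0)=0.9≥0.9 while F_n(8.0)=0.8<0.9, so the infimum in (2.3) is attained at 9.0.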
The VaR defined by (2.3) is the maximum loss which is not exceeded at a given probability α. We list several properties of the VaR or the generalized inverse function F−1.
Proposition 2.1. For any α∈(0,1) and any nonnegative random variable X with distribution function F(x), the following properties hold:
(1) F(F−1(α))≥α.
(2) F−1(F(x))≤x for x≥0.
(3) If h is an increasing and left-continuous function, then VaRh(X)(α)=h(VaRX(α)).
Proof. Properties (1) and (2) follow immediately from Lemma 2.13 of [29] and the definition of the generalized inverse function, while for property (3), see the proof of Theorem 1 in [30].
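Property (3) is easy to verify numerically: for a strictly increasing continuous h, sorting commutes with h, so the empirical VaR of h(X) equals h applied to the empirical VaR of X. A small sketch in which the distribution, sample size and h are illustrative assumptions:

```python
import math
import random

def var_empirical(sample, alpha):
    """Generalized inverse of the empirical distribution function."""
    xs = sorted(sample)
    return xs[max(math.ceil(alpha * len(xs)) - 1, 0)]

random.seed(1)
xs = [random.expovariate(1.0) for _ in range(100001)]

h = lambda t: 2.0 * t + 1.0                    # increasing and continuous, so (3) applies
lhs = var_empirical([h(x) for x in xs], 0.9)   # VaR_{h(X)}(alpha)
rhs = h(var_empirical(xs, 0.9))                # h(VaR_X(alpha))
print(abs(lhs - rhs))  # -> 0.0 (exact: sorting commutes with an increasing h)
```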
In this paper, we assume that the initial loss X has a continuous and strictly increasing distribution function on (0,∞), with a possible mass at 0, and that α∈(F(0),1) to avoid trivial cases; then
F(F−1(α))=α. | (2.4) |
For the insurer or the reinsurer, Definition 2.1 can be used to determine the maximum aggregate cost which can occur with some given probability α. However, if the insurer and the reinsurer are considered as partners, then the total cost Tf is a two-dimensional random vector (TfI,TfR). In this case, Definition 2.1 does not make sense since, even for a one-to-one continuous distribution function, there are possibly infinitely many vectors (x,y)∈[0,∞)×[0,∞) at which Gf(x,y)=α, where
Gf(x,y)=P{TfI≤x, TfR≤y} |
is the distribution function of (TfI,TfR). Hence we use the definition of multivariate Value-at-Risk which is proposed by Embrechts and Puccetti (see [28]).
Definition 2.2. For α∈(0,1), the binary lower-orthant value-at-risk at confidence level α for the distribution function Gf(x,y) is the boundary of its α-level set, defined as
VaR_f(α):=∂{(x,y)∈R2+:Gf(x,y)≥α}. |
Analogously, the binary upper-orthant value-at-risk at confidence level α for the tail function ¯Gf(x,y) is defined as
¯VaRf(α):=∂{(x,y)∈R2+:¯Gf(x,y)≤1−α}, |
where
¯Gf(x,y)=P{TfI>x, TfR>y}. |
We now provide further analysis on the binary lower-orthant value-at-risk at confidence level α for the distribution function Gf(x,y) and the binary upper-orthant value-at-risk at confidence level α for the tail function ¯Gf(x,y) over the following three admissible sets of ceded loss functions:
F1≜{0⩽f(x)⩽x:f(x) is an increasing convex function }, | (2.5) |
F2≜{0⩽f(x)⩽x: both If(x) and f(x) are increasing functions }, | (2.6) |
F3≜{0⩽f(x)⩽x:f(x) is an increasing concave function }. | (2.7) |
In the set F2, the increasing condition on both the ceded and retained loss functions is interesting and important. Both the insurer and the reinsurer are obligated to pay more for a larger loss X, which potentially reduces moral hazard. In addition, in a reinsurance contract, sometimes in order to better protect the insurer, the loss proportion paid by the reinsurer is made to increase with the loss (see [7]). Mathematically, f(x)/x is assumed to be an increasing function. If we assume that f(x) is increasing and convex, then f(x)/x is an increasing function. On the other hand, under reinsurance policies with no upper limit on the indemnity, the reinsurer may face a heavy financial burden, especially when the insurer suffers a large unexpected loss. Therefore, reinsurance contracts in practice sometimes involve an upper limit on the indemnity, and in such situations ceded loss functions sometimes need to be concave rather than convex. Motivated by these observations, we consider ceded loss functions in the sets F1 and F3.
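The canonical members of these classes used later in the paper (change-loss for F1, layer for F2, quota share with a limit for F3) can be written down directly. A minimal sketch with illustrative parameters:

```python
def change_loss(b, d):
    """f(x) = b(x - d)_+ : increasing and convex, so f is in F1 (and F2)."""
    return lambda x: b * max(x - d, 0.0)

def layer(a, u):
    """f(x) = (x - a)_+ - (x - u)_+ : both f and x - f(x) increase, so f is in F2."""
    return lambda x: max(x - a, 0.0) - max(x - u, 0.0)

def limited_quota_share(c, u):
    """f(x) = c(x - (x - u)_+) : increasing and concave, so f is in F3 (and F2)."""
    return lambda x: c * (x - max(x - u, 0.0))

f1, f2, f3 = change_loss(0.5, 4.0), layer(4.0, 10.0), limited_quota_share(0.5, 10.0)
print(f1(20.0), f2(20.0), f3(20.0))  # -> 8.0 6.0 5.0
```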
Note that F1⊂F2 (see [12]) and F3⊂F2. In addition, if f∈Fi, i=1,2,3, then If and f are increasing and continuous. Thus, from Proposition 2.1, we have
VaRTfI(α)=If(VaRX(α))+Πf(X), | (2.8) |
VaRTfR(α)=f(VaRX(α)). | (2.9) |
Based on the above analysis, we obtain the following theorem.
Theorem 2.1. For α∈(0,1), the binary lower-orthant value-at-risk at confidence level α for the distribution function Gf(x,y) is
VaR_f(α)=∂{(x,y)∈R2+:x≥VaRTfI(α) and y≥VaRTfR(α)}, |
and the binary upper-orthant value-at-risk at confidence level α for the tail function ¯Gf(x,y) is
¯VaRf(α)=∂{(x,y)∈R2+:x≥VaRTfI(α) or y≥VaRTfR(α)}. |
Proof. Let S1={(x,y)∈R2+:Gf(x,y)≥α} and S2={(x,y)∈R2+:x≥VaRTfI(α) and y≥VaRTfR(α)}. First, it is easy to see that S1⊆S2. Second, note that
Gf(VaRTfI(α),VaRTfR(α))=P{TfI≤VaRTfI(α), TfR≤VaRTfR(α)}=P{If(X)≤If(VaRX(α)), f(X)≤f(VaRX(α))}≥P{X≤VaRX(α)}=α,
then for any (x,y)∈S2, we have Gf(x,y)≥Gf(VaRTfI(α),VaRTfR(α))≥α, thus we get S2⊆S1.
Similarly, let D1={(x,y)∈R2+:¯Gf(x,y)≤1−α} and D2={(x,y)∈R2+:x≥VaRTfI(α) or y≥VaRTfR(α)}. For any (x,y)∈D2, if x≥VaRTfI(α), then
¯Gf(x,y)=P{TfI>x, TfR>y}≤P{TfI>x}≤P{TfI>VaRTfI(α)}≤1−α. | (2.10) |
By the same arguments, we know that if y≥VaRTfR(α), then ¯Gf(x,y)≤1−α holds as well. Hence, D2⊆D1.
On the other hand, for any (x,y)∈¯D2, we have x<VaRTfI(α) and y<VaRTfR(α). Since TfI and TfR are co-monotonic, we have
¯Gf(x,y)=P{TfI>x, TfR>y}=min{P{TfI>x},P{TfR>y}}. | (2.11) |
Notice that for any random variable Y, if y<VaRY(α), we get P{Y≤y}<α. (Otherwise, suppose P{Y≤y}≥α, then from the definition of VaR, we get y≥VaRY(α).) Then, we have
P{TfI>x}>1−α and P{TfR>y}>1−α | (2.12) |
which implies ¯Gf(x,y)>1−α. Therefore, we have ¯D2⊆¯D1, and hence D2=D1.
The binary lower-orthant value-at-risk VaR_f(α) and the binary upper-orthant value-at-risk ¯VaRf(α) are illustrated in Figures 1 and 2.
Note that the joint VaR (VaRTfI(α),VaRTfR(α)) determines VaR_f(α) and ¯VaRf(α). From both the insurer's and the reinsurer's points of view, the smaller the maximum aggregate cost Tf that can occur with some given probability, the better; that is, the closer VaR_f(α) and ¯VaRf(α) are to the origin, the better. This motivates us to consider the loss function
L(f)=√[VaRTfI(α)]2+[VaRTfR(α)]2, | (2.13) |
and the optimization criteria for seeking the optimal reinsurance contract:
f∗=argminf L(f). | (2.14) |
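Once VaRX(α), the ceded function and the premium are fixed, the criterion (2.13) is a one-line computation via (2.8) and (2.9). A small sketch; all numbers, including the premiums, are illustrative assumptions:

```python
import math

def joint_loss(var_x, f, premium):
    """L(f) = sqrt(VaR_{T_I}^2 + VaR_{T_R}^2), using (2.8)-(2.9):
    VaR_{T_I} = VaR_X - f(VaR_X) + premium and VaR_{T_R} = f(VaR_X)."""
    var_r = f(var_x)
    var_i = var_x - var_r + premium
    return math.hypot(var_i, var_r)

# compare two hypothetical contracts at VaR_X(alpha) = 10 (premiums assumed)
no_reins   = joint_loss(10.0, lambda x: 0.0, 0.0)
with_reins = joint_loss(10.0, lambda x: 0.5 * max(x - 2.0, 0.0), 3.0)
print(no_reins, with_reins)
```

Here the change-loss contract gives (VaR_I, VaR_R) = (9, 4), so L(f) = sqrt(97) ≈ 9.85, smaller than the no-reinsurance value 10.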
In the rest of the paper, we will derive the optimal solutions corresponding to the reinsurance model (2.14) under the admissible ceded loss function sets Fi,i=1,2,3.
In this section, we consider the general reinsurance premium principles which satisfy the following two properties:
1. Risk loading: Π(X)≥E[X];
2. Stop-loss ordering preserving: Π(Y)≤Π(X) if Y is smaller than X in the stop-loss order (Y≤slX)†.
† A random variable Y is said to be smaller than a random variable X in the stop-loss order sense, written Y≤slX, if and only if Y has lower stop-loss premiums than X: E(Y−d)+≤E(X−d)+ for all −∞<d<+∞.
We emphasize that many premium principles satisfy these two properties, such as the expectation principle, the p-mean value principle, the Dutch principle, Wang's principle and the exponential principle.
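As a numerical sanity check of the risk-loading property, the sketch below evaluates the expectation principle and the Dutch principle (taken here in its common β=1 form, Π(X)=E[X]+θE[(X−E[X])+]; that parameterization is an assumption of this sketch) on a simulated exponential loss:

```python
import random

def expected_value_premium(sample, theta):
    """Expectation principle: (1 + theta) E[X]."""
    return (1.0 + theta) * sum(sample) / len(sample)

def dutch_premium(sample, theta):
    """Dutch principle with beta = 1: E[X] + theta E[(X - E[X])_+], 0 < theta <= 1."""
    m = sum(sample) / len(sample)
    return m + theta * sum(max(x - m, 0.0) for x in sample) / len(sample)

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100000)]
mean = sum(xs) / len(xs)
# risk loading: Pi(X) >= E[X] holds for both principles since theta > 0
print(expected_value_premium(xs, 0.2) >= mean, dutch_premium(xs, 0.2) >= mean)
```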
In this subsection, we derive the optimal reinsurance policies under the condition that the ceded loss function f∈F1. First, we define a ceded loss function set H1, which consists of all ceded loss functions h(x)=b(x−d)+ with 0≤b≤1 and d≥0. Note that H1 is a subclass of F1. Second, we show that the optimal ceded loss functions which minimize the loss function in the subclass H1 also optimally minimize the loss function in F1. We give the following proposition using the geometric method proposed by [11].
Proposition 3.1. For any f∈F1, there always exists a function h∈H1 such that L(h)≤L(f).
Proof. If f∈F1 is identically zero on [0,VaRX(α)], we consider h:=0∈H1. It is easy to see that h(X)≤f(X) in the usual stochastic order. It further leads to h(X)≤slf(X) according to the theory of stochastic orders in [31]. Then we have Πh(X)≤Πf(X). Consequently, from formulas (2.8) and (2.9), we obtain
VaRTfI(α)=VaRX(α)−f(VaRX(α))+Πf(X)≥VaRX(α)−h(VaRX(α))+Πh(X)=VaRThI(α), |
and
VaRTfR(α)=f(VaRX(α))=0=h(VaRX(α))=VaRThR(α). |
Hence we have L(h)≤L(f).
If f∈F1 is not identically zero on [0,VaRX(α)], let f′−(VaRX(α)) and f′+(VaRX(α)) be the left-hand and right-hand derivatives of f at VaRX(α). Let b be any number in [f′−(VaRX(α)),f′+(VaRX(α))]; then 0<b≤1. Let d=VaRX(α)−f(VaRX(α))/b and define h(x)=b(x−d)+, x≥0. Then h∈H1, f(VaRX(α))=h(VaRX(α)) and f(x)≥h(x) for all x≥0 since f is convex. Hence we have
VaRTfI(α)=VaRX(α)−f(VaRX(α))+Πf(X)≥VaRX(α)−h(VaRX(α))+Πh(X)=VaRThI(α), |
and
VaRTfR(α)=f(VaRX(α))=h(VaRX(α))=VaRThR(α). |
Therefore, L(h)≤L(f) holds. The geometric interpretation of this proof can be seen from Figure 3.
Based on Proposition 3.1, we know that a change-loss reinsurance of the form f(x)=b(x−d)+ with 0≤b≤1 and d≥0 is optimal among F1 in the sense that it minimizes the loss function L(f). The optimal parameters b∗ and d∗ will be given under some specific reinsurance premium principles in the remaining sections.
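The construction in the proof of Proposition 3.1 can be checked numerically: pick a convex admissible f (the piecewise-affine f below is a hypothetical example), choose any slope b between the one-sided derivatives at VaRX(α), and set the deductible so that the change-loss function h agrees with f there; then h lies below f everywhere. A sketch with illustrative numbers:

```python
def f(x):
    """A hypothetical admissible ceded loss in F1: a max of affine pieces is convex."""
    return max(0.0, 0.3 * x - 1.0, 0.8 * x - 6.0)

VAR = 10.0                         # illustrative value of VaR_X(alpha)
b = 0.5                            # any slope in [f'_-(VAR), f'_+(VAR)] = [0.3, 0.8]
d = VAR - f(VAR) / b               # deductible chosen so that h(VAR) = f(VAR)
h = lambda x: b * max(x - d, 0.0)  # the change-loss function of Proposition 3.1

assert abs(h(VAR) - f(VAR)) < 1e-9
assert all(h(x) <= f(x) + 1e-12 for x in [0.1 * k for k in range(401)])
```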
In this subsection, we focus on the loss function minimization model for any ceded loss function f∈F2. As shown in [12], the ceded loss function f∈F2 is Lipschitz continuous, i.e.,
0⩽f(x2)−f(x1)⩽x2−x1,∀0⩽x1⩽x2. |
Let H2 denote the class of ceded loss functions with the representation h(x)=(x−a)+−(x−VaRX(α))+, a⩽VaRX(α). It is easy to see that H2 is a subclass of F2. In fact, h(x) is a layer reinsurance with deductible a and upper limit VaRX(α). We will prove that the optimal functions which minimize the loss function in the subclass H2 also optimally minimize the loss function in F2.
Proposition 3.2. Let f∈F2 be a ceded function. There always exists a function h∈H2 such that L(h)⩽L(f).
Proof. For any f∈F2, define a=VaRX(α)−f(VaRX(α))⩾0 and h(x)=(x−a)+−(x−VaRX(α))+=min{(x−(VaRX(α)−f(VaRX(α))))+,f(VaRX(α))}, x⩾0. Then we have h∈H2 and f(VaRX(α))=h(VaRX(α)).
Furthermore, recall that the ceded loss function f∈F2 is non-negative and Lipschitz continuous, hence inequality f(x)⩾(x+f(VaRX(α))−VaRX(α))+ holds for x∈[0,VaRX(α)]. On the other hand, the increasing property of f(x) leads to h(x)=f(VaRX(α))⩽f(x) for all x>VaRX(α). Thus, inequality h(x)⩽f(x) holds for all x⩾0. Since the reinsurance premium preserves stop-loss order, we have
VaRTfI(α)=VaRX(α)−f(VaRX(α))+Πf(X)≥VaRX(α)−h(VaRX(α))+Πh(X)=VaRThI(α), |
and
VaRTfR(α)=f(VaRX(α))=h(VaRX(α))=VaRThR(α). |
Thus, L(h)≤L(f) holds. The geometric interpretation of this proof can be seen from Figure 4.
By Proposition 3.2, we know that layer reinsurance with deductible a and upper limit VaRX(α) is optimal among F2 in the sense that it minimizes the loss function L(f).
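The same construction can be illustrated numerically for Proposition 3.2: for an admissible f in F2 (a hypothetical quota share below), the induced layer with deductible a=VaRX(α)−f(VaRX(α)) and upper limit VaRX(α) matches f at VaRX(α) and lies below it everywhere. A sketch with illustrative numbers:

```python
def f(x):
    """A hypothetical admissible ceded loss in F2: both f and x - f(x) increase."""
    return 0.8 * x

VAR = 10.0                                          # illustrative value of VaR_X(alpha)
a = VAR - f(VAR)                                    # deductible from the proof of Proposition 3.2
h = lambda x: max(x - a, 0.0) - max(x - VAR, 0.0)   # layer with upper limit VAR

assert abs(h(VAR) - f(VAR)) < 1e-9
assert all(h(x) <= f(x) + 1e-12 for x in [0.1 * k for k in range(401)])
```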
In this subsection, we derive the optimal solution to problem (2.14) over F3. Let H3 be the class of non-negative functions h(x) defined on [0,∞) with
h(x)=c(x−(x−VaRX(α))+), | (3.1) |
where 0⩽c⩽1. Note that H3⊂F3 and H3 contains the null function h(x)=0. The following result shows that the optimal ceded loss functions in F3 which minimize L(f) must take the form of (3.1).
Proposition 3.3. For any f∈F3, there always exists a function h∈H3, such that L(h)⩽L(f).
Proof. For any f∈F3, let c=f(VaRX(α))/VaRX(α), then c∈[0,1]. Define h(x)=c(x−(x−VaRX(α))+); obviously h∈H3 and h(VaRX(α))=f(VaRX(α)).
In addition, recall that the ceded loss function f∈F3 is increasing and concave, hence f(x)⩾[f(VaRX(α))/VaRX(α)]x=h(x) for x∈[0,VaRX(α)]. On the other hand, the increasing property of f(x) leads to h(x)=f(VaRX(α))⩽f(x) for x>VaRX(α). Since the reinsurance premium preserves the stop-loss order, we have
VaRTfI(α)=VaRX(α)−f(VaRX(α))+Πf(X)≥VaRX(α)−h(VaRX(α))+Πh(X)=VaRThI(α), |
and
VaRTfR(α)=f(VaRX(α))=h(VaRX(α))=VaRThR(α). |
Thus, we have L(h)⩽L(f). The geometric interpretation of this proof can be seen from Figure 5.
From Proposition 3.3, we know that the quota-share reinsurance with a policy limit is always optimal among F3 in the sense that it minimizes the loss function L(f).
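Analogously for Proposition 3.3: for a concave admissible f (the hypothetical limited cover below), the quota share with limit, with proportion c=f(VaRX(α))/VaRX(α), matches f at VaRX(α) and lies below it everywhere. A sketch with illustrative numbers:

```python
def f(x):
    """A hypothetical admissible ceded loss in F3: min(x, 5) is increasing and concave."""
    return min(x, 5.0)

VAR = 10.0                                     # illustrative value of VaR_X(alpha)
c = f(VAR) / VAR                               # proportion from the proof of Proposition 3.3
h = lambda x: c * (x - max(x - VAR, 0.0))      # quota share with limit VAR

assert abs(h(VAR) - f(VAR)) < 1e-9
assert all(h(x) <= f(x) + 1e-12 for x in [0.1 * k for k in range(401)])
```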
In this section, we consider the expectation reinsurance premium principle, i.e.,
Πf(X)=(1+θ)E[f(X)], | (4.1) |
where θ>0 is the safety loading.
As a result of Proposition 3.1, we can deduce optimal ceded loss functions by confining attention to H1. For a change-loss reinsurance with b∈[0,1] and d∈[0,∞), the total costs of the insurer and the reinsurer are
Tb,dI=X−b(X−d)++ΠE(b,d), |
Tb,dR=b(X−d)+, |
where ΠE(b,d)=(1+θ)E[b(X−d)+]=(1+θ)b∫_d^∞S(x)dx is the reinsurance premium. Then the VaRs of Tb,dI and Tb,dR at confidence level α are
VaRTb,dI(α)=VaRX(α)−b(VaRX(α)−d)++ΠE(b,d), | (4.2) |
VaRTb,dR(α)=b(VaRX(α)−d)+. | (4.3) |
Hence, the loss function is
LE(b,d)= √([(1−b)VaRX(α)+bd+ΠE(b,d)]2+[b(VaRX(α)−d)]2) if d≤VaRX(α), and LE(b,d)=VaRX(α)+ΠE(b,d) if d>VaRX(α). | (4.4) |
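To make (4.4) concrete, the sketch below evaluates LE(b,d) for exponentially distributed losses, for which ∫_d^∞S(x)dx=μe^{−d/μ} and VaRX(α)=−μln(1−α). The parameter values are illustrative assumptions, not taken from the paper:

```python
import math

MU, THETA, ALPHA = 1000.0, 0.2, 0.95   # illustrative parameters (assumed)

def var_x():
    """VaR_X(alpha) for X ~ Exp(mean MU): solve S(x) = 1 - alpha."""
    return -MU * math.log(1.0 - ALPHA)

def premium(b, d):
    """Pi_E(b, d) = (1 + theta) * b * integral_d^infty S(x) dx."""
    return (1.0 + THETA) * b * MU * math.exp(-d / MU)

def loss_LE(b, d):
    """The piecewise loss function (4.4)."""
    v = var_x()
    if d <= v:
        return math.hypot((1.0 - b) * v + b * d + premium(b, d), b * (v - d))
    return v + premium(b, d)

# b = 0 (no reinsurance) recovers the loss VaR_X(alpha) itself
print(abs(loss_LE(0.0, 0.0) - var_x()) < 1e-9)  # -> True
```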
Lemma 4.1. Optimal ceded functions which minimize the loss function LE(b,d) in the class H1 exist.
Proof. Note that the function ΠE(b,d) is increasing with respect to b. Then the loss function LE(b,d) attains its minimum value over [0,1]×(VaRX(α),∞) at b=0 (the ceded function is h(x)≡0), and the minimum value is VaRX(α). Hence, the study of the optimal ceded functions which minimize the loss function LE(b,d) in the class H1 reduces to solving the two-parameter minimization problem over the closed subset [0,1]×[0,VaRX(α)]. Since LE(b,d) is continuous, the minimum of LE(b,d) over [0,1]×[0,VaRX(α)] must be attained at some stationary point or on the boundary.
First, we define A≜[0,1]×[0,VaRX(α)]. In this subsection, we will identify the minimum points of LE(b,d) over A and discuss the optimal ceded function f∗1(x). We split A into five disjoint subsets, i.e., A=A1∪A2∪A3∪A4∪A5, where A1={(0,d):0≤d≤VaRX(α)}, A2={(b,d):0<b<1,0<d<VaRX(α)}, A3={(1,d):0<d<VaRX(α)}, A4={(b,0):0<b⩽1} and A5={(b,VaRX(α)):0<b≤1}.
If (b,d)∈A, the loss function is
LE(b,d)=√[(1−b)VaRX(α)+bd+ΠE(b,d)]2+[b(VaRX(α)−d)]2. | (4.5) |
Let HE(b,d)=[LE(b,d)]2=[(1−b)VaRX(α)+bd+ΠE(b,d)]2+[b(VaRX(α)−d)]2; then HE(b,d) and LE(b,d) have the same minimum points. Thus, we will study the minimization problem of HE(b,d) on A in the rest of this subsection. Note that HE(b,d) is differentiable with partial derivatives
∂HE(b,d)/∂b=2[(VaRX(α)−g(d))2+(VaRX(α)−d)2]b+2VaRX(α)(g(d)−VaRX(α)), ∂HE(b,d)/∂d=2b[(1−b)VaRX(α)+bg(d)]g′(d)−2b2(VaRX(α)−d), | (4.6) |
where g(d)=d+(1+θ)∫_d^∞S(x)dx.
Next, we divide the following analysis into five cases.
● First, we demonstrate that HE(b,d) has no minimum points on A5. For any (b,d)∈A5, HE(b,d)>[VaRX(α)]2=HE(0,d)=min_{A1}HE(b,d), so the minimum value of HE(b,d) over A is not attained in A5.
● The minimum points of HE(b,d) are located in A1 if and only if
min_{d∈[0,VaRX(α)]}g(d)≥VaRX(α). | (4.7) |
In fact, if inequality (4.7) holds, then it follows from the expression of ∂HE(b,d)/∂b in (4.6) that ∂HE(b,d)/∂b>0. Thus, HE(b,d) is strictly increasing with respect to b. Furthermore, for any d∈[0,VaRX(α)], HE(0,d)≡[VaRX(α)]2. As a result, the minimum value of HE(b,d) over A is attained at any point (0,d) in A1.
Conversely, if min_{d∈[0,VaRX(α)]}g(d)<VaRX(α), then there exists a ˜d∈[0,VaRX(α)] such that ∂HE(b,˜d)/∂b<0 holds in a right neighborhood of b=0. That is to say, (0,˜d) is not a minimum point of HE(b,d). Since HE(0,d)=HE(0,˜d)≡[VaRX(α)]2 for any (0,d)∈A1, no minimum points of HE(b,d) are located in A1.
● If (b∗,d∗)∈A2 is a minimum point of HE(b,d), then (b∗,d∗) is a stationary point of HE(b,d). Therefore, we have
∂HE(b,d)/∂b|(b,d)=(b∗,d∗)=0, ∂HE(b,d)/∂d|(b,d)=(b∗,d∗)=0. | (4.8) |
By straightforward algebra, we know that d∗ is a root of equation q(d)=0, where
q(d)=S(d)(VaRX(α)−d)−∫_d^∞S(x)dx. | (4.9) |
Substituting d∗ in the second equation of (4.8) yields
b∗=VaRX(α)g′(d∗)/{VaRX(α)−d∗+[VaRX(α)−g(d∗)]g′(d∗)}. | (4.10) |
Furthermore, b∗ must lie in (0,1), which is equivalent to
p(d∗)>0, | (4.11) |
where the function p(d) is given by p(d)=VaRX(α)−d−g(d)g′(d).
● If (1,ˉd)∈A3 is a minimum point of HE(b,d), then Fermat's theorem implies
∂HE(1,d)/∂d|d=ˉd=0, ∂HE(b,ˉd)/∂b|b=1≤0, | (4.12) |
which is equivalent to
{p(ˉd)=0,g(ˉd)[g(ˉd)−VaRX(α)]+[VaRX(α)−ˉd]2≤0. | (4.13) |
● If (ˉb,0)∈A4 is a minimum point of HE(b,d), then ˉb must satisfy the following conditions
∂HE(b,0)/∂b|b=ˉb=0, ∂HE(ˉb,d)/∂d|d=0≥0. | (4.14) |
From (4.14), we obtain
ˉb=VaRX(α)[VaRX(α)−g(0)]/{[VaRX(α)−g(0)]2+[VaRX(α)]2} | (4.15) |
and
[(1−ˉb)VaRX(α)+ˉbg(0)]g′(0)−ˉbVaRX(α)≥0. | (4.16) |
Based on the above arguments, we have analyzed the conditions under which the minimum points of HE(b,d) are located in the sets Ai, i=1,2,3,4. The results are summarized in the following theorem.
Theorem 4.1. The optimal solutions to reinsurance problem (2.14) are given as follows.
(1) If one of the three conditions (C1)-(C3) holds, then the optimal ceded loss function is given by f∗1(x)=0, where d0=S−1(1/(1+θ)) and
(C1): α≤θ/(1+θ); (C2): F(0)<θ/(1+θ)<α and g(d0)≥VaRX(α); (C3): S(0)≤1/(1+θ) and (1+θ)μ≥VaRX(α).
(2) If condition (C4) or (C5) holds, then the optimal ceded loss function is given by f∗1(x)=b∗(x−d∗)+, where d∗ is the unique solution of equation q(d)=0, b∗ is given by (4.10) and
(C4): F(0)<θ/(1+θ)<α, g(d0)<VaRX(α) and p(d∗)>0; (C5): S(0)≤1/(1+θ), μ<S(0)VaRX(α) and p(d∗)>0.
(3) If condition (C6) or (C7) holds, then the optimal ceded loss function is given by f∗1(x)=(x−ˉd)+, where ˉd is the unique solution of equation p(d)=0 and
(C6): F(0)<θ/(1+θ)<α, g(d0)<VaRX(α) and p(d∗)≤0; (C7): S(0)≤1/(1+θ), μ<S(0)VaRX(α) and p(d∗)≤0.
(4) If condition (C8) holds, then the optimal ceded loss function is given by f∗1(x)=ˉbx, where ˉb is given by (4.15) and
(C8): S(0)≤1/(1+θ) and S(0)VaRX(α)≤μ<VaRX(α)/(1+θ).
Proof. (1) If one of the three conditions (C1)–(C3) holds, it is easy to show that
min_{d∈[0,VaRX(α)]}g(d)≥VaRX(α).
Then the minimum points of HE(b,d) are located in A1. That is to say, the optimal ceded loss function is f∗1(x)=0.
(2) If condition (C4) holds, then g′(d)<0 for any d∈[0,d0). From the expression of ∂HE(b,d)/∂d in (4.6), we have ∂HE(b,d)/∂d<0 for any (b,d)∈(0,1]×[0,d0]. Thus, the minimum points are not located in [0,1]×[0,d0]. Furthermore, let d1>d0 be such that g(d1)=VaRX(α); from the expression of ∂HE(b,d)/∂b in (4.6), we have ∂HE(b,d)/∂b>0 for any (b,d)∈(0,1]×[d1,VaRX(α)]. Thus, the minimum points are also not located in [0,1]×[d1,VaRX(α)]. As a result, the minimum points of HE(b,d) over A are located in (0,1]×(d0,d1), and the minimum must be attained at some stationary point (b∗,d∗) or must lie on the right boundary at some point (1,ˉd). Note that q′(d)=S′(d)(VaRX(α)−d)<0, q(d0)=S(d0)(VaRX(α)−d0)−∫_{d0}^∞S(x)dx=(1/(1+θ))(VaRX(α)−g(d0))>0 and q(d1)=S(d1)(VaRX(α)−d1)−∫_{d1}^∞S(x)dx=[(1+θ)S(d1)−1]∫_{d1}^∞S(x)dx<0. Thus, the equation q(d)=0 has a unique solution d∗ in (d0,d1). Substituting d∗ in the second equation of (4.8) yields
b∗=VaRX(α)g′(d∗)/{VaRX(α)−d∗+[VaRX(α)−g(d∗)]g′(d∗)}.
It is easy to show 0<b∗<1 since p(d∗)>0. Thus HE(b,d) has a unique stationary point (b∗,d∗). In the following, we show that HE(b,d) attains the minimum at the stationary point (b∗,d∗).
Suppose, to the contrary, that the minimum of HE(b,d) is attained at (1,ˉd) if condition (C4) holds. Then we have p(ˉd)=0 and g(ˉd)[g(ˉd)−VaRX(α)]+[VaRX(α)−ˉd]2≤0. Since g′(ˉd)>0, we obtain [g(ˉd)[g(ˉd)−VaRX(α)]+[VaRX(α)−ˉd]2]g′(ˉd)≤0. Straightforward algebra leads to q(ˉd)≥0. Note that q′(d)<0 and q(d∗)=0; then we have ˉd≤d∗. However, since p′(d)<0, p(d∗)>0 and p(ˉd)=0, we have d∗<ˉd. This is a contradiction. Thus, if condition (C4) holds, the function HE(b,d) attains its minimum at the stationary point (b∗,d∗); that is to say, the optimal ceded loss function is f∗1(x)=b∗(x−d∗)+.
If condition (C5) holds, then ∂HE(b,d)/∂b>0 for any (b,d)∈(0,1]×[d1,VaRX(α)]. Thus, the minimum points are not located in [0,1]×[d1,VaRX(α)]. As a result, the minimum points of HE(b,d) over A are located in (0,1]×[0,d1), and the minimum must be attained at some stationary point (b∗,d∗), or must lie on the right boundary at some point (1,ˉd), or must lie on the lower boundary at some point (ˉb,0). In the following, we consider the equation q(d)=0. Note that q′(d)<0, q(0)=S(0)VaRX(α)−μ>0 and
q(d1)=S(d1)(VaRX(α)−d1)−∫_{d1}^∞S(x)dx=S(d1)(g(d1)−d1)−∫_{d1}^∞S(x)dx=S(d1)(1+θ)∫_{d1}^∞S(x)dx−∫_{d1}^∞S(x)dx=[S(d1)(1+θ)−1]∫_{d1}^∞S(x)dx<0. | (4.17) |
Thus, the equation q(d)=0 has a unique solution d∗ in (0,d1). Further, we know that HE(b,d) has a unique stationary point (b∗,d∗) if condition (C5) holds. By the same argument as above, we can show that the minimum of HE(b,d) is not attained at (1,ˉd) if p(d∗)>0 holds. Meanwhile, we demonstrate that the minimum of HE(b,d) is not attained at (ˉb,0) if condition (C5) holds. Suppose, to the contrary, that the minimum value of HE(b,d) is attained at (ˉb,0) under condition (C5). Then conditions (4.15) and (4.16) hold. Substituting (4.15) into (4.16), we get μ−S(0)VaRX(α)≥0, which contradicts the second inequality of condition (C5). Thus, if condition (C5) holds, the function HE(b,d) attains its minimum at the stationary point (b∗,d∗).
In summary, if condition (C4) or (C5) holds, the optimal ceded loss function is given by f∗1(x)=b∗(x−d∗)+.
(3) If condition (C6) or (C7) holds, from the above arguments in (2), we know that HE(b,d) has no stationary points because p(d∗)≤0. Furthermore, if the second inequality of (C7) holds, the minimum value of HE(b,d) is not attainable at (ˉb,0). Thus, the function HE(b,d) attains the minimum at the boundary point (1,ˉd) if condition (C6) or (C7) holds, that is to say, the optimal ceded loss function is given by f∗1(x)=(x−ˉd)+.
(4) If condition(C8) holds, then ∂HE(b,d)∂b>0 for any (b,d)∈(0,1]×[d1,VaRX(α)]. Thus, the minimum points are not located in [0,1]×[d1,VaRX(α)]. As a result, the minimum points of HE(b,d) over A are located in (0,1]×[0,d1) and the minimum must be attainable at some stationary point (b∗,d∗) or must lie on the right boundary at some point (1,ˉd) or must lie on the lower boundary at some point (ˉb,0). In the following, we consider equation q(d)=0. Note that q(0)=S(0)VaRX(α)−μ≤0 and q′(d)<0, then the equation q(d)=0 has no solutions in (0,d1), namely, the function HE(b,d) has no stationary points. Thus, the minimum point of HE(b,d) over A must lie on the right boundary at some point (1,ˉd) or must lie on the lower boundary at some point (ˉb,0). If the minimum of HE(b,d) is attainable at (1,ˉd), the we have conditions (4.13) hold. Since g′(ˉd)>0, we yield [g(ˉd)[g(ˉd)−VaRX(α)]+[VaRX(α)−ˉd]2]g′(ˉd)≤0. Straightforward algebra leads to q(ˉd)≥0. This is contradicted to q(0)≤0 and q′(d)<0. Thus, minimum point of HE(b,d) over A must lie on the lower boundary at point (ˉb,0), namely, the optimal ceded loss function is given by f∗1(x)=ˉbx.
As a result of Proposition 3.2, we can deduce optimal ceded loss functions by confining attention to \mathcal{H}^{2} . For a layer reinsurance policy h(x) = (x-a)_+-(x-{\rm VaR}_{X}(\alpha))_+ with a\in [0, {\rm VaR}_{X}(\alpha)] , the total costs of the insurer and the reinsurer under the VaR risk measure are
\begin{equation*} VaR_{T_{I}^{f}}(\alpha) = a+(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx, \end{equation*} |
\begin{equation*} VaR_{T_{R}^{f}}(\alpha) = {\rm VaR}_{X}(\alpha)-a. \end{equation*} |
Hence, the loss function is
\begin{equation*} L_{E}(a) = \sqrt{[a+(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx]^2+[{\rm VaR}_{X}(\alpha)-a]^2}. \end{equation*} |
Theorem 4.2. The optimal ceded loss function that solves (2.14) with \mathcal{F}^2 constraint is given by
\begin{equation} f_{2}^{*}(x) = \left\{ \begin{aligned} &(x-a_{1}^*)_+-(x-{\rm VaR}_{X}(\alpha))_+, &\frac{\theta}{1+\theta} \lt \alpha, \\ &0, &otherwise, \end{aligned} \right. \end{equation} | (4.18) |
where a_{1}^* is the unique solution of equation (4.19)
\begin{equation} [a+(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx][1-(1+\theta)S(a)]-[{\rm VaR}_{X}(\alpha)-a] = 0. \end{equation} | (4.19) |
Proof. Let H_{E}(a) = [a+(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx]^2+[{\rm VaR}_{X}(\alpha)-a]^2 , then
\begin{equation} H_{E}'(a) = 2(a+(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx)(1-(1+\theta)S(a))-2({\rm VaR}_{X}(\alpha)-a), \end{equation} | (4.20) |
\begin{equation} H_{E}''(a) \gt 0. \end{equation} | (4.21) |
If \alpha \leqslant \frac{\theta}{1+\theta} holds, it is easy to show that H_{E}'({\rm VaR}_{X}(\alpha))\leqslant 0 . According to (4.20) and (4.21), then H_{E}(a) and L_{E}(a) attain their minimum at a = {\rm VaR}_{X}(\alpha) . In this case, f_{2}^{*}(x)\equiv 0 .
If \alpha > \frac{\theta}{1+\theta} holds, then from (4.19) and (4.21), we have
\begin{equation*} H_{E}'(a)\gtreqqless 0 \Leftrightarrow a_{1}^*\lesseqqgtr a. \end{equation*} |
Recall that 0\leqslant a \leqslant {\rm VaR}_{X}(\alpha) and H_{E}'(0) = 2(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx(1-(1+\theta)S(0))- 2{\rm VaR}_{X}(\alpha) . If 1-(1+\theta)S(0)\leqslant 0 , then H_{E}'(0) < 0 and if 1-(1+\theta)S(0) > 0 , then H_{E}'(0)\leqslant 2(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_0S(0)dx- 2{\rm VaR}_{X}(\alpha) < 0 . So H_{E}'(0) < 0 and H_{E}'({\rm VaR}_{X}(\alpha)) > 0 imply that a_{1}^* exists and is the only minimum point of H_{E}(a) and L_{E}(a) .
As a result of Proposition 3.3 , we can deduce optimal ceded loss functions by confining attention to \mathcal{H}^{3} . For a quota-share reinsurance with a policy limit h(x) = c(x-(x-{\rm VaR}_{X}(\alpha))_+) , the total costs of the insurer and the reinsurer under the VaR risk measure are
\begin{equation*} VaR_{T_{I}^{f}}(\alpha) = (1-c){\rm VaR}_{X}(\alpha)+(1+\theta)c\int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx, \end{equation*} |
\begin{equation*} VaR_{T_{R}^{f}}(\alpha) = c{\rm VaR}_{X}(\alpha). \end{equation*} |
Hence, the loss function is
\begin{equation*} L_{E}(c) = \sqrt{[(1-c){\rm VaR}_{X}(\alpha)+(1+\theta)c\int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx]^2+[c{\rm VaR}_{X}(\alpha)]^2}. \end{equation*} |
Theorem 4.3. The optimal ceded loss function that solves (2.14) with \mathcal{F}^3 constraint is given by
\begin{equation} f_{3}^{*}(x) = \left\{ \begin{aligned} &c_{1}^*(x-(x-{\rm VaR}_{X}(\alpha))_+), &\phi({\rm VaR}_{X}(\alpha)) \lt 0, \\ &0, &otherwise, \end{aligned} \right. \end{equation} | (4.22) |
where \phi({\rm VaR}_{X}(\alpha)) = (1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx-{\rm VaR}_{X}(\alpha) and c_{1}^* = \frac {-\phi ({\rm VaR}_{X}(\alpha)){\rm VaR}_{X}(\alpha)}{({\rm VaR}_{X}(\alpha))^2+(\phi ({\rm VaR}_{X}(\alpha)))^2} .
Proof. Let H_{E}(c) = [(1-c){\rm VaR}_{X}(\alpha)+(1+\theta)c\int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx]^2+[c{\rm VaR}_{X}(\alpha)]^2 , then
\begin{equation} H_{E}'(c) = 2c[({\rm VaR}_{X}(\alpha))^2+(\phi ({\rm VaR}_{X}(\alpha)))^2]+2\phi({\rm VaR}_{X}(\alpha)){\rm VaR}_{X}(\alpha), \end{equation} | (4.23) |
\begin{equation} H_{E}''(c) = 2[({\rm VaR}_{X}(\alpha))^2+(\phi ({\rm VaR}_{X}(\alpha)))^2] \gt 0. \end{equation} | (4.24) |
If \phi({\rm VaR}_{X}(\alpha))\geqslant 0 , according to (4.23), we have H_{E}'(c)\geqslant 0 . Thus, L_{E}(c) attains its minimum at c = 0 . Therefore, the optimal ceded loss function is given by f_{3}^{*}(x) = 0 .
If \phi({\rm VaR}_{X}(\alpha)) < 0 , according to (4.23) and (4.24), L_{E}(c) attains its minimum at c = c_{1}^* . Thus, the optimal ceded loss function is given by f_{3}^{*}(x) = c_{1}^*(x-(x-{\rm VaR}_{X}(\alpha))_+) .
In this section, we determine the optimal reinsurance policies among the ceded loss function sets \mathcal{F}^i (i = 1, 2, 3) under the Dutch premium principle. The Dutch premium principle is given by
\begin{equation} \Pi_{f}(X) = E[f(X)]+\beta E[(f(X)-E[f(X)])_+], \end{equation} | (5.1) |
where 0 < \beta \leqslant 1 .
From Proposition 2 , we know that the optimal ceded loss function in \mathcal{F}^1 can be determined by confining attention to \mathcal{H}^{1} . For a change-loss reinsurance with b\in [0, 1] and d\in [0, \infty) , the total costs of the insurer and the reinsurer under the Dutch premium principle are
T_{I}^{b, d} = X-b(X-d)_{+}+\Pi_{D}(b, d), |
T_{R}^{b, d} = b(X-d)_{+}, |
where \Pi_{D}(b, d) = b\int^{\infty}_dS(x)dx+\beta b\int^{\infty}_{d+\int^{\infty}_dS(x)dx}S(x)dx is the reinsurance premium. Then the VaR of T_{I}^{b, d} and T_{R}^{b, d} at confidence level \alpha are
\begin{eqnarray} {\rm VaR}_{T_{I}^{b, d}}(\alpha)& = &{\rm VaR}_{X}(\alpha)-b({\rm VaR}_{X}(\alpha)-d)_{+}+\Pi_{D}(b, d), \end{eqnarray} | (5.2) |
\begin{eqnarray} {\rm VaR}_{T_{R}^{b, d}}(\alpha)& = &b({\rm VaR}_{X}(\alpha)-d)_{+}. \end{eqnarray} | (5.3) |
Hence, the loss function is
\begin{eqnarray*} L_{D}(b, d) = \left \{ \begin{array}{ll} \sqrt{\big[(1-b){\rm VaR}_{X}(\alpha)+bd+\Pi_{D}(b, d)\big]^{2}+\big[b({\rm VaR}_{X}(\alpha)-d)\big]^{2}}, \quad &d\leq {\rm VaR}_{X}(\alpha), \\\\ {\rm VaR}_{X}(\alpha)+\Pi_{D}(b, d), \qquad \qquad \qquad \quad &d \gt {\rm VaR}_{X}(\alpha). \end{array} \right. \end{eqnarray*} |
Let H_{D}(b, d) = [(1-b){\rm VaR}_{X}(\alpha)+bd+\Pi_{D}(b, d)\big]^{2}+\big[b({\rm VaR}_{X}(\alpha)-d)]^{2} , then
\begin{eqnarray} \left \{ \begin{array}{ll} {\frac{\partial H_{D}(b, d)}{\partial b} = 2\big[\big({\rm VaR}_{X}(\alpha)-k(d)\big)^{2}+\big({\rm VaR}_{X}(\alpha)-d\big)^{2}\big]b+2{\rm VaR}_{X}(\alpha)\big(k(d)-{\rm VaR}_{X}(\alpha)\big)}, \\\\ {\frac{\partial H_{D}(b, d)}{\partial d} = 2b\big[(1-b){\rm VaR}_{X}(\alpha)+bk(d)\big]k'(d)-2b^{2}({\rm VaR}_{X}(\alpha)-d)}, \end{array} \right. \end{eqnarray} | (5.4) |
where k(d) = d+\int^{\infty}_d S(x)dx+\beta\int^{\infty}_{d+\int^{\infty}_dS(x)dx}S(x)dx .
Theorem 5.1. The optimal ceded loss function to reinsurance problem (2.14) is given as follows.
(1) If condition (M1) holds, then the optimal ceded loss function is given by f_{4}^{*}(x) = 0 , where
(M1): \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx}\leqslant \beta. |
(2) If condition (M2) holds, then the optimal ceded loss function is given by f_{4}^{*}(x) = b^{*}(x-d^{*})_{+} , where,
\begin{eqnarray*} (M2): \left \{ \begin{array}{ll} \beta \lt \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx} , \\ {u(0) \gt 0}, \\ {v(d^{*}) \gt 0, } \end{array} \right. \end{eqnarray*} |
b^{*} = \frac{{\rm VaR}_{X}(\alpha)k'(d^{*})}{{\rm VaR}_{X}(\alpha)-d^{*}+[{\rm VaR}_{X}(\alpha)-k(d^{*})]k'(d^{*})} , d^{*} is the unique solution of equation u(d) = 0 and u(d) = {\rm VaR}_{X}(\alpha)-k(d)-k'(d)({\rm VaR}_{X}(\alpha)-d) , v(d) = {\rm VaR}_{X}(\alpha)-d-k(d)k'(d) .
(3) If condition (M3) holds, then the optimal ceded loss function is given by f_{4}^{*}(x) = (x-\bar{d})_{+} , where \bar{d} is the unique solution of equation v(d) = 0 and
\begin{eqnarray*} (M3): \left \{ \begin{array}{ll} \beta \lt \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx}, \\ {u(0) \gt 0}, \\ {v(d^{*})\leqslant 0.} \end{array} \right. \end{eqnarray*} |
(4) If condition (M4) holds, then the optimal ceded loss function is given by f_{4}^{*}(x) = \bar{b}x , where \bar{b} = \frac{{\rm VaR}_{X}(\alpha)[{\rm VaR}_{X}(\alpha)-k(0)]}{[{\rm VaR}_{X}(\alpha)-k(0)]^{2}+[{\rm VaR}_{X}(\alpha)]^{2}} and
\begin{eqnarray*} (M4): \left \{ \begin{array}{ll} \beta \lt \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx}, \\ {u(0)\leqslant 0.} \end{array} \right. \end{eqnarray*} |
Proof. Similarly to the proof of Lemma 4.1, the function \Pi_{D}(b, d) is an increasing function with respect to b . Then the study of the optimal ceded loss functions which minimize the loss function L_{D}(b, d) in the class \mathcal{H}^{1} reduces to solving the two-parameter minimization problem over the closed subset [0, 1]\times [0, {\rm VaR}_{X}(\alpha)] . Since L_{D}(b, d) is continuous, the minimum of L_{D}(b, d) over [0, 1]\times [0, {\rm VaR}_{X}(\alpha)] must be attained at some stationary point or lie on the boundary.
(1) Note that the function k(d) is an increasing function. If condition (M1) holds, it is easy to show that
k(d)\geq {\rm VaR}_{X}(\alpha), \ {\rm for \ all} \ d \in [0, {\rm VaR}_{X}(\alpha)]. |
Then from the expression of \frac{\partial H_{D}(b, d)}{\partial b} in (5.4), we know that H_{D}(b, d) is an increasing function with respect to b . Thus the minimum points of H_{D}(b, d) are located in A_{1} .
Conversely, if condition (M1) does not hold, then there exists a \tilde{d} \in [0, {\rm VaR}_{X}(\alpha)] such that \frac{\partial H_{D}(b, \tilde{d})}{\partial b} < 0 holds in a right neighborhood of b = 0 . That is to say, (0, \tilde{d}) is not a minimum point of H_{D}(b, d) . Since H_{D}(0, d) = H_{D}(0, \tilde{d})\equiv [{\rm VaR}_{X}(\alpha)]^{2} for any (0, d)\in A_{1} , then no minimum points of H_{D}(b, d) are located in A_{1} .
That is to say, the minimum points of H_{D}(b, d) are located in A_{1} if and only if condition (M1) holds. In this case the optimal ceded loss function is f_{4}^{*}(x) = 0 .
(2) We first consider the stationary points of H_{D}(b, d) . Let
\begin{eqnarray} \left \{ \begin{array}{ll} {\frac{\partial H_{D}(b, d)}{\partial b} = 0}, \\ {\frac{\partial H_{D}(b, d)}{\partial d} = 0}. \end{array} \right. \end{eqnarray} | (5.5) |
By straightforward algebra, we obtain
\begin{eqnarray} \left \{ \begin{array}{ll} {u(d) = {\rm VaR}_{X}(\alpha)-k(d)-k'(d)({\rm VaR}_{X}(\alpha)-d) = 0}, \\ {b = \frac{{\rm VaR}_{X}(\alpha)k'(d)}{{\rm VaR}_{X}(\alpha)-d+[{\rm VaR}_{X}(\alpha)-k(d)]k'(d)}}. \end{array} \right. \end{eqnarray} | (5.6) |
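The straightforward algebra behind (5.6) can be spelled out for completeness; we write V = {\rm VaR}_{X}(\alpha) as a shorthand (ours, for brevity):

```latex
% From the first equation of (5.5), using (5.4):
%   2[(V-k(d))^2+(V-d)^2]b + 2V(k(d)-V) = 0, hence
\begin{equation*}
b = \frac{V(V-k(d))}{(V-k(d))^{2}+(V-d)^{2}}.
\end{equation*}
% From the second equation of (5.5), for b>0:
%   [(1-b)V + bk(d)]k'(d) = b(V-d), i.e.
\begin{equation*}
Vk'(d) = b\bigl[(V-d)+(V-k(d))k'(d)\bigr]
\quad\Longrightarrow\quad
b = \frac{Vk'(d)}{V-d+(V-k(d))k'(d)}.
\end{equation*}
% Equating the two expressions for b, cancelling V and the common term
% (V-k(d))^2 k'(d), and dividing by V-d>0 on the interior gives
%   k'(d)(V-d) = V-k(d), that is,
\begin{equation*}
u(d) = V-k(d)-k'(d)(V-d) = 0.
\end{equation*}
```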
If condition (M2) holds, then u(0) > 0 and u({\rm VaR}_{X}(\alpha)) < 0 hold. Since u'(d)\leq 0 for any d\in[0, {\rm VaR}_{X}(\alpha)] , then the equation u(d) = 0 has a unique root d^{*} in (0, {\rm VaR}_{X}(\alpha)) . Substituting d^{*} in the second equation of (5.6) yields
b^{*} = \frac{{\rm VaR}_{X}(\alpha)k'(d^{*})}{{\rm VaR}_{X}(\alpha)-d^{*}+[{\rm VaR}_{X}(\alpha)-k(d^{*})]k'(d^{*})}. |
Since v(d^{*}) > 0 , then we have 0 < b^{*} < 1 . Thus H_{D}(b, d) has a unique stationary point (b^{*}, d^{*}) . In the following, we show that H_{D}(b, d) attains the minimum at the stationary point (b^{*}, d^{*}) .
Conversely, if the minimum value of H_{D}(b, d) is attainable at some point (1, \bar{d}) on the right boundary, then Fermat's theorem implies
\begin{eqnarray} \left \{ \begin{array}{ll} {\frac{\partial H_{D}(1, d)}{\partial d}|_{d = \bar{d}} = 0}, \\ {\frac{\partial H_{D}(b, \bar{d})}{\partial b}|_{b = 1}\leq 0}, \end{array} \right. \end{eqnarray} | (5.7) |
which is equivalent to
\begin{eqnarray} \left \{ \begin{array}{ll} {v(\bar{d}) = 0}, \\ {k(\bar{d})[k(\bar{d})-{\rm VaR}_{X}(\alpha)]+[{\rm VaR}_{X}(\alpha)-\bar{d}]^{2}\leq 0}. \end{array} \right. \end{eqnarray} | (5.8) |
Since k'(\bar{d}) \gt 0 , we obtain [k(\bar{d})[k(\bar{d})-{\rm VaR}_{X}(\alpha)]+[{\rm VaR}_{X}(\alpha)-\bar{d}]^{2}]k'(\bar{d})\leq 0 . Straightforward algebra leads to u(\bar{d})\geq 0 . Note that u'(d) < 0 and u(d^{*}) = 0 , then we have \bar{d}\leq d^{*} . However, since v'(d) < 0 , v(d^{*}) > 0 and v(\bar{d}) = 0 , we have d^{*} < \bar{d} . This is a contradiction. Thus, if condition (M2) holds, the function H_{D}(b, d) does not attain the minimum at the right boundary.
If the minimum value of H_{D}(b, d) is attainable at some point (\bar{b}, 0) on the lower boundary, then \bar{b} must satisfy the following conditions
\begin{eqnarray} \left \{ \begin{array}{ll} {\frac{\partial H_{D}(b, 0)}{\partial b}|_{b = \bar{b}} = 0}, \\ {\frac{\partial H_{D}(\bar{b}, d)}{\partial d}|_{d = 0}\geq 0}. \end{array} \right. \end{eqnarray} | (5.9) |
From (5.9), we obtain
\begin{eqnarray} \left \{ \begin{array}{ll} {\bar{b} = \frac{{\rm VaR}_{X}(\alpha)[{\rm VaR}_{X}(\alpha)-k(0)]}{[{\rm VaR}_{X}(\alpha)-k(0)]^{2}+[{\rm VaR}_{X}(\alpha)]^{2}}}, \\ {[(1-\bar{b}){\rm VaR}_{X}(\alpha)+\bar{b}k(0)]k'(0)-\bar{b}{\rm VaR}_{X}(\alpha)\geq 0}, \end{array} \right. \end{eqnarray} | (5.10) |
which means u(0)\leq 0 , which contradicts condition (M2).
In summary, if condition (M2) holds, the minimum of the function H_{D}(b, d) must be attained at the unique stationary point (b^{*}, d^{*}) , i.e., the optimal ceded loss function is given by f_{4}^{*}(x) = b^{*}(x-d^{*})_{+} .
(3) If condition (M3) holds, from the above arguments in (2), we know that H_{D}(b, d) has no stationary points because v(d^{*})\leq 0 and H_{D}(b, d) does not attain the minimum at (\bar{b}, 0) because u(0) > 0 . Thus, the function H_{D}(b, d) attains the minimum at the boundary point (1, \bar{d}) if condition (M3) holds, that is to say, the optimal ceded loss function is given by f_{4}^{*}(x) = (x-\bar{d})_{+} .
(4) If condition (M4) holds, then the equation u(d) = 0 has no solutions in (0, {\rm VaR}_{X}(\alpha)) , namely, the function H_{D}(b, d) has no stationary points. Thus, the minimum point of H_{D}(b, d) over \mathcal{A} must lie on the right boundary at some point (1, \bar{d}) or must lie on the lower boundary at some point (\bar{b}, 0) . If the minimum of H_{D}(b, d) is attainable at (1, \bar{d}) , then the conditions in (5.7) hold. Since k'(\bar{d}) > 0 , we obtain [k(\bar{d})[k(\bar{d})-{\rm VaR}_{X}(\alpha)]+[{\rm VaR}_{X}(\alpha)-\bar{d}]^{2}]k'(\bar{d})\leq 0 . Straightforward algebra leads to u(\bar{d})\geq 0 . This contradicts u(0) \leq 0 and u'(d) < 0 . Thus, the minimum point of H_{D}(b, d) over \mathcal{A} must lie on the lower boundary at point (\bar{b}, 0) , namely, the optimal ceded loss function is given by f_{4}^{*}(x) = \bar{b}x .
For a layer reinsurance with a \in [0, {\rm VaR}_{X}(\alpha)] , the total costs of the insurer and the reinsurer under the Dutch premium principle are
\begin{equation*} VaR_{T_{I}^{f}}(\alpha) = t(a)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(a)}S(x)dx, \end{equation*} |
\begin{equation*} VaR_{T_{R}^{f}}(\alpha) = {\rm VaR}_{X}(\alpha)-a, \end{equation*} |
where t(a) = a+\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx . Hence, the loss function is
\begin{equation*} L_{D}(a) = \sqrt{[t(a)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(a)}S(x)dx]^2+[{\rm VaR}_{X}(\alpha)-a]^2}. \end{equation*} |
Theorem 5.2. The optimal ceded loss function that solves (2.14) with \mathcal{F}^2 constraint is given by
\begin{equation} f_{5}^*(x) = (x-a_{2}^*)_+-(x-{\rm VaR}_{X}(\alpha))_+, \end{equation} | (5.11) |
where a_{2}^* is the unique solution of equation
\begin{equation} \begin{aligned} [t(a)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(a)}S(x)dx] [1-S(a)][1-\beta S(t(a))]- ({\rm VaR}_{X}(\alpha)-a) = 0. \end{aligned} \end{equation} | (5.12) |
Proof. Let H_{D}(a) = L_{D}^2(a) = [t(a)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(a)}S(x)dx]^2+[{\rm VaR}_{X}(\alpha)-a]^2 , then
\begin{equation} H_{D}'(a) = 2\{[t(a)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(a)}S(x)dx] [1-S(a)][1-\beta S(t(a))]- ({\rm VaR}_{X}(\alpha)-a)\}, \end{equation} | (5.13) |
\begin{equation} H_{D}''(a) \gt 0. \end{equation} | (5.14) |
From Eq (5.13), we know that
\begin{equation*} H_{D}'({\rm VaR}_{X}(\alpha)) = 2{\rm VaR}_{X}(\alpha)(1-S({\rm VaR}_{X}(\alpha)))(1-\beta S({\rm VaR}_{X}(\alpha))) \gt 0, \end{equation*} |
and
\begin{equation*} \begin{aligned} H_{D}'(0)& = 2[(t(0)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx)(1-S(0))(1-\beta S(t(0)))-{\rm VaR}_{X}(\alpha)]\\ & \lt 2[t(0)+{\rm VaR}_{X}(\alpha)-t(0)-{\rm VaR}_{X}(\alpha)]\\ & = 0. \end{aligned} \end{equation*} |
Hence, from (5.12) and (5.14), we have
\begin{equation*} H_{D}'(a)\gtreqqless 0 \Leftrightarrow a_{2}^*\lesseqqgtr a. \end{equation*} |
Therefore, a_{2}^* is the unique minimum point of H_{D}(a) . Since L_{D}(a) and H_{D}(a) have the same minimum points, the optimal ceded loss function that solves (2.14) with \mathcal{F}^2 constraint is given by (5.11) and (5.12).
From Proposition 4, we can deduce optimal ceded loss functions by confining attention to \mathcal{H}^{3} . For a quota-share reinsurance with a policy limit h(x) = c(x-(x-{\rm VaR}_{X}(\alpha))_+) , the total costs of the insurer and the reinsurer under the Dutch premium principle are
\begin{equation} \begin{aligned} VaR_{T_{I}^{f}}(\alpha)& = (1-c){\rm VaR}_{X}(\alpha)+ct(0)+\beta c\int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx, \\ VaR_{T_{R}^{f}}(\alpha)& = c{\rm VaR}_{X}(\alpha). \end{aligned} \end{equation} | (5.15) |
Hence, the loss function is
\begin{equation*} L_{D}(c) = \sqrt{[(1-c){\rm VaR}_{X}(\alpha)+ct(0)+\beta c\int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx]^2+[c{\rm VaR}_{X}(\alpha)]^2}. \end{equation*} |
Theorem 5.3. The optimal ceded loss function that solves (2.14) with \mathcal{F}^3 constraint is given by
\begin{equation} f_{6}^{*}(x) = c_{2}^*(x-(x-{\rm VaR}_{X}(\alpha))_+), \end{equation} | (5.16) |
where
\begin{equation*} c_{2}^* = \frac {-\varphi ({\rm VaR}_{X}(\alpha)){\rm VaR}_{X}(\alpha)}{({\rm VaR}_{X}(\alpha))^2+(\varphi ({\rm VaR}_{X}(\alpha)))^2} \end{equation*} |
and \varphi({\rm VaR}_{X}(\alpha)) = \int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx-{\rm VaR}_{X}(\alpha) .
Proof. Let H_{D}(c) = L_{D}^2(c) = [(1-c){\rm VaR}_{X}(\alpha)+ct(0)+\beta c\int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx]^2+[c{\rm VaR}_{X}(\alpha)]^2 , then L_{D}(c) and H_{D}(c) have the same minimum points. Taking the derivative of H_{D}(c) , we obtain
\begin{equation} H_{D}'(c) = 2c[({\rm VaR}_{X}(\alpha))^2+(\varphi ({\rm VaR}_{X}(\alpha)))^2]+2\varphi({\rm VaR}_{X}(\alpha)){\rm VaR}_{X}(\alpha), \end{equation} | (5.17) |
\begin{equation} H_{D}''(c) = 2[({\rm VaR}_{X}(\alpha))^2+(\varphi ({\rm VaR}_{X}(\alpha)))^2] \gt 0. \end{equation} | (5.18) |
Note that
\begin{equation} \begin{aligned} \varphi({\rm VaR}_{X}(\alpha))& = \int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx-{\rm VaR}_{X}(\alpha)\\ & \lt \int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx+{\rm VaR}_{X}(\alpha)-t(0)-{\rm VaR}_{X}(\alpha)\\ & = 0. \end{aligned} \end{equation} | (5.19) |
Then, according to (5.17), (5.18) and (5.19), H_{D}(c) and L_{D}(c) attain their minimum at c = c_{2}^* , where c_{2}^* = \frac {-\varphi ({\rm VaR}_{X}(\alpha)){\rm VaR}_{X}(\alpha)}{({\rm VaR}_{X}(\alpha))^2+(\varphi ({\rm VaR}_{X}(\alpha)))^2}\leq \frac{1}{2}.
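The closing bound c_{2}^{*}\leq \frac{1}{2} is an instance of the AM-GM inequality 2|\varphi|V\leq V^{2}+\varphi^{2} ; writing V = {\rm VaR}_{X}(\alpha) and \varphi = \varphi({\rm VaR}_{X}(\alpha)) for brevity (our shorthand), and recalling \varphi < 0 from (5.19):

```latex
\begin{equation*}
c_{2}^{*} = \frac{-\varphi V}{V^{2}+\varphi^{2}}
          = \frac{|\varphi|\,V}{V^{2}+\varphi^{2}}
          \leq \frac{\tfrac{1}{2}(V^{2}+\varphi^{2})}{V^{2}+\varphi^{2}}
          = \frac{1}{2}.
\end{equation*}
```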
In this section, we construct four numerical examples to illustrate the optimal reinsurance policies derived in the previous sections. Let the confidence level be \alpha = 0.95 and the safety loading parameters be \theta = 0.2 and \beta = 0.5 .
Example 6.1. Assume that the reinsurance premium is calculated by the expectation premium principle and the loss variable X has an exponential distribution with survival function S(x) = e^{-0.001x} , then F(0) = 0 < \frac{\theta}{1+\theta} = 0.1667 < \alpha = 0.95 , {\rm VaR}_{X}(\alpha) = 2995.73 > 1182.32 = g(d_{0}) , d^{*} = 1995.73 , p(d^{*}) = -806.73 . By Theorems 4.1, 4.2 and 4.3, we know that the optimal ceded loss function among \mathcal{F}^1 is f_{1}^{*} = (x-1599.90)_{+} , the optimal ceded loss function among \mathcal{F}^2 is f_{2}^{*} = (x-1622.55)_{+}-(x-2995.73)_{+} and the optimal ceded loss function among \mathcal{F}^3 is f_{3}^{*} = 0.4477(x-(x-2995.73)_{+}) .
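The figures in Example 6.1 can be reproduced numerically. The following sketch is ours, not the authors' code: it assumes the exponential survival function S(x) = e^{-0.001x} with closed-form integrals, and the helper names ( `bisect` , `int_S` , `int_S_inf` ) are ours.

```python
import math

# Example 6.1 check (sketch, ours): exponential loss, expectation premium.
lam, alpha, theta = 0.001, 0.95, 0.2

S = lambda x: math.exp(-lam * x)
int_S = lambda a, b: (math.exp(-lam * a) - math.exp(-lam * b)) / lam  # integral of S over [a,b]
int_S_inf = lambda d: math.exp(-lam * d) / lam                        # integral of S over [d,inf)

VaR = -math.log(1 - alpha) / lam                     # VaR_X(0.95), about 2995.73

def bisect(f, lo, hi, tol=1e-10):
    """Plain bisection for a continuous f with a sign change on [lo, hi]."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Stationary retention d*: q(d) = S(d)(VaR - d) - int_d^inf S = 0.
q = lambda d: S(d) * (VaR - d) - int_S_inf(d)
d_star = bisect(q, 0.0, VaR)                          # about 1995.73

# Layer attachment a_1* from Eq (4.19) of Theorem 4.2.
eq419 = lambda a: (a + (1 + theta) * int_S(a, VaR)) * (1 - (1 + theta) * S(a)) - (VaR - a)
a1_star = bisect(eq419, 0.0, VaR)                     # about 1622.55

# Quota share c_1* from Theorem 4.3.
phi = (1 + theta) * int_S(0.0, VaR) - VaR
c1_star = -phi * VaR / (VaR**2 + phi**2)              # about 0.4477
```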
Example 6.2. Assume that the reinsurance premium is calculated by the expectation premium principle and the loss variable X has a Pareto distribution with survival function S(x) = (\frac{2000}{x+2000})^{3} , then F(0) = 0 < \frac{\theta}{1+\theta} = 0.1667 < \alpha = 0.95 , {\rm VaR}_{X}(\alpha) = 3428.84 > 1187.98 = g(d_{0}) , d^{*} = 1619.22 , p(d^{*}) = 226.05 . By Theorems 4.1, 4.2 and 4.3, we know that the optimal ceded loss function among \mathcal{F}^1 is f_{1}^{*} = 0.9236(x-1619.22)_{+} , the optimal ceded loss function among \mathcal{F}^2 is f_{2}^{*} = (x-1801.98)_{+}-(x-3428.84)_{+} and the optimal ceded loss function among \mathcal{F}^3 is f_{3}^{*} = 0.4692(x-(x-3428.84)_{+}) .
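The Pareto figures in Example 6.2 can be checked the same way. The sketch below is ours; it uses the closed form \int_{d}^{\infty}S(x)dx = \frac{2000^{3}}{2(d+2000)^{2}} , and the expression for b^{*} is inferred by analogy with the b^{*} of Theorem 5.1 (an assumption on our part, since the expectation-principle formula for b^{*} lies outside this excerpt).

```python
# Example 6.2 check (sketch, ours): Pareto loss S(x) = (2000/(x+2000))^3.
theta, alpha = 0.2, 0.95
S = lambda x: (2000.0 / (x + 2000.0)) ** 3
int_S_inf = lambda d: 2000.0**3 / (2.0 * (d + 2000.0) ** 2)   # integral of S over [d,inf)
int_S = lambda a, b: int_S_inf(a) - int_S_inf(b)              # integral of S over [a,b]

VaR = 2000.0 * (1 - alpha) ** (-1.0 / 3.0) - 2000.0           # S(VaR) = 0.05, about 3428.84

def bisect(f, lo, hi, tol=1e-10):
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Stationary retention d*: q(d) = S(d)(VaR - d) - int_d^inf S = 0.
q = lambda d: S(d) * (VaR - d) - int_S_inf(d)
d_star = bisect(q, 0.0, VaR)                                  # about 1619.22

# Proportion b* at the stationary point (formula assumed by analogy
# with Theorem 5.1, with g(d) = d + (1+theta) * int_d^inf S).
g = lambda d: d + (1 + theta) * int_S_inf(d)
g_p = lambda d: 1 - (1 + theta) * S(d)
b_star = VaR * g_p(d_star) / (VaR - d_star + (VaR - g(d_star)) * g_p(d_star))  # about 0.9236

# Layer attachment a_1* from Eq (4.19) and quota share c_1* from Theorem 4.3.
eq419 = lambda a: (a + (1 + theta) * int_S(a, VaR)) * (1 - (1 + theta) * S(a)) - (VaR - a)
a1_star = bisect(eq419, 0.0, VaR)                             # about 1801.98
phi = (1 + theta) * int_S(0.0, VaR) - VaR
c1_star = -phi * VaR / (VaR**2 + phi**2)                      # about 0.4692
```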
Example 6.3. Assume that the reinsurance premium is calculated by the Dutch premium principle and the loss variable X has an exponential distribution with survival function S(x) = e^{-0.001x} , then {\rm VaR}_{X}(\alpha) = 2995.73 , \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx} = 5.4250 > 0.5 = \beta , u(0) = 1811.79 , d^{*} = 1950.79 , v(d^{*}) = -689.40 . By Theorems 5.1, 5.2 and 5.3, we know that the optimal ceded loss function among \mathcal{F}^1 is f_{4}^{*} = (x-1607.99)_{+} , the optimal ceded loss function among \mathcal{F}^2 is f_{5}^{*} = (x-2994.81)_{+}-(x-2995.73)_{+} and the optimal ceded loss function among \mathcal{F}^3 is f_{6}^{*} = 0.4500(x-(x-2995.73)_{+}) .
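The Dutch-principle figures in Example 6.3 can be verified similarly. The sketch below is ours; it assumes S(x) = e^{-0.001x} and uses k'(d) = (1-S(d))(1-\beta S(t(d))) with t(d) = d+\int_{d}^{\infty}S(x)dx , which follows from the chain rule, and the functions u and v of Theorem 5.1.

```python
import math

# Example 6.3 check (sketch, ours): exponential loss, Dutch premium, beta = 0.5.
lam, beta = 0.001, 0.5
S = lambda x: math.exp(-lam * x)
int_S_inf = lambda d: math.exp(-lam * d) / lam        # integral of S over [d,inf)

VaR = -math.log(0.05) / lam                           # about 2995.73

t = lambda d: d + int_S_inf(d)
k = lambda d: t(d) + beta * int_S_inf(t(d))           # k(d) as defined below (5.4)
k_p = lambda d: (1 - S(d)) * (1 - beta * S(t(d)))     # k'(d) by the chain rule

u = lambda d: VaR - k(d) - k_p(d) * (VaR - d)         # stationary-point equation
v = lambda d: VaR - d - k(d) * k_p(d)                 # right-boundary equation

def bisect(f, lo, hi, tol=1e-10):
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

u0 = u(0.0)                                           # about 1811.79 > 0
d_star = bisect(u, 0.0, VaR)                          # about 1950.79
v_at_d_star = v(d_star)                               # about -689.40 < 0: condition (M3)
d_bar = bisect(v, 0.0, VaR)                           # about 1607.99, so f_4* = (x - d_bar)_+
```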
Example 6.4. Assume that the reinsurance premium is calculated by the Dutch premium principle and the loss variable X has a Pareto distribution with survival function S(x) = (\frac{2000}{x+2000})^{3} , then {\rm VaR}_{X}(\alpha) = 3428.84 , \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx} = 5.4649 > 0.5 = \beta , u(0) = 2206.61 , d^{*} = 1525.01 , v(d^{*}) = 397.65 . By Theorems 5.1, 5.2 and 5.3, we know that the optimal ceded loss function among \mathcal{F}^1 is f_{4}^{*} = 0.8676(x-1525.01)_{+} , the optimal ceded loss function among \mathcal{F}^2 is f_{5}^{*} = (x-3427.91)_{+}-(x-3428.84)_{+} and the optimal ceded loss function among \mathcal{F}^3 is f_{6}^{*} = 0.4690(x-(x-3428.84)_{+}) .
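Finally, the same ingredients reproduce Example 6.4; the sketch is ours and uses the b^{*} formula stated in Theorem 5.1(2).

```python
# Example 6.4 check (sketch, ours): Pareto loss, Dutch premium, beta = 0.5.
beta = 0.5
S = lambda x: (2000.0 / (x + 2000.0)) ** 3
int_S_inf = lambda d: 2000.0**3 / (2.0 * (d + 2000.0) ** 2)   # integral of S over [d,inf)

VaR = 2000.0 * 0.05 ** (-1.0 / 3.0) - 2000.0                  # about 3428.84

t = lambda d: d + int_S_inf(d)
k = lambda d: t(d) + beta * int_S_inf(t(d))                   # k(d) as defined below (5.4)
k_p = lambda d: (1 - S(d)) * (1 - beta * S(t(d)))             # k'(d) by the chain rule
u = lambda d: VaR - k(d) - k_p(d) * (VaR - d)
v = lambda d: VaR - d - k(d) * k_p(d)

def bisect(f, lo, hi, tol=1e-10):
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

u0 = u(0.0)                                                   # about 2206.61 > 0
d_star = bisect(u, 0.0, VaR)                                  # about 1525.01
v_star = v(d_star)                                            # about 397.65 > 0: condition (M2)
# b* at the stationary point, as stated in Theorem 5.1(2).
b_star = VaR * k_p(d_star) / (VaR - d_star + (VaR - k(d_star)) * k_p(d_star))  # about 0.8676
```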
Remark 6.1. Note that the risks X have the same mean and the parameters are the same in the above four examples. For the exponential case, the optimal reinsurance policy is a stop-loss reinsurance when f\in \mathcal{F}^1 , while for the Pareto case, the optimal reinsurance policy is a change-loss reinsurance when f\in \mathcal{F}^1 . Therefore, the form of the optimal reinsurance policy depends on the distribution of the loss variable X .
Optimal reinsurance policies from the perspectives of both the insurer and the reinsurer have remained a fascinating topic in actuarial science, and many interesting optimal reinsurance models have been proposed. In contrast to the existing literature, this paper makes two new contributions to optimal reinsurance models that take both the insurer and the reinsurer into account. First, we propose an optimization criterion that minimizes their total costs under a loss function defined by the joint value-at-risk. Second, we extend the premium principle to a much wider class of premium principles satisfying two axioms: risk loading and stop-loss ordering preserving. Under these conditions, we derive the optimal reinsurance policies over three ceded loss function sets: (ⅰ) the change-loss reinsurance is optimal among the class of increasing convex ceded loss functions; (ⅱ) when the constraints on both ceded and retained loss functions are relaxed to increasing functions, the layer reinsurance is shown to be optimal; (ⅲ) the quota-share reinsurance with a limit is always optimal when the ceded loss functions are in the class of increasing concave functions. We further use the expectation premium principle and the Dutch premium principle to illustrate the application of our results by deriving the optimal parameters.
We also wish to point out that further research on this topic is needed. First, for reinsurance, the challenges of classical insurance are amplified, particularly when it comes to dealing with extreme situations such as large claims and rare events. We have to rethink classical models in order to cope successfully with these challenges, and one promising way is to focus on modelling and statistics; for related literature, see [32,33]. Second, most optimal reinsurance problems assume that the distributions of the insurer's risks are known. In practice, however, only incomplete information on the distributions is available, and how to obtain optimal reinsurance contracts under incomplete information is also an interesting topic; one approach is to use statistical methods, see [34,35]. Third, although some papers have been devoted to deriving optimal reinsurance under model uncertainty, optimal reinsurance with uncertainty still lacks available analysis tools; one possible remedy is to draw support from sub-linear expectation, for details, see [36,37]. We hope that these important open problems can be addressed in future research, and we believe that this article can foster further research in this direction.
The research was supported by Project of Shandong Province Higher Educational Science and Technology Program (J18KA249) and Social Science Planning Project of Shandong Province (20CTJJ02).
The authors declare that there is no conflict of interest.