The theory of inequalities plays a pivotal role in almost all branches of pure and applied mathematics, and the theory of convex functions has played a vital role in its development. In modern analysis, many inequalities are direct consequences of the convexity property of functions. One of the most extensively and intensively studied inequalities pertaining to the convexity property of functions is Hermite–Hadamard's inequality. This inequality provides a necessary and sufficient condition for a function to be convex. It reads as follows: Let $\Phi:I=[\flat_1,\flat_2]\subset\mathbb{R}\mapsto\mathbb{R}$ be a convex function on the closed interval $[\flat_1,\flat_2]$; then
$$\Phi\left(\frac{\flat_1+\flat_2}{2}\right)\le \frac{1}{\flat_2-\flat_1}\int_{\flat_1}^{\flat_2}\Phi(\tau)\,d\tau\le \frac{\Phi(\flat_1)+\Phi(\flat_2)}{2}.$$
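As a quick numerical illustration (a minimal sketch, assuming Python with SciPy is available; the convex function $\Phi(\tau)=\tau^2$ and the interval $[1,3]$ are illustrative choices, not taken from the text), the three quantities in Hermite–Hadamard's inequality can be evaluated directly:

```python
# Numerical sanity check of the Hermite-Hadamard inequality for a sample convex function.
# Assumptions: SciPy available; Phi(t) = t**2 and [b1, b2] = [1, 3] are illustrative choices.
from scipy.integrate import quad

def hermite_hadamard_terms(phi, b1, b2):
    """Return (left, middle, right) terms of the Hermite-Hadamard inequality on [b1, b2]."""
    left = phi((b1 + b2) / 2)                      # Phi of the midpoint
    middle = quad(phi, b1, b2)[0] / (b2 - b1)      # integral mean of Phi over [b1, b2]
    right = (phi(b1) + phi(b2)) / 2                # arithmetic mean of the endpoint values
    return left, middle, right

if __name__ == "__main__":
    left, middle, right = hermite_hadamard_terms(lambda t: t ** 2, 1.0, 3.0)
    print(left, middle, right)                     # expected ordering: left <= middle <= right
    assert left <= middle + 1e-12 <= right + 1e-12
```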
In recent years, several successful attempts have been made to obtain novel improvements and generalizations of Hermite–Hadamard's inequality, see [1,2,3,4]. Dragomir and Pearce [5] have written a very informative monograph on Hermite–Hadamard's inequality and its applications, in which interested readers can find many useful details pertaining to these inequalities. Another remarkable inequality that has played a significant role in the theory of inequalities is Jensen's inequality, see [6]. It reads as follows: Let $\Phi$ be a convex function on $[\flat_1,\flat_2]$; then for all $x_i\in[\flat_1,\flat_2]$ and $\mu_i\in[0,1]$ with $\sum_{i=1}^{n}\mu_i=1$, where $i=1,2,\dots,n$, we have
$$\Phi\left(\sum_{i=1}^{n}\mu_i x_i\right)\le \sum_{i=1}^{n}\mu_i\Phi(x_i).$$
The following inequality is known in the literature as the Jensen–Mercer inequality:
$$\Phi\left(\flat_1+\flat_2-\sum_{i=1}^{n}\mu_i x_i\right)\le \Phi(\flat_1)+\Phi(\flat_2)-\sum_{i=1}^{n}\mu_i\Phi(x_i),$$
for $\mu_i\in[0,1]$ with $\sum_{i=1}^{n}\mu_i=1$, where $\Phi$ is a convex function on $[\flat_1,\flat_2]$. For more details, see [7].
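A similar sketch (same assumptions as above; the convex function, interval, points and weights are again illustrative) checks the Jensen–Mercer inequality for randomly chosen data:

```python
# Numerical sanity check of the Jensen-Mercer inequality for a sample convex function.
# Assumptions: NumPy available; Phi(t) = exp(t), [b1, b2] = [0, 2] and the random data are illustrative.
import numpy as np

def jensen_mercer_gap(phi, b1, b2, x, mu):
    """Return RHS - LHS of the Jensen-Mercer inequality (should be >= 0 for convex phi)."""
    x, mu = np.asarray(x, float), np.asarray(mu, float)
    mu = mu / mu.sum()                              # normalise so that sum(mu_i) = 1
    lhs = phi(b1 + b2 - np.dot(mu, x))              # Phi(b1 + b2 - sum mu_i x_i)
    rhs = phi(b1) + phi(b2) - np.dot(mu, phi(x))    # Phi(b1) + Phi(b2) - sum mu_i Phi(x_i)
    return rhs - lhs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 2.0, size=5)               # points x_i inside [b1, b2]
    mu = rng.uniform(size=5)                        # nonnegative weights, normalised inside the function
    gap = jensen_mercer_gap(np.exp, 0.0, 2.0, x, mu)
    print(gap)                                      # expected: nonnegative
    assert gap >= -1e-12
```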
Pavić [8] presented the following generalized version of the Jensen–Mercer inequality: Assume that $\Phi:[\flat_1,\flat_2]\mapsto\mathbb{R}$ is a convex function and $x_i\in[\flat_1,\flat_2]$, $i=1,2,\dots,n$, are $n$ points. Let $\alpha,\beta,\mu_i\in[0,1]$ and $\gamma\in[-1,1]$ be coefficients satisfying $\alpha+\beta+\gamma=\sum_{i=1}^{n}\mu_i=1$; then
$$\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\sum_{i=1}^{n}\mu_i x_i\right)\le \alpha\Phi(\flat_1)+\beta\Phi(\flat_2)+\gamma\sum_{i=1}^{n}\mu_i\Phi(x_i). \tag{1.1}$$
Remark 1.1. Note that
1) If we take $\alpha=1=\beta$ and $\gamma=-1$ in (1.1), then we get the Jensen–Mercer inequality.
2) If we choose α=0=β and γ=1 in (1.1), then we obtain the well-known Jensen inequality.
For some recent studies regarding Hermite-Hadamard-Mercer type inequalities, see [9,10].
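The following sketch (illustrative data, assuming NumPy is available) evaluates both sides of inequality (1.1) for a few admissible coefficient triples, including the two special cases of Remark 1.1:

```python
# Numerical sanity check of Pavic's inequality (1.1) for a sample convex function.
# Assumptions: NumPy available; Phi(t) = t**4, [b1, b2] = [-1, 2] and the coefficients below are illustrative.
import numpy as np

def pavic_gap(phi, b1, b2, alpha, beta, gamma, x, mu):
    """Return RHS - LHS of inequality (1.1); nonnegative for convex phi when
    alpha + beta + gamma = 1, sum(mu) = 1, alpha, beta, mu_i in [0,1], gamma in [-1,1]."""
    x, mu = np.asarray(x, float), np.asarray(mu, float)
    lhs = phi(alpha * b1 + beta * b2 + gamma * np.dot(mu, x))
    rhs = alpha * phi(b1) + beta * phi(b2) + gamma * np.dot(mu, phi(x))
    return rhs - lhs

if __name__ == "__main__":
    phi = lambda t: t ** 4
    x = np.array([-0.5, 0.3, 1.7])                       # points inside [b1, b2]
    mu = np.array([0.2, 0.5, 0.3])                       # weights summing to 1
    # (1, 1, -1) recovers the Jensen-Mercer case and (0, 0, 1) the Jensen case of Remark 1.1.
    for alpha, beta, gamma in [(1.0, 1.0, -1.0), (0.0, 0.0, 1.0), (0.3, 0.2, 0.5)]:
        gap = pavic_gap(phi, -1.0, 2.0, alpha, beta, gamma, x, mu)
        print(alpha, beta, gamma, gap)                   # expected: nonnegative in each case
        assert gap >= -1e-9
```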
Fractional calculus is the branch of mathematics that deals with integrals and derivatives of arbitrary real or complex order. Although its history is old, in recent years it has gained significant popularity and importance, mainly owing to its many applications in various fields of science and engineering. It provides useful tools for solving differential equations, integral equations, and problems involving the special functions of mathematical physics. Among the several known forms of fractional integrals, the Riemann–Liouville fractional integral has been investigated extensively; it is defined as follows:
Definition 1.1 ([11]). Let $\Phi\in L_1[\flat_1,\flat_2]$ (the set of all integrable functions on $[\flat_1,\flat_2]$). The Riemann–Liouville integrals $J^{\nu}_{\flat_1^{+}}\Phi$ and $J^{\nu}_{\flat_2^{-}}\Phi$ of order $\nu>0$ are defined by
$$J^{\nu}_{\flat_1^{+}}\Phi(x_1)=\frac{1}{\Gamma(\nu)}\int_{\flat_1}^{x_1}(x_1-\tau)^{\nu-1}\Phi(\tau)\,d\tau,\quad x_1>\flat_1,$$
and
$$J^{\nu}_{\flat_2^{-}}\Phi(x_1)=\frac{1}{\Gamma(\nu)}\int_{x_1}^{\flat_2}(\tau-x_1)^{\nu-1}\Phi(\tau)\,d\tau,\quad x_1<\flat_2.$$
Mubeen and Habibullah [12] introduced the notion of $\kappa$-Riemann–Liouville fractional integrals as follows: Let $\Phi\in L_1[\flat_1,\flat_2]$; then
$$J^{\nu,\kappa}_{\flat_1^{+}}\Phi(x_1)=\frac{1}{\kappa\Gamma_\kappa(\nu)}\int_{\flat_1}^{x_1}(x_1-\tau)^{\frac{\nu}{\kappa}-1}\Phi(\tau)\,d\tau,\quad x_1>\flat_1,$$
$$J^{\nu,\kappa}_{\flat_2^{-}}\Phi(x_1)=\frac{1}{\kappa\Gamma_\kappa(\nu)}\int_{x_1}^{\flat_2}(\tau-x_1)^{\frac{\nu}{\kappa}-1}\Phi(\tau)\,d\tau,\quad x_1<\flat_2,$$
where $\Gamma_\kappa(\nu)=\int_{0}^{\infty}\tau^{\nu-1}e^{-\frac{\tau^{\kappa}}{\kappa}}\,d\tau$, $\Re(\nu)>0$, $\kappa\in\mathbb{R}^{+}$, is the $\kappa$-gamma function, which was introduced and studied in [13].
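To make these operators concrete, the sketch below (assuming SciPy; all parameter values are illustrative) evaluates $\Gamma_\kappa$ and both one-sided $\kappa$-Riemann–Liouville integrals by direct quadrature; the closed form $\Gamma_\kappa(\nu)=\kappa^{\nu/\kappa-1}\Gamma(\nu/\kappa)$ used for comparison is a standard identity for the $\kappa$-gamma function, stated here as an assumption:

```python
# Numerical evaluation of the kappa-gamma function and kappa-Riemann-Liouville fractional integrals.
# Assumptions: NumPy/SciPy available; nu, kappa and the test function are illustrative choices.
import math
import numpy as np
from scipy.integrate import quad

def gamma_kappa(nu, kappa):
    """Gamma_kappa(nu) = int_0^inf t^(nu-1) exp(-t^kappa / kappa) dt, by quadrature."""
    return quad(lambda t: t ** (nu - 1) * math.exp(-t ** kappa / kappa), 0, np.inf)[0]

def J_plus(phi, a, x, nu, kappa):
    """Left-sided integral (J^{nu,kappa}_{a+} phi)(x) for x > a."""
    kernel = lambda t: (x - t) ** (nu / kappa - 1) * phi(t)
    return quad(kernel, a, x)[0] / (kappa * gamma_kappa(nu, kappa))

def J_minus(phi, b, x, nu, kappa):
    """Right-sided integral (J^{nu,kappa}_{b-} phi)(x) for x < b."""
    kernel = lambda t: (t - x) ** (nu / kappa - 1) * phi(t)
    return quad(kernel, x, b)[0] / (kappa * gamma_kappa(nu, kappa))

if __name__ == "__main__":
    nu, kappa = 1.5, 2.0
    # Compare the quadrature value of Gamma_kappa with the assumed closed form kappa^(nu/kappa-1)*Gamma(nu/kappa).
    print(gamma_kappa(nu, kappa), kappa ** (nu / kappa - 1) * math.gamma(nu / kappa))
    # With nu = kappa = 1 the operators reduce to the ordinary integral of phi over [a, x] or [x, b].
    print(J_plus(lambda t: t ** 2, 0.0, 1.0, 1.0, 1.0))   # expected 1/3
    print(J_minus(lambda t: t ** 2, 1.0, 0.0, 1.0, 1.0))  # expected 1/3
```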
Sarikaya et al. [14] were the first to derive a fractional analogue of Hermite–Hadamard's inequality. Since then, a blend of techniques from both fractional calculus and convex analysis has been used to obtain various fractional analogues of classical inequalities. For more details, see [15,16,17,18,19,20,21,22].
Inspired by the ongoing research, we establish some new Hermite–Hadamard–Mercer type inequalities using $\kappa$-Riemann–Liouville fractional integrals. Moreover, we derive two new integral identities as auxiliary results. Applying these identities, we obtain some new variants of Hermite–Hadamard–Mercer type inequalities via $\kappa$-Riemann–Liouville fractional integrals. Several special cases are deduced in detail, and some known results are recaptured as well. In order to illustrate the efficiency of our main results, applications regarding special means of positive real numbers and error estimations for the trapezoidal quadrature formula are provided.
In this section, we discuss our main results.
Theorem 2.1. Assume that $\Phi:[\flat_1,\flat_2]\mapsto\mathbb{R}$ is a convex function. Let $\alpha,\beta\in[0,1]$ and $\gamma\in(0,1]$ be coefficients satisfying $\alpha+\beta+\gamma=1$, and let $\nu,\kappa>0$. Then the inequality
$$\begin{aligned}
\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)
&\le \frac{\Gamma_\kappa(\nu+\kappa)}{2\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_2)^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_1)^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\\
&\le \alpha\Phi(\flat_1)+\beta\Phi(\flat_2)+\gamma\frac{\Phi(x_1)+\Phi(x_2)}{2},
\end{aligned}$$
holds for all x1,x2∈[♭1,♭2] with x1<x2.
Proof. Consider
$$\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_{11}+x_{21}}{2}\right)=\Phi\left(\frac{(\alpha\flat_1+\beta\flat_2+\gamma x_{11})+(\alpha\flat_1+\beta\flat_2+\gamma x_{21})}{2}\right).$$
Using the change-of-variable technique, with $\alpha\flat_1+\beta\flat_2+\gamma x_{11}=\tau(\alpha\flat_1+\beta\flat_2+\gamma x_1)+(1-\tau)(\alpha\flat_1+\beta\flat_2+\gamma x_2)$ and $\alpha\flat_1+\beta\flat_2+\gamma x_{21}=(1-\tau)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\tau(\alpha\flat_1+\beta\flat_2+\gamma x_2)$, the convexity of $\Phi$ gives
$$\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)\le \frac{1}{2}\Big[\Phi\big(\tau(\alpha\flat_1+\beta\flat_2+\gamma x_1)+(1-\tau)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\big)+\Phi\big(\tau(\alpha\flat_1+\beta\flat_2+\gamma x_2)+(1-\tau)(\alpha\flat_1+\beta\flat_2+\gamma x_1)\big)\Big].$$
Multiplying both sides of the above inequality by $\tau^{\frac{\nu}{\kappa}-1}$ and integrating with respect to $\tau$ on $[0,1]$, we get
$$\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)\le \frac{\nu}{2\kappa}\Big[\int_{0}^{1}\tau^{\frac{\nu}{\kappa}-1}\Phi\big(\tau(\alpha\flat_1+\beta\flat_2+\gamma x_1)+(1-\tau)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\big)\,d\tau+\int_{0}^{1}\tau^{\frac{\nu}{\kappa}-1}\Phi\big(\tau(\alpha\flat_1+\beta\flat_2+\gamma x_2)+(1-\tau)(\alpha\flat_1+\beta\flat_2+\gamma x_1)\big)\,d\tau\Big].$$
After simplification, we obtain
$$\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)\le \frac{\Gamma_\kappa(\nu+\kappa)}{2\kappa\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}\Gamma_\kappa(\nu)}\Big[\int_{\alpha\flat_1+\beta\flat_2+\gamma x_1}^{\alpha\flat_1+\beta\flat_2+\gamma x_2}(\alpha\flat_1+\beta\flat_2+\gamma x_2-u)^{\frac{\nu}{\kappa}-1}\Phi(u)\,du+\int_{\alpha\flat_1+\beta\flat_2+\gamma x_1}^{\alpha\flat_1+\beta\flat_2+\gamma x_2}\big(u-(\alpha\flat_1+\beta\flat_2+\gamma x_1)\big)^{\frac{\nu}{\kappa}-1}\Phi(u)\,du\Big].$$
Consequently, we have
$$\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)\le \frac{\Gamma_\kappa(\nu+\kappa)}{2\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_2)^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_1)^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big].$$
To prove the second inequality, from the convexity of $\Phi$ we have
$$\Phi\big(\tau(\alpha\flat_1+\beta\flat_2+\gamma x_1)+(1-\tau)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\big)\le \tau\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_1)+(1-\tau)\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_2), \tag{2.1}$$
and
$$\Phi\big(\tau(\alpha\flat_1+\beta\flat_2+\gamma x_2)+(1-\tau)(\alpha\flat_1+\beta\flat_2+\gamma x_1)\big)\le \tau\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_2)+(1-\tau)\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_1). \tag{2.2}$$
Adding inequalities (2.1) and (2.2), applying inequality (1.1) with $n=1$ to each term on the right-hand side, and then multiplying both sides by $\tau^{\frac{\nu}{\kappa}-1}$ and integrating with respect to $\tau$ on $[0,1]$, we get
$$\int_{0}^{1}\tau^{\frac{\nu}{\kappa}-1}\Phi\big(\tau(\alpha\flat_1+\beta\flat_2+\gamma x_1)+(1-\tau)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\big)\,d\tau+\int_{0}^{1}\tau^{\frac{\nu}{\kappa}-1}\Phi\big(\tau(\alpha\flat_1+\beta\flat_2+\gamma x_2)+(1-\tau)(\alpha\flat_1+\beta\flat_2+\gamma x_1)\big)\,d\tau\le \frac{2\kappa}{\nu}\left[\alpha\Phi(\flat_1)+\beta\Phi(\flat_2)+\gamma\frac{\Phi(x_1)+\Phi(x_2)}{2}\right].$$
After a simple calculation, we obtain the second part of the result. This completes the proof.
Corollary 2.1. If we choose α=0=β and γ=1 in Theorem 2.1, then
$$\Phi\left(\frac{x_1+x_2}{2}\right)\le \frac{\Gamma_\kappa(\nu+\kappa)}{2(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{x_2^{-}}\Phi\big)(x_1)+\big(J^{\nu,\kappa}_{x_1^{+}}\Phi\big)(x_2)\Big]\le \frac{\Phi(x_1)+\Phi(x_2)}{2},$$
holds for all x1,x2∈[♭1,♭2] with x1<x2, see [23].
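As a rough sanity check of Theorem 2.1, the sketch below (assuming SciPy; the test function $\Phi(t)=e^t$ and all parameter values are illustrative, and $\nu=\kappa=1$ is chosen so that the fractional integrals reduce to ordinary ones) evaluates the three expressions of the theorem for one configuration:

```python
# Rough numerical check of the three expressions in Theorem 2.1 for one sample configuration.
# Assumptions: NumPy/SciPy available; Phi, the interval and all parameter values below are illustrative.
import math
import numpy as np
from scipy.integrate import quad

def gamma_kappa(nu, kappa):
    return quad(lambda t: t ** (nu - 1) * math.exp(-t ** kappa / kappa), 0, np.inf)[0]

def frac_int(phi, lo, hi, eval_at, nu, kappa, side):
    """(J^{nu,kappa}_{lo+} phi)(hi) for side='+' and (J^{nu,kappa}_{hi-} phi)(lo) for side='-'."""
    if side == "+":
        kernel = lambda t: (eval_at - t) ** (nu / kappa - 1) * phi(t)
    else:
        kernel = lambda t: (t - eval_at) ** (nu / kappa - 1) * phi(t)
    return quad(kernel, lo, hi)[0] / (kappa * gamma_kappa(nu, kappa))

if __name__ == "__main__":
    phi = math.exp                                   # a convex test function
    b1, b2, x1, x2 = 0.0, 4.0, 1.0, 3.0              # x1, x2 inside [b1, b2]
    alpha, beta, gamma_c = 0.2, 0.3, 0.5             # alpha + beta + gamma = 1
    nu, kappa = 1.0, 1.0                             # nu = kappa = 1: fractional integrals become ordinary
    A1 = alpha * b1 + beta * b2 + gamma_c * x1
    A2 = alpha * b1 + beta * b2 + gamma_c * x2
    left = phi(alpha * b1 + beta * b2 + gamma_c * (x1 + x2) / 2)
    middle = gamma_kappa(nu + kappa, kappa) / (2 * gamma_c ** (nu / kappa) * (x2 - x1) ** (nu / kappa)) * (
        frac_int(phi, A1, A2, A1, nu, kappa, "-") + frac_int(phi, A1, A2, A2, nu, kappa, "+"))
    right = alpha * phi(b1) + beta * phi(b2) + gamma_c * (phi(x1) + phi(x2)) / 2
    print(left, middle, right)                       # expected ordering: left <= middle <= right
```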
Theorem 2.2. Assume that $\Phi:[\flat_1,\flat_2]\mapsto\mathbb{R}$ is a convex function. Let $\alpha,\beta\in[0,1]$ and $\gamma\in(0,1]$ be coefficients satisfying $\alpha+\beta+\gamma=1$, and let $\nu,\kappa>0$. Then
$$\begin{aligned}
\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)
&\le \frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}}}{2\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)\Big]\\
&\le \alpha\Phi(\flat_1)+\beta\Phi(\flat_2)+\gamma\frac{\Phi(x_1)+\Phi(x_2)}{2},
\end{aligned}$$
holds for all x1,x2∈[♭1,♭2] with x1<x2, and ω∈N.
Proof. Since $\Phi$ is a convex function, we have
$$\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_{11}+x_{21}}{2}\right)\le \frac{1}{2}\Big[\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_{11})+\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_{21})\Big].$$
Using the change-of-variable technique with $x_{11}=\frac{\tau}{\omega+1}x_1+\frac{\omega+1-\tau}{\omega+1}x_2$ and $x_{21}=\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2$, we have
$$\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)\le \frac{1}{2}\Big[\Phi\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\tau}{\omega+1}x_1+\frac{\omega+1-\tau}{\omega+1}x_2\Big)\Big)+\Phi\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)\Big].$$
Multiplying both sides of the above inequality by $\tau^{\frac{\nu}{\kappa}-1}$ and integrating with respect to $\tau$ on $[0,1]$, we get
$$\begin{aligned}
\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)
&\le \frac{\nu}{2\kappa}\Big[\int_{0}^{1}\tau^{\frac{\nu}{\kappa}-1}\Phi\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\tau}{\omega+1}x_1+\frac{\omega+1-\tau}{\omega+1}x_2\Big)\Big)\,d\tau+\int_{0}^{1}\tau^{\frac{\nu}{\kappa}-1}\Phi\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)\,d\tau\Big]\\
&=\frac{\nu(\omega+1)^{\frac{\nu}{\kappa}}}{2\kappa\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\int_{\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1}}^{\alpha\flat_1+\beta\flat_2+\gamma x_2}(\alpha\flat_1+\beta\flat_2+\gamma x_2-u)^{\frac{\nu}{\kappa}-1}\Phi(u)\,du+\int_{\alpha\flat_1+\beta\flat_2+\gamma x_1}^{\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}}\big(u-(\alpha\flat_1+\beta\flat_2+\gamma x_1)\big)^{\frac{\nu}{\kappa}-1}\Phi(u)\,du\Big]\\
&=\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}}}{2\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)\Big].
\end{aligned}$$
The first inequality is proved. To prove the second inequality, from the convexity of $\Phi$ and inequality (1.1), we have
$$\Phi\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\tau}{\omega+1}x_1+\frac{\omega+1-\tau}{\omega+1}x_2\Big)\Big)\le \alpha\Phi(\flat_1)+\beta\Phi(\flat_2)+\gamma\Big(\frac{\tau}{\omega+1}\Phi(x_1)+\frac{\omega+1-\tau}{\omega+1}\Phi(x_2)\Big), \tag{2.3}$$
and
$$\Phi\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)\le \alpha\Phi(\flat_1)+\beta\Phi(\flat_2)+\gamma\Big(\frac{\tau}{\omega+1}\Phi(x_2)+\frac{\omega+1-\tau}{\omega+1}\Phi(x_1)\Big). \tag{2.4}$$
Adding inequalities (2.3) and (2.4), multiplying both sides by $\tau^{\frac{\nu}{\kappa}-1}$, and then integrating with respect to $\tau$ on $[0,1]$, we obtain the second inequality. This completes the proof.
Corollary 2.2. If we choose α=0=β and γ=1 in Theorem 2.2, then
$$\Phi\left(\frac{x_1+x_2}{2}\right)\le \frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}}}{2(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(x_2)+\big(J^{\nu,\kappa}_{(\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(x_1)\Big]\le \frac{\Phi(x_1)+\Phi(x_2)}{2},$$
holds for all x1,x2∈[♭1,♭2] with x1<x2, and ω∈N.
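Analogously, Corollary 2.2 can be probed numerically; in the sketch below (assuming SciPy; $\Phi(t)=t^2$, the interval $[0,3]$ and $\omega=2$ are illustrative, and $\nu=\kappa=1$ again reduces the fractional integrals to ordinary integrals) the three expressions are evaluated for one configuration:

```python
# Numerical check of the three expressions in Corollary 2.2 for one sample configuration.
# Assumptions: SciPy available; Phi(t) = t**2, [x1, x2] = [0, 3], omega = 2 and nu = kappa = 1 are illustrative.
from scipy.integrate import quad

def corollary_2_2_terms(phi, x1, x2, omega):
    """Evaluate Corollary 2.2 with nu = kappa = 1, where both fractional integrals reduce to ordinary ones."""
    m_plus = (x1 + omega * x2) / (omega + 1)          # left end of the integral for the '+' operator
    m_minus = (omega * x1 + x2) / (omega + 1)         # right end of the integral for the '-' operator
    left = phi((x1 + x2) / 2)
    middle = (omega + 1) / (2 * (x2 - x1)) * (quad(phi, m_plus, x2)[0] + quad(phi, x1, m_minus)[0])
    right = (phi(x1) + phi(x2)) / 2
    return left, middle, right

if __name__ == "__main__":
    print(corollary_2_2_terms(lambda t: t ** 2, 0.0, 3.0, 2))   # expected ordering: left <= middle <= right
```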
In this section, we derive two new auxiliary identities, which will be used in obtaining our further results.
Lemma 3.1. Let $\Phi:[\flat_1,\flat_2]\mapsto\mathbb{R}$ be a differentiable function on $(\flat_1,\flat_2)$ with $\flat_1<\flat_2$. If $\Phi'\in L_1[\flat_1,\flat_2]$, $\alpha,\beta\in[0,1]$ and $\gamma\in(0,1]$ are coefficients satisfying $\alpha+\beta+\gamma=1$, and $\nu,\kappa>0$, then
$$\begin{aligned}
&\frac{\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_2)}{2}-\frac{\Gamma_\kappa(\nu+\kappa)}{2\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_2)^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_1)^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\\
&=\frac{\gamma(x_2-x_1)}{2}\Big[\int_{0}^{1}(1-\tau)^{\frac{\nu}{\kappa}}\Phi'\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\,d\tau-\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Phi'\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\,d\tau\Big],
\end{aligned}$$
holds for all x1,x2∈[♭1,♭2] with x1<x2.
Proof. Consider
$$I:=\frac{\gamma(x_2-x_1)}{2}\Big[\int_{0}^{1}(1-\tau)^{\frac{\nu}{\kappa}}\Phi'\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\,d\tau-\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Phi'\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\,d\tau\Big]=\frac{\gamma(x_2-x_1)}{2}\big[I_1-I_2\big],$$
where
$$\begin{aligned}
I_1&:=\int_{0}^{1}(1-\tau)^{\frac{\nu}{\kappa}}\Phi'\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\,d\tau\\
&=-\frac{(1-\tau)^{\frac{\nu}{\kappa}}\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)}{\gamma(x_2-x_1)}\Bigg|_{0}^{1}-\frac{\nu}{\kappa\gamma(x_2-x_1)}\int_{0}^{1}(1-\tau)^{\frac{\nu}{\kappa}-1}\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\,d\tau\\
&=\frac{\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_2)}{\gamma(x_2-x_1)}-\frac{\Gamma_\kappa(\nu+\kappa)}{\gamma^{\frac{\nu}{\kappa}+1}(x_2-x_1)^{\frac{\nu}{\kappa}+1}}\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_2)^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1),
\end{aligned}$$
and
$$\begin{aligned}
I_2&:=\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Phi'\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\,d\tau\\
&=-\frac{\tau^{\frac{\nu}{\kappa}}\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)}{\gamma(x_2-x_1)}\Bigg|_{0}^{1}+\frac{\nu}{\kappa\gamma(x_2-x_1)}\int_{0}^{1}\tau^{\frac{\nu}{\kappa}-1}\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\,d\tau\\
&=-\frac{\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_1)}{\gamma(x_2-x_1)}+\frac{\Gamma_\kappa(\nu+\kappa)}{\gamma^{\frac{\nu}{\kappa}+1}(x_2-x_1)^{\frac{\nu}{\kappa}+1}}\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_1)^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2).
\end{aligned}$$
Substituting the values of I1 and I2 in I, we obtain our required result.
Corollary 3.1. If we choose α=0=β and γ=1 in Lemma 3.1, then
$$\frac{\Phi(x_1)+\Phi(x_2)}{2}-\frac{\Gamma_\kappa(\nu+\kappa)}{2(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{x_2^{-}}\Phi\big)(x_1)+\big(J^{\nu,\kappa}_{x_1^{+}}\Phi\big)(x_2)\Big]=\frac{x_2-x_1}{2}\Big[\int_{0}^{1}(1-\tau)^{\frac{\nu}{\kappa}}\Phi'(\tau x_1+(1-\tau)x_2)\,d\tau-\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Phi'(\tau x_1+(1-\tau)x_2)\,d\tau\Big].$$
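Since Corollary 3.1 is an identity, both sides can be compared numerically; the sketch below (assuming SciPy; $\Phi(t)=t^3$ and the values of $x_1,x_2,\nu,\kappa$ are illustrative) does this by direct quadrature:

```python
# Numerical verification that the two sides of the identity in Corollary 3.1 agree.
# Assumptions: NumPy/SciPy available; Phi(t) = t**3 and the parameter values are illustrative.
import math
import numpy as np
from scipy.integrate import quad

def gamma_kappa(nu, kappa):
    return quad(lambda t: t ** (nu - 1) * math.exp(-t ** kappa / kappa), 0, np.inf)[0]

def corollary_3_1_sides(phi, dphi, x1, x2, nu, kappa):
    gk = gamma_kappa(nu, kappa)
    # Right- and left-sided kappa-Riemann-Liouville integrals over [x1, x2].
    J_minus = quad(lambda t: (t - x1) ** (nu / kappa - 1) * phi(t), x1, x2)[0] / (kappa * gk)
    J_plus = quad(lambda t: (x2 - t) ** (nu / kappa - 1) * phi(t), x1, x2)[0] / (kappa * gk)
    lhs = (phi(x1) + phi(x2)) / 2 - gamma_kappa(nu + kappa, kappa) / (2 * (x2 - x1) ** (nu / kappa)) * (J_minus + J_plus)
    rhs = (x2 - x1) / 2 * (
        quad(lambda t: (1 - t) ** (nu / kappa) * dphi(t * x1 + (1 - t) * x2), 0, 1)[0]
        - quad(lambda t: t ** (nu / kappa) * dphi(t * x1 + (1 - t) * x2), 0, 1)[0])
    return lhs, rhs

if __name__ == "__main__":
    lhs, rhs = corollary_3_1_sides(lambda t: t ** 3, lambda t: 3 * t ** 2, 1.0, 2.0, 3.0, 2.0)
    print(lhs, rhs)                       # the two values should agree up to quadrature error
    assert abs(lhs - rhs) < 1e-6
```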
Lemma 3.2. Let $\Phi:[\flat_1,\flat_2]\mapsto\mathbb{R}$ be a differentiable function on $(\flat_1,\flat_2)$ with $\flat_1<\flat_2$. If $\Phi'\in L_1[\flat_1,\flat_2]$, $\alpha,\beta\in[0,1]$ and $\gamma\in(0,1]$ are coefficients satisfying $\alpha+\beta+\gamma=1$, and $\nu,\kappa>0$, then
$$\begin{aligned}
&\frac{\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}\big)+\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1}\big)}{\omega+1}\\
&\quad-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}-1}}{\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\\
&=\frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\Big[\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)\,d\tau-\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\tau}{\omega+1}x_1+\frac{\omega+1-\tau}{\omega+1}x_2\Big)\Big)\,d\tau\Big],
\end{aligned}$$
holds for all x1,x2∈[♭1,♭2] with x1<x2, and ω∈N.
Proof. Consider
$$J:=\frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\Big[\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)\,d\tau-\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\tau}{\omega+1}x_1+\frac{\omega+1-\tau}{\omega+1}x_2\Big)\Big)\,d\tau\Big]=\frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\big[J_1-J_2\big],$$
where
$$\begin{aligned}
J_1&:=\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)\,d\tau\\
&=\frac{(\omega+1)\,\tau^{\frac{\nu}{\kappa}}\Phi\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)}{\gamma(x_2-x_1)}\Bigg|_{0}^{1}-\frac{(\omega+1)\nu}{\kappa\gamma(x_2-x_1)}\int_{0}^{1}\tau^{\frac{\nu}{\kappa}-1}\Phi\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)\,d\tau\\
&=\frac{(\omega+1)\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}\big)}{\gamma(x_2-x_1)}-\frac{\nu(\omega+1)^{\frac{\nu}{\kappa}+1}}{\kappa\gamma^{\frac{\nu}{\kappa}+1}(x_2-x_1)^{\frac{\nu}{\kappa}+1}}\int_{\alpha\flat_1+\beta\flat_2+\gamma x_1}^{\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}}\big(u-(\alpha\flat_1+\beta\flat_2+\gamma x_1)\big)^{\frac{\nu}{\kappa}-1}\Phi(u)\,du\\
&=\frac{(\omega+1)\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}\big)}{\gamma(x_2-x_1)}-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}+1}}{\gamma^{\frac{\nu}{\kappa}+1}(x_2-x_1)^{\frac{\nu}{\kappa}+1}}\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1),
\end{aligned}$$
and
$$\begin{aligned}
J_2&:=\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\tau}{\omega+1}x_1+\frac{\omega+1-\tau}{\omega+1}x_2\Big)\Big)\,d\tau\\
&=-\frac{(\omega+1)\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1}\big)}{\gamma(x_2-x_1)}+\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}+1}}{\gamma^{\frac{\nu}{\kappa}+1}(x_2-x_1)^{\frac{\nu}{\kappa}+1}}\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2).
\end{aligned}$$
Substituting the values of $J_1$ and $J_2$ in $J$, we obtain the required result.
Now we derive some new results related to Hermite–Hadamard–Mercer type inequalities using Lemmas 3.1 and 3.2.
Theorem 3.1. Under the assumptions of Lemma 3.1, if |Φ′| is a convex function, then
$$\begin{aligned}
&\left|\frac{\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_2)}{2}-\frac{\Gamma_\kappa(\nu+\kappa)}{2\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_2)^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_1)^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\right|\\
&\le \frac{\gamma(x_2-x_1)}{2}\left[\frac{2\kappa-\kappa\left(\frac{1}{2}\right)^{\frac{\nu}{\kappa}-1}}{\nu+\kappa}\big(\alpha|\Phi'(\flat_1)|+\beta|\Phi'(\flat_2)|\big)+\frac{\kappa\gamma}{\nu+\kappa}\left(1-\left(\frac{1}{2}\right)^{\frac{\nu}{\kappa}}\right)\big(|\Phi'(x_1)|+|\Phi'(x_2)|\big)\right].
\end{aligned}$$
Proof. Using Lemma 3.1, the properties of the modulus, and the convexity of $|\Phi'|$, we have
$$\begin{aligned}
&\left|\frac{\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_2)}{2}-\frac{\Gamma_\kappa(\nu+\kappa)}{2\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_2)^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma x_1)^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\right|\\
&\le \frac{\gamma(x_2-x_1)}{2}\int_{0}^{1}\big|(1-\tau)^{\frac{\nu}{\kappa}}-\tau^{\frac{\nu}{\kappa}}\big|\,\big|\Phi'\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\big|\,d\tau\\
&=\frac{\gamma(x_2-x_1)}{2}\Big[\int_{0}^{\frac{1}{2}}\big[(1-\tau)^{\frac{\nu}{\kappa}}-\tau^{\frac{\nu}{\kappa}}\big]\big|\Phi'\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\big|\,d\tau+\int_{\frac{1}{2}}^{1}\big[\tau^{\frac{\nu}{\kappa}}-(1-\tau)^{\frac{\nu}{\kappa}}\big]\big|\Phi'\big(\alpha\flat_1+\beta\flat_2+\gamma(\tau x_1+(1-\tau)x_2)\big)\big|\,d\tau\Big]\\
&\le \frac{\gamma(x_2-x_1)}{2}\Big[\int_{0}^{\frac{1}{2}}\big[(1-\tau)^{\frac{\nu}{\kappa}}-\tau^{\frac{\nu}{\kappa}}\big]\big[\alpha|\Phi'(\flat_1)|+\beta|\Phi'(\flat_2)|+\gamma\big(\tau|\Phi'(x_1)|+(1-\tau)|\Phi'(x_2)|\big)\big]\,d\tau+\int_{\frac{1}{2}}^{1}\big[\tau^{\frac{\nu}{\kappa}}-(1-\tau)^{\frac{\nu}{\kappa}}\big]\big[\alpha|\Phi'(\flat_1)|+\beta|\Phi'(\flat_2)|+\gamma\big(\tau|\Phi'(x_1)|+(1-\tau)|\Phi'(x_2)|\big)\big]\,d\tau\Big].
\end{aligned}$$
After simple calculations, we obtain the required result.
Corollary 3.2. If we take α=0=β and γ=1 in Theorem 3.1, then
$$\left|\frac{\Phi(x_1)+\Phi(x_2)}{2}-\frac{\Gamma_\kappa(\nu+\kappa)}{2(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{x_2^{-}}\Phi\big)(x_1)+\big(J^{\nu,\kappa}_{x_1^{+}}\Phi\big)(x_2)\Big]\right|\le \frac{x_2-x_1}{2}\left[\frac{\kappa}{\nu+\kappa}\left(1-\left(\frac{1}{2}\right)^{\frac{\nu}{\kappa}}\right)\big(|\Phi'(x_1)|+|\Phi'(x_2)|\big)\right].$$
Corollary 3.3. If we choose ν=1=κ in Theorem 3.1, then
$$\left|\frac{\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\Phi(\alpha\flat_1+\beta\flat_2+\gamma x_2)}{2}-\frac{1}{\gamma(x_2-x_1)}\int_{\alpha\flat_1+\beta\flat_2+\gamma x_1}^{\alpha\flat_1+\beta\flat_2+\gamma x_2}\Phi(u)\,du\right|\le \frac{\gamma(x_2-x_1)}{2}\left[\frac{\alpha|\Phi'(\flat_1)|+\beta|\Phi'(\flat_2)|}{2}+\frac{\gamma\big(|\Phi'(x_1)|+|\Phi'(x_2)|\big)}{4}\right].$$
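The trapezoid-type bound of Corollary 3.3 can be checked numerically; the sketch below (assuming SciPy; $\Phi(t)=e^t$ and the parameter values are illustrative) evaluates both sides for one configuration:

```python
# Numerical check of Corollary 3.3 (trapezoid-type Mercer bound, nu = kappa = 1) for one configuration.
# Assumptions: SciPy available; Phi(t) = exp(t) and the parameter values below are illustrative.
import math
from scipy.integrate import quad

def corollary_3_3(phi, dphi, b1, b2, x1, x2, alpha, beta, gamma):
    A1 = alpha * b1 + beta * b2 + gamma * x1
    A2 = alpha * b1 + beta * b2 + gamma * x2
    lhs = abs((phi(A1) + phi(A2)) / 2 - quad(phi, A1, A2)[0] / (gamma * (x2 - x1)))
    rhs = gamma * (x2 - x1) / 2 * ((alpha * abs(dphi(b1)) + beta * abs(dphi(b2))) / 2
                                   + gamma * (abs(dphi(x1)) + abs(dphi(x2))) / 4)
    return lhs, rhs

if __name__ == "__main__":
    lhs, rhs = corollary_3_3(math.exp, math.exp, 0.0, 3.0, 0.5, 2.5, 0.25, 0.25, 0.5)
    print(lhs, rhs)                       # expected: lhs <= rhs
    assert lhs <= rhs + 1e-12
```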
Remark 3.1. Using Lemma 3.1 together with Hölder's inequality or the power-mean inequality, the interested reader can obtain further new integral inequalities. We omit their proofs here.
Theorem 3.2. Under the assumptions of Lemma 3.2, if |Φ′| is a convex function, then
$$\begin{aligned}
&\left|\frac{\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}\big)+\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1}\big)}{\omega+1}-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}-1}}{\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\right|\\
&\le \frac{2\gamma(x_2-x_1)}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu+\kappa}\right)\left[\alpha|\Phi'(\flat_1)|+\beta|\Phi'(\flat_2)|+\gamma\frac{|\Phi'(x_1)|+|\Phi'(x_2)|}{2}\right].
\end{aligned}$$
Proof. Using Lemma 3.2, the properties of the modulus, and the convexity of $|\Phi'|$, we have
$$\begin{aligned}
&\left|\frac{\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}\big)+\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1}\big)}{\omega+1}-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}-1}}{\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\right|\\
&\le \frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Big[\Big|\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)\Big|+\Big|\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\tau}{\omega+1}x_1+\frac{\omega+1-\tau}{\omega+1}x_2\Big)\Big)\Big|\Big]\,d\tau\\
&\le \frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Big[\alpha|\Phi'(\flat_1)|+\beta|\Phi'(\flat_2)|+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}|\Phi'(x_1)|+\frac{\tau}{\omega+1}|\Phi'(x_2)|\Big)+\alpha|\Phi'(\flat_1)|+\beta|\Phi'(\flat_2)|+\gamma\Big(\frac{\tau}{\omega+1}|\Phi'(x_1)|+\frac{\omega+1-\tau}{\omega+1}|\Phi'(x_2)|\Big)\Big]\,d\tau\\
&=\frac{2\gamma(x_2-x_1)}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu+\kappa}\right)\left[\alpha|\Phi'(\flat_1)|+\beta|\Phi'(\flat_2)|+\gamma\frac{|\Phi'(x_1)|+|\Phi'(x_2)|}{2}\right].
\end{aligned}$$
This completes the proof.
Corollary 3.4. If we take ν=ω=κ=1 in Theorem 3.2, then
$$\left|\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)-\frac{1}{\gamma(x_2-x_1)}\int_{\alpha\flat_1+\beta\flat_2+\gamma x_1}^{\alpha\flat_1+\beta\flat_2+\gamma x_2}\Phi(u)\,du\right|\le \frac{\gamma(x_2-x_1)}{4}\left[\alpha|\Phi'(\flat_1)|+\beta|\Phi'(\flat_2)|+\gamma\frac{|\Phi'(x_1)|+|\Phi'(x_2)|}{2}\right].$$
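A similar sketch (assuming SciPy; $\Phi(t)=t^4$ and the parameter values are illustrative) checks the midpoint-type bound of Corollary 3.4:

```python
# Numerical check of Corollary 3.4 (midpoint-type Mercer bound, nu = omega = kappa = 1) for one configuration.
# Assumptions: SciPy available; Phi(t) = t**4 and the parameter values below are illustrative.
from scipy.integrate import quad

def corollary_3_4(phi, dphi, b1, b2, x1, x2, alpha, beta, gamma):
    A1 = alpha * b1 + beta * b2 + gamma * x1
    A2 = alpha * b1 + beta * b2 + gamma * x2
    lhs = abs(phi(alpha * b1 + beta * b2 + gamma * (x1 + x2) / 2) - quad(phi, A1, A2)[0] / (gamma * (x2 - x1)))
    rhs = gamma * (x2 - x1) / 4 * (alpha * abs(dphi(b1)) + beta * abs(dphi(b2))
                                   + gamma * (abs(dphi(x1)) + abs(dphi(x2))) / 2)
    return lhs, rhs

if __name__ == "__main__":
    lhs, rhs = corollary_3_4(lambda t: t ** 4, lambda t: 4 * t ** 3, -2.0, 2.0, -1.0, 1.5, 0.3, 0.3, 0.4)
    print(lhs, rhs)                       # expected: lhs <= rhs
    assert lhs <= rhs + 1e-12
```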
Corollary 3.5. If we choose α=0=β and γ=1 in Theorem 3.2, then
$$\begin{aligned}
&\left|\frac{\Phi\big(\frac{\omega x_1+x_2}{\omega+1}\big)+\Phi\big(\frac{x_1+\omega x_2}{\omega+1}\big)}{\omega+1}-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}-1}}{(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(x_1)+\big(J^{\nu,\kappa}_{(\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(x_2)\Big]\right|\\
&\le \frac{x_2-x_1}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu+\kappa}\right)\big[|\Phi'(x_1)|+|\Phi'(x_2)|\big].
\end{aligned}$$
Theorem 3.3. Under the assumptions of Lemma 3.2, if |Φ′|q is a convex function, then
$$\begin{aligned}
&\left|\frac{\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}\big)+\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1}\big)}{\omega+1}-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}-1}}{\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\right|\\
&\le \frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu p+\kappa}\right)^{\frac{1}{p}}\left[\left(\alpha|\Phi'(\flat_1)|^{q}+\beta|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{1}{2(\omega+1)}|\Phi'(x_1)|^{q}+\frac{2(\omega+1)-1}{2(\omega+1)}|\Phi'(x_2)|^{q}\Big)\right)^{\frac{1}{q}}+\left(\alpha|\Phi'(\flat_1)|^{q}+\beta|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{1}{2(\omega+1)}|\Phi'(x_2)|^{q}+\frac{2(\omega+1)-1}{2(\omega+1)}|\Phi'(x_1)|^{q}\Big)\right)^{\frac{1}{q}}\right],
\end{aligned}$$
where $\frac{1}{p}+\frac{1}{q}=1$ and $q>1$.
Proof. Using Lemma 3.2, the properties of the modulus, Hölder's inequality, and the convexity of $|\Phi'|^{q}$, we have
$$\begin{aligned}
&\left|\frac{\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}\big)+\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1}\big)}{\omega+1}-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}-1}}{\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\right|\\
&\le \frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\left(\int_{0}^{1}\tau^{\frac{\nu p}{\kappa}}\,d\tau\right)^{\frac{1}{p}}\left[\left(\int_{0}^{1}\Big|\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)\Big|^{q}\,d\tau\right)^{\frac{1}{q}}+\left(\int_{0}^{1}\Big|\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\tau}{\omega+1}x_1+\frac{\omega+1-\tau}{\omega+1}x_2\Big)\Big)\Big|^{q}\,d\tau\right)^{\frac{1}{q}}\right]\\
&\le \frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu p+\kappa}\right)^{\frac{1}{p}}\left[\left(\int_{0}^{1}\Big[\alpha|\Phi'(\flat_1)|^{q}+\beta|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}|\Phi'(x_1)|^{q}+\frac{\tau}{\omega+1}|\Phi'(x_2)|^{q}\Big)\Big]\,d\tau\right)^{\frac{1}{q}}+\left(\int_{0}^{1}\Big[\alpha|\Phi'(\flat_1)|^{q}+\beta|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{\tau}{\omega+1}|\Phi'(x_1)|^{q}+\frac{\omega+1-\tau}{\omega+1}|\Phi'(x_2)|^{q}\Big)\Big]\,d\tau\right)^{\frac{1}{q}}\right]\\
&=\frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu p+\kappa}\right)^{\frac{1}{p}}\left[\left(\alpha|\Phi'(\flat_1)|^{q}+\beta|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{1}{2(\omega+1)}|\Phi'(x_1)|^{q}+\frac{2(\omega+1)-1}{2(\omega+1)}|\Phi'(x_2)|^{q}\Big)\right)^{\frac{1}{q}}+\left(\alpha|\Phi'(\flat_1)|^{q}+\beta|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{1}{2(\omega+1)}|\Phi'(x_2)|^{q}+\frac{2(\omega+1)-1}{2(\omega+1)}|\Phi'(x_1)|^{q}\Big)\right)^{\frac{1}{q}}\right].
\end{aligned}$$
This completes the proof.
Corollary 3.6. If we take ν=ω=κ=1 in Theorem 3.3, then
$$\left|\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)-\frac{1}{\gamma(x_2-x_1)}\int_{\alpha\flat_1+\beta\flat_2+\gamma x_1}^{\alpha\flat_1+\beta\flat_2+\gamma x_2}\Phi(u)\,du\right|\le \frac{\gamma(x_2-x_1)}{4}\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\left[\left(\alpha|\Phi'(\flat_1)|^{q}+\beta|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{1}{4}|\Phi'(x_1)|^{q}+\frac{3}{4}|\Phi'(x_2)|^{q}\Big)\right)^{\frac{1}{q}}+\left(\alpha|\Phi'(\flat_1)|^{q}+\beta|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{1}{4}|\Phi'(x_2)|^{q}+\frac{3}{4}|\Phi'(x_1)|^{q}\Big)\right)^{\frac{1}{q}}\right].$$
Corollary 3.7. If we choose α=0=β and γ=1 in Theorem 3.3, then
$$\begin{aligned}
&\left|\frac{\Phi\big(\frac{\omega x_1+x_2}{\omega+1}\big)+\Phi\big(\frac{x_1+\omega x_2}{\omega+1}\big)}{\omega+1}-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}-1}}{(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(x_1)+\big(J^{\nu,\kappa}_{(\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(x_2)\Big]\right|\\
&\le \frac{x_2-x_1}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu p+\kappa}\right)^{\frac{1}{p}}\left[\left(\frac{1}{2(\omega+1)}|\Phi'(x_1)|^{q}+\frac{2(\omega+1)-1}{2(\omega+1)}|\Phi'(x_2)|^{q}\right)^{\frac{1}{q}}+\left(\frac{1}{2(\omega+1)}|\Phi'(x_2)|^{q}+\frac{2(\omega+1)-1}{2(\omega+1)}|\Phi'(x_1)|^{q}\right)^{\frac{1}{q}}\right].
\end{aligned}$$
Theorem 3.4. Under the assumptions of Lemma 3.2, if |Φ′|q is a convex function for q≥1, then
$$\begin{aligned}
&\left|\frac{\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}\big)+\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1}\big)}{\omega+1}-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}-1}}{\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\right|\\
&\le \frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu+\kappa}\right)^{1-\frac{1}{q}}\left[\left(\frac{\kappa\alpha}{\nu+\kappa}|\Phi'(\flat_1)|^{q}+\frac{\kappa\beta}{\nu+\kappa}|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{\kappa\omega(\nu+2\kappa)+\kappa^{2}}{(\omega+1)(\nu+\kappa)(\nu+2\kappa)}|\Phi'(x_1)|^{q}+\frac{\kappa}{(\omega+1)(\nu+2\kappa)}|\Phi'(x_2)|^{q}\Big)\right)^{\frac{1}{q}}+\left(\frac{\kappa\alpha}{\nu+\kappa}|\Phi'(\flat_1)|^{q}+\frac{\kappa\beta}{\nu+\kappa}|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{\kappa}{(\omega+1)(\nu+2\kappa)}|\Phi'(x_1)|^{q}+\frac{\kappa\omega(\nu+2\kappa)+\kappa^{2}}{(\omega+1)(\nu+\kappa)(\nu+2\kappa)}|\Phi'(x_2)|^{q}\Big)\right)^{\frac{1}{q}}\right].
\end{aligned}$$
Proof. Using Lemma 3.2, the properties of the modulus, the power-mean inequality, and the convexity of $|\Phi'|^{q}$, we have
$$\begin{aligned}
&\left|\frac{\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1}\big)+\Phi\big(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1}\big)}{\omega+1}-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}-1}}{\gamma^{\frac{\nu}{\kappa}}(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_1)+\big(J^{\nu,\kappa}_{(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(\alpha\flat_1+\beta\flat_2+\gamma x_2)\Big]\right|\\
&\le \frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\left(\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\,d\tau\right)^{1-\frac{1}{q}}\left[\left(\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Big|\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}x_1+\frac{\tau}{\omega+1}x_2\Big)\Big)\Big|^{q}\,d\tau\right)^{\frac{1}{q}}+\left(\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Big|\Phi'\Big(\alpha\flat_1+\beta\flat_2+\gamma\Big(\frac{\tau}{\omega+1}x_1+\frac{\omega+1-\tau}{\omega+1}x_2\Big)\Big)\Big|^{q}\,d\tau\right)^{\frac{1}{q}}\right]\\
&\le \frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu+\kappa}\right)^{1-\frac{1}{q}}\left[\left(\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Big[\alpha|\Phi'(\flat_1)|^{q}+\beta|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{\omega+1-\tau}{\omega+1}|\Phi'(x_1)|^{q}+\frac{\tau}{\omega+1}|\Phi'(x_2)|^{q}\Big)\Big]\,d\tau\right)^{\frac{1}{q}}+\left(\int_{0}^{1}\tau^{\frac{\nu}{\kappa}}\Big[\alpha|\Phi'(\flat_1)|^{q}+\beta|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{\tau}{\omega+1}|\Phi'(x_1)|^{q}+\frac{\omega+1-\tau}{\omega+1}|\Phi'(x_2)|^{q}\Big)\Big]\,d\tau\right)^{\frac{1}{q}}\right]\\
&=\frac{\gamma(x_2-x_1)}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu+\kappa}\right)^{1-\frac{1}{q}}\left[\left(\frac{\kappa\alpha}{\nu+\kappa}|\Phi'(\flat_1)|^{q}+\frac{\kappa\beta}{\nu+\kappa}|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{\kappa\omega(\nu+2\kappa)+\kappa^{2}}{(\omega+1)(\nu+\kappa)(\nu+2\kappa)}|\Phi'(x_1)|^{q}+\frac{\kappa}{(\omega+1)(\nu+2\kappa)}|\Phi'(x_2)|^{q}\Big)\right)^{\frac{1}{q}}+\left(\frac{\kappa\alpha}{\nu+\kappa}|\Phi'(\flat_1)|^{q}+\frac{\kappa\beta}{\nu+\kappa}|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{\kappa}{(\omega+1)(\nu+2\kappa)}|\Phi'(x_1)|^{q}+\frac{\kappa\omega(\nu+2\kappa)+\kappa^{2}}{(\omega+1)(\nu+\kappa)(\nu+2\kappa)}|\Phi'(x_2)|^{q}\Big)\right)^{\frac{1}{q}}\right].
\end{aligned}$$
This completes the proof.
Corollary 3.8. If we take ν=ω=κ=1 in Theorem 3.4, then
$$\left|\Phi\left(\alpha\flat_1+\beta\flat_2+\gamma\frac{x_1+x_2}{2}\right)-\frac{1}{\gamma(x_2-x_1)}\int_{\alpha\flat_1+\beta\flat_2+\gamma x_1}^{\alpha\flat_1+\beta\flat_2+\gamma x_2}\Phi(u)\,du\right|\le \frac{\gamma(x_2-x_1)}{4}\left(\frac{1}{2}\right)^{1-\frac{1}{q}}\left[\left(\frac{\alpha}{2}|\Phi'(\flat_1)|^{q}+\frac{\beta}{2}|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{1}{3}|\Phi'(x_1)|^{q}+\frac{1}{6}|\Phi'(x_2)|^{q}\Big)\right)^{\frac{1}{q}}+\left(\frac{\alpha}{2}|\Phi'(\flat_1)|^{q}+\frac{\beta}{2}|\Phi'(\flat_2)|^{q}+\gamma\Big(\frac{1}{6}|\Phi'(x_1)|^{q}+\frac{1}{3}|\Phi'(x_2)|^{q}\Big)\right)^{\frac{1}{q}}\right].$$
Corollary 3.9. If we choose α=0=β and γ=1 in Theorem 3.4, then
$$\begin{aligned}
&\left|\frac{\Phi\big(\frac{\omega x_1+x_2}{\omega+1}\big)+\Phi\big(\frac{x_1+\omega x_2}{\omega+1}\big)}{\omega+1}-\frac{\Gamma_\kappa(\nu+\kappa)(\omega+1)^{\frac{\nu}{\kappa}-1}}{(x_2-x_1)^{\frac{\nu}{\kappa}}}\Big[\big(J^{\nu,\kappa}_{(\frac{\omega x_1+x_2}{\omega+1})^{-}}\Phi\big)(x_1)+\big(J^{\nu,\kappa}_{(\frac{x_1+\omega x_2}{\omega+1})^{+}}\Phi\big)(x_2)\Big]\right|\\
&\le \frac{x_2-x_1}{(\omega+1)^{2}}\left(\frac{\kappa}{\nu+\kappa}\right)^{1-\frac{1}{q}}\left[\left(\frac{\kappa\omega(\nu+2\kappa)+\kappa^{2}}{(\omega+1)(\nu+\kappa)(\nu+2\kappa)}|\Phi'(x_1)|^{q}+\frac{\kappa}{(\omega+1)(\nu+2\kappa)}|\Phi'(x_2)|^{q}\right)^{\frac{1}{q}}+\left(\frac{\kappa}{(\omega+1)(\nu+2\kappa)}|\Phi'(x_1)|^{q}+\frac{\kappa\omega(\nu+2\kappa)+\kappa^{2}}{(\omega+1)(\nu+\kappa)(\nu+2\kappa)}|\Phi'(x_2)|^{q}\right)^{\frac{1}{q}}\right].
\end{aligned}$$
In this section, we discuss some applications of our results to special means and error estimations.
Let us recall the following two special means:
● The arithmetic mean is defined as
$$A(x_1,x_2):=\frac{x_1+x_2}{2}.$$
● The generalized log–mean is given by
$$L_n(x_1,x_2):=\left[\frac{x_2^{n+1}-x_1^{n+1}}{(n+1)(x_2-x_1)}\right]^{\frac{1}{n}},\quad n\in\mathbb{Z}\setminus\{-1,0\},$$
where 0<x1<x2 are real numbers.
Using the above special means, we can establish some new inequalities as follows:
Proposition 4.1. Let $x_1,x_2\in[\flat_1,\flat_2]$ with $0<\flat_1<\flat_2$, and let $\alpha,\beta\in[0,1]$, $\gamma\in(0,1]$ be coefficients satisfying $\alpha+\beta+\gamma=1$. Then, for $n>1$, we have
$$\left|A\big((\alpha\flat_1+\beta\flat_2+\gamma x_1)^{n+2},(\alpha\flat_1+\beta\flat_2+\gamma x_2)^{n+2}\big)-L_{n+2}^{n+2}(\alpha\flat_1+\beta\flat_2+\gamma x_1,\alpha\flat_1+\beta\flat_2+\gamma x_2)\right|\le \frac{\gamma(n+2)(x_2-x_1)}{2}\left[A\big(\alpha\flat_1^{n+1},\beta\flat_2^{n+1}\big)+\frac{\gamma}{2}A\big(x_1^{n+1},x_2^{n+1}\big)\right]. \tag{4.1}$$
Proof. The proof follows directly from Theorem 3.1 applied to $\Phi(x)=x^{n+2}$ with $\nu=1=\kappa$.
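The special means and Proposition 4.1 can be tested numerically; in the sketch below (plain Python; all numerical values are illustrative) the means $A$ and $L_n$ are implemented directly from their definitions:

```python
# Numerical check of Proposition 4.1 for one sample configuration.
# Assumptions: plain Python; the values of b1, b2, x1, x2, alpha, beta, gamma and n below are illustrative.
def A(u, v):
    """Arithmetic mean."""
    return (u + v) / 2

def L(u, v, n):
    """Generalized log-mean L_n(u, v) for 0 < u < v and n not in {-1, 0}."""
    return ((v ** (n + 1) - u ** (n + 1)) / ((n + 1) * (v - u))) ** (1 / n)

def proposition_4_1(b1, b2, x1, x2, alpha, beta, gamma, n):
    A1 = alpha * b1 + beta * b2 + gamma * x1
    A2 = alpha * b1 + beta * b2 + gamma * x2
    lhs = abs(A(A1 ** (n + 2), A2 ** (n + 2)) - L(A1, A2, n + 2) ** (n + 2))
    rhs = gamma * (n + 2) * (x2 - x1) / 2 * (A(alpha * b1 ** (n + 1), beta * b2 ** (n + 1))
                                             + gamma / 2 * A(x1 ** (n + 1), x2 ** (n + 1)))
    return lhs, rhs

if __name__ == "__main__":
    lhs, rhs = proposition_4_1(1.0, 5.0, 2.0, 4.0, 0.2, 0.3, 0.5, 2)
    print(lhs, rhs)                       # expected: lhs <= rhs
    assert lhs <= rhs + 1e-9
```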
Proposition 4.2. Let $x_1,x_2\in[\flat_1,\flat_2]$ with $0<\flat_1<\flat_2$, and let $\alpha,\beta\in[0,1]$, $\gamma\in(0,1]$ be coefficients satisfying $\alpha+\beta+\gamma=1$. Then, for $n>1$, we have
$$\left|\big(2A(\alpha\flat_1,\beta\flat_2)+\gamma A(x_1,x_2)\big)^{n+2}-L_{n+2}^{n+2}(\alpha\flat_1+\beta\flat_2+\gamma x_1,\alpha\flat_1+\beta\flat_2+\gamma x_2)\right|\le \frac{\gamma(n+2)(x_2-x_1)}{2}\left[A\big(\alpha\flat_1^{n+1},\beta\flat_2^{n+1}\big)+\frac{\gamma}{2}A\big(x_1^{n+1},x_2^{n+1}\big)\right]. \tag{4.2}$$
Proof. The proof follows directly from Theorem 3.2 applied to $\Phi(x)=x^{n+2}$ with $\nu=\omega=\kappa=1$.
Proposition 4.3. Let $x_1,x_2\in[\flat_1,\flat_2]$ with $0<\flat_1<\flat_2$, and let $\alpha,\beta\in[0,1]$, $\gamma\in(0,1]$ be coefficients satisfying $\alpha+\beta+\gamma=1$. Then, for $n>1$, $\frac{1}{p}+\frac{1}{q}=1$ and $q>1$, we have
$$\begin{aligned}
&\left|\big(2A(\alpha\flat_1,\beta\flat_2)+\gamma A(x_1,x_2)\big)^{n+2}-L_{n+2}^{n+2}(\alpha\flat_1+\beta\flat_2+\gamma x_1,\alpha\flat_1+\beta\flat_2+\gamma x_2)\right|\\
&\le \frac{\gamma(n+2)(x_2-x_1)}{4}\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\left[\left(2A\big(\alpha\flat_1^{q(n+1)},\beta\flat_2^{q(n+1)}\big)+\frac{\gamma}{2}A\big(x_1^{q(n+1)},3x_2^{q(n+1)}\big)\right)^{\frac{1}{q}}+\left(2A\big(\alpha\flat_1^{q(n+1)},\beta\flat_2^{q(n+1)}\big)+\frac{\gamma}{2}A\big(3x_1^{q(n+1)},x_2^{q(n+1)}\big)\right)^{\frac{1}{q}}\right].
\end{aligned} \tag{4.3}$$
Proof. The proof follows directly from Theorem 3.3 applied to $\Phi(x)=x^{n+2}$ with $\nu=\omega=\kappa=1$.
Proposition 4.4. Let $x_1,x_2\in[\flat_1,\flat_2]$ with $0<\flat_1<\flat_2$, and let $\alpha,\beta\in[0,1]$, $\gamma\in(0,1]$ be coefficients satisfying $\alpha+\beta+\gamma=1$. Then, for $n>1$ and $q\ge 1$, we have
$$\begin{aligned}
&\left|\big(2A(\alpha\flat_1,\beta\flat_2)+\gamma A(x_1,x_2)\big)^{n+2}-L_{n+2}^{n+2}(\alpha\flat_1+\beta\flat_2+\gamma x_1,\alpha\flat_1+\beta\flat_2+\gamma x_2)\right|\\
&\le \frac{\gamma(n+2)(x_2-x_1)}{4}\left(\frac{1}{2}\right)^{1-\frac{1}{q}}\left[\left(A\big(\alpha\flat_1^{q(n+1)},\beta\flat_2^{q(n+1)}\big)+\frac{\gamma}{3}A\big(2x_1^{q(n+1)},x_2^{q(n+1)}\big)\right)^{\frac{1}{q}}+\left(A\big(\alpha\flat_1^{q(n+1)},\beta\flat_2^{q(n+1)}\big)+\frac{\gamma}{3}A\big(x_1^{q(n+1)},2x_2^{q(n+1)}\big)\right)^{\frac{1}{q}}\right].
\end{aligned} \tag{4.4}$$
Proof. The proof follows directly from Theorem 3.4 applied to $\Phi(x)=x^{n+2}$ with $\nu=\omega=\kappa=1$.
Remark 4.1. For suitable choices of the function $\Phi$, many other interesting inequalities involving special means can be derived. We omit their proofs; the details are left to the interested reader.
Let us now consider some applications of the integral inequalities obtained above to deriving new error bounds for the trapezoidal quadrature formula. First, we fix three parameters $\alpha,\beta\in[0,1]$, $\gamma\in(0,1]$ such that $\alpha+\beta+\gamma=1$.
For ♭2>♭1>0, let U:♭1=χ0<χ1<…<χn−1<χn=♭2 be a partition of [♭1,♭2] and xi,1,xi,2∈[χi,χi+1] for all i=0,1,2,…,n−1.
We denote, respectively,
$$S(U,\Phi):=\gamma\sum_{i=0}^{n-1}\Phi\left(\alpha\chi_i+\beta\chi_{i+1}+\gamma\frac{x_{i,1}+x_{i,2}}{2}\right)\hbar_i,$$
and
$$\int_{\alpha\flat_1+\beta\flat_2+\gamma x_1}^{\alpha\flat_1+\beta\flat_2+\gamma x_2}\Phi(u)\,du:=S(U,\Phi)+R(U,\Phi),$$
where R(U,Φ) is the remainder term and ℏi=χi+1−χi.
Using the above notation, we are in a position to prove the following error estimates.
Proposition 4.5. Under the assumptions of Theorem 3.2, if we take ν=ω=κ=1, then the following inequality holds:
$$|R(U,\Phi)|\le \frac{\gamma}{4}\sum_{i=0}^{n-1}\hbar_i^{2}\left[\alpha|\Phi'(\chi_i)|+\beta|\Phi'(\chi_{i+1})|+\gamma\frac{|\Phi'(x_{i,1})|+|\Phi'(x_{i,2})|}{2}\right].$$
Proof. Applying Theorem 3.2 with $\nu=\omega=\kappa=1$ on the subinterval $[\chi_i,\chi_{i+1}]$ of the closed interval $[\flat_1,\flat_2]$, for all $i=0,1,2,\dots,n-1$, we have
$$\left|\gamma\Phi\left(\alpha\chi_i+\beta\chi_{i+1}+\gamma\frac{x_{i,1}+x_{i,2}}{2}\right)\hbar_i-\int_{\alpha\chi_i+\beta\chi_{i+1}+\gamma x_{i,1}}^{\alpha\chi_i+\beta\chi_{i+1}+\gamma x_{i,2}}\Phi(u)\,du\right|\le \frac{\gamma}{4}\hbar_i^{2}\left[\alpha|\Phi'(\chi_i)|+\beta|\Phi'(\chi_{i+1})|+\gamma\frac{|\Phi'(x_{i,1})|+|\Phi'(x_{i,2})|}{2}\right]. \tag{4.5}$$
Summing inequality (4.5) over i from 0 to n−1 and using the properties of the modulus, we obtain the desired inequality.
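The error estimate of Proposition 4.5 can be illustrated numerically. The sketch below (assuming SciPy; $\Phi(t)=e^t$, the partition and the parameters are illustrative) interprets $R(U,\Phi)$ as the sum of the per-subinterval remainders in (4.5) and takes the sample points $x_{i,1}=\chi_i$, $x_{i,2}=\chi_{i+1}$; both of these are simplifying assumptions made for the demonstration:

```python
# Numerical check of the error estimate in Proposition 4.5 for one sample configuration,
# interpreting R(U, Phi) as the sum of the per-subinterval remainders from inequality (4.5)
# and taking x_{i,1} = chi_i, x_{i,2} = chi_{i+1} (the subinterval endpoints).
# Assumptions: SciPy available; Phi(t) = exp(t), the partition and the parameters below are illustrative.
import math
from scipy.integrate import quad

def proposition_4_5(phi, dphi, knots, alpha, beta, gamma):
    remainder, bound = 0.0, 0.0
    for chi_i, chi_next in zip(knots[:-1], knots[1:]):
        h = chi_next - chi_i                               # subinterval length h_i
        xi1, xi2 = chi_i, chi_next                         # sample points chosen as the endpoints
        base = alpha * chi_i + beta * chi_next
        integral = quad(phi, base + gamma * xi1, base + gamma * xi2)[0]
        remainder += integral - gamma * phi(base + gamma * (xi1 + xi2) / 2) * h
        bound += gamma / 4 * h ** 2 * (alpha * abs(dphi(chi_i)) + beta * abs(dphi(chi_next))
                                       + gamma * (abs(dphi(xi1)) + abs(dphi(xi2))) / 2)
    return abs(remainder), bound

if __name__ == "__main__":
    knots = [0.0, 0.5, 1.0, 1.5, 2.0]                      # partition of [b1, b2] = [0, 2]
    err, bound = proposition_4_5(math.exp, math.exp, knots, 0.2, 0.3, 0.5)
    print(err, bound)                                      # expected: err <= bound
```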
Proposition 4.6. Under the assumptions of Theorem 3.3, if we take ν=ω=κ=1, then the following inequality holds:
$$|R(U,\Phi)|\le \frac{\gamma}{4}\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{i=0}^{n-1}\hbar_i^{2}\left[\left(\alpha|\Phi'(\chi_i)|^{q}+\beta|\Phi'(\chi_{i+1})|^{q}+\gamma\Big(\frac{1}{4}|\Phi'(x_{i,1})|^{q}+\frac{3}{4}|\Phi'(x_{i,2})|^{q}\Big)\right)^{\frac{1}{q}}+\left(\alpha|\Phi'(\chi_i)|^{q}+\beta|\Phi'(\chi_{i+1})|^{q}+\gamma\Big(\frac{1}{4}|\Phi'(x_{i,2})|^{q}+\frac{3}{4}|\Phi'(x_{i,1})|^{q}\Big)\right)^{\frac{1}{q}}\right].$$
Proof. Applying the same technique as in Proposition 4.5 but using Theorem 3.3 and choosing ν=ω=κ=1.
Proposition 4.7. Under the assumptions of Theorem 3.4, if we take ν=ω=κ=1, then the following inequality holds:
$$|R(U,\Phi)|\le \frac{\gamma}{4}\left(\frac{1}{2}\right)^{1-\frac{1}{q}}\sum_{i=0}^{n-1}\hbar_i^{2}\left[\left(\frac{\alpha}{2}|\Phi'(\chi_i)|^{q}+\frac{\beta}{2}|\Phi'(\chi_{i+1})|^{q}+\gamma\Big(\frac{1}{3}|\Phi'(x_{i,1})|^{q}+\frac{1}{6}|\Phi'(x_{i,2})|^{q}\Big)\right)^{\frac{1}{q}}+\left(\frac{\alpha}{2}|\Phi'(\chi_i)|^{q}+\frac{\beta}{2}|\Phi'(\chi_{i+1})|^{q}+\gamma\Big(\frac{1}{6}|\Phi'(x_{i,1})|^{q}+\frac{1}{3}|\Phi'(x_{i,2})|^{q}\Big)\right)^{\frac{1}{q}}\right].$$
Proof. Applying the same technique as in Proposition 4.5 but using Theorem 3.4 and choosing ν=ω=κ=1.
In this paper, we have established some new Hermite–Hadamard–Mercer type inequalities using $\kappa$-Riemann–Liouville fractional integrals. Moreover, we have derived two new integral identities as auxiliary results. From these identities, we have obtained some new variants of Hermite–Hadamard–Mercer type inequalities via $\kappa$-Riemann–Liouville fractional integrals. Several special cases are deduced in detail, and some known results are recaptured as well. In order to illustrate the efficiency of our main results, applications regarding special means of positive real numbers and error estimations for the trapezoidal quadrature formula are provided. To the best of our knowledge, these results are new in the literature. Since the class of convex functions has wide applications in many mathematical areas, these results can be applied to obtain further results in convex analysis, special functions, quantum mechanics, optimization theory, and mathematical inequalities, and they may stimulate further research in different areas of pure and applied sciences.
The authors are thankful to the editor and the reviewers for their valuable comments and suggestions. This research was funded by Dirección de Investigación from Pontificia Universidad Católica del Ecuador in the research project entitled: Some integrals inequalities and generalized convexity (Algunas desigualdades integrales para funciones con algún tipo de convexidad generalizada y aplicaciones).
The authors declare that they have no competing interests.