Research article

An algorithm for variational inclusion problems including quasi-nonexpansive mappings with applications in osteoporosis prediction

  • Received: 20 November 2024 Revised: 10 January 2025 Accepted: 15 January 2025 Published: 12 February 2025
  • MSC : 47J25, 49M37, 90C90

  • This paper proposes a novel algorithm for solving fixed point problems for quasi-nonexpansive mappings and variational inclusion problems in a real Hilbert space. The proposed method exhibits weak convergence under reasonable assumptions. Furthermore, we applied this algorithm, combined with an extreme learning machine, to data classification for osteoporosis risk prediction. The experimental results show that our proposed algorithm consistently outperforms existing algorithms across multiple evaluation metrics. Specifically, it achieved higher accuracy, precision, and F1-score than the other methods across most of the training boxes. The area under the curve (AUC) values from the receiver operating characteristic (ROC) curves further validated the effectiveness of our approach, indicating superior generalization and classification performance. These results highlight the efficiency and robustness of our proposed algorithm, demonstrating its potential for enhancing osteoporosis risk-prediction models through improved convergence and classification capabilities.

    Citation: Raweerote Suparatulatorn, Wongthawat Liawrungrueang, Thanasak Mouktonglang, Watcharaporn Cholamjiak. An algorithm for variational inclusion problems including quasi-nonexpansive mappings with applications in osteoporosis prediction[J]. AIMS Mathematics, 2025, 10(2): 2541-2561. doi: 10.3934/math.2025118




    In this paper, let H represent a real Hilbert space equipped with the inner product ⟨·,·⟩ and its corresponding norm ‖·‖. We denote the sets of real numbers and positive integers by ℝ and ℕ, respectively.

    A fixed point problem involves finding

    v ∈ H  such that  Uv = v, (1.1)

    where U is a self-mapping on H. We denote the set of solutions to the fixed point problem of the mapping U by Fix(U). Iterative methods are crucial in fixed point theory, especially when dealing with nonlinear mappings. The Picard iteration method [1], developed by Charles Émile Picard in the late 19th century, has long been a cornerstone of mathematical analysis, particularly in solving differential equations. It is highly effective for contractive mappings, where it guarantees convergence to a unique solution by repeatedly applying the function to an initial guess. However, its effectiveness is limited when dealing with nonexpansive mappings, where convergence is not always assured. To address this limitation, Mann [2] introduced the Mann iteration method in 1953, broadening the scope of the Picard method to include nonexpansive mappings. Since then, the Mann iteration has become an essential tool in fixed point theory and numerical analysis, particularly in cases where traditional methods might struggle to converge. In 2013, Khan [3] introduced the Picard-Mann iteration method, a hybrid approach that combines elements of both the Picard and Mann iterations. This method was designed to enhance the convergence of traditional techniques, particularly for certain types of mappings where standard methods can be slow or less effective. By blending the simplicity of Picard's method with the flexibility of Mann's, the Picard-Mann iteration offers a more efficient way to find fixed points, especially in more complex or nonexpansive scenarios. This hybrid approach not only expands its applicability but also has the potential to accelerate convergence compared to using either method alone. Khan further demonstrated both weak and strong convergence results for nonexpansive mappings in uniformly convex Banach spaces using the Picard-Mann iteration method.
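    For concreteness, the three schemes discussed above take the following standard forms: the Picard iteration x_{n+1} = Ux_n, the Mann iteration x_{n+1} = (1 − α_n)x_n + α_n Ux_n, and the Picard-Mann hybrid y_n = (1 − α_n)x_n + α_n Ux_n, x_{n+1} = Uy_n. The sketch below (Python; the toy mapping U and the constant choice α_n = 0.5 are our own illustrative assumptions) shows how the three updates differ in code.

        import numpy as np

        def picard(U, x, iters=100):
            # Picard: x_{n+1} = U(x_n)
            for _ in range(iters):
                x = U(x)
            return x

        def mann(U, x, alpha, iters=100):
            # Mann: x_{n+1} = (1 - a_n) x_n + a_n U(x_n)
            for n in range(iters):
                a = alpha(n)
                x = (1 - a) * x + a * U(x)
            return x

        def picard_mann(U, x, alpha, iters=100):
            # Picard-Mann hybrid (Khan): y_n = (1 - a_n) x_n + a_n U(x_n), x_{n+1} = U(y_n)
            for n in range(iters):
                a = alpha(n)
                y = (1 - a) * x + a * U(x)
                x = U(y)
            return x

        # Toy nonexpansive mapping on R^2 (average of the identity and a coordinate swap);
        # its fixed points form the diagonal {(t, t)}.
        U = lambda x: 0.5 * (x + x[::-1])
        x0 = np.array([1.0, -3.0])
        print(picard_mann(U, x0, alpha=lambda n: 0.5))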

    A variational inclusion problem (VIP) is typically formulated as follows:

    Find  v ∈ H  such that  0 ∈ (X + Y)v, (1.2)

    where X : H → H is a single-valued mapping and Y : H → 2^H is a multivalued mapping. The set of all solutions to the VIP is denoted by (X + Y)^{−1}(0). The study of solving the VIP has garnered significant attention among researchers in optimization and functional analysis. Numerous researchers have focused on developing efficient algorithms for addressing this problem. Two methods that have gained considerable popularity are the forward-backward splitting method [4,5] and the Tseng splitting method [6]. Various researchers have modified and adapted the forward-backward splitting method to solve the VIP (see [7,8,9]). The Tseng splitting method is another approach that has been refined and applied to solve the VIP (see [10,11,12]). Both methodologies are extensively referenced in the academic literature on solving the VIP, owing to their efficiency and flexibility in application to various problem formulations. Researchers have further developed these methods, enhancing convergence rates, computational error tolerance, and the capacity to accommodate additional problem constraints. In 2018, Gibali and Thong [13] developed a modified iterative method aimed at solving the VIP. They enhanced existing techniques by adjusting the step size rule based on Tseng's method and the Mann iteration method, to improve both efficiency and applicability. The method is detailed in Algorithm 1 below.

    Under the right conditions, this approach demonstrates strong convergence and provides practical benefits, making it particularly valuable in real-world scenarios. Prior to this, in 1964, Polyak [14] proposed the inertial extrapolation technique, known as the heavy ball method, to accelerate the convergence of iterative algorithms. In 2020, Padcharoen et al. [11] introduced the following splitting method (Algorithm 2), which builds on Tseng's approach and incorporates the inertial extrapolation technique for solving the VIP.

    While weak convergence was confirmed under typical conditions, the method has also proven effective in practical applications, such as image deblurring and recovery.

    In this study, we are interested in investigating the fixed point problem and the VIP, that is,

    v ∈ Fix(U) ∩ (X + Y)^{−1}(0), (1.3)

    where Y : H → 2^H is a maximal monotone mapping, X : H → H is L-Lipschitz continuous and monotone, and U : H → H is quasi-nonexpansive and demiclosed. We denote the set of solutions to this problem by Ψ. Recently, Mouktonglang et al. [15] introduced the following method (Algorithm 3) for solving fixed point problems of a demicontractive mapping and the VIP in the case where X = ∇f and Y = ∂g, where f : H → ℝ and g : H → ℝ ∪ {+∞} are two proper, lower semicontinuous, and convex functions.

    The authors proved a weak convergence theorem under specific conditions using this method and demonstrated it with a numerical example in signal recovery. The approach is inspired by the proximal gradient technique, double inertial steps, and Mann iteration.

    In this article, we present a novel algorithm that demonstrates weak convergence to a common solution for fixed point problems involving quasi-nonexpansive mappings and variational inclusion problems within the context of real Hilbert spaces, under reasonable assumptions. The presented algorithm is given along with essential assumptions in Section 3. Additionally, in Section 4, we employ this algorithm in conjunction with an extreme learning machine for data classification, specifically to predict osteoporosis risk.

    We gather essential definitions and lemmas needed to establish our main results. We denote weak and strong convergence by ⇀ and →, respectively. For all a, b ∈ H, we have

        ‖a + b‖² = ‖a‖² + 2⟨a, b⟩ + ‖b‖², (2.1)
        ‖γa + (1 − γ)b‖² = γ‖a‖² + (1 − γ)‖b‖² − γ(1 − γ)‖a − b‖², (2.2)

    for any γ ∈ ℝ.

    Definition 2.1. A self-mapping X : H → H is called

    (i) L-Lipschitz continuous if there is L > 0 such that ‖Xa − Xb‖ ≤ L‖a − b‖ for all a, b ∈ H;

    (ii) nonexpansive if X is 1-Lipschitz continuous;

    (iii) quasi-nonexpansive if Fix(X) is nonempty and ‖Xa − r‖ ≤ ‖a − r‖ for all a ∈ H and r ∈ Fix(X);

    (iv) demiclosed if for any sequence {a_n} ⊂ H, the following implication holds:

        a_n ⇀ r  and  (I − X)a_n → 0  ⟹  r ∈ Fix(X).

    Definition 2.2. Let Y : H → 2^H be a multivalued mapping. Then Y is said to be

    (i) monotone if ⟨c − d, a − b⟩ ≥ 0 for all (a, c), (b, d) ∈ graph(Y) (the graph of the mapping Y);

    (ii) maximal monotone if for every (a, c) ∈ H × H, ⟨c − d, a − b⟩ ≥ 0 for all (b, d) ∈ graph(Y) if and only if (a, c) ∈ graph(Y).

    Lemma 2.3. [16] Let X : H → H be a mapping and Y : H → 2^H a maximal monotone mapping. If T_μ := (I + μY)^{−1}(I − μX) with μ > 0, then Fix(T_μ) = (X + Y)^{−1}(0).
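    For intuition, the equivalence in Lemma 2.3 can be checked in one line (a standard argument): for any v ∈ H and μ > 0,

        v = T_μ v  ⟺  (I − μX)v ∈ (I + μY)v  ⟺  −μXv ∈ μYv  ⟺  0 ∈ (X + Y)v,

    so the fixed points of T_μ are exactly the zeros of X + Y.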

    Lemma 2.4. [17] If Y : H → 2^H is a maximal monotone mapping and X : H → H is a Lipschitz continuous monotone mapping, then the sum X + Y is also maximal monotone.

    To analyze the convergence, we assume the following conditions.

    (C1) Y : H → 2^H is a maximal monotone mapping.

    (C2) X : H → H is L-Lipschitz continuous and a monotone mapping.

    (C3) U : H → H is quasi-nonexpansive and a demiclosed mapping.

    (C4) Ψ is nonempty.

    The following algorithm, Algorithm 4 (stated below), will be employed in Theorem 3.2.

    Remark 3.1. Suppose that U and X are mappings on H and that condition (C1) holds. According to Lemma 2.3, if Ub_n = b_n = c_n = d_n in Algorithm 4, then it is easy to show that d_n ∈ Ψ.

    Next, we are ready to prove Theorem 3.2.

    Theorem 3.2. Let the sequence {a_n} be generated by Algorithm 4 satisfying conditions (C1)–(C4). Assume that the following conditions are satisfied:

    (C5) 0 < liminf_{n→∞} α_n ≤ limsup_{n→∞} α_n < 1.

    (C6) 0 < liminf_{n→∞} μ_n ≤ limsup_{n→∞} μ_n < 1/L.

    (C7) ∑_{n=1}^{∞} |θ_n| ‖a_n − a_{n−1}‖ < ∞.

    (C8) ∑_{n=1}^{∞} |δ_n| ‖a_{n−1} − a_{n−2}‖ < ∞.

    Then, {a_n} converges weakly to a point in Ψ.

    Proof. Let ã ∈ Ψ. By condition (C6), there are n_0 ∈ ℕ, μ > 0, and μ̄ < 1/L such that μ ≤ μ_n ≤ μ̄ for all n ≥ n_0. We will now establish the following claims.

    Claim 1. For any n ∈ ℕ,

        ⟨c_n − ã, b_n − d_n⟩ ≤ 0.

    By using the definition of c_n, we have

        (I − μ_n X)d_n ∈ (I + μ_n Y)c_n.

    Thus, we can write

        y_n = (1/μ_n)(d_n − c_n − μ_n Xd_n),

    where y_n ∈ Yc_n. Since X + Y is maximal monotone, we obtain

        ⟨c_n − ã, Xc_n + y_n⟩ ≥ 0,

    implying that

        ⟨c_n − ã, c_n + μ_n(Xd_n − Xc_n) − d_n⟩ ≤ 0,

    that is, ⟨c_n − ã, b_n − d_n⟩ ≤ 0, which proves Claim 1.

    Claim 2. For each n ≥ n_0,

        ‖Ub_n − ã‖² ≤ ‖d_n − ã‖² − [1 − (Lμ̄)²]‖d_n − c_n‖².

    From (2.1) and the fact that U is a quasi-nonexpansive mapping, we have

        ‖Ub_n − ã‖² ≤ ‖b_n − ã‖²
                    = ‖c_n − ã‖² + 2μ_n⟨c_n − ã, Xd_n − Xc_n⟩ + μ_n²‖Xd_n − Xc_n‖²
                    = ‖c_n − d_n‖² + 2⟨c_n − d_n, d_n − ã⟩ + ‖d_n − ã‖² + 2μ_n⟨c_n − ã, Xd_n − Xc_n⟩ + μ_n²‖Xd_n − Xc_n‖²
                    = ‖d_n − ã‖² − ‖c_n − d_n‖² + 2⟨c_n − ã, c_n − d_n⟩ + 2μ_n⟨c_n − ã, Xd_n − Xc_n⟩ + μ_n²‖Xd_n − Xc_n‖²
                    = ‖d_n − ã‖² − ‖c_n − d_n‖² + μ_n²‖Xd_n − Xc_n‖² + 2⟨c_n − ã, b_n − d_n⟩.

    Using Claim 1, we get

        ‖Ub_n − ã‖² ≤ ‖d_n − ã‖² − ‖c_n − d_n‖² + μ_n²‖Xd_n − Xc_n‖².

    By the Lipschitz continuity of X, we have that

        ‖Ub_n − ã‖² ≤ ‖d_n − ã‖² − ‖c_n − d_n‖² + (Lμ̄)²‖d_n − c_n‖²  for all n ≥ n_0.

    Thus, Claim 2 is established.

    Claim 3. lim_{n→∞}‖a_n − ã‖ = lim_{n→∞}‖d_n − ã‖ = lim_{n→∞}‖e_n − ã‖, where e_n = α_n d_n + (1 − α_n)Ub_n.

    Since U is a quasi-nonexpansive mapping and using Claim 2, we have

        ‖a_{n+1} − ã‖ = ‖Ue_n − ã‖ ≤ ‖e_n − ã‖ ≤ α_n‖d_n − ã‖ + (1 − α_n)‖Ub_n − ã‖ ≤ ‖d_n − ã‖  for all n ≥ n_0
                      ≤ ‖a_n − ã‖ + |θ_n|‖a_n − a_{n−1}‖ + |δ_n|‖a_{n−1} − a_{n−2}‖  for all n ≥ n_0.

    Applying Lemma 1 in [18] together with conditions (C7) and (C8), we derive that the sequence {‖a_n − ã‖} converges, and hence lim_{n→∞}‖a_n − ã‖ = lim_{n→∞}‖d_n − ã‖ = lim_{n→∞}‖e_n − ã‖. In particular, {a_n}, {d_n}, and {e_n} are bounded.

    Claim 4. lim_{n→∞}‖d_n − Ub_n‖ = 0.

    From (2.2), the quasi-nonexpansiveness of U, and Claim 2, we have

        ‖a_{n+1} − ã‖² ≤ ‖e_n − ã‖² = α_n‖d_n − ã‖² + (1 − α_n)‖Ub_n − ã‖² − α_n(1 − α_n)‖d_n − Ub_n‖²
                       ≤ ‖d_n − ã‖² − α_n(1 − α_n)‖d_n − Ub_n‖²  for all n ≥ n_0.

    This together with Claim 3 and condition (C5) implies that lim_{n→∞}‖d_n − Ub_n‖ = 0.

    Claim 5. lim_{n→∞}‖Ub_n − b_n‖ = 0.

    Again, using Claim 2, we get, for all n ≥ n_0,

        [1 − (Lμ̄)²]‖d_n − c_n‖² ≤ ‖d_n − ã‖² − ‖Ub_n − ã‖².

    Thus, we obtain from Claim 4 and 1 − (Lμ̄)² > 0 that

        lim_{n→∞}‖d_n − c_n‖ = 0. (3.1)

    By the Lipschitz continuity of X, it follows that

        ‖d_n − b_n‖ ≤ ‖d_n − c_n‖ + ‖c_n − b_n‖ = ‖d_n − c_n‖ + μ_n‖Xd_n − Xc_n‖ ≤ (1 + Lμ̄)‖d_n − c_n‖  for all n ≥ n_0,

    which by (3.1) yields

        lim_{n→∞}‖d_n − b_n‖ = 0. (3.2)

    Combining (3.2) and Claim 4, we have that

        ‖Ub_n − b_n‖ ≤ ‖Ub_n − d_n‖ + ‖d_n − b_n‖ → 0  as  n → ∞.

    Claim 6. lim_{n→∞}‖a_n − b_n‖ = lim_{n→∞}‖a_n − c_n‖ = 0.

    By using the definition of d_n, the following inequalities are obtained:

        ‖a_n − b_n‖ ≤ ‖a_n − d_n‖ + ‖d_n − b_n‖ ≤ |θ_n|‖a_n − a_{n−1}‖ + |δ_n|‖a_{n−1} − a_{n−2}‖ + ‖d_n − b_n‖

    and

        ‖a_n − c_n‖ ≤ ‖a_n − d_n‖ + ‖d_n − c_n‖ ≤ |θ_n|‖a_n − a_{n−1}‖ + |δ_n|‖a_{n−1} − a_{n−2}‖ + ‖d_n − c_n‖.

    Combining (3.1), (3.2), and conditions (C7) and (C8), we deduce that Claim 6 is true.

    Claim 7. Every weak sequential cluster point of {a_n} belongs to Ψ.

    Let a* be a weak sequential cluster point of {a_n}. Then a_{n_k} ⇀ a* as k → ∞ for some subsequence {a_{n_k}} of {a_n}. This implies by Claim 6 that b_{n_k} ⇀ a* and c_{n_k} ⇀ a* as k → ∞. It follows, from the fact that U is a demiclosed mapping and Claim 5, that a* ∈ Fix(U). Next, we show that a* ∈ (X + Y)^{−1}(0). Let (v, u) ∈ graph(X + Y), that is, u − Xv ∈ Yv. It is implied by the definition of c_n that (1/μ_{n_k})(d_{n_k} − c_{n_k} − μ_{n_k}Xd_{n_k}) ∈ Yc_{n_k}. By the maximal monotonicity of Y, we have

        ⟨v − c_{n_k}, u − Xv − (1/μ_{n_k})(d_{n_k} − c_{n_k} − μ_{n_k}Xd_{n_k})⟩ ≥ 0.

    Thus, by the monotonicity of X, we get

        ⟨v − c_{n_k}, u⟩ ≥ ⟨v − c_{n_k}, Xv + (1/μ_{n_k})(d_{n_k} − c_{n_k} − μ_{n_k}Xd_{n_k})⟩
                        = ⟨v − c_{n_k}, Xv − Xc_{n_k}⟩ + ⟨v − c_{n_k}, Xc_{n_k} − Xd_{n_k}⟩ + (1/μ_{n_k})⟨v − c_{n_k}, d_{n_k} − c_{n_k}⟩
                        ≥ ⟨v − c_{n_k}, Xc_{n_k} − Xd_{n_k}⟩ + (1/μ_{n_k})⟨v − c_{n_k}, d_{n_k} − c_{n_k}⟩.

    Letting k → ∞ and using the Lipschitz continuity of X together with (3.1), this gives

        ⟨v − a*, u⟩ = lim_{k→∞}⟨v − c_{n_k}, u⟩ ≥ 0,

    which, combined with the maximal monotonicity of X + Y, implies that a* ∈ (X + Y)^{−1}(0). Hence, a* ∈ Ψ. Finally, by Opial's lemma in [16], we conclude that the sequence {a_n} converges weakly to a point in Ψ.

    Osteoporosis is a major global health issue, particularly affecting the elderly population. It leads to weakened bones and a higher risk of fractures, which can result in significant morbidity, loss of mobility, and increased mortality rates. Machine learning (ML) models can analyze large datasets containing complex variables like demographics, lifestyle, genetics, and bone density, identifying individuals at high risk of osteoporosis earlier. This allows for timely interventions, preventing fractures and other complications.

    In this section, we implement our newly proposed algorithm as the optimizer for an extreme learning machine (ELM), originally introduced by Huang et al. [19], to assess osteoporosis risk using a comprehensive dataset from Kaggle*. This dataset provides a detailed overview of health factors that contribute to osteoporosis, including demographic details, lifestyle choices, medical history, and bone health indicators. Its comprehensive nature facilitates the development of machine learning models that can accurately identify individuals at high risk for osteoporosis. By analyzing key factors such as age, gender, hormonal changes, and lifestyle habits, our research significantly advances osteoporosis management and prevention strategies. This predictive capability enables early diagnosis, supporting timely interventions that minimize fracture risk, enhance patient outcomes, and optimize the allocation of healthcare resources. Additionally, the integration of machine learning models with our novel optimizer improves prediction accuracy, representing a significant innovation in the field. Table 1 provides readers with an understanding of the dataset's structure, including a detailed description of its components.

    *https://www.kaggle.com/datasets/amitvkulkarni/lifestylefactors-influencing-osteoporosis

    Table 1.  The characteristics of the Osteoporosis dataset used for risk prediction, including statistics for each feature such as mean (x̄), maximum (Max), minimum (Min), standard deviation (SD), and coefficient of variation (CV).

    Feature | Description | x̄ | Max | Min | SD | CV
    Age | The age of the individual in years | 39.1011 | 90 | 18 | 21.3554 | 0.5461
    Gender | The individual's gender: Male or Female | 1.5066 | 2 | 1 | 0.5001 | 0.3319
    Hormonal Changes | Indicates if the individual has experienced hormonal changes, such as menopause | 1.4989 | 2 | 1 | 0.5001 | 0.3336
    Family History | Indicates a family history of osteoporosis or fractures: Yes or No | 1.5097 | 2 | 1 | 0.5000 | 0.3312
    Race/Ethnicity | The individual's race or ethnicity: e.g., Caucasian, African American, Asian | 2.0255 | 3 | 1 | 0.8183 | 0.4040
    Body Weight | The individual's body weight status: Normal or Underweight | 1.4754 | 2 | 1 | 0.4995 | 0.3385
    Calcium Intake | The individual's dietary calcium intake: Low or Adequate | 1.5127 | 2 | 1 | 0.4996 | 0.3304
    Vitamin D Intake | The individual's vitamin D intake: Insufficient or Sufficient | 1.4836 | 2 | 1 | 0.4998 | 0.3369
    Physical Activity | Indicates the individual's physical activity level: Sedentary for low activity or Active | 1.4785 | 2 | 1 | 0.4996 | 0.3379
    Smoking | Indicates if the individual is a smoker: Yes or No | 1.4984 | 2 | 1 | 0.5001 | 0.3337
    Alcohol Consumption | Indicates the individual's alcohol consumption: None for non-drinkers or Moderate | 1.4954 | 2 | 1 | 0.5001 | 0.3344
    Medical Conditions | Any existing medical conditions the individual may have, such as Rheumatoid Arthritis | 2.0158 | 3 | 1 | 0.8226 | 0.4081
    Medications | Any medications the individual is currently taking, such as Corticosteroids or No Medications | 1.4969 | 2 | 1 | 0.5001 | 0.3340
    Prior Fractures | Indicates if the individual has previously experienced fractures: Yes or No | 1.4979 | 2 | 1 | 0.5001 | 0.3338
    Osteoporosis | The target variable, indicating the presence or absence of osteoporosis (979 instances of each class) | - | - | - | - | -


    Osteoporosis occurs when bone density significantly decreases, resulting in fragile and brittle bones. It is diagnosed when the T-score is -2.5 or lower, based on a bone density scan (DEXA scan), see Figure 1. At this stage, bones have become considerably more porous and weaker, making them prone to fractures. Osteoporosis typically progresses from osteopenia, an intermediate stage characterized by lower-than-normal bone density but not as severe as osteoporosis. Factors such as age, hormonal changes (e.g., menopause), nutritional deficiencies (e.g., calcium or vitamin D), and lack of physical activity can contribute to the development of osteoporosis.

    Figure 1.  The figure illustrates three stages of bone density: (1) Normal bone, which has a T-score of -1 or higher, indicating healthy bone density; (2) Osteopenia, which shows mild bone loss with a T-score between -1 and -2.5, representing a condition of lower-than-normal bone density; and (3) Osteoporosis, characterized by significant bone loss and increased porosity, with a T-score of -2.5 or lower, indicating a higher risk of fractures.

    In the process of creating our extreme learning machine (ELM), we consider N distinct samples, where the training set S := {(x_n, t_n) : x_n ∈ ℝ^n, t_n ∈ ℝ^m, n = 1, 2, …, N} consists of input data x_n and corresponding target outputs t_n. The output function of an ELM for a standard single-layer feedforward network (SLFN) with M hidden nodes is mathematically represented as:

        O_j = ∑_{i=1}^{M} β_i · 1/(1 + e^{−(w_i·x_j + b_i)}),

    where w_i is a randomly initialized weight vector and b_i is a randomly initialized bias for the i-th hidden node. The goal is to find the optimal output weights β_i. The above system of linear equations can be represented in matrix form as T = Hβ, where

        H = [ 1/(1 + e^{−(w_1·x_1 + b_1)})  ⋯  1/(1 + e^{−(w_M·x_1 + b_M)}) ]
            [              ⋮                ⋱               ⋮               ]
            [ 1/(1 + e^{−(w_1·x_N + b_1)})  ⋯  1/(1 + e^{−(w_M·x_N + b_M)}) ]

    is the hidden layer output matrix, T = [t_1^T, …, t_N^T]^T is the target output matrix, and β = [β_1^T, …, β_M^T]^T is the vector of optimal output weights. These optimal weights can be computed as β = H^†T, where H^† is the Moore-Penrose generalized inverse of H, though finding H^† may be challenging in practice. Therefore, obtaining a solution β via convex minimization can help address this challenge. The least squares problem is particularly effective for this, and regularization is a commonly employed technique in machine learning and statistics to mitigate overfitting, enhance model generalization, and ultimately improve performance in classification tasks. We conducted a series of experiments on a classification problem, explicitly using the well-known least absolute shrinkage and selection operator (LASSO) method [20]. The detailed descriptions of these experiments are provided below: For λ > 0,

        min_{β∈ℝ^M} ½‖Hβ − T‖₂² + λ‖β‖₁. (4.1)

    By applying Algorithm 4 to solve problem (4.1), we set X(β) = ∇(½‖Hβ − T‖₂²) and Y(β) = ∂(λ‖β‖₁), with λ = 0.01.
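    As a concrete illustration of this setup, the following sketch (Python with NumPy; the random data standing in for the encoded Kaggle features and all helper names are our own assumptions) builds the hidden-layer matrix H with the sigmoid activation above and forms the two objects that Algorithm 4 needs for problem (4.1): the single-valued operator X(β) = ∇(½‖Hβ − T‖₂²) = Hᵀ(Hβ − T), and the resolvent of Y = ∂(λ‖·‖₁), which reduces to componentwise soft thresholding.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative sizes: 1958 samples and 14 features as in Table 1, M = 500 hidden nodes.
        N, n, M = 1958, 14, 500
        X_data = rng.standard_normal((N, n))                 # stand-in for the encoded features
        T = rng.integers(0, 2, size=(N, 1)).astype(float)    # stand-in for the binary targets

        W = rng.uniform(-50, 50, size=(M, n))                # random input weights w_i
        b = rng.uniform(-5, 5, size=M)                       # random biases b_i

        # Hidden-layer output matrix: H[j, i] = 1 / (1 + exp(-(w_i . x_j + b_i)))
        H = 1.0 / (1.0 + np.exp(-(X_data @ W.T + b)))

        lam = 0.01

        def X_op(beta):
            # Single-valued part: gradient of (1/2)||H beta - T||_2^2, i.e. H^T (H beta - T)
            return H.T @ (H @ beta - T)

        def resolvent_Y(v, mu):
            # (I + mu Y)^{-1} for Y = subdifferential of lam*||.||_1: soft thresholding at mu*lam
            return np.sign(v) * np.maximum(np.abs(v) - mu * lam, 0.0)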

    We evaluate the performance of the classification algorithms using four evaluation metrics: Accuracy, precision, recall, and F1-score [21]. These metrics are defined as follows:

        Accuracy = (TP + TN)/(TP + FP + TN + FN) × 100%;
        Precision = TP/(TP + FP) × 100%;
        Recall = TP/(TP + FN) × 100%;
        F1-score = 2 × (Precision × Recall)/(Precision + Recall).

    In these formulas, TP represents True Positives, TN True Negatives, FP False Positives, and FN False Negatives.
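    A direct transcription of these four metrics (a minimal sketch; the counts in the example call are hypothetical):

        def classification_metrics(tp, tn, fp, fn):
            # Accuracy, precision, and recall in percent; F1-score computed from the percent values.
            accuracy = (tp + tn) / (tp + fp + tn + fn) * 100.0
            precision = tp / (tp + fp) * 100.0
            recall = tp / (tp + fn) * 100.0
            f1 = 2.0 * precision * recall / (precision + recall)
            return accuracy, precision, recall, f1

        print(classification_metrics(tp=150, tn=170, fp=20, fn=50))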

    Additionally, we used binary cross-entropy loss [22] to evaluate the model's ability to distinguish between two classes in binary classification tasks. This loss is computed as the average:

        Loss = −(1/N) ∑_{i=1}^{N} [φ_i log φ̄_i + (1 − φ_i) log(1 − φ̄_i)],

    where φ̄_i represents the predicted probability for the i-th instance, φ_i is the corresponding true label, and N is the total number of instances.
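    The same loss can be computed directly from the predicted probabilities (a minimal sketch; clipping the probabilities away from 0 and 1 is our own numerical safeguard):

        import numpy as np

        def binary_cross_entropy(phi_true, phi_pred, eps=1e-12):
            # Loss = -(1/N) * sum_i [ phi_i log(phibar_i) + (1 - phi_i) log(1 - phibar_i) ]
            phi_pred = np.clip(phi_pred, eps, 1.0 - eps)   # avoid log(0)
            return -np.mean(phi_true * np.log(phi_pred)
                            + (1.0 - phi_true) * np.log(1.0 - phi_pred))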

    Next, as illustrated in Figure 2, we partition the dataset using a 5-fold cross-validation approach. In each fold, 80% of the data is used for training (highlighted in purple), and 20% is allocated for validation (highlighted in green). This ensures that every subset of the data is used for validation exactly once, while the rest is used for training.

    Figure 2.  Illustration of 5-fold cross-validation. In each fold, 80% of the data (represented in purple) is used for training, and the remaining 20% (represented in green) is used for validation. Each fold uses a different subset for validation, ensuring that the entire dataset is validated once while the other folds are used for training.
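    The partition in Figure 2 can be reproduced, for instance, with scikit-learn's KFold (a standalone sketch; the placeholder arrays, the shuffling, and the random seed are our own choices and stand in for the encoded Kaggle data):

        import numpy as np
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(42)
        X_data = rng.standard_normal((1958, 14))   # placeholder for the 1958 x 14 feature matrix
        y = rng.integers(0, 2, size=1958)          # placeholder binary labels

        kf = KFold(n_splits=5, shuffle=True, random_state=42)
        for fold, (train_idx, val_idx) in enumerate(kf.split(X_data), start=1):
            X_train, X_val = X_data[train_idx], X_data[val_idx]   # about 80% for training
            y_train, y_val = y[train_idx], y[val_idx]              # about 20% for validation
            # ... train the ELM on (X_train, y_train) and evaluate on (X_val, y_val) ...
            print(f"Training Box {fold}: {len(train_idx)} training / {len(val_idx)} validation samples")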

    For the comparison of Algorithm 4 with Algorithms 1–3, we set all parameters as follows: The number of hidden nodes is M = 500, with randomly initialized weights w_i in the range [−50, 50] and biases b_i in the range [−5, 5]. Specifically, we define θ_n = 0.0004 for Algorithm 2;

        θ_n = 1/n³  if a_n ≠ a_{n−1} and n > N,  and  θ_n = θ  otherwise,

    with

        δ_n = 1/n³  if a_{n−1} ≠ a_{n−2} and n > N,  and  δ_n = δ  otherwise,

    for Algorithm 3; and

        θ_n = 1/(n³‖a_n − a_{n−1}‖)  if a_n ≠ a_{n−1} and n > N,  and  θ_n = θ  otherwise,

    with

        δ_n = 1/(n³‖a_{n−1} − a_{n−2}‖)  if a_{n−1} ≠ a_{n−2} and n > N,  and  δ_n = δ  otherwise,

    for Algorithm 4, where N denotes the iteration number at which we decide to stop the algorithm. For further details on the parameter settings, please refer to Table 2.
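    For Algorithm 4, the inertial parameters above can be computed on the fly from the last three iterates (a minimal sketch of the stated rule; the default values θ = 0.999 and δ = −0.001 are those listed in Table 2, and n_stop plays the role of N):

        import numpy as np

        def inertial_params(a_n, a_prev, a_prev2, n, n_stop, theta=0.999, delta=-0.001):
            # theta_n = 1 / (n^3 ||a_n - a_{n-1}||)      if a_n != a_{n-1} and n > n_stop, else theta
            # delta_n = 1 / (n^3 ||a_{n-1} - a_{n-2}||)  if a_{n-1} != a_{n-2} and n > n_stop, else delta
            d1 = np.linalg.norm(a_n - a_prev)
            d2 = np.linalg.norm(a_prev - a_prev2)
            theta_n = 1.0 / (n**3 * d1) if (d1 > 0 and n > n_stop) else theta
            delta_n = 1.0 / (n**3 * d2) if (d2 > 0 and n > n_stop) else delta
            return theta_n, delta_n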

    Algorithm 1 : Mann Tseng-type method
      Initialization: Let a_1 ∈ H, μ_1 > 0, and μ, α_n, β_n ∈ (0,1).
      Iterative Steps: Given n ≥ 1, calculate a_{n+1} as follows:
      Step 1. Compute
                  y_n = (I + μ_n Y)^{−1}(I − μ_n X)a_n.
      If a_n = y_n, then stop and y_n is a solution of the VIP. Otherwise,
      Step 2. Compute
                  z_n = y_n − μ_n(Xy_n − Xa_n)
      and
                  a_{n+1} = (1 − α_n − β_n)a_n + β_n z_n.
      Step 3. Update
                  μ_{n+1} = min{ μ_n, μ‖a_n − y_n‖/‖Xa_n − Xy_n‖ }  if Xa_n − Xy_n ≠ 0;  μ_{n+1} = μ_n  otherwise.
      Set n := n + 1 and go to Step 1.

    Algorithm 2 : Inertial Tseng method
      Initialization: Let a_0, a_1 ∈ H, θ_n ∈ [0,1), and μ_n ∈ (0, 1/L), where L is a Lipschitz constant of X.
      Iterative Steps: Given n ≥ 1, calculate a_{n+1} as follows:
      Step 1. Compute
                  w_n = a_n + θ_n(a_n − a_{n−1})
      and
                  y_n = (I + μ_n Y)^{−1}(I − μ_n X)w_n.
      If w_n = y_n, then stop and y_n is a solution of the VIP. Otherwise,
      Step 2. Compute
                  a_{n+1} = y_n − μ_n(Xy_n − Xw_n).
      Set n := n + 1 and go to Step 1.

    Algorithm 3 : Double inertial proximal gradient Mann method
      Initialization: Select arbitrary elements a_0, a_1 ∈ H. Let μ ∈ (0,1), μ_1 ∈ (0,∞), {α_n} ⊂ (0,1), {θ_n} ⊂ [0,∞), {δ_n} ⊂ [0,∞), {p_n} ⊂ [0,∞), and {q_n} ⊂ [1,∞).
      Iterative Steps: Construct {a_n} by using the following steps:
      Step 1. Compute
              z_n = a_n + θ_n(a_n − a_{n−1}),
              w_n = z_n + δ_n(z_n − a_{n−1}),
              y_n = prox_{μ_n g}(I − μ_n ∇f)w_n,
              u_n = y_n + μ_n(∇f(w_n) − ∇f(y_n)),
      and
              a_{n+1} = (1 − α_n)u_n + α_n U u_n.
      If w_n = y_n = u_n = Uu_n, then stop and w_n is a solution of the problem. Otherwise,
      Step 2. Update
              μ_{n+1} = min{ μ q_n ‖w_n − y_n‖/‖∇f(w_n) − ∇f(y_n)‖, μ_n + p_n }  if ∇f(w_n) ≠ ∇f(y_n);  μ_{n+1} = μ_n + p_n  otherwise.
      Replace n with n + 1 and then repeat Step 1.

    Algorithm 4
      Initialization: Select arbitrary elements a_{−1}, a_0, a_1 ∈ H. Let {α_n} ⊂ (0,1), {μ_n} ⊂ (0,1), {θ_n}, {δ_n} ⊂ (−∞,∞), and set n := 1.
      Iterative Steps: Construct {a_n} by using the following steps:
      Step 1. Define
                d_n = a_n + θ_n(a_n − a_{n−1}) + δ_n(a_{n−1} − a_{n−2}).
      Step 2. Compute
                    c_n = (I + μ_n Y)^{−1}(I − μ_n X)d_n,
                    b_n = c_n + μ_n(Xd_n − Xc_n).
      Step 3. Evaluate
                    a_{n+1} = U[α_n d_n + (1 − α_n)Ub_n].
      If Ub_n = b_n = c_n = d_n, then stop and d_n ∈ Ψ. Otherwise, replace n by n + 1 and then repeat Step 1.
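    To connect Algorithm 4 with problem (4.1), the sketch below (Python) implements one run of the iteration. It reuses H, lam, X_op, and resolvent_Y from the sketch following (4.1); the identity choice for U, the zero starting point, and the simple stopping rule are our own illustrative assumptions, since they are not prescribed here for this application.

        import numpy as np

        def algorithm4(X_op, resolvent_Y, U, a_start, mu, alpha=0.8,
                       theta=0.999, delta=-0.001, max_iter=200, tol=1e-8):
            # Proposed method: double inertial step, Tseng-type forward-backward step,
            # then a Mann-type step composed with the quasi-nonexpansive mapping U.
            a_prev2, a_prev, a = a_start.copy(), a_start.copy(), a_start.copy()
            for n in range(1, max_iter + 1):
                d = a + theta * (a - a_prev) + delta * (a_prev - a_prev2)   # Step 1: d_n
                Xd = X_op(d)
                c = resolvent_Y(d - mu * Xd, mu)                            # Step 2: c_n
                b = c + mu * (Xd - X_op(c))                                 #         b_n
                a_next = U(alpha * d + (1.0 - alpha) * U(b))                # Step 3: a_{n+1}
                if np.linalg.norm(a_next - a) < tol:                        # stopping rule (our choice)
                    return a_next, n
                a_prev2, a_prev, a = a_prev, a, a_next
            return a, max_iter

        # Illustrative run on problem (4.1):
        lip = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of X_op is ||H||_2^2
        mu = 0.999 / lip                         # step size 0.999/||H||^2 as in Table 2
        beta0 = np.zeros((H.shape[1], 1))
        U = lambda x: x                          # identity mapping, assumed only for illustration
        beta_hat, iters_used = algorithm4(X_op, resolvent_Y, U, beta0, mu)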

    Table 2.  Parameter settings for Algorithms 1–4. The parameters include μ_1, μ_n, and μ representing step sizes, α_n and β_n for the learning rate, θ and δ for adjustment factors, and p_n and q_n for additional terms related to the iteration process.

                  | μ_1, μ_n    | μ         | α_n      | β_n              | θ     | δ      | p_n   | q_n
    Algorithm 1   | 0.999/‖H‖²  | 2.5×10⁻⁶  | 1/(n+1)  | 1/2 − 1/2^{n+1}  | -     | -      | -     | -
    Algorithm 2   | 0.999/‖H‖²  | -         | -        | -                | -     | -      | -     | -
    Algorithm 3   | 0.999/‖H‖²  | 0.5       | 0.9      | -                | 0.999 | 0.8    | 1/n²  | 1 + 1/n
    Algorithm 4   | 0.999/‖H‖²  | -         | 0.8      | -                | 0.999 | -0.001 | -     | -


    The results for each algorithm, evaluated on Training Box 1, are presented in Table 3.

    Table 3.  Performance comparison of Algorithms 1–4 on the validation set from Training Box 1. These results are based on the model's evaluation using the first fold of the 5-fold cross-validation scheme, where Training Box 1 serves as the validation set.
    Iter.  CPU time  Accuracy  Precision  Recall  F1-score
    Algorithm 1 54 1.7352 77.44 76.92 77.72 77.31
    Algorithm 2 51 1.6293 77.95 77.94 77.94 77.94
    Algorithm 3 45 2.4402 81.79 96.41 74.60 84.11
    Algorithm 4 54 2.5853 82.05 91.79 76.82 83.64


    Table 4 displays the performance outcomes for each algorithm, assessed using Training Box 2.

    Table 4.  Performance comparison of Algorithms 1–4 on the validation set from Training Box 2, based on the second fold of the 5-fold cross-validation scheme.
    Iter.  CPU time  Accuracy  Precision  Recall  F1-score
    Algorithm 1 42 1.4179 83.89 84.69 83.41 84.05
    Algorithm 2 42 1.4632 83.63 84.69 83.00 83.83
    Algorithm 3 42 2.2095 85.93 92.85 81.61 86.87
    Algorithm 4 35 2.2713 85.93 92.85 81.61 86.87


    The performance outcomes for each algorithm, after being tested on Training Box 3, are detailed in Table 5.

    Table 5.  Performance comparison of Algorithms 1–4 on the validation set from Training Box 3, based on the third fold of the 5-fold cross-validation scheme.
    Iter.  CPU time  Accuracy  Precision  Recall  F1-score
    Algorithm 1 36 1.3214 83.89 87.24 81.81 84.44
    Algorithm 2 33 1.2631 83.89 87.24 81.81 84.44
    Algorithm 3 14 0.6169 84.14 87.24 82.21 84.65
    Algorithm 4 28 1.8228 84.40 90.81 80.54 85.37


    The performance outcomes for each algorithm, after being tested on Training Box 4, are detailed in Table 6.

    Table 6.  Performance comparison of Algorithms 1–4 on the validation set from Training Box 4, based on the fourth fold of the 5-fold cross-validation scheme.
    Iter.  CPU time  Accuracy  Precision  Recall  F1-score
    Algorithm 1 49 1.6230 76.47 75.51 77.08 76.28
    Algorithm 2 46 2.3913 76.98 76.53 77.31 76.92
    Algorithm 3 44 2.2669 79.28 83.16 77.25 80.09
    Algorithm 4 42 2.6810 82.35 93.36 76.56 84.13


    The performance results for each algorithm, evaluated on Training Box 5, are provided in Table 7.

    Table 7.  Performance comparison of Algorithms 1–4 on the validation set from Training Box 5, based on the fifth fold of the 5-fold cross-validation scheme.
    Iter.  CPU time  Accuracy  Precision  Recall  F1-score
    Algorithm 1 51 1.7199 84.30 84.69 83.83 84.26
    Algorithm 2 37 1.2250 84.56 85.20 83.91 84.55
    Algorithm 3 27 1.4804 85.06 93.36 79.91 86.11
    Algorithm 4 42 2.7039 85.32 94.89 79.48 86.51


    Remark 4.1. 1) From Tables 3–7, the performance of Algorithms 1–4 was evaluated using a 5-fold cross-validation scheme, with each training box serving as the validation set in turn:

    (ⅰ) Training Box 1: Algorithm 4 achieved the highest accuracy (82.05%) with notable precision (91.79%) and F1-score (83.64%).

    (ⅱ) Training Box 2: Algorithms 3 and 4 tied for the highest accuracy (85.93%), precision (92.85%), recall (81.61%), and F1-score (86.87%), with Algorithm 4 requiring fewer iterations (35 versus 42).

    (ⅲ) Training Box 3: Algorithm 4 again performed best, achieving an accuracy of 84.40% and a high precision (90.81%).

    (ⅳ) Training Box 4: Algorithm 4 had the best accuracy (82.35%) and precision (93.36%).

    (ⅴ) Training Box 5: Algorithm 4 achieved the highest accuracy (85.32%) and the best precision (94.89%).

    In general, Algorithm 4 consistently demonstrated strong performance in accuracy, precision, and F1-score across most of the training boxes.

    2) In practical applications of the proposed algorithm to machine learning problems, the challenge of unknown or difficult-to-estimate Lipschitz constants is mitigated by the finite nature of the feature set. In such cases, the Lipschitz constant can be approximated more quickly due to the boundedness of the feature space. Moreover, the efficiency and convergence of the proposed algorithm are not significantly impacted by the limitations associated with estimating the Lipschitz constant. This is demonstrated through the results presented in Tables 37, where the algorithm achieves effective performance and convergence despite potential uncertainties in the Lipschitz parameter.
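    In the present application, X(β) = Hᵀ(Hβ − T), whose Lipschitz constant is the largest eigenvalue of HᵀH, i.e., ‖H‖₂². It can be obtained with a direct spectral-norm computation or approximated cheaply by power iteration, as in the sketch below (the iteration cap, tolerance, and seed are our own choices):

        import numpy as np

        def lipschitz_estimate(H, iters=100, tol=1e-10, seed=0):
            # Power iteration on H^T H: approximates ||H||_2^2, the Lipschitz constant of
            # the operator beta -> H^T (H beta - T).
            rng = np.random.default_rng(seed)
            v = rng.standard_normal(H.shape[1])
            v /= np.linalg.norm(v)
            est = 0.0
            for _ in range(iters):
                w = H.T @ (H @ v)
                new_est = np.linalg.norm(w)
                v = w / new_est
                if abs(new_est - est) < tol * max(new_est, 1.0):
                    break
                est = new_est
            return new_est   # approximately the largest eigenvalue of H^T H

        # For comparison, the exact value via the spectral norm:
        # exact = np.linalg.norm(H, 2) ** 2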

    From Figures 3–7, the accuracy and loss plots for Algorithm 4 across all training boxes show consistent trends, with training and validation accuracy remaining relatively close to each other throughout the iterations, indicating minimal overfitting. The training and validation loss curves also exhibit similar behavior, steadily decreasing over time and stabilizing. However, in some figures (such as Figures 3 and 6), a slight divergence between the training and validation loss can be observed in later iterations, suggesting potential signs of mild overfitting. The model maintains a good balance between training and validation performance, demonstrating that regularization and parameter choices prevent significant overfitting. The performance trends suggest that Algorithm 4 generalizes well to the validation data without overfitting the training set.

    Figure 3.  Training and validation accuracy (left) and loss (right) for Algorithm 4 from Table 3 over 54 iterations. The blue lines represent training performance, while the red lines show validation performance, demonstrating stable accuracy and decreasing loss.
    Figure 4.  Training and validation accuracy (left) and loss (right) for Algorithm 4 from Table 4 over 35 iterations. The blue lines represent training performance, while the red lines show validation performance, demonstrating stable accuracy and decreasing loss.
    Figure 5.  Training and validation accuracy (left) and loss (right) for Algorithm 4 from Table 5 over 28 iterations. The blue lines represent training performance, while the red lines show validation performance, demonstrating stable accuracy and decreasing loss.
    Figure 6.  Training and validation accuracy (left) and loss (right) for Algorithm 4 from Table 6 over 42 iterations. The blue lines represent training performance, while the red lines show validation performance, demonstrating stable accuracy and decreasing loss.
    Figure 7.  Training and validation accuracy (left) and loss (right) for Algorithm 4 from Table 7 over 42 iterations. The blue lines represent training performance, while the red lines show validation performance, demonstrating stable accuracy and decreasing loss.

    From Figure 8, we see that the ROC curves illustrate the model's classification performance, where higher AUC values reflect stronger class separation.

    Figure 8.  Receiver operating characteristic (ROC) curves for Algorithm 4 across the five training boxes. Each plot represents the ROC curve for the respective training box, with the area under the curve (AUC) values reported as follows: Training Box 1 (AUC = 0.8254), Training Box 2 (AUC = 0.90419), Training Box 3 (AUC = 0.91188), Training Box 4 (AUC = 0.84278), and Training Box 5 (AUC = 0.88094).

    We proposed a novel algorithm (Algorithm 4) to solve variational inclusion problems and fixed point problems involving quasi-nonexpansive mappings in a real Hilbert space. Our main theorem establishes the weak convergence of the proposed algorithm under certain conditions. We also applied the algorithm in conjunction with an extreme learning machine to the problem of data classification, specifically to predict osteoporosis risk. Our algorithm achieves an accuracy of over 82%, a precision of over 90%, a recall of over 76%, and an F1-score of over 83% across all training boxes, which demonstrates the effectiveness of the algorithm we developed.

    The data are available on the Kaggle website (https://www.kaggle.com/datasets/amitvkulkarni/lifestylefactors-influencing-osteoporosis).

    This study was conducted in accordance with the Declaration of Helsinki, the Belmont Report, the CIOMS guideline, the International Conference on Harmonization Good Clinical Practice (ICH-GCP), and 45 CFR 46.101(b), and with approval from the Ethics Committee and Institutional Review Board of the University of Phayao (Institutional Review Board (IRB) approval, IRB Number: HREC-UP-HSST 1.102867).

    Raweerote Suparatulatorn: Writing – original draft, formal analysis; Wongthawat Liawrungrueang: Writing – review & editing, data curation; Thanasak Mouktonglang: Writing – review & editing, project administration; Watcharaporn Cholamjiak: Writing – review & editing, software. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research work was partially supported by the CMU Proactive Researcher, Chiang Mai University [grant number 780/2567]. W. Liawrungrueang and W. Cholamjiak would like to thank the Thailand Science Research and Innovation Fund (Fundamental Fund 2025, Grant No. 5025/2567) and the University of Phayao.

    The authors declare no conflicts of interest.



    [1] E. Picard, Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives, J. Math. Pure. Appl., 6 (1890), 145–210.
    [2] W. R. Mann, Mean value methods in iteration, P. Am. Math. Soc., 4 (1953), 506–510. https://doi.org/10.2307/2032162
    [3] S. H. Khan, A Picard-Mann hybrid iterative process, Fixed Point Theory A., 2013 (2013), 69. https://doi.org/10.1186/1687-1812-2013-69
    [4] P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. https://doi.org/10.1137/0716071
    [5] G. B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space, J. Math. Anal. Appl., 72 (1979), 383–390. https://doi.org/10.1016/0022-247X(79)90234-8
    [6] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431–446. https://doi.org/10.1137/S0363012998338806
    [7] D. A. Lorenz, T. Pock, An inertial forward-backward algorithm for monotone inclusions, J. Math. Imaging Vis., 51 (2015), 311–325. https://doi.org/10.1007/s10851-014-0523-2
    [8] W. Cholamjiak, P. Cholamjiak, S. Suantai, An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces, J. Fixed Point Theory A., 20 (2018), 42. https://doi.org/10.1007/s11784-018-0526-5
    [9] P. Peeyada, R. Suparatulatorn, W. Cholamjiak, An inertial Mann forward-backward splitting algorithm of variational inclusion problems and its applications, Chaos Soliton. Fract., 158 (2022), 112048. https://doi.org/10.1016/j.chaos.2022.112048
    [10] R. Suparatulatorn, A. Khemphet, Tseng type methods for inclusion and fixed point problems with applications, Mathematics, 7 (2019), 1175. https://doi.org/10.3390/math7121175
    [11] A. Padcharoen, D. Kitkuan, W. Kumam, P. Kumam, Tseng methods with inertial for solving inclusion problems and application to image deblurring and image recovery problems, Comput. Math. Methods, 3 (2021), e1088. https://doi.org/10.1002/cmm4.1088
    [12] P. Cholamjiak, D. V. Hieu, Y. J. Cho, Relaxed forward-backward splitting methods for solving variational inclusions and applications, J. Sci. Comput., 88 (2021), 85. https://doi.org/10.1007/s10915-021-01608-7
    [13] A. Gibali, D. V. Thong, Tseng type methods for solving inclusion problems and its applications, Calcolo, 55 (2018), 1–22. https://doi.org/10.1007/s10092-018-0292-1
    [14] B. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comput. Math. Math. Phys., 4 (1964), 791–817.
    [15] T. Mouktonglang, W. Chaiwino, R. Suparatulatorn, A proximal gradient method with double inertial steps for minimization problems involving demicontractive mappings, J. Inequal. Appl., 2024 (2024), 69. https://doi.org/10.1186/s13660-024-02965-y
    [16] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, CMS Books in Mathematics, Springer, New York, 2011.
    [17] H. Brézis, Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert, Math. Studies 5, North-Holland, Amsterdam, Netherlands, 1973.
    [18] A. Auslender, M. Teboulle, S. Ben-Tiba, A logarithmic-quadratic proximal method for variational inequalities, Comput. Optim. Appl., 12 (1999), 31–40.
    [19] G. B. Huang, Q. Y. Zhu, C. K. Siew, Extreme learning machine: A new learning scheme of feedforward neural networks, In: 2004 IEEE International Joint Conference on Neural Networks, 2004, 985–990. https://doi.org/10.1109/IJCNN.2004.1380068
    [20] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Roy. Stat. Soc. B, 58 (1996), 267–288.
    [21] N. W. S. Wardhani, M. Y. Rochayani, A. Iriany, A. D. Sulistyono, P. Lestantyo, Cross-validation metrics for evaluating classification performance on imbalanced data, In: 2019 International Conference on Computer, Control, Informatics and its Applications (IC3INA), 2019, 14–18. https://doi.org/10.1109/IC3INA48034.2019.8949568
    [22] M. Akil, R. Saouli, R. Kachouri, Fully automatic brain tumor segmentation with deep learning-based selective attention using overlapping patches and multi-class weighted cross-entropy, Med. Image Anal., 63 (2020), 101692. https://doi.org/10.1016/j.media.2020.101692
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
