
Kink soliton solution of integrable Kairat-X equation via two integration algorithms

  • To establish and assess the dynamics of kink solitons in the integrable Kairat-X equation, which describes the differential geometry of curves and equivalence aspects, the present investigation put forward two variants of a transformation-based analytical technique: the generalized (r+G′/G)-expansion method and the simple (G′/G)-expansion approach. The proposed methods reduced the targeted Kairat-X equation to a nonlinear ordinary differential equation by means of a variable transformation. A closed-form solution of the resulting nonlinear ordinary differential equation was then assumed, which turned it into a system of algebraic equations. Solving this system yielded new families of soliton solutions in the form of hyperbolic, rational, and periodic functions. An assortment of contour, 2D, and 3D graphs was used to visualize the dynamics of selected soliton solutions, indicating that these solutions prominently take the form of kink solitons. Moreover, the proposed methods demonstrated their utility by constructing a multiplicity of soliton solutions, offering significant insight into the evolution of the studied model and suggesting possible applications to related nonlinear phenomena.

    Citation: Raed Qahiti, Naher Mohammed A. Alsafri, Hamad Zogan, Abdullah A. Faqihi. Kink soliton solution of integrable Kairat-X equation via two integration algorithms[J]. AIMS Mathematics, 2024, 9(11): 30153-30173. doi: 10.3934/math.20241456




    In the domain of operations research, the parallel machine scheduling problem (PMSP) is important and difficult. The importance of PMSP comes from its ability to optimize the use of several machines and allocate a specific set of jobs to satisfy the customer's demands. This is vital for industries where resource optimization is essential, such as maximizing productivity and minimizing overall machine time. Additionally, a wide range of industries, especially in high-demand environments like data centers or manufacturing facilities, can adapt PMSP models for various scenarios, ranging from small-scale operations to large, complicated systems [1]. The challenges of PMSP stem from its complex nature, which has attracted wide attention and led to its classification as NP-hard [1,2]. This means that finding an optimal solution is computationally costly and becomes more difficult as the number of jobs and machines increases. Real-world scenarios frequently include dynamic changes and uncertainties, such as machine breakdowns or varying job durations. Machine constraints and multiple objectives must be considered, and balancing them can complicate the scheduling process. Therefore, the challenge of developing effective algorithms contributes to the complexity of the problem. According to [3], the successful solution of the PMSP requires simultaneous determination of assignment and sequencing policies for the available parallel machines and jobs.

    Academic literature primarily categorizes research on PMSP into three main groups [3]: identical, uniform, and unrelated PMSP (UPMSP). UPMSP can be regarded as a general representation of the other two groups, wherein distinct machines are employed to execute identical jobs but possess varying processing capacities or capabilities. However, addressing real-world UPMSPs poses a significant challenge for both practitioners and researchers, primarily due to their inherent complexity. Notably, UPMSPs are typically classified as NP-hard, even without considering the multi-objective functions [4]. In addition, UPMSPs are considered more realistic than the other groups because machine speeds, technology, and machine types in the shop may differ from one another. Therefore, solving UPMSPs has attracted many researchers and professionals from scientific and engineering disciplines [5]. UPMSPs have found widespread application in production scheduling and manufacturing systems, particularly in semiconductor manufacturing [6], electronic assembly [4], and manufacturing electricity costs [7].

    The most effective method for both exact and approximate solutions in small problem instances intended for single-objective optimization is linear programming [8]. In recent years, considerable attention has been given to multi-objective optimization, as seen in a substantial body of literature, particularly in the years 2019 and 2020 [9]. For instance, Sarçiçek [10] proposed a multi-objective UPMSP model that aims to minimize makespan and maximize machine preferences for jobs with sequence-dependent setup times, and introduced a simulated annealing metaheuristic to address large-scale problems. Additionally, Meng et al. [11] presented a mathematical model that integrates identical UPMSPs in a structural metal-cutting facility to minimize total makespan and total tardiness; a new metaheuristic algorithm, VNSGA-III, combines a variable neighborhood structure strategy with the non-dominated sorting genetic algorithm III (NSGA-III). Experimental results show that the suggested algorithm statistically outperforms the comparison algorithms. In the meantime, in [12], the UPMSP model was adapted and expanded based on previous mathematical models by focusing on minimizing makespan subject to a set of constraints for small instances. Then, different types of simulated annealing were suggested to solve large-scale instances in the packaging industry. The comprehensive evaluation and performance analysis show that the proposed methods often outperform state-of-the-art methods.

    Across all fields, the multi-objective scheduling challenge studied most often was the simultaneous minimization of makespan and total tardiness, which was also the objective pair that most researchers examined [13,14]. The application of fuzzy sets enhanced the accuracy of schedules used to evaluate UPMSPs and helped deal with the complexities of real-world problems [1,15]. For example, in the transport industry, where the operators are the resources and the freight deliveries are the jobs of a real-world UPMSP, Rivera Zarate [16] employed a fuzzy relational system to represent the decision makers' preferences and address three objective functions: minimizing completion time and risk and maximizing delivery reliability. The outranking-based particle swarm optimization algorithm (O-PSO) is a metaheuristic developed in that study, which demonstrated its ability to generate high-quality solutions in comparison to a commonly used policy. Another study conducted by [17] aimed to minimize makespan and total cost simultaneously within manufacturing production systems, and a fuzzy programming method was employed to solve large-scale instances of the proposed UPMSP. Meanwhile, Zhou et al. [18] developed a tri-objective optimization model for UPMSP to minimize the total cost of order delay and early penalties, makespan, and workload imbalance; the processing time is fuzzy because of the many uncertain variables associated with actual production. A metaheuristic of Pareto-based discrete particle swarm optimization (PDPSO) was employed to evaluate the model across various scales. The findings indicate that the proposed PDPSO surpasses the other algorithms, particularly for small and medium-sized instances.

    Prior studies have proposed fuzzy UPMSPs to enhance performance metrics and determine appropriate schedules across a range of scenarios. Such methods encompass exact methods [15,19], heuristic methods [20], and artificial swarm intelligence methods [21]. Accordingly, quite a few metaheuristic algorithms have been proposed for UPMSP, with many demonstrating the capability to achieve near-optimal solutions within reasonable time frames [9]. Even though various algorithms have exhibited commendable performance, handling multiple and conflicting objectives remains an open issue [22]. The primary objectives under consideration are the minimization of makespan and total tardiness. Machine scheduling commonly employs makespan, but minimizing makespan alone can push jobs past their due dates, thereby requiring the inclusion of an additional objective. As a result, even if the optimal makespan solution is identified, many jobs will probably be completed after their due dates. Consequently, simultaneously minimizing total tardiness and makespan could lead to increased efficiency. While machine scheduling commonly uses this multi-objective [14], the information required to define the time parameters and constraint conditions may be vague or not precisely measurable, because most real-world problems are fuzzy. To improve the quality of the schedule and make it more reflective of scheduling issues that occur in the real world, fuzzy sets can be utilized to evaluate UPMSPs.

    Limited research has been conducted on UPMSP with the multi-objective of makespan and total tardiness in a fuzzy environment [15]. Recently, Pourpanah et al. [23] conducted a comprehensive review of the artificial fish swarm algorithm (AFSA). The authors revealed that most modifications of AFSA focus on controlling the parameter values of the algorithm, particularly the visual distance, maximum step length, and crowding factor, along with updates and enhancements to existing behaviors or the development of new ones. AFSA has seen many modifications recently to enhance its optimization effectiveness, most of which aimed for a balance between the exploration and exploitation procedures. Even so, currently used fish swarm algorithms have not yet achieved a global optimum at remarkably high convergence rates. Thus, AFSA development still holds considerable potential.

    In recent years, improved and modified versions of AFSA have become valuable tools that can be efficiently applied in a variety of applications. Zhao et al. [24] adopted an improved AFSA for route planning of autonomous vessels by introducing a directional operator to improve efficiency, a probability weight factor to avoid local optima, and an adaptive operator to achieve better convergence performance. Tan & Mohamad-Saleh [25] presented a new AFSA variant to improve the performance of global and local search techniques in optimization. The proposed algorithm includes several improvements, such as hybridizing characteristics, introducing normative communication and memory behaviors, and adapting the visual and step parameters. Meanwhile, Huang et al. [26] proposed three adaptive step methods to speed up the standard AFSA and address its shortcomings for solving sensor layout optimization problems in fiber grating networks. Research by Wang et al. [27] customized four operators and an adaptive factor to enhance the algorithm's convergence performance and suggested a modified AFSA, combined with a local path optimizer, to resolve the path planning issue. Jin et al. [28] proposed a modified AFSA for unit commitment optimization to overcome the disadvantages of premature convergence and local extremes in the original algorithm; the improvements include variable vision, an adjusted movement strategy, and the mutation operation of genetic algorithms. A study by Gao et al. [29] introduced a novel AFSA using Cauchy mutation to accelerate the algorithm's convergence speed when solving the parameter selection problem for the twin support vector machine. Li et al. [30] proposed an improved AFSA to determine the best schedule that minimizes time delay for the multidimensional knapsack problem. These modifications attempted to improve the algorithm's performance in terms of convergence, optimality, and escape from local optima, among others. However, the development of the algorithm continues to this day.

    While there is limited research on the use of modified AFSA in machine scheduling, Tirkolaee et al. [31] introduced an improved version of AFSA to solve flow shop scheduling (FSS) problems; the aims were to minimize total cost and total energy consumption by adding self-adaptive behavior to enhance the algorithm's performance. To our knowledge, AFSA has never been used to solve the scheduling problem on unrelated machines for the proposed multi-objective problem under fuzzy environments. Therefore, in this study, a modified AFSA is proposed to minimize the proposed multi-objective problem, with triangular fuzzy numbers used to mitigate the uncertainty associated with the processing times and due dates. The proposed modified AFSA is a new algorithm designed to solve UPMSP by exploiting the capabilities of the AFSA, which is modified in three aspects: the addition of a new behavior to the four main behaviors, the use of improved visual and step parameters, and the conversion of AFSA from a continuous to a discrete solution space to handle the proposed discrete model. Therefore, after creating a random fish population to represent UPMSP solutions, the proposed modified AFSA starts with the best individual solution generated by the standard AFSA. Additionally, the AFSA's constant visual and step values are replaced with improved adaptive parameters, and each individual is updated based on its best and current positions using the aspiration behavior expression. Subsequently, the fitness value of the best individual is calculated using a transformation method that converts the continuous solution into a discrete solution. All solutions are then updated according to the previous steps until the termination criterion is satisfied.

    In short, the main contributions of this research are described as follows:

    • Adapt and extend the linear multi-objective model introduced by [32] for UPMSP by embedding triangular fuzzy parameters.

    • Apply the total integral value defuzzification method that converts a fuzzy output into a crisp output value.

    • Modify standard continuous AFSA to deal with the discrete UPMSP model in solving the proposed UPMSP model.

    • Evaluate the performance of the proposed modified algorithm by conducting computational experiments and comparing the results with AFSA and five versions of modified AFSA algorithms. The evaluation results confirmed the superior performance of the modified algorithm.

    The performance of the proposed algorithm was compared against the standard AFSA, two modified versions proposed by Zhao et al. [24] and Tan & Mohamad-Saleh [25], and three versions of the method outlined by Huang et al. [26].

    This paper is structured as follows: Section 2 presents the problem formulation, while Section 3 explains the preliminaries. The standard AFSA is described in Section 4, and the proposed modified AFSA is detailed in Section 5. Section 6 provides computational and statistical results, along with the analysis of the evaluation process. Finally, Section 7 discusses the conclusions and future research directions.

    Based on the classification scheme of scheduling problems by Graham et al. [33], the UPMSP for the multi-objective can be indicated as $R//(C_{\max}+\sum T_j)$, where $R$ denotes an unrelated machine and $C_{\max}+\sum T_j$ represents a proposed multi-objective function. The following assumptions are considered before formulating the mathematical model for the proposed multi-objective UPMSP.

    Assumptions:

    1- Every machine and job are available at the start of time.

    2- Parallel machines are unrelated (each job's processing time differs and is determined by the machine).

    3- Preemption is not allowed.

    4- There is only one job that each machine can process at once.

    Indices and sets:

    N: A set of jobs.

    M: A set of machines.

    $j, k$: Index for jobs, $j, k \in N = \{1,\dots,n\}$.

    $N_0$: Indicates a set of jobs that involves a dummy job, $N_0 = \{0,1,\dots,n\}$.

    $i$: Index for machines, $i \in M = \{1,\dots,m\}$.

    Parameters and decision variables:

    $p_{ij}$: The processing time of job $j$ on machine $i$, which could differ depending on the machine.

    $d_j$: The due date of job $j$.

    $C_j$: The completion time of job $j$.

    $C_{\max}$: The maximum completion time (makespan).

    $w$: The value of the weight.

    $v$: A sufficiently large number.

    $T_j$: The tardiness of job $j$.

    $X_{ikj} = \begin{cases} 1, & \text{if job } j \text{ immediately follows job } k \text{ on machine } i \\ 0, & \text{otherwise} \end{cases}$

    The notation definitions above and the current model have been inspired by the models that were introduced in papers [34,35]. Meanwhile, the proposed multi-objective model for UPMSP is extended to the linear multi-objective model introduced by [32], which is formulated as follows:

    $\min F = wC_{\max} + (1-w)\sum_{j=1}^{n} T_j$ (1)

    Subject to:

    $\sum_{i \in M}\sum_{k \in N_0,\, k \neq j} X_{ikj} = 1 \quad \forall j \in N,$ (2)
    $\sum_{i \in M}\sum_{j \in N_0,\, j \neq k} X_{ikj} = 1 \quad \forall k \in N,$ (3)
    $\sum_{j \in N_0,\, j \neq k} X_{ikj} - \sum_{h \in N_0,\, h \neq k} X_{ihk} = 0 \quad \forall i \in M,\ k \in N,$ (4)
    $\sum_{j \in N} X_{i0j} \le 1 \quad \forall i \in M,$ (5)
    $C_0 = 0,$ (6)
    $C_j - C_k + v(1 - X_{ikj}) \ge p_{ij} \quad \forall i \in M,\ k \in N_0,\ j \in N:\ j \neq k,$ (7)
    $C_{\max} \ge C_j \quad \forall j \in N,$ (8)
    $T_j \ge C_j - d_j \quad \forall j \in N,$ (9)
    $p_{ij}, C_j, T_j, C_{\max} \ge 0,$ (10)
    $X_{ikj} \in \{0,1\}.$ (11)

    Equation (1) minimizes the multi-objective function of makespan and total tardiness simultaneously. Constraint (2) guarantees that every job is done by just one machine. When job $j$ is executed after job $k$ on machine $i$, the value of $X_{ikj}$ is equal to 1; otherwise, the value will be 0. Constraint (3) demonstrates that there is only one job that is scheduled first at each machine, which means $X_{ikj}=1$ when job $k$ is the first job on machine $i$; otherwise $X_{ikj}=0$. Constraint (4) specifies the job's flow balance at each machine, which means only one preceding job and one succeeding job. Constraint (5) determines the machine's initial job. Constraint (6) gives zero completion time at job 0. Constraint (7) ensures that the completion time of job $j$ equals the completion time of the preceding job plus the processing time of job $j$ at each machine; this can be enforced using a large number $v$. When $X_{ikj}=1$, i.e., job $j$ is ordered after job $k$, then $v(1-X_{ikj})=0$ and $C_j = C_k + p_{ij}$. Otherwise, when job $j$ is not ordered after job $k$, then $X_{ikj}=0$ and thus $v(1-X_{ikj})=v$. Constraint (8) demonstrates that the maximum completion time over all machines equals the makespan. Constraint (9) calculates each job's tardiness value. Constraint (10) ensures that no decision variable can take a negative value. Constraint (11) defines the binary variables.
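    To make Eq (1) and constraints (8) and (9) concrete, the following minimal Python sketch evaluates the weighted objective for one fixed schedule; the processing times, due dates, schedule, and weight below are illustrative values, not data from the paper.

```python
# Evaluate F = w*Cmax + (1 - w)*sum(Tj) for a fixed schedule (Eqs (1), (8), (9)).
# All data below are illustrative, not taken from the paper.
p = [[4, 3, 6, 2],          # p[i][j]: processing time of job j on machine i
     [5, 2, 4, 3]]
d = [6, 5, 9, 7]            # due dates d[j]
schedule = {0: [1, 0],      # machine 0 runs job 1 then job 0 (no preemption)
            1: [3, 2]}      # machine 1 runs job 3 then job 2
w = 0.5                     # weight between makespan and total tardiness

completion = [0.0] * len(d)
for machine, jobs in schedule.items():
    t = 0.0
    for job in jobs:        # jobs on a machine run back to back
        t += p[machine][job]
        completion[job] = t

c_max = max(completion)                                             # makespan
tardiness = [max(0.0, completion[j] - d[j]) for j in range(len(d))]
F = w * c_max + (1 - w) * sum(tardiness)
print(c_max, sum(tardiness), F)   # 7.0 1.0 4.0
```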

    In the literature, fuzzification refers to the process of converting a crisp set into a specific fuzzy set [36]. The primary objective of a fuzzy scheduling process is to identify the "best" solution, or decision, given the presence of uncertain data. Zadeh [37] introduced fuzzy set theory, which expands classical set theory's membership values from the integers {0, 1} to the whole interval [0, 1]. This section introduces some basic prerequisites for fuzzy sets and triangular fuzzy numbers that will be used later.

    Definition 1 [38]: If $X$ is a collection of items represented by $x$, then a fuzzy set $\tilde{A}$ in $X$ is a set of ordered pairs specified by the membership function $\mu_{\tilde{A}}: X \to [0,1]$:

    $\tilde{A} = \{(x, \mu_{\tilde{A}}(x)) : x \in X\}$ (12)

    As shown in Figure 1, the membership function $\mu_{\tilde{A}}(x)$ of the fuzzy set $\tilde{A}$ is the relationship between various crisp values $x$ and values in the interval $[0,1]$.

    Figure 1.  Crisp set $A$ with fuzzy set $\tilde{A}$.

    In fuzzy theory applications, the most widely used and basic model of fuzzy numbers is the triangular fuzzy number (TFN), which is represented by $\tilde{A} = (a_1, a_2, a_3)$. The parameter $a_2$ is the most possible value of the evaluation data $x$, and parameters $a_1$ and $a_3$ denote the lower and upper bounds of the available area of the evaluation data [39]. The graph of the membership function $\mu_{\tilde{A}}(x)$ in Figure 2 has a triangular shape, with its base over the interval $[a_1, a_3]$ and its vertex at $x = a_2$.

    Figure 2.  Membership function for the triangular fuzzy number $\tilde{A} = (a_1, a_2, a_3)$.

    Definition 2: Let $\tilde{A} = (a_1, a_2, a_3)$ and $\tilde{B} = (b_1, b_2, b_3)$ be two TFNs. The main operations are also TFNs and can be performed as follows [40]:

    $\tilde{A} + \tilde{B} = (a_1 + b_1,\ a_2 + b_2,\ a_3 + b_3)$
    $\tilde{A} - \tilde{B} = (a_1 - b_3,\ a_2 - b_2,\ a_3 - b_1)$

    The fuzzy mathematical model is the proposed deterministic model that has been fuzzified using TFNs for the primary parameters of processing time and due date stated here as follows:

    $\min F = w\tilde{C}_{\max} + (1-w)\sum_{j=1}^{n}\sum_{i=1}^{m}\tilde{T}_j,$ (13)
    $\tilde{C}_0 = 0,$ (14)
    $\tilde{C}_j - \tilde{C}_k + v(1 - X_{ikj}) \ge \tilde{p}_{ij} \quad \forall i \in M,\ k \in N_0,\ j \in N:\ j \neq k,$ (15)
    $\tilde{C}_{\max} \ge \tilde{C}_j \quad \forall j \in N,$ (16)
    $\tilde{T}_j \ge \tilde{C}_j - \tilde{d}_j \quad \forall i \in M,\ j \in N,$ (17)
    $\tilde{p}_{ij}, \tilde{C}_j, \tilde{T}_j, \tilde{C}_{\max} \ge 0.$ (18)

    Considering the above model, an optimal solution cannot be obtained in such a fuzzy environment [41]. Defuzzification is the process of transforming a fuzzy output into a single, crisp output value. A study conducted a performance evaluation and comparison of the nine defuzzification methods introduced in [42] to understand the impact of fuzziness on the fuzzy parameters in the UPMSP. The integral method introduced by [43] was found to have significantly better performance than the other methods based on comprehensive testing. To calculate the maximum fuzzy makespan and tardiness, it is necessary to use a method that compares TFNs. According to [43], the total integral value can be used and denoted as:

    $I_T^{\alpha}(A) = \frac{1}{2}\alpha(b+c) + \frac{1}{2}(1-\alpha)(a+b) = \frac{1}{2}\left[\alpha c + b + (1-\alpha)a\right]$ (19)

    for a triangular fuzzy number $A = (a, b, c)$, and given $\alpha \in [0, 1]$.
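    As a quick numerical check of Definition 2 and Eq (19), the sketch below adds and subtracts two triangular fuzzy numbers and defuzzifies one of them with the total integral value; the TFNs and the choice α = 0.5 are illustrative.

```python
# Triangular fuzzy numbers as (a, b, c) tuples; all values are illustrative.

def tfn_add(A, B):
    # Definition 2: (a1 + b1, a2 + b2, a3 + b3)
    return (A[0] + B[0], A[1] + B[1], A[2] + B[2])

def tfn_sub(A, B):
    # Definition 2: (a1 - b3, a2 - b2, a3 - b1)
    return (A[0] - B[2], A[1] - B[1], A[2] - B[0])

def total_integral_value(A, alpha=0.5):
    # Eq (19): I = 0.5*[alpha*c + b + (1 - alpha)*a]
    a, b, c = A
    return 0.5 * (alpha * c + b + (1 - alpha) * a)

p_tilde = (3.0, 4.0, 5.0)     # fuzzy processing time (illustrative)
d_tilde = (6.0, 7.0, 9.0)     # fuzzy due date (illustrative)

print(tfn_add(p_tilde, d_tilde))            # (9.0, 11.0, 14.0)
print(tfn_sub(d_tilde, p_tilde))            # (1.0, 3.0, 6.0)
print(total_integral_value(p_tilde, 0.5))   # 4.0
```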

    A novel swarm intelligence-based optimization technique known as the artificial fish swarm algorithm (AFSA) was introduced by Li Xiao Lei [44], which involves randomly searching for a set of solutions to solve NP-hard problems [23]. Compared to other intelligent algorithms, AFSA has a limited number of parameters to manage, making it relatively simple to implement. The amount of time needed to find good solutions is reasonable, and the algorithm is parallel, has a strong global search capability, converges quickly, and is less sensitive to the needs of the objective functions. These benefits have led to the successful application of AFSA and its variants in a diverse array of optimization problems [25], such as flexible job shop scheduling problems [45], logistics distribution [46], network design problems [47], wireless sensor networks [48], cloud computing [49], the traveling salesman problem [50], and the exam timetabling problem [51]. Furthermore, Peraza et al. [52] demonstrated the successful implementation of AFSA in various benchmarking optimization problems. The basic idea behind AFSA is to mimic the various environmental behaviors that schooling fish exhibit in the water, where a fish serves as a fictitious representation of a true fish within a population, and the movements of the swarm are random. This algorithm trains each artificial fish (AF) to react to its current position and teaches it four basic behaviors: prey, swarm, follow, and random. Prey behavior is the foundation of the algorithm's convergence; swarm behavior improves the algorithm's stability and global convergence; follow behavior accelerates the convergence of the algorithm; and random behavior balances the contradiction of the other three behaviors. Here is a description of AFSA's behaviors:

    Preying behavior: This essential behavior enables an AF to get to locations where there is a greater concentration of food. This behavior is modeled within a radius of a fish's neighborhood by using an AF's current location and its nearest neighbors as determined by the AF's field of view. Let $X_i$ be the current position of the $i$th AF, and let $X_j$ be an arbitrary state of an AF, as follows:

    $X_j = X_i + \mathrm{Visual} \times R(0,1)$ (20)

    Visual is the visual length between two AFs that are placed at $X_i$ and $X_j$, where the Euclidean distance can be defined as $\mathrm{dist}_{ij} = \|X_i - X_j\|$, and $R$ is a random vector with each element between 0 and 1.

    In minimization problems, if $f(X_j) \le f(X_i)$, the $i$th AF moves a step toward $X_j$ using the following formula:

    $X_{\mathrm{prey}} = X_i + \dfrac{X_j - X_i}{\|X_j - X_i\|} \times \mathrm{Step} \times R(0,1)$ (21)

    where $X_{\mathrm{prey}}$ is the position of the $i$th AF after executing the preying behavior, which lies along the direction $X_j - X_i$ with $\|X_{\mathrm{prey}} - X_i\| \le \mathrm{Step}$. However, if $f(X_j) > f(X_i)$, another position $X_j$ is randomly selected by applying Eq (20). If the fitness value (objective function), which reflects the food concentration at the AF's current position, fails to improve after a given number of attempts, $X_i$ takes a random step as follows:

    $X_{\mathrm{prey}} = X_i + \mathrm{Visual} \times R(0,1)$ (22)

    Swarming behavior: In this behavior, the fish collect and move toward the center position to avoid crowding. Let $n$ denote the total number of AFs, $n_f$ be the number of neighbors within the $i$th AF's visual range with $\mathrm{dist}_{ij} < \mathrm{Visual}$, and $X_{\mathrm{center}}$ be the central position of the swarm, which is the average of all AFs' positions:

    $X_{\mathrm{center}} = \frac{1}{n}\sum_{k=1}^{n} X_k$ (23)

    The $i$th AF relocates in the direction of $X_{\mathrm{center}}$ if $f(X_{\mathrm{center}}) \le f(X_i)$ and $\frac{n_f}{n} \le \mu$, where $\mu \in [0,1]$ is the crowding factor; this indicates that there is greater availability of food in the center and that the location is not overcrowded. As a result, $X_i$ takes a movement toward the neighbors' center as follows:

    $X_{\mathrm{swarm}} = X_i + \dfrac{X_{\mathrm{center}} - X_i}{\|X_{\mathrm{center}} - X_i\|} \times \mathrm{Step} \times R(0,1)$ (24)

    otherwise, the ith AF will carry out the preying behavior.

    Following behavior: For the $i$th AF, if there is at least one other AF $j$ such that $\|X_j - X_i\| \le \mathrm{Visual}$ and $f(X_j) < f(X_i)$ with a low crowding factor, i.e., $\frac{n_f}{n} < \mu$, then the $i$th AF moves one step toward the position of the $j$th AF as follows:

    $X_{\mathrm{Follow}} = X_i + \dfrac{X_j - X_i}{\|X_j - X_i\|} \times \mathrm{Step} \times R(0,1)$ (25)

    If there are no neighbors around $X_i$, or if none of them satisfy the condition, then the $i$th AF performs the preying behavior.

    Random behavior: Each AF performs its own random search, letting the fish swim freely or follow a swarm in a larger space. The next state of the AF, $X_{\mathrm{next}}$, is defined as follows:

    $X_{\mathrm{next}} = X_i + \mathrm{Visual} \times R(0,1)$ (26)
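    For readers who prefer code, the following sketch is a minimal Python rendering of the standard behaviors in Eqs (20)–(26) for a minimization problem; the sphere objective, dimension, parameter values, and the greedy details are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                          # illustrative objective (sphere function)
    return float(np.sum(x ** 2))

def step_toward(x_i, target, step):
    # Common move of Eqs (21), (24), (25): a random fraction of Step along (target - x_i).
    direction = target - x_i
    norm = np.linalg.norm(direction)
    return x_i.copy() if norm == 0 else x_i + direction / norm * step * rng.random()

def prey(x_i, visual, step, try_number=10):
    # Eqs (20)-(22): sample a state in the visual range and move if it is no worse.
    for _ in range(try_number):
        x_j = x_i + visual * rng.random(x_i.size)     # R(0,1), as in Eq (20)
        if fitness(x_j) <= fitness(x_i):
            return step_toward(x_i, x_j, step)
    return x_i + visual * rng.random(x_i.size)        # Eq (22): random fallback

def swarm(x_i, population, visual, step, mu=0.3):
    # Eqs (23)-(24): move toward the neighbourhood centre if it is better and not crowded.
    neighbours = [x for x in population if 0 < np.linalg.norm(x - x_i) < visual]
    if neighbours:
        centre = np.mean(neighbours, axis=0)
        if fitness(centre) <= fitness(x_i) and len(neighbours) / len(population) <= mu:
            return step_toward(x_i, centre, step)
    return prey(x_i, visual, step)

def follow(x_i, population, visual, step, mu=0.3):
    # Eq (25): move toward the best neighbour in the visual range if not crowded.
    neighbours = [x for x in population if 0 < np.linalg.norm(x - x_i) < visual]
    if neighbours and len(neighbours) / len(population) < mu:
        best = min(neighbours, key=fitness)
        if fitness(best) < fitness(x_i):
            return step_toward(x_i, best, step)
    return prey(x_i, visual, step)

population = [rng.uniform(-5, 5, size=3) for _ in range(20)]
new_position = follow(population[0], population, visual=2.0, step=0.5)
print(fitness(new_position))
```

    The random behavior of Eq (26) has the same form as the fallback line in `prey` and is therefore not repeated as a separate function in this sketch.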

    In standard AFSA, an AF typically searches its visual range at random for the next state before proceeding to the next position. This method is particularly time-consuming, because the AF must explore every direction until it finds its position. To address this weakness, we incorporate aspiration behavior alongside the four primary behaviors of the AFSA, guided by the best position found by the AFs. The proposed algorithm starts by initializing the population size of a fish swarm, followed by the initiation of the AFSA and the proposed AFSA parameters. Every AF signifies a solution to the problem, receives a fitness value to assess its performance, and records its results on the bulletin board following each iteration. Like standard AFSA, each AF in the proposed algorithm changes its position based on the individual best solution of the whole swarm through each iteration. However, there is a major difference in how the best solution is updated: the modified algorithm compares the current position of an AF with the best position achieved across all behaviors. This enhances the search process and helps AFs escape from local optima. Figure 3 illustrates the flowchart of the modified algorithm, highlighting the modified steps in yellow. The new behavior depends on the best solution, the current solution, and random movement, and the update process is formulated as Eq (27).

    Figure 3.  Flowchart of the proposed modified algorithm.

    Suppose that $X_{\mathrm{best}}$ represents the potential position of the AF in the step range, with $\|X_{\mathrm{best}} - X_i\| \le \mathrm{Step}$, which can be stated as:

    $X_{\mathrm{Aspiration}} = X_i + \dfrac{X_{\mathrm{best}} - X_i}{\|X_{\mathrm{best}} - X_i\|} \times \mathrm{Step} \times R(0,1)$ (27)

    The aspiration behavior guarantees that the computing time is greatly reduced because AF can determine the best position in just one cycle of calculations.
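    A minimal, self-contained sketch of the aspiration move of Eq (27) follows; the objective, the greedy acceptance rule, and the population setup are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    return float(np.sum(x ** 2))                  # illustrative objective

def aspiration(x_i, x_best, step):
    # Eq (27): one step from the current position toward the swarm's best-known position.
    direction = x_best - x_i
    norm = np.linalg.norm(direction)
    return x_i.copy() if norm == 0 else x_i + direction / norm * step * rng.random()

population = [rng.uniform(-5, 5, size=3) for _ in range(20)]
x_best = min(population, key=fitness)             # best position on the bulletin board
candidate = aspiration(population[0], x_best, step=0.5)
if fitness(candidate) < fitness(population[0]):   # greedy acceptance (an assumption)
    population[0] = candidate
```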

    However, in the standard AFSA the visual and step parameters remain fixed throughout the iterations. Large visual and step values increase the algorithm's convergence speed at the beginning, but they cause problems such as local optima or iterative jumps when the AFs approach the final position; values that are too low, on the other hand, reduce effectiveness [24,25]. As a result, the algorithm requires different visual and step sizes at different phases. To achieve a balance between convergence rate and global search capability, the improved adaptive parameters (visual, step) utilized in [25] were adopted by the modified algorithm and are given as follows:

    $\mathrm{visual}_{t+1} = \mathrm{visual}_t - \mathrm{visual}_t \times \lambda + \mathrm{visual}_{\min}$ (28)
    $\mathrm{step}_{t+1} = \mathrm{step}_t - \mathrm{step}_t \times \lambda + \mathrm{step}_{\min}$ (29)
    $\lambda = e^{\frac{\sigma^4}{(t_{\max})^3}(t_{\max}-t)}$ (30)

    where $\mathrm{visual}_{\min}$ is the minimum visual value; $\mathrm{step}_{\min}$ is the minimum step value; $t = 1, 2, \dots, t_{\max}$ is the iteration index; and $0.5 < \sigma < 1$ at any stage.
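    A minimal sketch of the parameter update in Eqs (28)–(30) follows; the exact form of the decay factor λ is taken from the reconstruction above and should be verified against [25] before reuse, and the starting values are arbitrary.

```python
import math

def adapt(visual, step, t, t_max, visual_min=30.0, step_min=1.0, sigma=0.6):
    # Decay factor of Eq (30) as reconstructed above -- verify against [25] before reuse.
    lam = math.exp((sigma ** 4 / t_max ** 3) * (t_max - t))
    visual_next = visual - visual * lam + visual_min     # Eq (28)
    step_next = step - step * lam + step_min             # Eq (29)
    return visual_next, step_next

visual, step = 50.0, 5.0         # arbitrary starting values
for t in range(1, 6):
    visual, step = adapt(visual, step, t, t_max=1000)
    print(t, round(visual, 4), round(step, 4))
```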

    The AFSA was originally created to address continuous problems, so solving combinatorial optimization problems such as UPMSP remains challenging. It is therefore crucial to make the algorithm suitable for solving discrete mathematical scheduling problems and to improve the original algorithm's search capabilities. A major contribution of the proposed algorithm is the use of a transformation method that converts the continuous solution into a discrete solution. To achieve this, the modified algorithm's continuous solution is encoded, and its values are sorted in non-decreasing order. The procedure is illustrated as follows (a short code sketch of this decoding step is given after the example): assume that a UPMSP has five jobs and three machines, and let the lower bound of the algorithm's search space be 0 and the upper bound be 1. Let the continuous solution generated by AFSA be S = [0.23 0.43 0.1 0.7 0.2], and encode the solution with the integer indices N = [1 2 3 4 5]. After combining the solution values and indices, the solution becomes:

    Index:  1     2     3     4     5
    Value:  0.23  0.43  0.1   0.7   0.2      (continuous solution)

    Now, the solution is sorted by increasing order, and the index is arranged based on the solution. After sorting, the corresponding solution should be:

    Index:  3     5     1     2     4
    Value:  0.1   0.2   0.23  0.43  0.7      (discrete solution)

    Thus, the discrete solution D = [3 5 1 2 4].
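    The decoding step just illustrated amounts to an argsort of the continuous position vector; a minimal NumPy sketch, using the example vector S from above, is:

```python
import numpy as np

def decode(position):
    # Sort the continuous values in non-decreasing order and return the
    # corresponding job indices (1-based, to match the worked example above).
    return np.argsort(position, kind="stable") + 1

S = np.array([0.23, 0.43, 0.1, 0.7, 0.2])
print(decode(S))    # [3 5 1 2 4]
```

    How the resulting job permutation is then assigned to the three machines is a separate decoding choice that this sketch does not fix.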

    This method is used to evaluate each solution in the population, and the best solution is then selected according to the value of the multi-objective function. The entire optimization process runs by employing the three proposed modification steps simultaneously; these steps are interconnected and depend on the quality of the solution.

    The proposed algorithm was applied to each randomly generated machine and job combination, covering three types of problem instances: small, medium, and large. Each instance was repeated 10 times, and the results were recorded. The smallest minimum value obtained is taken as the best value for each instance. The details of the results and comparisons are discussed in the following section.

    Various experiments were conducted on three differently sized problem instances to confirm the proposed algorithm's performance for the multi-objective UPMSP. Each problem size consists of 10 instances, each run 10 times. Small-size problems consist of the number of jobs N = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50] and the number of machines M = [3, 3, 4, 4, 5, 5, 6, 6, 7, 7]. Medium-size problem instances had N = [60, 80, 100, 120, 140, 160, 180, 200, 220, 240] and M = [8, 8, 9, 9, 10, 10, 11, 11, 12, 12], while large-size problems had N = [250, 270, 290, 310, 330, 350, 370, 390, 410, 430] and M = [13, 13, 14, 14, 15, 15, 16, 16, 17, 17]. The proposed AFSA results are compared with standard AFSA, other versions of modified AFSA (presented in previous works by Zhao et al. [24] and Tan & Mohamad-Saleh [25]), and three methods outlined by Huang et al. [26]. The performance evaluation of the solution methodology is based on the minimum values, mean, and standard deviation of the objective function obtained. Further analysis of the proposed algorithm's performance was conducted using the Wilcoxon signed-rank test. The experiments were run with the parameter values given in Table 1, using a 12th Gen Intel(R) Core(TM) i7-1280P at 2.00 GHz with 16 GB RAM under Windows 11 (64-bit) and MATLAB R2023a.

    Table 1.  Parameter settings for the proposed modified algorithm.
    Parameter Values
    Try-number 10
    Fish population (nPop) 40
    visualmin 30
    stepmin1 1
    stepmin2 2
    Crowd factor, μ 0.3
    σ 0.6
    Max-Iterations 1000


    a) Comparison of results based on the minimum values

    In minimization problems, the minimum values are used to determine the best performance of the proposed solution approach. Tables 2, 3, and 4 illustrate the minimum values of the proposed algorithm for 10 instances of different size problems. The results are then compared with the standard AFSA, Modified 1 [24], Modified 2 [25], and three methods presented in Modified 3 [26].

    Table 2.  Minimum values of proposed AFSA and their comparison with other versions of AFSA for small-sized instances.
    No. N M Proposed AFSA Modified 1 Modified 2 Modified 3-method 1 Modified 3-method 2 Modified 3-method 3
    1 5 3 23.25 23.25 23.25 23.25 23.25 23.25 23.25
    2 10 3 42 42.75 42 42 42 42 42
    3 15 4 47.25 48 50.25 48.75 49.5 51 48
    4 20 4 62.25 60 62.25 59.25 62.25 63.75 58.5
    5 25 5 63.75 63 65.25 64.5 66.75 64.5 62.25
    6 30 5 74.25 72 75 70.5 76.5 78.75 70.5
    7 35 6 83.25 81.75 82.5 78.75 86.25 91.5 78.75
    8 40 6 96.25 94 102.25 94 96.25 100 92.5
    9 45 7 90 87.75 96 88.5 94.5 102 89.25
    10 50 7 91.5 89.25 97.5 90 98.25 99 91.5

    Table 3.  Minimum values of proposed AFSA and their comparison with other versions of AFSA for medium-sized instances.
    No. N M Proposed AFSA Modified 1 Modified 2 Modified 3-method 1 Modified 3-method 2 Modified 3-method 3
    1 60 8 62.5 63 66 63 67.5 68.5 65
    2 80 8 107.5 101 118.5 102.5 123 133 110
    3 100 9 105.5 106 117.5 108.5 117.5 117.5 113.5
    4 120 9 123.5 141.5 147.5 131.5 173 164 136
    5 140 10 136.5 136.5 201 142.5 193 221 154
    6 160 10 161.5 173.5 230.5 178.5 221.5 348 192
    7 180 11 154 154.5 208.5 163 269.5 326 183.5
    8 200 11 181.5 192.5 269.5 187.5 282.5 372 209.5
    9 220 12 205 238 538 262.5 660.5 697.5 340
    10 240 12 251.5 308.5 621 294.5 664.5 1206 351

    Table 4.  Minimum values of proposed AFSA and their comparison with other versions of AFSA for large-sized instances.
    No. N M Proposed AFSA Modified 1 Modified 2 Modified 3-method 1 Modified 3-method 2 Modified 3-method 3
    1 250 13 262 287.5 629 287 684 1058 384.5
    2 270 13 270 276 715 347 971.5 1876 429
    3 290 14 409 452.5 1463.5 432.5 1249.5 2580 715.5
    4 310 14 303 404 1191 327.5 1313 3215 514
    5 330 15 559 542.5 1695.5 448.5 1543 4593 716.5
    6 350 15 565 898 2014.5 884 2624.5 8196.5 1121
    7 370 16 449.5 475.5 1510 463.5 1772 6986 668
    8 390 16 847 891 2647.5 801 3150 10149.5 1369.5
    9 410 17 517 649 2584.5 841.5 2437.5 9709 1191
    10 430 17 696.5 938 2752 977 3666.5 11626.5 1411.5


    As shown in Tables 2, 3, and 4, the bold value is the minimum value obtained, representing the best-found solution in comparison with all seven versions of AFSA. The proposed algorithm exhibits relative performance improvements when dealing with small-sized problems, as demonstrated by the results in Table 2. Notably, the best minimum values that the proposed algorithm yields are 23.25, 42, and 47.25, obtained in the instances (N = 5, M = 3), (N = 10, M = 3), and (N = 15, M = 4), respectively. However, as the number of jobs and machines increases to medium and large sizes (as seen in Tables 3 and 4), the algorithm's advantage becomes even more pronounced. The proposed algorithm provides the best minimum values in nine instances of the medium-sized problems, namely 62.5, 105.5, 123.5, 136.5, 161.5, 154, 181.5, 205, and 251.5, obtained in the instances (N = 60, M = 8), (N = 100, M = 9), (N = 120, M = 9), (N = 140, M = 10), (N = 160, M = 10), (N = 180, M = 11), (N = 200, M = 11), (N = 220, M = 12), and (N = 240, M = 12), respectively. Meanwhile, the standard AFSA provides the best minimum value in the second instance, (N = 80, M = 8). In the large-sized problems, the proposed algorithm also provides the best minimum values in nine instances, giving the values 262, 270, 409, 303, 565, 449.5, 847, 517, and 696.5, obtained in the instances (N = 250, M = 13), (N = 270, M = 13), (N = 290, M = 14), (N = 310, M = 14), (N = 350, M = 15), (N = 370, M = 16), (N = 390, M = 16), (N = 410, M = 17), and (N = 430, M = 17), respectively. The standard AFSA provides the best minimum value in the fifth instance, (N = 330, M = 15). We observe that for medium- and large-size problems, the standard AFSA converges to unsatisfactory solutions; however, with the proposed modifications, the modified algorithm converges to the best solutions in 9 out of 10 instances of each problem size for the proposed multi-objective function. This reflects the effectiveness of the proposed modifications, making the modified algorithm more effective in solving the proposed multi-objective UPMSP.

    b) Comparison of results based on statistical values

    A descriptive statistical analysis of the performance of AFSA and several modified AFSA algorithms was conducted to demonstrate the superior accuracy and computational efficiency of the proposed algorithm. The results in Tables 5, 6, and 7 present the numerical outcomes of all algorithms after 10 independent runs of the proposed UPMSP model. In the tables, "Mean" refers to the average value, and "STD" refers to the standard deviation, which was used to verify the reliability of the performance of the proposed algorithm. The results show that as more jobs and machines are added to the problem, the mean and STD of all algorithms increase. A smaller mean value represents the best average solution performed by the algorithms, and a lower STD indicates that the results are closer to the mean, suggesting more consistent or stable performance.

    Table 5.  Statistical values of proposed AFSA and their comparison with other versions of AFSA for small-sized instances.
    No. N M Stat. Proposed AFSA Modified 1 Modified 2 Modified 3-method 1 Modified 3-method 2 Modified 3-method 3
    1 5 3 Mean 23.475 23.475 23.7 24.6 23.925 24.075 23.7
    STD 0.711512 0.711512 0.9486833 1.161895 1.08685326 1.38969421 0.9486833
    2 10 3 Mean 43.275 43.8 44.55 43.95 44.55 44.475 43.575
    STD 1.325236 0.880341 2.12720474 1.665833 2.15638587 2.0630681 1.02774024
    3 15 4 Mean 51.975 53.625 53.175 52.125 54.825 56.025 51.99
    STD 2.646932 3.14742 2.67978544 2.430278 4.09954266 2.69374275 2.97489496
    4 20 4 Mean 64.05 62.55 66.3 62.775 66.375 66.9 61.95
    STD 1.589025 1.627882 2.94108823 2.265226 2.65165043 1.6507574 1.73925271
    5 25 5 Mean 67.075 65.325 70.275 67.875 70.025 72.075 65.3
    STD 1.852363 1.982738 2.84421928 2.430278 3.06741386 4.61586937 1.97132217
    6 30 5 Mean 76.275 73.65 79.275 74.25 78.975 80.625 72.6
    STD 1.371587 1.573213 3.85762232 2.236068 2.39921862 2.3782872 1.6881943
    7 35 6 Mean 87.9 85.125 91.65 83.55 90.85 97.35 83.325
    STD 3.274905 2.378287 7.58122681 3.28253 4.74517299 4.79322438 2.67978544
    8 40 6 Mean 98.875 98.5 108.55 97.75 105.55 109.375 98
    STD 2.243045 3.391165 9.38216393 3.221025 5.61471282 11.3976423 3.61132478
    9 45 7 Mean 95.025 94.725 105.025 95.85 107.025 116.8 96.45
    STD 4.184047 3.708942 8.32453702 3.933828 7.31574596 11.5426744 4.86312657
    10 50 7 Mean 94.65 93.6 108.65 93.125 107.475 112.95 96.375
    STD 1.864135 2.202272 7.22284185 2.118864 4.89961733 9.4647005 3.10745072

    Table 6.  Statistical values of proposed AFSA and their comparison with other versions of AFSA for medium-sized instances.
    No. N M Stat. Proposed AFSA Modified 1 Modified 2 Modified 3-method 1 Modified 3-method 2 Modified 3-method 3
    1 60 8 Mean 65.9 64.65 69 65.45 69.8 72.35 66.45
    STD 1.449138 1.155903 2.47206616 1.403369 1.70293864 4.0828231 1.32182534
    2 80 8 Mean 117.35 112.2 148.5 113.65 155.2 164.8 128.1
    STD 8.3335 6.028635 29.3342803 7.742559 25.2005291 19.3249982 12.8101522
    3 100 9 Mean 110 112.8 133.9 113.9 147.1 141.3 121.25
    STD 3.146427 3.851407 12.8491245 4.677369 24.4151592 14.2150155 9.98123239
    4 120 9 Mean 141.65 148.2 195.85 146.75 218.1 254.9 167.3
    STD 5.869885 10.43245 37.7131351 18.29731 39.0894985 65.5802308 26.462972
    5 140 10 Mean 151.75 156.15 231.25 153.9 238.2 297.3 177
    STD 181 196.2 307.35 193.8 363.1 449.6 240.85
    6 160 10 Mean 17.02449 17.20982 59.1960256 9.724539 90.8407642 105.066064 26.5842999
    STD 177.95 175.15 299.65 181.45 326.7 456.3 217.45
    7 180 11 Mean 12.71143 15.90606 70.8876458 12.38604 46.1923515 94.8976642 23.2360472
    STD 3.274905 2.378287 7.58122681 3.28253 4.74517299 4.79322438 2.67978544
    8 200 11 Mean 203.85 213 310.3 204.5 393.2 580.55 240.6
    STD 18.0909 20.45863 33.4964343 11.19524 101.628736 99.4899353 26.4215821
    9 220 12 Mean 281.1 317.2 752.55 302.9 837.35 1138.65 402.9
    STD 60.7096 50.07949 116.875397 27.08403 141.488133 306.002637 58.3784587
    10 240 12 Mean 324.25 373.45 944.45 376.15 931.85 1702 442.95
    STD 47.46768 47.62379 231.365085 52.06942 162.443845 418.957038 66.6468512

    Table 7.  Statistical values of proposed AFSA and their comparison with other versions of AFSA for large-sized instances.
    No. N M Stat. Proposed AFSA Modified 1 Modified 2 Modified 3-method 1 Modified 3-method 2 Modified 3-method 3
    1 250 13 Mean 344.6 377.95 918.3 356.85 956.35 1705.35 481.45
    STD 56.49916 45.5689 195.281876 67.45412 166.979049 448.824146 65.4393061
    2 270 13 Mean 336.15 430.4 1142.9 410.5 1181.6 2809.5 515.65
    STD 43.07361 78.93943 193.103714 47.35387 157.181919 954.20147 80.3202099
    3 290 14 Mean 573.9 634.95 1753.05 566.2 1790.5 3777.8 866.9
    STD 118.7923 76.78341 138.85473 90.48732 292.947853 681.76899 118.181076
    4 310 14 Mean 432.8 495 1440.4 448.3 1586.95 4748.05 608.55
    STD 106.8431 80.23611 180.931202 79.73156 247.053689 1065.97672 78.5367748
    5 330 15 Mean 637.45 674.55 2187.55 699.15 2123.25 6812.7 985.8
    STD 79.19822 142.6311 339.222104 124.8408 317.492192 1292.51091 178.974579
    6 350 15 Mean 779.25 1068.15 2767.5 1060.1 3106.25 9727.05 1408.65
    STD 144.6231 131.5367 475.518664 148.3487 367.185864 804.673172 154.263852
    7 370 16 Mean 652.7 634.75 1965.45 689.65 2187.8 8790.5 927.1
    STD 133.5825 144.9935 257.429219 135.1053 309.22225 752.913858 208.926064
    8 390 16 Mean 993.9 1304.75 3357.75 1120.05 3575.9 11184.05 1705.4
    STD 117.9894 253.0275 637.852833 199.1058 375.70303 627.130744 325.749034
    9 410 17 Mean 866.35 1003.4 2927.1 983.95 2892.75 10599.05 1325.1
    STD 221.4315 201.1979 212.070193 207.7379 449.840666 611.269465 109.244069
    10 430 17 Mean 1178.25 1325.9 3723.55 1270.55 4417.7 12701.75 1798.85
    STD 269.3216 270.0505 541.578631 241.2811 561.6097 871.22099 266.459508


    Table 5 demonstrates that the proposed algorithm produced the best mean values of the proposed model in the first three instances, with values of 23.475, 43.275, and 51.975. Additionally, the algorithm achieved the best STD values in six instances, namely 0.711512, 1.589025, 1.852363, 1.371587, 2.243045, and 1.864135, for (N = 5, M = 3), (N = 20, M = 4), (N = 25, M = 5), (N = 30, M = 5), (N = 40, M = 6), and (N = 50, M = 7), respectively, for the small-sized problems. Tables 6 and 7 show that the proposed algorithm achieved the best mean values in six instances, such as 110 and 336.15 for (N = 100, M = 9) and (N = 270, M = 13), respectively, and the best STD values in four instances, such as 3.146427 and 79.19822 for (N = 100, M = 9) and (N = 330, M = 15), respectively, compared to the other algorithms for the medium- and large-sized problems. Summarized by the mean criterion, the results indicate that the superiority and efficiency of the proposed algorithm improve as the number of machines and jobs increases, and the low STD values imply more consistent performance of the algorithm.

    The best possible solutions for the proposed model are analyzed and illustrated in figures after 10 runs of each algorithm. The results show that the proposed algorithm converges to the best solutions found for each of the three problem sizes. The graph samples in Figures 4, 5, and 6 illustrate the performance of the proposed algorithm on two instances of each problem size, clarifying the convergence toward the best solution and demonstrating that the proposed algorithm converges stably compared to the other algorithms. Small-sized problems are illustrated in Figure 4, with (N = 10, M = 3) and (N = 15, M = 4); medium-sized problems are illustrated in Figure 5, with (N = 60, M = 8) and (N = 220, M = 12); and large-sized problems are illustrated in Figure 6, with (N = 290, M = 14) and (N = 370, M = 16). In all figures, the x-axis represents the number of iterations used in the comparison process, and the y-axis represents the objective function value as it approaches the minimum. As the number of jobs and iterations increases, the proposed algorithm has a notable impact on approaching the best values. As shown in Figure 4, the proposed algorithm has only a slight impact on getting close to the best values, whereas Figures 5 and 6 demonstrate a significant impact, confirming the effectiveness of the proposed algorithm. This is evident from the smaller minimum values of the objective function, which expanded the convergence range and accelerated the algorithm toward the best feasible solutions. Additionally, the update process is guided to enhance the global search ability. As a result, the convergence results and the quality of the solutions produced by the proposed algorithm improve as the number of jobs and iterations increases.

    Figure 4.  Performance of modified AFSA for small-size problems.
    Figure 5.  Performance of modified AFSA for medium-size problems.
    Figure 6.  Performance of modified AFSA for large-size problems.

    c) Comparison of results based on the Wilcoxon signed-ranks test

    A common practice in computational intelligence is to use statistical tests as part of the performance evaluation process of a new method [54]. Therefore, the Wilcoxon signed-rank test [55] is utilized in this research since it is non-parametric, meaning it does not assume a normal distribution of the data, and it is appropriate for small sample sizes. It is specifically formulated for pairwise comparisons to determine significant differences between the proposed algorithm and the algorithms under comparison, which is advantageous when evaluating two related samples. It is also robust, safe, and easy to use. Tables 11, 12, and 13 summarize the results for small, medium, and large instances, respectively, comparing the proposed algorithm with the other AFSA versions.

    Table 11.  Results of the Wilcoxon signed-rank test for small-size problems.
    Problems 1 2 3 4 5 6 7 8 9 10
    Proposed VS p-value 9.77E-02 1.95E-02 7.81E-02 1.86E-01 3.16E-01 1.39E-01 3.22E-01 4.92E-01 3.22E-01 2.93E-02
    AFSA R+ 14.5 19 50.5 45 47 45 27 45 49 52
    R- 40.5 36 4.5 10 8 10 28 10 6 3
    Ind. -1 -1 1 1 1 1 -1 1 1 1
    Proposed VS
    Modified 1
    p-value 0.021484 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03
    R+ 52 55 55 55 55 55 55 55 55 55
    R- 3 0 0 0 0 0 0 0 0 0
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS
    Modified 2
    p-value 0.578125 0.125 1.05E-01 8.28E-01 9.80E-01 9.96E-02 4.80E-01 9.22E-01 3.75E-01 3.71E-02
    R+ 33.5 14.5 47 40 40 52 45 45 45 52
    R- 21.5 40.5 8 15 15 3 10 10 10 3
    Ind. 1 -1 1 1 1 1 1 1 1 1
    Proposed VS
    Modified 3
    method 1
    p-value 0.001953 0.001953 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03
    R+ 55 55 55 55 55 55 55 55 55 55
    R- 0 0 0 0 0 0 0 0 0 0
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS
    Modified 3
    method 2
    p-value 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953
    R+ 55 55 55 55 55 55 55 55 55 55
    R- 0 0 0 0 0 0 0 0 0 0
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS
    Modified 3
    method 3
    p-value 0.375 0.003906 0.003906 0.013672 0.009766 0.001953 0.009766 0.019531 0.005859 0.009766
    R+ 47 54 54.5 52 54 55 52 52 54 52
    R- 8 1 0.5 3 1 0 3 3 1 3
    Ind. 1 1 1 1 1 1 1 1 1 1

    Table 12.  Results of the Wilcoxon signed-rank test for medium-size problems.
    Problems 1 2 3 4 5 6 7 8 9 10
    Proposed VS AFSA

    p-value 1.00E+00 3.20E-01 2.71E-01 1.56E-02 2.34E-02 1.17E-02 1.13E-01 8.59E-01 9.49E-01 5.16E-01
    R+ 32 50.5 49 18.5 14.5 14.5 30.5 41.5 42.5 26.5
    R- 23 4.5 6 36.5 40.5 40.5 24.5 13.5 12.5 28.5
    Ind. 1 1 1 -1 -1 -1 1 1 1 -1
    Proposed VS Modified 1

    p-value 1 2.34E-01 2.85E-01 2.54E-02 2.54E-02 9.77E-02 4.16E-01 3.91E-03 1.56E-02 1.95E-03
    R+ 36.5 45 50.5 52 54 50.5 40 54.5 54 55
    R- 18.5 10 4.5 3 1 4.5 15 0.5 1 0
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS Modified 2 p-value 0.0625 0.449219 7.93E-01 2.73E-01 3.11E-01 6.64E-02 4.30E-02 1.95E-01 6.45E-01 9.77E-02
    R+ 47.5 40 42.5 30.5 45 23 14.5 30.5 47 30.5
    R- 7.5 15 12.5 24.5 10 32 40.5 24.5 8 24.5
    Ind. 1 1 1 1 1 -1 -1 1 1 1
    Proposed VS Modified 3
    method 1
    p-value 0.5 0.1875 1.95E-01 3.52E-02 5.86E-03 5.86E-03 6.25E-02 1.17E-02 3.91E-03 1.95E-03
    R+ 37 48.5 48.5 53 54 54 49 54 54 55
    R- 18 6.5 6.5 2 1 1 6 1 1 0
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS Modified3
    method 2
    p-value 0.5 0.128906 0.015625 0.015625 0.001953 0.001953 0.003906 0.007813 0.001953 0.001953
    R+ 40.5 50.5 54 53 55 55 54.5 53 55 55
    R- 14.5 4.5 1 2 0 0 0.5 2 0 0
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS Modified 3
    method 3
    p-value 1 0.6875 0.929688 0.039063 0.0625 0.001953 0.027344 0.46875 0.425781 0.080078
    R+ 36.5 41.5 42.5 23 19 0 19 37 42.5 54
    R- 18.5 13.5 12.5 32 36 55 36 18 12.5 1
    Ind. 1 1 1 -1 -1 -1 -1 1 1 1

    Table 13.  Results of the Wilcoxon signed-rank test for large-size problems.
    Problems 1 2 3 4 5 6 7 8 9 10
    Proposed VS AFSA p-value 1.93E-01 1.37E-02 1.31E-01 1.93E-01 6.25E-01 1.95E-03 7.70E-01 5.86E-03 1.60E-01 3.75E-01
    R+ 45 54 49 49 45 55 34 54 52 45
    R- 10 1 6 6 10 0 21 1 3 10
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS Modified 1 p-value 0.001953 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03
    R+ 55 55 55 55 55 55 55 55 55 55
    R- 0 0 0 0 0 0 0 0 0 0
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS Modified 2 p-value 0.769531 0.005859 9.22E-01 4.92E-01 2.75E-01 3.91E-03 4.32E-01 1.93E-01 5.57E-01 6.95E-01
    R+ 49 54 45 45 45 54 49 49 45 45
    R- 6 1 10 10 10 1 6 6 10 10
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS Modified 3 method 1 p-value 0.001953 0.001953 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03 1.95E-03
    R+ 55 55 55 55 55 55 55 55 55 55
    R- 0 0 0 0 0 0 0 0 0 0
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS Modified 3 method 2 p-value 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953
    R+ 55 55 55 55 55 55 55 55 55 55
    R- 0 0 0 0 0 0 0 0 0 0
    Ind. 1 1 1 1 1 1 1 1 1 1
    Proposed VS Modified 3 method 3 p-value 0.001953 0.001953 0.001953 0.001953 0.001953 0.001953 0.013672 0.001953 0.001953 0.001953
    R+ 55 55 55 55 55 55 54 55 55 55
    R- 0 0 0 0 0 0 1 0 0 0
    Ind. 1 1 1 1 1 1 1 1 1 1


    Tables 11, 12, and 13 present the results of the Wilcoxon signed-rank test obtained for the proposed algorithm, AFSA, Modified 1, Modified 2, Modified 3-method 1, Modified 3-method 2, and Modified 3-method 3. The p-value is the assessment index between zero and one, evaluated at a significance level of 0.05. R+ represents the total rank sum indicating that the first method is superior to the second one, while R− represents the total rank sum indicating that the second method is superior to the first one. The indicator (Ind) denotes the difference in performance between the two algorithms: if Ind = +1, the proposed modified AFSA is superior to the other algorithm; if Ind = -1, the other algorithm is superior to the proposed modified AFSA; and if Ind = 0, the performance of the two algorithms is equal.
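    As a reproducibility aid, the pairwise test and the R+/R− rank sums can be computed with SciPy as in the sketch below; the two vectors of ten run results are illustrative stand-ins, not values from the tables.

```python
import numpy as np
from scipy.stats import wilcoxon, rankdata

# Ten paired run results for one instance; illustrative stand-ins, not table values.
proposed   = np.array([23.25, 23.25, 24.00, 23.25, 24.75, 23.25, 23.25, 24.00, 23.25, 23.25])
competitor = np.array([23.50, 24.75, 24.50, 25.50, 25.00, 23.50, 24.00, 24.75, 23.50, 24.75])

stat, p_value = wilcoxon(proposed, competitor)   # two-sided signed-rank test

# R+ / R- as reported in the tables: rank the absolute non-zero differences and sum
# the ranks of positive (proposed better, i.e., smaller) and negative differences.
diff = competitor - proposed
nz = diff[diff != 0]
ranks = rankdata(np.abs(nz))
r_plus, r_minus = ranks[nz > 0].sum(), ranks[nz < 0].sum()
print(p_value, r_plus, r_minus)
```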

    Subsequently, Table 11 shows that the p-value in most pairwise comparisons is lower than the significance level of 0.05. For instance, the p-values in all ten instances of the comparison between the proposed algorithm and Modified 1 are lower than 0.05, which shows that the proposed algorithm outperforms Modified 1. The comparison results based on the Wilcoxon signed-rank test demonstrate that, except for the results against AFSA for small-size problems and the results against AFSA and Modified 3-method 3 for medium-size problems, the proposed algorithm outperforms the other compared algorithms with significant differences in performance. Furthermore, the proposed algorithm outperforms all compared algorithms with significant differences for the large-sized problems in each pairwise comparison.

    To sum up the discussion above, the proposed algorithm's effectiveness stems from leveraging the advantages of AFSA, including its multi-stage solution updates and its exploration of the search domain. The algorithm strengthens both the exploitation and exploration phases by utilizing the best solution extracted from AFSA, and, by adopting improved visual and step parameters, it balances global search capability against convergence speed, which allows AFSA to be applied effectively to machine scheduling problems. However, the proposed algorithm has limitations, particularly in achieving optimal results for machines 4 to 10; thus, improvements are necessary for solving small-sized job and machine problems. While the stability of the proposed algorithm is commendable, further enhancements are still required.
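    As one illustration of how the visual and step parameters can trade global search against convergence speed, the sketch below shrinks both values as the iterations progress. The exponential decay law and its constants are assumptions for illustration only, not the exact update rule of the proposed algorithm.

```python
# Hypothetical adaptive schedule for AFSA's "visual" (perception range) and "step"
# (move length): large values early favour exploration, small values late favour exploitation.
import math

def adaptive_visual_step(t, t_max, visual0=2.5, step0=0.8, visual_min=0.5, step_min=0.1):
    """Decay visual and step from their initial values towards a floor as t approaches t_max."""
    decay = math.exp(-3.0 * t / t_max)                # assumed exponential decay profile
    visual = visual_min + (visual0 - visual_min) * decay
    step = step_min + (step0 - step_min) * decay
    return visual, step

# Example: both parameters contract over 100 iterations.
for t in (0, 50, 100):
    print(t, adaptive_visual_step(t, 100))
```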

    This research addresses the unrelated parallel machine scheduling problem (UPMSP) with the multi-objective of minimizing makespan and total tardiness. A fuzzy programming model is presented and transformed into a deterministic model using the total integral value defuzzification method. The proposed algorithm is developed by adding aspiration behavior to improve the performance of AFSA. Because the visual and step parameters are crucial for tuning AFSA, improved visual and step parameters are adopted in the modified algorithm. In addition, transforming the continuous AFSA solution into a discrete one enables the algorithm to handle the discrete UPMSP model.

    Three sets of problem sizes, each containing 10 randomly generated instances, are used to evaluate the algorithm; since the objective is minimized, the best performance on each test problem corresponds to the smallest objective value obtained. The proposed algorithm performs best on the first three instances of the small-size problems (up to 50 jobs and 7 machines), whereas it achieves the best performance on nine instances each of the medium-size problems (up to 240 jobs and 12 machines) and the large-size problems (up to 430 jobs and 17 machines). According to the computational results, the proposed algorithm improves the exploitation capability of AFSA and outperforms both AFSA and the other modified AFSA variants in identifying the best solutions for almost all test problems, and the Wilcoxon signed-rank test supports the conclusion that the proposed algorithm significantly outperforms the other algorithms. Given this performance, the proposed modified algorithm could be applied in the future to more complex scheduling problems, such as job shop or flow shop scheduling, as well as to additional performance measures.
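    The sketch below illustrates, under stated assumptions, the two modelling steps mentioned above: defuzzifying triangular fuzzy processing times with a total λ-integral value (the commonly used Liou–Wang form is assumed here) and decoding a continuous AFSA position into a discrete UPMSP schedule with its makespan and total tardiness. The smallest-position-value ordering, the greedy earliest-completion machine assignment, and the toy data are illustrative choices, not necessarily the paper's exact encoding.

```python
# Hypothetical decoding of a continuous AFSA position into a UPMSP schedule and its objectives.
import numpy as np

def total_integral_value(a1, a2, a3, lam=0.5):
    """Assumed Liou-Wang total lambda-integral value of a triangular fuzzy number (a1, a2, a3)."""
    return 0.5 * (lam * (a2 + a3) + (1 - lam) * (a1 + a2))

def decode_and_evaluate(position, proc_time, due):
    """position: real vector, one entry per job; proc_time[m, j]: crisp time of job j on
    machine m (already defuzzified); due[j]: due date of job j."""
    n_machines, n_jobs = proc_time.shape
    order = np.argsort(position)              # SPV rule: smaller position value -> scheduled earlier
    load = np.zeros(n_machines)               # current completion time of each machine
    total_tardiness = 0.0
    for j in order:
        finish = load + proc_time[:, j]       # completion time of job j on every machine
        m = int(np.argmin(finish))            # greedy choice: earliest-finishing machine
        load[m] = finish[m]
        total_tardiness += max(0.0, load[m] - due[j])
    return load.max(), total_tardiness        # (makespan, total tardiness)

# Toy instance: 3 unrelated machines, 6 jobs, triangular fuzzy times defuzzified first.
rng = np.random.default_rng(0)
fuzzy = np.sort(rng.uniform(2, 12, size=(3, 6, 3)), axis=2)          # (a1, a2, a3) per machine/job
crisp = total_integral_value(fuzzy[..., 0], fuzzy[..., 1], fuzzy[..., 2])
due_dates = rng.uniform(10, 30, size=6)
fish_position = rng.standard_normal(6)                               # one AFSA "fish"
print(decode_and_evaluate(fish_position, crisp, due_dates))
```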

    Azhar Mahdi Ibadi conceptualized the research methodology and led the development of the algorithm and its implementation, focusing on results analysis and writing the initial draft. Rosshairy Abd Rahman led the supervision and validation, thoroughly checking the manuscript for significant conceptual content. All authors participated in carefully editing the manuscript, and all authors have read and approved the final version for publication.

    The authors are deeply grateful to the School of Quantitative Sciences, Universiti Utara Malaysia, for their invaluable support in facilitating this research.

    The authors declare no conflicts of interest.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.



    © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).