1.
Introduction
A distributed real-time system (DRTS) comprises a set of geographically distributed, heterogeneous processors of varying speed that are interconnected by fast communication networks. It has become an attractive platform for running high-performance parallel applications. A parallel application can be split into multiple tasks that execute simultaneously on different processors in the system. To make optimal use of such a system, the key challenge is to build a task assignment model that maps each task to the most appropriate processor for parallel execution. Mall [1] and Jin and Tan [2] explain that tasks in a distributed system are handled in two ways: in a hard real-time system or in a soft real-time system. In a hard real-time system, jobs are created and results are produced without any time lag; missile systems and satellite systems are examples in which no deadline may be missed. A soft real-time system, by contrast, tolerates results delivered after the pre-defined deadline; web searching is an example. Singh et al. [3] note that the complexity of distributed environments calls for systems in which multiple machines are connected and work together to optimize a common goal. Typically, throughput, processor utilization, and task waiting time are the performance scales for the task assignment problem. If scheduling is not performed carefully, processors may take far longer than necessary to complete their computations. DNS (domain name system), which translates domain names to IP addresses on the Internet, is a simple example of a DRTS [3]. Distributed systems and multiprocessors are increasingly successful in real-time applications because they offer fault tolerance, very fast response times, and favourable cost. Real-time task assignment in a distributed environment consists of two sub-problems: first, partitioning the set of tasks, and second, assigning the tasks to suitable processors. Based on when scheduling decisions are made, assignment policies in DRTS fall into two classes, static and dynamic. In static assignment the allocation of a task is fixed in advance, whereas in dynamic assignment a task is allotted at the time of its arrival or while it is running. To obtain a simple and fast method, approximations are required that yield near-optimal performance in a feasible amount of time. The present work focuses on static task assignment in a heterogeneous environment, which offers diverse design capabilities. This allocation technique can be applied to the large class of real-time applications whose execution can be planned deterministically. Since the static method has no run-time overhead and can be built with very sophisticated algorithms, it is preferable to the dynamic method in this setting.
Compared with centralized systems, a DRTS is far more intricate, and excessive complexity increases the likelihood of system failures. The task allocation technique and system reliability therefore play a key role in the efficient utilization of such multiprocessing systems. Reliability of the system is defined as the probability that a task executes with every involved component in working condition. The difficulty of the assignment problem in DRTS is heightened by several factors, such as the variability of task execution times, the distinct nature of individual tasks, inter-task dependencies, and the challenge of load balancing. Many articles pursue the fundamental objective of reducing the total communication and execution time as a performance parameter. To decrease operational costs, increase capacity, and optimize resource utilization in data centres, Jeyakrishnan and Sengottuvelan [4] developed the BSO algorithm for resource allocation and load balancing. The CSS task scheduling technique was developed by Xavier and Annadurai [5] to avoid local convergence and find the most suitable VM for each task; it reduced the overall computing cost while increasing the throughput of the cloud system. Huang et al. [6] presented a task scheduling paradigm for task-VM mapping in cloud environments using several discrete PSO variants. Taking inspiration from the way crocodiles hunt, Abualigah et al. [7] developed the Reptile Search Algorithm (RSA), a new meta-heuristic optimizer whose primary goal is to provide strong search strategies that deliver higher-quality solutions to challenging real-world problems.
In this article, a novel PSO-GA based technique is set up to solve the task allotment problem in a heterogeneous static environment. The GA framework can deliver promising results in a broad and complex solution space, making it well suited to the scheduling problem. The PSO algorithm, on the other hand, offers easy implementation and strong global search capability. Despite these advantages, the original PSO suffers from a slow convergence rate, making it unsuitable for tackling assignment problems directly. Therefore, PSO is hybridized here with genetic operators to overcome this shortcoming. To reach an optimal solution and prevent premature convergence, the GA iterates continuously until convergence is achieved. In the current technique, the convergence of GA is further improved by new genetic operations and a new population initialization method. The proposed approach proceeds in two phases. In the first phase, a hybrid algorithm HPSOGAK, a union of particle swarm optimization (PSO), genetic algorithm (GA), and k-means clustering, is introduced. HPSOGAK forms clusters of tasks so that heavily communicating tasks are grouped together, thereby decreasing the inter-task communication time (ITCT). In the second phase, GA is employed to assign the task-clusters to processors efficiently, aiming to minimize execution time (ET). The main contributions of the proposed technique in a distributed heterogeneous system are outlined below:
● Proposing a new task allocation model in distributed environment that takes into consideration the execution costs of tasks assigned to processors as well as the communication cost between tasks.
● In the context of task allocation onto worthy processors, PSO and GA based algorithms are presented.
● To improve the performance of GA, new crossover and mutation approaches are introduced.
● The proposed algorithm's performance is evaluated against assessment criteria such as response time, system cost, system reliability, performance improvement ratio (PIR %), efficiency, and resource utilization, and it consistently delivers superior outcomes in these assessments.
● The run-time complexity of the proposed method is analyzed.
The remainder of this article is organized as follows: the work related to this area is presented in Section 2. The definitions and terminology used throughout the work are discussed in Section 3. The model of the task allocation problem is explained in Section 4, and Section 5 provides more information on the proposed model's methodology. Four examples, two crisp and two fuzzy, are solved in Section 6 by the proposed technique, and Section 7 compares the performance of the suggested model with other existing techniques. Conclusions and recommendations for further research on this task allocation problem are provided in Section 8, and the notation used throughout the paper is defined in the nomenclature section.
2.
Related work
A DRTS must produce results that are correct in both the logical and the temporal sense, which distinguishes it from other types of systems. It can be roughly divided into three parts: the environment, the controller, and the controlled object. The controller receives input from the environment and, as output, provides information to the controlled object. In a DRTS, task allotment is an essential phase: the primary objective of a real-time system is to establish an allocation model that meets all deadlines while accounting for execution time. Over the years, a comprehensive study of task assignment in multiprocessing environments has led to many efficient scheduling mechanisms for distributed systems. Davis and Burns [8] and Zhang et al. [9] contributed work on multiprocessing systems. Casanova et al. [10] note that the task allotment problem belongs to the category of NP-complete problems; it involves a number of tasks, each of which executes on a single processor and communicates with other tasks. In this article, three algorithms, PSO, GA, and k-means clustering, are integrated to resolve the task allotment problem. PSO and GA are both driven by processes found in nature. Inspired by the movements of bird flocks and schooling fish, Kennedy and Eberhart [11] developed the PSO technique. The particles of this computational technique have two characteristics, velocity and position, from which each particle's personal best and the swarm's global best positions are determined. Like PSO, GA [12] is another well-known optimization technique. It is a metaheuristic approach inspired by Charles Darwin's theory of "survival of the fittest": it mimics natural selection, in which the fittest individuals are chosen for breeding to produce the next generation of offspring. Clustering is the process of partitioning a set of data into smaller groups so that objects in the same group are more similar to one another than to those in other groups. It is a heuristic approach used to identify groups of similar objects in datasets with two or more variables, and it can be classified into two types, soft clustering and hard clustering. K-means is a popular hard clustering technique in which k specifies how many clusters must be formed during the procedure. A task is an event that dictates a course of action; when a task occurs, the system processes it and responds accordingly. Periodic, aperiodic, and sporadic tasks are the three categories of tasks. In a distributed environment, task assignment is one of the fundamental and exacting problems, and it plays a key role in using resources efficiently and economically. Allotting tasks to proper processors means arranging the tasks so that several performance measures, such as system cost (CS), system reliability (RS), and response time (RT), are optimized. The algorithms that achieve this optimization fall into two types, dynamic and static priority algorithms: in the dynamic priority model the priorities change at run time, while in the static priority model the priorities are assigned in advance and remain fixed. This article focuses on static prioritization.
Various researchers have addressed the task allotment problem using different techniques; some have developed well-organized task scheduling algorithms using heuristic and meta-heuristic approaches. A detailed study of the various techniques used to solve assignment problems is shown in Table 1.
The meta-heuristic techniques GA and PSO are based on the principles of biological evolution and swarm behaviour, respectively. These algorithms have been applied to optimization problems across various domains, such as remote monitoring systems, energy-storage optimization, industrial engineering, and more. There have been numerous attempts to use these metaheuristics to solve the task allocation problem under various assumptions and limitations. For instance, in [29], the authors proposed a hybrid PSOGA model to enhance task scheduling in cloud computing, utilizing GA to refine solutions within PSO through genetic operations. A significant drawback of this approach was that GA tends to be slow and computationally intensive, owing to the evaluation of many functions and the slow convergence of its operators. Unlike PSOGA, which relies on single-point crossover, the proposed method employs novel crossover and mutation techniques to ensure a more diverse set of offspring. Additionally, a dynamic inertia weight balances the local and global search components of PSO. This approach combines the strengths of PSO and GA, achieving faster convergence and superior optimization results. Table 2 gives an overview of task scheduling approaches based on PSO, GA, and their hybridization with other meta-heuristic or heuristic techniques, along with their salient characteristics.
Here, we discuss some of the works aimed at maximizing reliability, without repeating the material above. Reliability, a crucial factor in raising a system's performance, is used to examine fault-tolerant designs under significant unpredictability. If a program is carefully allocated to suitable processors while considering the failure probabilities of both communication links and processors, a distributed system can achieve enhanced reliability in executing the program. Processor outages and communication errors affect system reliability and the quality of service offered to users. To raise the reliability of distributed systems, Attiya and Hamam [46] proposed a simulated annealing model in a heterogeneous environment and compared it to the B&B technique. Minghua et al. [47] proposed a model that approximates the upper and lower bounds of RAND 5 with a single run of the model. Kang et al. [48] suggested the HBMO algorithm, a powerful combination of simulated annealing and GA, to increase the reliability of distributed systems. In addition, Donight et al. [49] devised a novel method for fuzzy reliability that uses the beta distribution as the membership function. In intelligent transportation systems, Noori and Jenab [50] established a fuzzy-reliability-based model for rail vehicles with speed sensors. Kumar et al. [19] addressed a clustering-based mathematical approach for obtaining optimal system reliability and cost by assigning tasks to processors.
Although the use of multiprocessors was expected to offer suitable solutions to the task scheduling problem in DRTS, it introduced several challenges. When a failure occurs in a DRTS, it should be noticed immediately, and the distributed operation rolls back to the most recent checkpoint it had reached. Because of such failures, uncertain results can be obtained. To cope with this, several authors have developed algorithms for computation under vagueness; some obtained their best results and others failed. Table 3 describes the work of these authors together with its drawbacks.
3.
Definitions and terminology
Appropriate arrangement of tasks onto suitable processors can enhance the reliability of the system, and various performance specifications affect the scheduling strategy. The fuzzy execution time $\overset{\frown}{e}_{i,k}$, where $1 \leqslant i \leqslant r$ and $1 \leqslant k \leqslant m$, is determined by the task's characteristics and the processor's attributes. If tasks $t_i$ and $t_j$ reside on separate processors, $\overset{\frown}{c}_{i,j}$ denotes the fuzzy communication time between them. In this work it is assumed that the $\overset{\frown}{e}_{i,k}$ and $\overset{\frown}{c}_{i,j}$ times are available; they are arranged in matrix form as $F{E_T}M$ and $FIT{C_T}M$ respectively, and through defuzzification $F{E_T}M$ and $FIT{C_T}M$ are converted into crisp matrices.
3.1. Defuzzification
Robust's ranking approach has been used in the present work to defuzzify the FET and FITCT values. If $(a_\alpha^L, a_\alpha^U)$ denotes the $\alpha$-cut of a fuzzy (triangular/trapezoidal) communication/execution time, the defuzzification is performed as:
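The ranking index itself is not reproduced above. As a reference, a commonly used form of Robust's ranking index, stated here under the assumption that the standard definition over $\alpha$-cuts is followed, is
\[
R(\tilde a) \;=\; \int_0^1 \frac{1}{2}\left(a_\alpha^L + a_\alpha^U\right) d\alpha .
\]
Under this assumption, for a triangular fuzzy number $\tilde a = (a_1, a_2, a_3)$ with $a_\alpha^L = a_1 + \alpha(a_2 - a_1)$ and $a_\alpha^U = a_3 - \alpha(a_3 - a_2)$, the index reduces to $(a_1 + 2a_2 + a_3)/4$.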
After defuzzification, the crisp communication times are stored in the matrix $IT{C_T}M = [c_{i,j}]$ and the crisp execution times in the matrix ${E_T}M = [e_{i,k}]$.
Moreover, the following are some key terms that will be used all across the article:
3.2. Communication time (CT)
If tasks reside on different processors, the total time needed to transmit data between tasks $t_i$ and $t_j$ is known as the communication time. The total inter-task communication time $T_{IT{C_T}}$ may be expressed as:
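The equation itself is omitted above. A sketch of its usual form, consistent with the explanation in the next paragraph and using the binary assignment variables $x_{i,k}$ of Section 4 ($x_{i,k} = 1$ if task $t_i$ is placed on processor $p_k$, and $0$ otherwise), is
\[
T_{IT{C_T}} \;=\; \sum_{i=1}^{r}\sum_{j>i}^{r}\sum_{k=1}^{m}\sum_{\substack{l=1 \\ l\neq k}}^{m} d_{k,l}\, c_{i,j}\, x_{i,k}\, x_{j,l} .
\]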
where the inter-processor distance $d_{k,l}$ represents the communication time per unit of data transferred between processors $p_k$ and $p_l$. When tasks $t_i$ and $t_j$ are assigned to different processors, their communication time is calculated as $d_{k,l} \cdot \overset{\frown}{c}_{i,j}$, with $d_{k,l} = 0$ if $k = l$ [39]. Therefore, under this approximation, the communication time for tasks on different processors corresponds directly to the amount of data exchanged between them.
3.3. Execution time (ET)
The execution time is the time required to execute task $t_i$ on processor $p_k$. The total execution time $T_{E_T}$ is calculated as:
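The omitted formula presumably sums the execution times of the tasks over their assigned processors; a sketch using the same assignment variables as above is
\[
T_{E_T} \;=\; \sum_{k=1}^{m}\sum_{i=1}^{r} e_{i,k}\, x_{i,k} .
\]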
3.4. System cost (CS)
In terms of time units, the cost of the system is the sum of the total execution and communication times. It can be determined as:
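The equation is omitted above; since the system cost is stated to be the sum of the total execution and communication times, a sketch consistent with the two preceding definitions is
\[
C_S \;=\; T_{E_T} + T_{IT{C_T}} \;=\; \sum_{k=1}^{m}\sum_{i=1}^{r} e_{i,k}\, x_{i,k} \;+\; \sum_{i=1}^{r}\sum_{j>i}^{r}\sum_{k=1}^{m}\sum_{\substack{l=1 \\ l\neq k}}^{m} d_{k,l}\, c_{i,j}\, x_{i,k}\, x_{j,l} .
\]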
3.5. Response time (RT)
The amount of computation to be performed by each processor determines the RT. Response time is inversely proportional to the stability of the system. It can be calculated by using the following:
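The equation is omitted above. In comparable allocation models the response time is taken as the completion time of the most heavily loaded processor; the form below is therefore an assumption about the omitted equation rather than a reproduction of it:
\[
RT \;=\; \max_{1 \leqslant k \leqslant m}\left\{ \sum_{i=1}^{r} e_{i,k}\, x_{i,k} \;+\; \sum_{i=1}^{r}\sum_{j=1}^{r}\sum_{\substack{l=1 \\ l\neq k}}^{m} c_{i,j}\, x_{i,k}\, x_{j,l} \right\}.
\]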
3.6. System reliability (RS)
The probability that each engaged component will be operational during the execution process is referred to as RS. The stability of a system is improved by having a reliable system. The following equation is used to determine the RS:
where $R_k$ and $R_{k,l}$ are the reliability of processor $p_k$ and of the link $kl$, respectively. Over a given time interval $t$, the reliability of processor $p_k$ follows from a Poisson failure process,
where the failure rate $\lambda_k$ remains constant throughout the procedure. In terms of the execution times of the tasks $t_i$ assigned to processor $p_k$, the reliability of $p_k$ can be computed as
Correspondingly, over a specified time interval $t$, the reliability of the link $kl$ is $e^{-\mu_{k,l} t}$, and in terms of communication time the reliability of the link $kl$ is given by
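The individual expressions are omitted above. They are consistent with the Shatz-style reliability model (cf. [61]); a sketch of their usual forms, under the assumption that processor and link failures are independent Poisson processes with constant rates $\lambda_k$ and $\mu_{k,l}$, is
\[
R_k(t) = e^{-\lambda_k t}, \qquad
R_k = \exp\!\left(-\lambda_k \sum_{i=1}^{r} e_{i,k}\, x_{i,k}\right), \qquad
R_{k,l} = \exp\!\left(-\mu_{k,l} \sum_{i=1}^{r}\sum_{j=1}^{r} c_{i,j}\, x_{i,k}\, x_{j,l}\right),
\]
so that the system reliability of Eq (6) takes the product form
\[
R_S \;=\; \prod_{k=1}^{m} R_k \prod_{1 \leqslant k < l \leqslant m} R_{k,l} .
\]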
3.7. Performance improvement ratio (PIR%)
The PIR measures the improvement achieved by the suggested algorithm, determined by the reduction in response time. It is one of the essential parameters that ascertain the effectiveness of the proposed algorithm. It can be computed as per Eq (7), as follows:
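A sketch of Eq (7), consistent with the definitions given in the next sentence, is
\[
PIR\,\% \;=\; \frac{RT_i - RT_{proposed}}{RT_i} \times 100 .
\]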
where $RT_{proposed}$ and $RT_i$ represent the RT obtained by the proposed algorithm and by the $i$th algorithm, respectively.
4.
Model of tasks allocation problem
DRTS comprises a diverse set of heterogeneous processing units interconnected via advanced networks. These processing units and communication networks may possess different resource capabilities and varying levels of bandwidth utilization. This model can perform all functions simultaneously and communicate at any point during the program's execution. The following assumptions are employed to generate a model for solving the task allocation problem.
● In the real-time structure, every processor has a different processing capability.
● On various processors, a task's execution time may vary.
● The allocated task persists on the processor while the program is executing.
● RS, RT, and CS are determined by the times of communication and execution.
● Heterogeneous processors are used in this DRTS paradigm. Consequently, the processors may be restricted by different memory and compute units and they have varying failure rates and processing times. Additionally, the communication links' failure rates may vary.
● While executing, a module consumes a certain amount of its designated processor's compute resources. It is possible for two modules that are running on different processors to communicate and incur a certain amount of intermodule communication time in terms of data quantity.
In this article, the task allocation problem is considered, and timing-constrained tasks are placed on processors in a way that optimizes RS, CS, and RT under the following conditions:
● In DRTS all the tasks are non-preemptive.
● At any given time, each processor executes a single task.
The objective of the task assignment problem in DRTS, where ${E_T}M$ and $IT{C_T}M$ are assumed to be given, is to minimize the RT and maximize the RS. The task allocation problem with system resource constraints is formulated as follows:
Such that,
Each module must be allotted to precisely one processor, according to constraint (8), and the $x_{j,l}$ must be binary variables, according to constraint (9). With a quadratic objective function, the aforementioned formulation is an NP-hard 0–1 programming problem.
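For reference, a sketch of a 0–1 formulation consistent with this description is given below; the objective shown is the quadratic system-cost term, whereas the paper additionally optimizes RT and RS:
\[
\min \;\; \sum_{i=1}^{r}\sum_{k=1}^{m} e_{i,k}\, x_{i,k} \;+\; \sum_{i=1}^{r}\sum_{j>i}^{r}\sum_{k=1}^{m}\sum_{\substack{l=1 \\ l\neq k}}^{m} d_{k,l}\, c_{i,j}\, x_{i,k}\, x_{j,l}
\]
subject to
\[
\sum_{k=1}^{m} x_{j,k} = 1, \quad 1 \leqslant j \leqslant r, \qquad (8)
\]
\[
x_{j,l} \in \{0,1\}, \quad 1 \leqslant j \leqslant r, \; 1 \leqslant l \leqslant m. \qquad (9)
\]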
5.
The proposed method
This section elucidates the developed method for addressing the task scheduling problem, aiming to minimize CS and RT while maximizing RS. This paper introduces a novel scheduling method built by integrating a PSO-based swarm technique with GA and k-means. In recent decades, GA and PSO have been widely utilized in a number of scientific domains. The study of computing systems motivated by collective intelligence is known as swarm intelligence: large numbers of homogeneous agents in an environment work together to form collective intelligence, as in flocks of birds, schools of fish, and ant colonies. Swarm intelligence can be used to find good solutions to problems such as task scheduling. PSO is used in this paper because it performs well on problems of all sizes. The GA search approach can find an approximate solution to an optimization or search problem, but because it explores a very large set of candidate solutions, it can take longer to reach a solution. Task scheduling is the procedure of selecting which task should be carried out at each moment in time; the assignment algorithm prioritizes each active task at run time and allots the highest-priority task to a processor. In the present study, task-clusters are formed using the hybrid particle swarm optimization genetic algorithm k-means (HPSOGAK), and their assignment onto appropriate processors is done using a genetic algorithm. The main drawback of the traditional PSO algorithm is the difficulty of balancing local and global search, which leads to local optima in a large number of optimization problems. Additionally, the standard PSO algorithm suffers from slow convergence, rendering it unsuitable for directly addressing assignment problems. Numerous attempts have been made to mitigate these issues, as detailed in Section 2. As an illustration, the approach in [30], a hybrid PSO-based method for task scheduling in heterogeneous systems, placed restrictions on the operators' actions, potentially making parts of the search space unreachable. The motivation behind developing the proposed HPSOGAK approach is to overcome the drawbacks and limitations of the standard PSO algorithm. This work combines PSO and GA to enhance the capabilities of conventional PSO, with the goal of effectively tackling the crucial task scheduling problem in heterogeneous computing systems. By combining the exploration capability of PSO with the refinement strength of GA, the proposed hybrid method balances the search process, leading to improved optimization results. Even so, determining the optimal solution using genetic operators alone is difficult; therefore, the convergence rate of GA is improved by new population initialization, encoding, and genetic operations. This article introduces new crossover and mutation techniques that help GA perform better. In addition to initializing particles with HPSOGAK, a global-local best inertia weight is used to balance the local and global search components of the standard PSO algorithm. In the hybrid HPSOGAK algorithm, the k-means clustering technique is used to produce a finite number of task-clusters, as the concept of k-means is quite straightforward and easy to implement. The fundamental disadvantage of the k-means clustering method is that it may produce empty clusters; hence, the PSO-GA approach is employed to address this shortcoming.
After using the PSO-GA approach, non-empty clusters of tasks are obtained within a low number of iterations of k-means. Now, the fundamental characteristics of PSO and GA are as follows.
5.1. Particle swarm optimization
In an effort to address optimization problems, Kennedy and Eberhart [11] introduced the basic version of PSO. It is a meta-heuristic optimization technique that draws inspiration from animal social behaviour and information-sharing strategies, such as those of soaring birds and schooling fish. A particle swarm is a collection of particles that can move around in the problem space and be drawn to advantageous locations. Every particle of a PSO population has two characteristics, velocity and position, on the basis of which the particles search for food in the available search space. Influenced by the natural phenomena of schooling and flocking, PSO particles are distinguished not only by their position but also by a velocity that allows them to move within the search space. Each particle in PSO represents a location in the search space and a potential answer to the problem. Its goal is to optimize the problem-dependent fitness function, which assigns each particle of the population a value indicating the quality of its position and hence of the p best and g best solutions. In the present article, the centroids of the task-clusters are represented as swarm particles, and the following Eq (10) is used to compute the fitness value of each particle.
where $c^{\hat n}_{i,j}$ represents an element of the matrix $NIT{C_T}M$, and $Cd_k = (cd_{k,1}, cd_{k,2}, \ldots, cd_{k,r})$ denotes the centroid of the $k$th cluster.
In the search space, each particle's position is influenced both by its own best position (p best) and by the best position found by the swarm (g best). All the particles migrate towards the ideal solution, updating their p best and g best results until they reach their destination in the best possible manner. Each bird in this process is viewed as a distinct particle, so every particle has its own position and velocity. PSO is an iterative procedure in which each particle modifies its position and velocity in accordance with its prior experience as well as that of its neighbours. The p best and g best are determined from the fitness values and are used to update each particle's velocity and position (i.e., the cluster centroids) by Eqs (11) and (12), respectively.
where $\omega$ stands for the inertia weight, and the values of the cognitive constant $\eta_1$ and the social constant $\eta_2$ typically fall in [0, 2]. $\phi_1$ and $\phi_2$ are two random numbers with values in the range [0, 1]. In the velocity update of Eq (11), the first term is the inertia component, the second the cognitive component, and the third the social component. Two successive iterations of the algorithm are represented by $(t)$ and $(t+1)$.
The three components that make up the velocity vector, which controls how a particle moves through the search space, are as follows: the momentum, or inertia, component keeps a particle from abruptly changing direction; the cognitive component accounts for the tendency of a particle to return to its previous best position; and the social component moves a particle towards the swarm's best position. These factors determine how the velocity of the $k$th particle is updated according to Eq (11). The iterative procedure described in Eqs (11) and (12) continues until the halting condition is satisfied.
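The description above matches the standard inertia-weight PSO update; the form below is stated as an assumption about the omitted Eqs (11) and (12), using the symbols already defined:
\[
v_k(t+1) \;=\; \omega\, v_k(t) \;+\; \eta_1 \phi_1 \big(p\,best_k - Cd_k(t)\big) \;+\; \eta_2 \phi_2 \big(g\,best - Cd_k(t)\big), \qquad (11)
\]
\[
Cd_k(t+1) \;=\; Cd_k(t) + v_k(t+1), \qquad (12)
\]
where $v_k$ and $Cd_k$ denote the velocity and the position (cluster centroid) of the $k$th particle.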
The graphic depiction of the PSO is shown in Figure 1. The particle changes direction with each iteration, and often, the new path is optimal. This decision is based on both the individual's personal best position and the global best position.
5.2. Genetic algorithm
Like PSO, GA is also a meta-heuristic optimization approach. Inspired by Charles Darwin's theory of "survival of the fittest", GA is capable of producing excellent solutions to a variety of problems, including optimization and search, by imitating the processes of natural selection, breeding, and mutation. As a robust stochastic search algorithm, GA has lately been used to solve the job-shop assignment problem and the task allocation problem. Based on the concepts of natural genetics and selection, this class of techniques combines survival of the fittest, random yet structured search, and parallel assessment of locations in the search space. A population of candidate solutions is continuously modified by the GA, and the population evolves towards the best option through successive generations. GA can be employed to address a variety of optimization problems, including those with stochastic, discontinuous, highly nonlinear, or non-differentiable objective functions. The following subsections cover the novel encoding, the population initialization approach, and the genetic operations.
5.2.1. Evolution of an initial population
Representing a chromosome as a string of integers is particularly convenient for task clustering and for allocating task-clusters onto processors. Each chromosome represents one possible assignment of the tasks into clusters, or of the task-clusters onto processors. The number of tasks 'r' (or processors 'm') involved in a program determines the length of each chromosome used for clustering tasks (or for scheduling task-clusters onto processors). Any digit of a chromosome may be any of the program's participating processors, and each gene of a chromosome carries information about clustering and scheduling. Here, an example of an encoded chromosome is presented, in which the 'r' tasks are randomly clustered to form 'm' clusters and their allocation is spread across 'm' processors. Chromosomes are encoded as shown in Figure 2 for task clustering and scheduling:
5.2.2. Fitness function
The fitness function in GA is the objective function of the optimization. It expresses the quality of a task allocation on the processors by assigning a value to every chromosome in the population. Optimizing RT, RS, and CS is the main goal of the task allocation problem: the highly communicating tasks are clustered via the HPSOGAK technique and then allocated to appropriate processors via the GA technique. For any random number $\varepsilon \in (0, 1]$, the following is the fitness function for the allocation algorithm GA:
The purpose of this fitness function is to evaluate the quality of solutions effectively, guide the optimization process towards optimal results, and help to improve the convergence speed.
5.2.3. Selection
Pick the best, dismiss the rest, is the guiding idea of the selection operator. The selection process determines which chromosomes are retained and allowed to reproduce and which ones are eliminated. A selection operator's primary goal is to maintain the population size while decreasing the proportion of low-quality chromosomes and increasing the proportion of high-quality chromosomes in the population. The roulette-wheel selection approach is used in this article: a single chromosome is chosen for the new population each time the wheel is spun, and the number of spins is commensurate with the size of the population. The following is the selection probability used for task scheduling onto processors:
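The probability itself is omitted above; a sketch of the usual roulette-wheel form, assuming fitness values $F(\cdot)$ taken from the fitness function of Subsection 5.2.2, is
\[
P(ch_i) \;=\; \frac{F(ch_i)}{\sum_{j=1}^{N} F(ch_j)},
\]
where $N$ is the population size and $ch_i$ is the $i$th chromosome.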
The chromosomes with the lowest selection probability are eliminated from the former population, and a chromosome with a higher selection probability is replicated into the new population to keep the population size constant.
5.2.4. Crossover
Crossover is the most essential part of a genetic algorithm. By combining information from the two parent chromosomes, it produces two offspring chromosomes. In this GA, the crossover operator is applied to the chromosomes with the lowest probability rather than to those with the highest probability in the population. The crossover probability, which ranges between 0 and 1, determines how frequently crossover occurs between chromosomes in each generation, i.e., the likelihood of two chromosomes exchanging parts. A 100% crossover rate implies that all offspring are produced through crossover, while a 0% rate means no crossover occurs, resulting in a new generation nearly identical to the old one. The present article proposes a novel crossover strategy with a probability of 0.8, based on several experimental studies such as [38,39]. Tests show that a crossover probability of 0.8 offers a balance between rapid convergence and high-quality results: lower probabilities slow convergence, whereas higher ones risk losing genetic diversity or causing premature convergence. This choice affects the convergence rates of GA and PSO and aids in achieving optimal clustering outcomes. The presented crossover operator selects two parents with equal probability for the next generation. The goal of this operator is to produce offspring that inherit groups with a significant level of variety from the two chosen parents. The suggested crossover operator is described in Algorithm 1:
Algorithm 1: Proposed crossover operator.
In Algorithm 1, $(t)$ represents the generation count, and the image of any bit in a chromosome is the bit after it. An example of the developed crossover technique is shown below: applying the procedure of Algorithm 1 to the parent chromosomes A and B produces the two offspring C and D shown in Figure 3:
5.2.5. Mutation
Following crossover, the chromosomes undergo a mutation operation. GA uses mutation of chromosomes as a genetic operator to preserve genetic diversity from one generation to the next. A mutation operator's major purpose is to prevent the chromosomes from becoming too similar to each other after a certain number of iterations. A novel mutation technique with a probability of 0.1 is used in this article; it can be understood with the help of Algorithm 2.
Algorithm 2: Proposed mutation operator.
In Algorithm 2, $(t+1)$ represents the generation count, and the image of any bit in a chromosome is the bit after it. Below is an example that demonstrates this operator's process: applying the procedure of Algorithm 2 produces the mutated offspring E shown in Figure 4:
5.3. Stopping criteria
The stopping condition is used to prevent PSO and GA from running indefinitely. Both algorithms run until they converge in order to obtain a better quality of solution, and the result is the best solution obtained so far. A terminating criterion is needed to detect this convergence behaviour: convergence occurs when the best reported value does not change over the maximum number of generations. In this work, two categories of stopping criteria are used.
● An upper bound on the maximum number of iterations (generations).
● The algorithm stops if the best result obtained during the evolution process does not improve for a specified number of generations.
5.4. Developed algorithms
To address the task allocation issue, the suggested model uses a PSO-based hybrid model HPSOGAK to generate task-clusters and GA to find the task-clusters' assignment onto processors. The proposed technique is appropriate for both fuzzy and crisp time. The proposed task scheduling method is divided into two phases. The HPSOGAK and GA techniques are discussed in the first and second phases, respectively.
5.4.1. Phase Ⅰ
The HPSOGAK algorithm is developed in this phase by integrating PSO, GA, and k-means. In the suggested approach, the centroids are taken as particles that are improved using the PSO-GA technique and then used as initial centroids in the k-means algorithm. The method refines the p best and g best positions of the particles determined by PSO using genetic operators, and the k-means clustering approach is then employed to obtain a finite number of task-clusters. To create the task-clusters with k-means, $IT{C_T}M = [c_{i,j}]$ is obtained from $FIT{C_T}M = [\overset{\frown}{c}_{i,j}]$. The k-means algorithm repeatedly reduces the distance between each data point and its centroid to produce the best assignment of all data points. To obtain a one-to-one correspondence, the number of processors 'm' and the number of task-clusters 'k' must be the same. Here $n_k$ and $Cd_k$ denote the number of tasks in the $k$th cluster and the centroid of the $k$th cluster respectively, where
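The centroid expression is omitted above; a sketch of the usual k-means form over the rows of $IT{C_T}M$, consistent with $Cd_k = (cd_{k,1}, \ldots, cd_{k,r})$, is
\[
cd_{k,j} \;=\; \frac{1}{n_k} \sum_{t_i \in \text{cluster } k} c_{i,j}, \qquad 1 \leqslant j \leqslant r .
\]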
By implementing HPSOGAK, $IT{C_T}$ is minimized by clustering the highly communicating tasks together.
To address various allocation problems based on PSO, several authors have proposed mechanisms in which the inertia weight either remains constant or decreases linearly during the process. The assignment of tasks in DRTS is categorized as an NP-hard problem, and its complexity increases further with the number of deployed processors and submitted tasks. Balancing local search and global search is therefore important, and for that purpose the inertia weight $\omega$ in the present work is determined by the calculation in Eq (15).
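Consistent with the linearly decreasing behaviour described below, a sketch of Eq (15) in the standard linearly decreasing form, stated as an assumption about the omitted equation, is
\[
\omega(t) \;=\; \omega_{\max} \;-\; \frac{\omega_{\max} - \omega_{\min}}{\text{Max iter.}} \times t , \qquad (15)
\]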
where,
Max iter. = Maximum iteration number.
The inertia weight is set to decrease linearly from a high initial value ($\omega_{\max} = 0.9$) to a lower final value ($\omega_{\min} = 0.4$) as the algorithm progresses through its iterations. This approach enhances global exploration at the start of the iterative process and promotes a more localized, fine-tuned search towards the end of the iterations.
Phase Ⅰ is described in detail as in Algorithm 3.
Algorithm 3: Formation of task-clusters by employing HPSOGAK.
5.4.2. Phase Ⅱ
Throughout this phase, GA is implemented to schedule the task-clusters onto processors. The clusters obtained from Algorithm 3 remain unchanged during the program's execution. Tasks in the same cluster behave like a single task and are assigned to the same processor. If tasks $t_i$ and $t_j$ belong to the same cluster, the $i$th and $j$th rows of $F{E_T}M$ are added and represented as a new row of $NF{E_T}M$. All of the tasks present in one cluster may be assigned to only one processor at a time. According to the task-clusters obtained from the HPSOGAK algorithm, ${E_T}M = [e_{i,k}]$ is updated into $N{E_T}M = [e^{\hat n}_{i,k}]$. Phase Ⅱ is described in detail as an algorithm; Algorithm 4 summarizes the steps of this stage:
Algorithm 4: To determine the assignment of task-clusters onto processors.
In Figure 5, the whole processes of Algorithms 3 and 4 are shown, providing a succinct understanding of the algorithms' processes.
5.5. Performance parameter
If the PSO and GA parameters are set improperly, the solution speed slows down and the quality of the outcomes suffers. Each population-based technique can converge to the global best location in a realistic period of time by maintaining a proper balance between local search and global search. The inertia weight ($\omega$) and the crossover and mutation probabilities are the coefficients that maintain this balance in the PSO and GA algorithms, respectively. Therefore, the developed technique provides new crossover and mutation strategies for GA and an updated $\omega$-equation (Eq (15)) for PSO, in which the value of $\omega$ steadily decreases over the iterations and balances the rate of PSO convergence. This section provides details on a few significant parameters that the proposed task assignment algorithm considers for analytical evaluation; the articles by Kumar and Tyagi [39], Agarwal and Srivastava [41], and Shatz et al. [61] are taken into consideration for determining the parameters. Table 4 presents the parameters:
6.
Performance assessment
The presented technique is assessed by considering four examples based on real applications, all drawn from proven existing models. In two examples the times are given in crisp form, while in the other two they are given as fuzzy numbers. Fuzzy numbers can take many forms, including Gaussian, bell-shaped, triangular, and trapezoidal. Table 5 illustrates the test setup, employing the presented HPSOGAK and GA algorithms to solve the task scheduling problems; it reports the RS, RT, and CS of the system, along with the allocation onto processors, for all four analyzed examples. Additionally, the effectiveness of the presented PSO-based task scheduling technique has been assessed in terms of resource utilization (Abdelkader and Omara [62]); the resource utilization is computed from the RT and contrasted with other models. Table 5 makes it clear that, in reducing RT and CS and maximizing RS and utilization, the proposed model produces better-quality results than the existing allocation models.
Example 1. The problem is taken from the model of Djigal et al. [63], in which the DRTS consists of ten tasks $\{t_1, t_2, \ldots, t_{10}\}$ that have to be allocated to three processors $\{p_1, p_2, p_3\}$. In this problem, $e_{i,k}$ and $c_{i,j}$ are given as crisp numbers, shown in Figures 6 and 7 respectively.
The following chromosome in Figure 8 gives the final task-clusters as a result of carrying out the steps of Algorithm 3:
The optimal allocation of task-clusters onto worthy processors has been achieved employing the steps in Algorithm 4, that is displayed by the following chromosome in Figure 9:
The task-clusters and their allotment on processors can be written as follows using the two chromosomes mentioned above:
Task-clusters assignments are shown in Figure 10.
In Figure 10, the processors are enclosed by solid circles with the corresponding $e_{i,k}$ depicted on them, while the figures with dotted lines in parentheses depict $c_{i,j}$. Based on the information gleaned from the optimal allocation, the RT of the system is 162 and the CS is 275. Equation (6) gives the system's reliability (RS) as 0.98074.
Example 2. The problem is taken from the model of Kumar et al. [64], in which the DRTS consists of five tasks $\{t_1, t_2, \ldots, t_5\}$ that have to be allocated to three processors $\{p_1, p_2, p_3\}$. In this problem, $\overset{\frown}{e}_{i,k}$ and $\overset{\frown}{c}_{i,j}$ are given as triangular fuzzy numbers, shown in Figures 11 and 12 respectively.
To solve this example, Robust's ranking technique is first used to defuzzify the $\overset{\frown}{e}_{i,k}$ and $\overset{\frown}{c}_{i,j}$ times into crisp times $e_{i,k}$ and $c_{i,j}$ respectively, and ${E_T}M$ and $IT{C_T}M$ are then generated by inserting $e_{i,k}$ and $c_{i,j}$ into them.
The following chromosome in Figure 13 gives the final task-clusters as a result of carrying out the steps of Algorithm 3:
The optimal allocation of task-clusters onto worthy processors has been achieved employing the steps in Algorithm 4, that is displayed by the following chromosome in Figure 14:
The task-clusters and their allotment on processors can be written as follows using the two chromosomes mentioned above:
Task-clusters assignments are shown in Figure 15.
Based on the information gleaned from the optimal allocation in Figure 15, the RT of the system is (95, 165, 240) and the CS is (145, 230, 320). Eq (6) gives the system's reliability (RS) as 0.98780.
Example 3. The problem is taken from the model of Sharma et al. [65], in which the DRTS consists of five tasks $\{t_1, t_2, \ldots, t_5\}$ that have to be allocated to three processors $\{p_1, p_2, p_3\}$. In this problem, $e_{i,k}$ and $c_{i,j}$ are given as crisp numbers, shown in Figures 16 and 17 respectively.
The following chromosome in Figure 18 gives the final task-clusters as a result of carrying out the steps of Algorithm 3:
The optimal allocation of task-clusters onto worthy processors has been achieved employing the steps in Algorithm 4, that is displayed by the following chromosome in Figure 19:
The task-clusters and their allotment on processors can be written as follows using the two chromosomes mentioned above:
Task-clusters assignments are shown in Figure 20.
In Figure 20, the processors are enclosed by dotted circles with the corresponding $e_{i,k}$ depicted on them, while the figures with solid lines depict $c_{i,j}$. Based on the information gleaned from the optimal allocation, the RT of the system is 68 and the CS is 81. Eq (6) gives the system's reliability (RS) as 0.9979.
Example 4. The problem is taken from the model of Chauhan et al. [43], in which the DRTS consists of nine tasks $\{t_1, t_2, \ldots, t_9\}$ that have to be allocated to five processors $\{p_1, p_2, \ldots, p_5\}$. In this problem, $\overset{\frown}{e}_{i,k}$ and $\overset{\frown}{c}_{i,j}$ are given as triangular fuzzy numbers, shown in the matrices $F{E_T}M$ and $FIT{C_T}M$ respectively.
To solve this example, Robust's ranking technique is first used to defuzzify the $\overset{\frown}{e}_{i,k}$ and $\overset{\frown}{c}_{i,j}$ times into crisp times $e_{i,k}$ and $c_{i,j}$ respectively, and ${E_T}M$ and $IT{C_T}M$ are then generated by inserting $e_{i,k}$ and $c_{i,j}$ into them.
The following chromosome in Figure 21 gives the final task-clusters as a result of carrying out the steps of Algorithm 3:
The optimal allocation of task clusters onto worthy processors has been achieved employing the steps in Algorithm 4, that is displayed by the following chromosome in Figure 22:
The task-clusters and their allotment on processors can be written as follows using the two chromosomes mentioned above:
Task-clusters assignments are shown in Figure 23.
In Figure 23, the processors are enclosed by rounded rectangles with the corresponding $e_{i,k}$ depicted on them, while the figures with solid lines in parentheses depict $c_{i,j}$. Based on the information gleaned from the optimal allocation, the RT of the system is (32, 53, 78) and the CS is (403.8, 634.8, 965.8). Eq (6) gives the system's reliability (RS) as (0.9349, 0.9506, 0.9693).
7.
Comparison
To assess its effectiveness, the proposed meta-heuristic method is compared with well-established approaches in several situations with varying numbers of tasks and processors. CS, RT, RS, PIR (discussed in Subsections 3.4–3.7), efficiency [62], and resource utilization are taken as the performance measures for the analytical assessment of the proposed task assignment algorithm in DRTS. Efficiency is defined as the ratio of sequential computation time to scheduling time, taking the number of processors into account. Efficiency and PIR are especially useful for assessing the accuracy of the outcomes produced by the given approach, and in terms of these metrics the given approach performs better than existing techniques. In addition, Friedman's test is carried out to examine the quality of the outcomes from a statistical perspective. Data for the different task assignment problems have been collected from the articles of Kumar et al. [38], Kumar and Tyagi [39], Chauhan et al. [43], Ilavarasan et al. [66], Kumar and Tyagi [67], and Topcuoglu et al. [68]. The following five scenarios are considered in this study.
7.1. Scenario 1
In this scenario, 30 real-world problems from the existing methods are used to evaluate the proposed technique. Table 6 compares the outcomes of these problems in terms of CS and RT, which are also depicted in Figure 24. As shown in Figure 24, the developed method provides better-quality outcomes than the other available task scheduling methods used to reduce CS and RT.
7.2. Scenario 2
In this subsection, we will assess the performance of the proposed algorithm by comparing it to that of other well-known algorithms, focusing on PIR %. Table 7 displays the results acquired for PIR %, and it is clear that the developed PSO-based algorithm is superior to these other existing approaches. The hybrid PSO's capability to attain the minimum value for RT in this study, coupled with the close correlation between PIR% and RT, constitutes the primary factors contributing to its performance. In most instances, the proposed algorithm significantly outperforms its well-known competitors, as demonstrated by the PIR values presented in Table 7 and illustrated in Figure 25. The overall average PIR % for the proposed approach, GA-B&B, Clustering approach, GA, and B&B are 22.64%, 19.90%, 17.03%, 20.21%, and 16.19%, respectively.
7.3. Scenario 3
The performance study of the suggested method based on RS and CS is presented in this subsection. The effectiveness of the developed approach is demonstrated by solving the running problem from the article of Topcuoglu et al. [68]. Table 8 lists the outcomes for this problem using the proposed algorithm and the algorithms of Chauhan et al. [43], Kumar et al. [19], Kumar and Tyagi [39], and Ilavarasan et al. [66]. In light of this, the proposed algorithm produces the optimal allocation when compared with the other algorithms listed in Table 8.
7.4. Scenario 4
This subsection presents the performance analysis of the specified approach based on RT, efficiency, and resource utilization. The article by Chauhan et al. [43], in which GA-based mechanisms were generated, has been used to obtain various task assignment problems. These problems are also solved with the proposed model, and the obtained results are tabulated in Table 9. From the study of Table 9, it can be concluded that the developed method produces finer results in terms of the considered parameters: the proposed PSO-GA algorithm generates higher-quality results than the GA algorithm, with average RT values of 170.16 and 181.04 respectively. The results with respect to efficiency are also depicted in Figure 26, which shows that the suggested PSO-GA based model performs better than the GA based model for the majority of processor quantities, while for the remaining processor quantities both models deliver similar results. The average efficiency of the GA and PSO-GA algorithms is 0.6387 and 0.7013, respectively.
7.5. Scenario 5
In this subsection, Friedman's test is carried out to examine the quality of the outcomes from a statistical perspective. This test analyses whether there are statistically significant differences between dependent groups. Friedman's test is a non-parametric statistical test used here to assess the significance of the data given in Table 7, and Table 10 displays the results of this test.
It is noticeable that the developed method yields the best average rank compared with B&B, GA, the clustering method, and GA-B&B, which is clear evidence that the proposed PSO-GA based algorithm is a promising technique for task scheduling. For the outcomes of Table 7, the $\chi_r^2$ statistic is 10.686 (4, N = 15) and the p-value is 0.00226; hence the results of the proposed method are significant at p < 0.05.
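As an illustration of how such a test can be reproduced, the short sketch below runs Friedman's test with SciPy on hypothetical response-time data for the five compared methods; the array values are placeholders for illustration only, not the measurements reported in Table 7.

# Friedman's test over N = 15 problem instances and 5 methods
# (illustrative placeholder data, not the paper's values).
from scipy.stats import friedmanchisquare

# One list of response times per method, aligned by problem instance.
rt_proposed = [150, 162, 140, 171, 158, 149, 165, 153, 160, 147, 155, 168, 142, 151, 159]
rt_ga_bb    = [161, 170, 151, 180, 169, 158, 175, 166, 172, 157, 164, 177, 152, 162, 171]
rt_cluster  = [168, 176, 158, 188, 174, 163, 181, 172, 179, 162, 170, 184, 159, 168, 177]
rt_ga       = [159, 169, 149, 179, 166, 156, 173, 163, 170, 154, 162, 175, 150, 160, 169]
rt_bb       = [172, 181, 161, 192, 178, 167, 186, 176, 183, 166, 175, 189, 163, 172, 181]

stat, p_value = friedmanchisquare(rt_proposed, rt_ga_bb, rt_cluster, rt_ga, rt_bb)
print(f"chi_r^2 = {stat:.3f}, p = {p_value:.5f}")  # significant at the 0.05 level if p_value < 0.05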
8.
Conclusions
The problem of task scheduling is resolved in this article using a PSO-based approach. The purpose of this technique is to use evolutionary algorithms for static task scheduling onto heterogeneous processors in DRTS. The present research demonstrates an efficient method for handling the scheduling problem with several objectives simultaneously, namely response time, total cost, and system reliability optimization. To date, a hybrid PSO-GA model has not been applied in this manner to handle all of these objectives simultaneously for this type of problem. Throughout this article, two algorithms have been developed: HPSOGAK, a combination of PSO, GA, and k-means for task-cluster formation to minimize $IT{C_T}$, and GA for the allocation of task-clusters to processors to minimize the total CS and RT and maximize the RS of the system. The essential contributions of this work are as follows:
(ⅰ) In this study, a scheduling mechanism based on the evolutionary algorithms PSO and GA is designed to deal with real-life applications.
(ⅱ) PSO is employed in this study because it performs well on problems of all sizes, and PSO and GA are integrated to enhance traditional PSO's functionality and address its primary shortcomings.
(ⅲ) By providing new encoding, population initialization, new crossover and mutation approaches, the convergence rate of GA is improved.
(ⅳ) Both crisp and fuzzy times can be used with the model presented in a heterogeneous system.
(ⅴ) Numerous real-world problems have been solved in order to compare the proposed model's performance to that of existing methods.
(ⅵ) In terms of PIR %, RT, CS, RS, efficiency, and resource utilization, the accuracy of the presented model is examined. In all of these, as discussed in the comparison section, the proposed technique delivers superior results when compared to different meta-heuristics and traditional algorithms.
(ⅶ) Friedman's test is applied to statistically evaluate the quality of results.
(ⅷ) The run-time complexities of various existing techniques, including those by Elsadek and Wells [86], Kumar et al. [19], and Kumar and Tyagi [39], are O(r² + m² + r²m), O(r² + rm), and O(rm + m² + m) respectively, whereas the proposed technique's run-time complexity is O(m² + rm), which is lower than that of the existing approaches when r > m.
(ⅸ) In DRTS, the described model is applicable to any number of jobs/tasks and processors.
Implementation results demonstrate that the developed model is more effective than existing methods, and it can be concluded that the proposed approach is suitable for dealing with task scheduling issues. According to these results, the proposed technique is a worthwhile alternative for resolving the task allocation problem, but the model also has some limitations: PSO can easily fall into a local optimum because of its slow convergence rate during the iterative procedure, and consequently the HPSOGAK algorithm inherits certain limitations as well. In future work we shall compare the algorithm's performance using both fuzzy and crisp times; specifically, case studies will be conducted on the same tasks and processors with and without defuzzification applied. This will help to further evaluate the robustness and versatility of the algorithm and enhance the comprehensiveness of the findings. Additionally, the advantages of other algorithms, such as the DE algorithm, WOA, and GWO, will be utilized for further enhancement; by employing these techniques, the algorithm's overall scheduling performance can be improved. To continually enhance the task assignment strategy, the task scheduling policy in the dynamic environment of the DRTS will be considered, and the implications of the PSO parameters and local search techniques for the system's overall performance will be examined.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
The authors are grateful to the Editor-in-Chief, associate editor, and learned reviewers for their critical comments and valuable suggestions, which led to a fine version of the manuscript.
Conflict of interest
All authors declare no conflicts of interest in this paper.