Research article

Single machine and group scheduling with random learning rates

  • Received: 20 March 2023 Revised: 23 May 2023 Accepted: 30 May 2023 Published: 08 June 2023
  • MSC : 68M20, 90B36

  • This study considers scheduling problems with learning effects in which the learning rate is a random variable following a uniform distribution. In the first part, we introduce a single-machine model with position-based learning effects and prove the optimality of the proposed solutions for five objective functions. In the second part, we study the problem under group technology, where both the jobs within a group and the groups themselves are subject to position-based learning effects and the learning rates of intra-group jobs follow uniform distributions. We also give, and prove the optimality of, the sequencing rules for the two problems proposed.

    Citation: Dingyu Wang, Chunming Ye. Single machine and group scheduling with random learning rates[J]. AIMS Mathematics, 2023, 8(8): 19427-19441. doi: 10.3934/math.2023991




    Research on single machine scheduling problems with position-based learning effects has long been a focus of scholars. Early pioneering work can be found in Biskup [1] and Moshiov [2], who assumed that the processing time of a job is, respectively, a constant and a decreasing function of its position. Moshiov [3] described the parallel machine case. Mosheiov and Sidney [4] introduced job-dependent learning effects and showed that the problem can be transformed into an assignment problem. Since then, many scholars have studied similar or improved models, producing many related results from increasingly open research perspectives. Mosheiov and Sidney [5] studied the case in which all jobs have a common due date and proved that the model is polynomially solvable. Bachman and Janiak [6] provided a proof method based on diagrams. Lee, Wu and Sung [7] discussed a bi-criterion problem and presented techniques for finding the optimal solution, showing that sequencing jobs in non-increasing order of $\omega_j$ or by shortest processing time achieves the optimum. Lee and Wu [8] solved a two-machine flow shop scheduling problem using a heuristic algorithm. Janiak and Rudek [9] discussed complexity results for a single machine scheduling problem minimizing the number of tardy jobs. Zhao, Zhang and Tang [10] investigated polynomial-time solutions of some single machine, parallel machine and flow shop problems in environments with learning effects. Cheng, Sun and Yu [11] considered permutation flow shop scheduling problems with a learning effect on no-idle dominant machines. Eren and Güner [12] studied minimizing total tardiness under learning effects using a 0-1 integer programming model. Zhang and Yan [13], Zhang et al. [14], Liu and Pan [15] and Liu, Bao and Zheng [16] have all studied scheduling problems based on learning effects from the perspective of model innovation or improvement.

    As research deepened, many scholars found that the processing time $p$ is not always constant: it differs across environments, machines and workers. Some scholars therefore proposed stochastic scheduling problems. Pinedo and Rammouz [17], Frenk [18] and Zhang, Wu and Zhou [19] did much of the pioneering work. Building on these results, Zhang, Wu and Zhou [20] studied single machine stochastic scheduling with position-based learning effects and derived optimal policies for stochastic problems with and without machine breakdowns. Ji et al. [21] considered parallel machine scheduling with job deterioration and DeJong's learning effect; they proved that the problem is polynomially solvable and provided a fully polynomial time approximation scheme. A workforce scheduling model with learning effects was proposed by Qin, Liu and Kuang [22]: by piecewise linearization of the learning curve, a mixed 0-1 nonlinear programming model (MNLP) was transformed into a mixed 0-1 linear programming model (MLP). Zhang, Wang and Bai [23] proposed a group scheduling model with both deterioration and learning effects. Xu et al. [24] proposed a multi-machine order scheduling problem with learning effects and used simulated annealing and particle swarm optimization to obtain near-optimal solutions. Wu and Wang [25] considered a single machine scheduling problem with a truncated learning effect based on processing times and past-sequence-dependent delivery times. Vile et al. [26], Souissi, Benmansour and Artiba [27], Liu et al. [28] and Liu et al. [29] applied scheduling models to emergency medical services, supply chains, manufacturing management and graph theory, respectively. Li [30] treated the processing time of a job as random with a job-based learning rate, and provided a method for handling such problems using the difference between EVPI and EVwPI. Toksari and Atalay [31] studied four problems combining learning effects and job rejection. To reduce production costs, Chen et al. [32] focused on multi-project scheduling and multi-skilled labor allocation. Shi et al. [33] applied a machine learning model to medical treatment, estimated the service level and its probability distribution, and used various optimization models to solve scheduling programs. Wang et al. [34] improved and studied several existing problems.

    Ham, Hitomi and Yoshida [35] first proposed group technology (GT): jobs are divided into groups according to type or characteristics, each group is produced by the same means and, once jobs are put into production, they cannot be stopped. Lee and Wu [36] proposed a group scheduling learning model in which the learning effect depends not only on the job position but also on the group position, and demonstrated that the problem is polynomially solvable under the proposed model. Yang and Chand [37] studied a single machine group scheduling problem with learning and forgetting effects to minimize the total completion time. Zhang and Yan [38] proposed a group scheduling model with deterioration and learning effects under which the makespan and total completion time problems are polynomially solvable. Similarly, Ma et al. [39], Sun et al. [40] and Liu et al. [41] provided appropriate improvements to the model. Li and Zhao [42] considered the group scheduling problem on a single machine with multiple due-window assignments, dividing the jobs into several groups to improve production efficiency and save resources; this work, however, only improves on some problems and solutions and does not achieve groundbreaking results. Liang [43] investigated a model with deteriorating jobs under GT to minimize the weighted sum of makespan and resource allocation costs. Wang et al. [44] considered due date assignment and group technology simultaneously, determining the optimal sequence and the optimal due date assignment of the groups and intra-group jobs by minimizing the weighted sum of the absolute lateness and the due date assignment cost. Wang and Ye [45] established a stochastic group scheduling model. Based on sequence-dependent setup times (SDST) and preventive maintenance, Jain [46] proposed a genetic-algorithm-based method to minimize the makespan.

    The above literature assists in establishing problem models and solution methods for classical and stochastic sequencing with learning effects. In a learning effect, the learning factor is a very important quantity that directly affects the actual processing time of a job. Many factors, internal or external, influence the learning factor, often randomly. In this case, the learning factor is no longer a constant but a random variable, and sometimes its probability density can be calculated. These aspects were not considered in previous works, so the discussion of the randomness of learning factors in this study has practical significance. Based on the above ideas, this study establishes a new stochastic scheduling model in which workers participate in production, the processing of jobs has a position-based learning effect and the learning index is random. The problems are solved by heuristic algorithms that find the optimal sequencing.

    There are $n$ independent jobs to be processed on one machine. Each job can be processed at any time and cannot be interrupted during processing.

    The first model is

    $1\mid p_j^r=p_jr^{-a_j},\ a_j\sim U(0,\lambda_j)\mid E[f(C_j)]$,

    where $f(C_j)$ is a function of the completion time $C_j$ of job $J_j$.

    First, we give some lemmas.

    Lemma 2.1. [16] Let $X$ be a random variable, continuous on its domain, with density function $f_X(x)$, and let $Y=g(X)$. Then $E(Y)=E[g(X)]=\int_{-\infty}^{+\infty}g(x)f_X(x)\,dx$.

    Lemma 2.2. [16] Let $X$ be as in Lemma 2.1, and let the function $g(x)$ be differentiable and strictly monotonic everywhere. If $Y=g(X)$, then

    $f_Y(y)=\begin{cases}f_X[h(y)]\,|h'(y)|, & \alpha<y<\beta,\\ 0, & \text{otherwise},\end{cases}$ (2.1)

    in which $\alpha,\beta$ are the minimum and the maximum of $g(-\infty),g(+\infty)$ respectively, and $h(y)=g^{-1}(y)$.

    Lemma 2.3. When $x>1$, $y(x)=\frac{1}{\ln x}\left(1-\frac{1}{x}\right)$ is monotonically non-increasing, and $z(x)=1-\frac{1}{\ln x}\left(1-\frac{1}{x}\right)$ is monotonically increasing with $z(x)>0$.

    The conclusion is easily proved using derivatives and limits of the two functions.

    Lemma 2.4. If the random variable $a\sim U(0,\lambda)$ and $X=pr^{-a}$, where $p$ is a constant and $r>1$, then $f_X(x)=\frac{1}{\lambda}\cdot\frac{1}{x\ln r}$ for $x\in(pr^{-\lambda},p)$.

    Proof. From (2.1), when $a\sim U(0,\lambda)$ we get $X\in(pr^{-\lambda},p)$, and with $h(x)=\frac{\ln p-\ln x}{\ln r}$,

    $f_X(x)=f_a[h(x)]\,|h'(x)|=\frac{1}{\lambda}\left|\left(\frac{\ln p-\ln x}{\ln r}\right)'\right|=\frac{1}{\lambda}\cdot\frac{1}{x\ln r}.$ (2.2)
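    As a quick numerical sanity check (a sketch, not part of the paper; the values $p=2$, $r=3$, $\lambda=1.5$ are arbitrary), the density in (2.2), together with Lemma 2.1, gives $E[pr^{-a}]=\frac{p}{\lambda\ln r}\left(1-r^{-\lambda}\right)$, which can be verified by simulation:

```python
import random
import math

def expected_position_time(p, r, lam):
    """Closed-form E[p * r**(-a)] for a ~ U(0, lam), r > 1 (Lemmas 2.1 and 2.4)."""
    return p * (1 - r ** (-lam)) / (lam * math.log(r))

def monte_carlo(p, r, lam, n=200_000, seed=1):
    """Estimate the same expectation by sampling the random learning rate a."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        a = rng.uniform(0.0, lam)   # random learning rate a ~ U(0, lam)
        total += p * r ** (-a)      # actual processing time at position r
    return total / n

p, r, lam = 2.0, 3, 1.5
closed = expected_position_time(p, r, lam)
approx = monte_carlo(p, r, lam)
print(closed, approx)  # the two values agree to about two decimal places
```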

    Second, we give the symbols and their meanings in the theorems (see Table 1).

    Table 1.  The symbols and their meanings.
    Symbol Description
    $J$ A collection of independent jobs
    $J_j$ A job in $J$
    $r$ Job position in the sequence
    $a$ Learning rate, $a>0$
    $\omega_i$ Weight of job $J_i$
    $p_j$ Normal processing time of $J_j$, $p_j\sim U(0,\lambda_j)$
    $S,S'$ Job sequences
    $d_i$ The due date of $J_i$
    $L_{\max}$ Maximum lateness
    $T_j$ Tardiness of job $J_j$
    $U_j$ Penalty of job $J_j$
    $p_i^r(S)$ Random processing time when $J_i$ occupies position $r$ in $S$
    $\pi_1,\pi_2$ Partial sequences not containing $J_i,J_j$
    $E(\cdot)$ Mathematical expectation of a random variable
    $t_0$ Completion time of the partial sequence preceding $J_i,J_j$, i.e., the time at which the $(r-1)$-th job finishes processing
    $C_j(S)$ Completion time of $J_j$ in $S$
    $\sum\omega_jC_j$ Total weighted completion time of all jobs


    Theorem 2.1. For $1\mid p_j^r=p_jr^{-a_j},\ a_j\sim U(0,\lambda_j)\mid E(C_{\max})$, if $p$ is consistent with the parameter $\lambda$, that is, for all $J_i,J_j$, $p_i\le p_j$ implies $\lambda_i\le\lambda_j$, then sequencing the jobs in non-decreasing order of $\lambda_j$ gives the optimal schedule.

    Proof. (1) First suppose the exchanged pair includes the first job, i.e., the jobs in the first and second positions are exchanged: in $S$, $J_1$ precedes $J_2$; in $S'$ their positions are exchanged.

    According to the hypothesis and Lemma 2.1, we have

    $E[C_2(S)]=E(t_0)+E[p_1^1(S)]+E[p_2^2(S)]=E(t_0)+p_1+p_2\frac{1}{\ln 2^{\lambda_2}}\left(1-\frac{1}{2^{\lambda_2}}\right)$, (2.3)
    $E[C_1(S')]=E(t_0)+E[p_1^2(S')]+E[p_2^1(S')]=E(t_0)+p_2+p_1\frac{1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right)$. (2.4)

    Note that $p_1\le p_2$, $\lambda_1\le\lambda_2$ and $2^{\lambda_2}\ge 2^{\lambda_1}>1$; from Lemma 2.3 we can get

    $\frac{1}{\ln 2^{\lambda_2}}\left(1-\frac{1}{2^{\lambda_2}}\right)\le\frac{1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right),\qquad 1-\frac{1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right)>0.$ (2.5)

    From (2.3)–(2.5), it can be obtained that

    $$\begin{aligned}E[C_2(S)]-E[C_1(S')]&=p_1-p_2+p_2\frac{1}{\ln 2^{\lambda_2}}\left(1-\frac{1}{2^{\lambda_2}}\right)-p_1\frac{1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right)\\ &\le p_1-p_2+(p_2-p_1)\frac{1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right)\\ &=\left[1-\frac{1}{\ln 2^{\lambda_1}}\left(1-\frac{1}{2^{\lambda_1}}\right)\right](p_1-p_2)\le 0.\end{aligned}$$

    (2) Now suppose the exchanged pair does not include the first job, that is, $r\ge 2$: in $S$, $J_i$ occupies position $r$ and $J_j$ position $r+1$; in $S'$ they are exchanged. We compare $E[C_j(S)]$ with $E[C_i(S')]$.

    From the hypothesis and Lemma 2.1, we get

    $E[C_j(S)]=E(t_0)+E[p_i^r(S)]+E[p_j^{r+1}(S)]=E(t_0)+p_i\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+p_j\frac{1}{\ln(r+1)^{\lambda_j}}\left(1-\frac{1}{(r+1)^{\lambda_j}}\right)$, (2.6)
    $E[C_i(S')]=E(t_0)+E[p_j^r(S')]+E[p_i^{r+1}(S')]=E(t_0)+p_j\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+p_i\frac{1}{\ln(r+1)^{\lambda_j}}\left(1-\frac{1}{(r+1)^{\lambda_j}}\right)$. (2.7)

    Notice the hypothesis of Theorem 2.1: for all $J_i,J_j$ we have $p_i\le p_j$ and $\lambda_i\le\lambda_j$, and when $r\ge 2$, $r^{\lambda_i}>1$. From Lemmas 2.2, 2.3 and (2.6), (2.7), we get

    $E[C_j(S)]-E[C_i(S')]=\left\{\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)-\frac{1}{\ln(r+1)^{\lambda_j}}\left(1-\frac{1}{(r+1)^{\lambda_j}}\right)\right\}(p_i-p_j)\le 0.$

    Proof complete.
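    The interchange argument can be illustrated on a concrete two-job instance (a sketch only; the data $p=(1,3)$, $\lambda=(1,3)$ are arbitrary but satisfy the agreeability condition of Theorem 2.1). The expected makespan of a sequence is evaluated with the closed-form expectation used in the proof:

```python
import math

def factor(r, lam):
    """E[r**(-a)] for a ~ U(0, lam): equals 1 at position 1, else (1 - r**-lam)/(lam*ln r)."""
    if r == 1:
        return 1.0
    return (1 - r ** (-lam)) / (lam * math.log(r))

def expected_makespan(jobs):
    """jobs: list of (p, lam) in processing order; E[Cmax] is the sum of expected actual times."""
    return sum(p * factor(r, lam) for r, (p, lam) in enumerate(jobs, start=1))

agreeable = [(1.0, 1.0), (3.0, 3.0)]        # p and lambda agree: p1 <= p2, lam1 <= lam2
asc = expected_makespan(agreeable)          # non-decreasing lambda order
desc = expected_makespan(agreeable[::-1])   # reversed order
print(asc, desc)  # asc is the smaller expected makespan for this instance
```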

    Theorem 2.2. For the model $1\mid p_j^r=p_jr^{-a_j},\ a_j\sim U(0,\lambda_j)\mid E(\sum\omega_jC_j)$, if the processing times and weights satisfy $\frac{p_i}{p_j}\le\min\left\{1,\frac{\omega_i}{\omega_j}\right\}$, the optimal order is obtained by sequencing the jobs in non-decreasing order of $\lambda_j\omega_j$.

    Proof. Suppose $p_i\le p_j$ and, as in the previous proof, that in $S$ job $J_i$ occupies position $r$ and $J_j$ position $r+1$, while $S'$ exchanges them. We have

    $E[C_i(S)]=E(t_0)+E[p_i^r]=E(t_0)+p_i\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)$, (2.8)
    $E[C_j(S)]=E(t_0)+E[p_i^r]+E[p_j^{r+1}]=E(t_0)+p_i\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+p_j\frac{1}{\ln(r+1)^{\lambda_j}}\left(1-\frac{1}{(r+1)^{\lambda_j}}\right)$, (2.9)
    $E[C_j(S')]=E(t_0)+E[p_j^r]=E(t_0)+p_j\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)$, (2.10)
    $E[C_i(S')]=E(t_0)+E[p_j^r]+E[p_i^{r+1}]=E(t_0)+p_j\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+p_i\frac{1}{\ln(r+1)^{\lambda_j}}\left(1-\frac{1}{(r+1)^{\lambda_j}}\right)$. (2.11)

    Noticing that $\frac{p_i}{p_j}\le\min\left\{1,\frac{\omega_i}{\omega_j}\right\}$, $\omega_j\lambda_j\ge\omega_i\lambda_i$ and (2.8)–(2.11), we can get

    $$\begin{aligned}&\omega_iE[C_i(S)]+\omega_jE[C_j(S)]-\omega_iE[C_i(S')]-\omega_jE[C_j(S')]\\ &=(\omega_i+\omega_j)(p_i-p_j)\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+(\omega_jp_j-\omega_ip_i)\frac{1}{\ln(r+1)^{\lambda_j}}\left(1-\frac{1}{(r+1)^{\lambda_j}}\right)\\ &\le(\omega_i+\omega_j)(p_i-p_j)\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)+(\omega_jp_j-\omega_ip_i)\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)\\ &=(\omega_jp_i-\omega_ip_j)\frac{1}{\ln r^{\lambda_i}}\left(1-\frac{1}{r^{\lambda_i}}\right)\le 0.\end{aligned}$$

    Theorem 2.2 is proved.
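    The two orders in this interchange can likewise be compared for $E(\sum\omega_jC_j)$ on a small instance (a sketch; the data $p=(1,2)$, $\lambda=(1,4)$, $\omega=(2,1)$ are arbitrary but satisfy $p_1/p_2\le\min\{1,\omega_1/\omega_2\}$ and $\omega_1\lambda_1\le\omega_2\lambda_2$):

```python
import math

def factor(r, lam):
    # E[r**(-a)] for a ~ U(0, lam); the position-1 job carries no learning effect
    return 1.0 if r == 1 else (1 - r ** (-lam)) / (lam * math.log(r))

def expected_weighted_flowtime(jobs):
    """jobs: list of (p, lam, w) in processing order; returns E[sum of w_j * C_j]."""
    t, total = 0.0, 0.0
    for r, (p, lam, w) in enumerate(jobs, start=1):
        t += p * factor(r, lam)   # expected completion time accumulates by linearity
        total += w * t
    return total

seq = [(1.0, 1.0, 2.0), (2.0, 4.0, 1.0)]   # non-decreasing w*lam: 2*1 <= 1*4
fwd = expected_weighted_flowtime(seq)
rev = expected_weighted_flowtime(seq[::-1])
print(fwd, rev)  # fwd < rev on this instance
```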

    Theorem 2.3. When $\lambda_i\le\lambda_j\Leftrightarrow d_i\le d_j$ for all $J_i,J_j$, the EDD rule, that is, sequencing the jobs in non-decreasing order of $d_j$, is the optimal algorithm for problem $1\mid p_j^r=p_jr^{-a},\ p_j\sim U(0,\lambda_j)\mid E(L_{\max})$.

    Proof. Consider $J_i,J_j$ with $\lambda_i\le\lambda_j$ and $d_i\le d_j$. Suppose that in $S$ they violate the EDD rule: $J_j$ is processed at position $r$ immediately before $J_i$ at position $r+1$, while in $S'$ their positions are exchanged, so that $J_i$ precedes $J_j$. We prove that this exchange moves $S$ toward a sequence in non-decreasing order of $d_j$ without increasing the maximum expected lateness. Furthermore, suppose the last job of $\pi_1$ occupies position $r-1$ and its expected completion time is $E(t_0)$. By Lemma 2.1, the expected latenesses of $J_i,J_j$ in sequence $S$ are

    $E[L_i(S)]=E[C_i(S)]-d_i=E(t_0)+\tfrac{1}{2}\lambda_jr^{-a}+\tfrac{1}{2}\lambda_i(r+1)^{-a}-d_i$, (2.12)
    $E[L_j(S)]=E[C_j(S)]-d_j=E(t_0)+\tfrac{1}{2}\lambda_jr^{-a}-d_j$. (2.13)

    Similarly, the expected latenesses of $J_i,J_j$ in sequence $S'$ are

    $E[L_i(S')]=E[C_i(S')]-d_i=E(t_0)+\tfrac{1}{2}\lambda_ir^{-a}-d_i$, (2.14)
    $E[L_j(S')]=E[C_j(S')]-d_j=E(t_0)+\tfrac{1}{2}\lambda_ir^{-a}+\tfrac{1}{2}\lambda_j(r+1)^{-a}-d_j$. (2.15)

    Because $\lambda_i\le\lambda_j$, $d_i\le d_j$ and (2.12)–(2.15),

    $E[L_i(S)]-E[L_j(S')]=\tfrac{1}{2}(\lambda_j-\lambda_i)\left[r^{-a}-(r+1)^{-a}\right]+(d_j-d_i)\ge 0$, (2.16)
    $E[L_i(S)]-E[L_i(S')]=\tfrac{1}{2}(\lambda_j-\lambda_i)r^{-a}+\tfrac{1}{2}\lambda_i(r+1)^{-a}>0$. (2.17)

    Then from (2.16) and (2.17), we have

    $\max\{E[L_i(S)],E[L_j(S)]\}\ge\max\{E[L_i(S')],E[L_j(S')]\}.$ (2.18)

    Thus, exchanging the positions of $J_i$ and $J_j$ does not increase the maximum expected lateness. After a finite number of such exchanges, any sequence can be transformed into the non-decreasing order of the due dates $d_j$, and the expected maximum lateness does not increase.
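    The exchange in Theorem 2.3 can be checked numerically on a two-job instance (a sketch; here $p_j\sim U(0,\lambda_j)$, so $E[p_jr^{-a}]=\tfrac{1}{2}\lambda_jr^{-a}$, and the data $\lambda=(1,2)$, $d=(1,2)$, $a=0.5$ are arbitrary but agreeable):

```python
def expected_lmax(jobs, a):
    """jobs: list of (lam, d) in processing order; E[p_j] = lam/2, scaled by r**(-a) at position r."""
    t, lmax = 0.0, float("-inf")
    for r, (lam, d) in enumerate(jobs, start=1):
        t += 0.5 * lam * r ** (-a)    # expected actual processing time at position r
        lmax = max(lmax, t - d)       # expected lateness of this job
    return lmax

edd = [(1.0, 1.0), (2.0, 2.0)]        # non-decreasing due dates (EDD order)
a = 0.5
print(expected_lmax(edd, a), expected_lmax(edd[::-1], a))  # EDD gives the smaller value
```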

    Theorem 2.4. For $1\mid p_j^r=p_jr^{-a},\ p_j\sim U(0,\lambda_j)\mid E(\sum T_j)$, when $\lambda_i\le\lambda_j\Leftrightarrow d_i\le d_j$, the non-decreasing order of $d_j$ is the optimal algorithm.

    Proof. First we assume $d_i\le d_j$. The setup is the same as in Theorem 2.3: in $S$, $J_j$ precedes $J_i$, violating the EDD rule, and $S'$ exchanges the two jobs. We show that the exchange does not increase the objective value; repeating the exchange of adjacent jobs then yields the optimal sequence.

    We discuss it in two cases:

    Case 1. $E[C_j(S)]\le d_j$, in which case $E[T_j(S)]=0$. If $E[T_i(S')]$ or $E[T_j(S')]$ is zero, the conclusion is obviously true, so assume both are positive.

    From Theorem 2.3 and $d_i\le d_j$, we have

    $$\begin{aligned}&\left(E[T_i(S)]+E[T_j(S)]\right)-\left(E[T_i(S')]+E[T_j(S')]\right)\\ &=\max\{E[C_i(S)]-d_i,0\}-\left(E[C_i(S')]-d_i\right)-\left(E[C_j(S')]-d_j\right)\\ &\ge E[C_i(S)]-E[C_i(S')]-\left(E[C_j(S')]-d_j\right)\ge 0, \end{aligned}$$ (2.19)

    where the last inequality holds because $E[C_j(S)]\le d_j$ and $\lambda_i\le\lambda_j$ give $E[C_j(S')]-d_j\le E[C_j(S')]-E[C_j(S)]\le E[C_i(S)]-E[C_i(S')]$.

    Case 2. $E[C_j(S)]>d_j$. Then $E[C_i(S)]\ge E[C_j(S)]>d_j\ge d_i$, so both jobs of $S$ are tardy in expectation. Noticing $d_i\le d_j$ and $\lambda_i\le\lambda_j$, we have

    $\left(E[T_i(S)]+E[T_j(S)]\right)-\left(E[T_i(S')]+E[T_j(S')]\right)\ge E[C_i(S)]+E[C_j(S)]-E[C_i(S')]-E[C_j(S')]=(\lambda_j-\lambda_i)\left[r^{-a}-\tfrac{1}{2}(r+1)^{-a}\right]\ge 0.$ (2.20)

    From (2.19) and (2.20), we get $E[T_i(S)]+E[T_j(S)]\ge E[T_i(S')]+E[T_j(S')]$.

    In the following discussion, we assume that $d_j=d$, $j=1,2,\ldots,n$.

    Theorem 2.5. For $1\mid p_j^r=p_jr^{-a},\ p_j\sim U(0,\lambda_j),\ d_j=d\mid E(\sum\omega_jU_j)$, suppose the two remaining jobs $J_1,J_2$ are processed at positions $r$ and $r+1$ starting at time $t$. There are the following results:

    (1) When $d-t>\min(\lambda_1,\lambda_2)$, processing the jobs in non-decreasing order of $\lambda_j$ gives the smaller expected weighted number of tardy jobs;

    (2) When $d-t<\lambda_1$, $d-t<\lambda_2$, $\omega_1\ge\omega_2$ and $d-t\le\frac{2(r+1)^{-a}}{\omega_1-\omega_2}\left[(\lambda_2-\lambda_1)(\omega_1+\omega_2)+\omega_1\lambda_1-\omega_2\lambda_2\right]$, the minimum expected value of the objective function is obtained by processing the jobs in non-increasing order of the weights.

    Proof. (1) Suppose the machine becomes idle at time $t$ with two jobs $J_1,J_2$ remaining, with processing times $p_1,p_2$ and weights $\omega_1,\omega_2$, to be processed at positions $r$ and $r+1$. When $d-t>\min(\lambda_1,\lambda_2)$, the job with the smaller $\lambda_j$ is certain to finish by the due date if processed first, since its processing time never exceeds $\lambda_j<d-t$; processing it first therefore guarantees the smaller objective value.

    (2) In the following we assume $\omega_1\ge\omega_2$, $d-t<\lambda_1$ and $d-t<\lambda_2$.

    Let $E[\omega U(1,2)]$ denote the expected weighted number of tardy jobs when $J_1$ is processed first and then $J_2$; then

    $$\begin{aligned}E[\omega U(1,2)]&=(\omega_1+\omega_2)P\!\left(p_1r^{-a}>d-t\right)+\omega_2P\!\left(p_1r^{-a}<d-t,\ p_1r^{-a}+p_2(r+1)^{-a}>d-t\right)\\ &=(\omega_1+\omega_2)\int_{(d-t)r^{a}}^{\lambda_1}\frac{dx}{\lambda_1}+\omega_2\int_{0}^{(d-t)r^{a}}\frac{dx}{\lambda_1}\int_{[(d-t)-xr^{-a}](r+1)^{a}}^{\lambda_2}\frac{dy}{\lambda_2}\\ &=(\omega_1+\omega_2)\left[1-\frac{(d-t)r^{a}}{\lambda_1}\right]+\omega_2\frac{r^{a}}{\lambda_1\lambda_2}\left[\lambda_2(d-t)-\frac{1}{2}(r+1)^{a}(d-t)^2\right]. \end{aligned}$$ (2.21)

    The meaning of $E[\omega U(2,1)]$ is similar to that of $E[\omega U(1,2)]$; then

    $$\begin{aligned}E[\omega U(2,1)]&=(\omega_1+\omega_2)P\!\left(p_2r^{-a}>d-t\right)+\omega_1P\!\left(p_2r^{-a}<d-t,\ p_2r^{-a}+p_1(r+1)^{-a}>d-t\right)\\ &=(\omega_1+\omega_2)\left[1-\frac{(d-t)r^{a}}{\lambda_2}\right]+\omega_1\frac{r^{a}}{\lambda_1\lambda_2}\left[\lambda_1(d-t)-\frac{1}{2}(r+1)^{a}(d-t)^2\right]. \end{aligned}$$ (2.22)

    According to (2.21) and (2.22), we obtain

    $$E[\omega U(1,2)]-E[\omega U(2,1)]=\frac{r^{a}(r+1)^{a}(\omega_1-\omega_2)(d-t)}{2\lambda_1\lambda_2}\left[(d-t)-\frac{2(r+1)^{-a}}{\omega_1-\omega_2}\left((\lambda_2-\lambda_1)(\omega_1+\omega_2)+\omega_1\lambda_1-\omega_2\lambda_2\right)\right].$$

    When condition (2) of the theorem is satisfied, the bracket is non-positive while the factor in front of it is non-negative, so $E[\omega U(1,2)]\le E[\omega U(2,1)]$. The proof of Theorem 2.5 is completed.
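    Formula (2.21) can be sanity-checked by simulation (a sketch; the data $\omega=(2,1)$, $\lambda=(2,2)$, $d-t=0.5$, $r=2$, $a=1$ are arbitrary and chosen so that the integration limits in (2.21) stay inside the supports of $p_1,p_2$):

```python
import random

def closed_form(w1, w2, lam1, lam2, slack, r, a):
    """E[wU(1,2)] from (2.21): J1 at position r, J2 at position r+1, slack = d - t."""
    ra, r1a = float(r) ** a, float(r + 1) ** a
    first = (w1 + w2) * (1 - slack * ra / lam1)
    second = w2 * ra / (lam1 * lam2) * (lam2 * slack - 0.5 * r1a * slack ** 2)
    return first + second

def simulate(w1, w2, lam1, lam2, slack, r, a, n=200_000, seed=7):
    """Monte Carlo estimate of the same expected weighted number of tardy jobs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        p1 = rng.uniform(0.0, lam1) * r ** (-a)        # actual time of J1 at position r
        p2 = rng.uniform(0.0, lam2) * (r + 1) ** (-a)  # actual time of J2 at position r+1
        if p1 > slack:              # J1 tardy, so J2 is tardy as well
            total += w1 + w2
        elif p1 + p2 > slack:       # only J2 tardy
            total += w2
    return total / n

args = (2.0, 1.0, 2.0, 2.0, 0.5, 2, 1.0)
print(closed_form(*args), simulate(*args))  # both values are close to 1.8125
```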

    Now we discuss the case of group scheduling. All jobs in the model are available for processing at time zero, and within a group the jobs are processed continuously. The machine requires a setup time before entering the next group, and the setups are subject to the classical learning effect hypothesis. The processing time is random, with a learning effect based on the position within the group, and the learning rate follows a uniform distribution.

    Our model is expressed as

    $1\mid p_{ij}^k=p_{ij}k^{-a_{ij}},\ a_{ij}\sim U(0,\lambda_{ij}),\ G,\ s_i^r=s_ir^{a}\mid E[f(C_j)].$

    The assumptions and symbols of the model (see Table 2) are as follows:

    Table 2.  Symbols of the model.
    Symbol Description
    $G_i$ The $i$-th group
    $n_i$ Number of jobs in group $G_i$
    $n$ Total number of jobs, $n_1+n_2+\cdots+n_m=n$
    $J_{ij}$ The $j$-th job in group $G_i$, $i=1,2,\ldots,m$; $j=1,2,\ldots,n_i$
    $a$ Learning rate for group setup, $a<0$
    $s_i$ Normal setup time of group $G_i$, $i=1,2,\ldots,m$
    $s_i^r$ Actual setup time of group $G_i$ at group position $r$, $s_i^r=s_ir^{a}$
    $a_{ij}$ Learning rate of job $J_{ij}$ in group $G_i$, $a_{ij}>0$, $a_{ij}\sim U(0,\lambda_{ij})$
    $p_{ij}$ Normal processing time of job $J_{ij}$
    $p_{ij}^k$ Random processing time of job $J_{ij}$ at position $k$ of group $G_i$, $p_{ij}^k=p_{ij}k^{-a_{ij}}$
    $Q,Q'$ Group sequences
    $\sigma_1,\sigma_2$ Partial job sequences
    $E[C_{jn_j}(Q)]$ Expected completion time of the last job of group $G_j$ in sequence $Q$
    $E[C_{in_i}(Q')]$ Expected completion time of the last job of group $G_i$ in sequence $Q'$
    $E(T_0)$ Expected completion time of the last job of $\sigma_1$


    Theorem 3.1. For $1\mid p_{ij}^k=p_{ij}k^{-a_{ij}},\ a_{ij}\sim U(0,\lambda_{ij}),\ G,\ s_i^r=s_ir^{a}\mid E(C_{\max})$, when the jobs in each group satisfy $p_{ik}\le p_{il}\Leftrightarrow\lambda_{ik}\le\lambda_{il}$ for all $J_{ik},J_{il}$, sequencing the jobs within each group in non-decreasing order of $\lambda_{ik}$ and the groups in non-decreasing order of $s_i$ gives the optimal solution.

    Proof. Suppose $s_i\le s_j$ and $a<0$. Let $Q=(\sigma_1,G_i,G_j,\sigma_2)$ and $Q'=(\sigma_1,G_j,G_i,\sigma_2)$, where the two groups occupy group positions $r$ and $r+1$. We can get

    $$\begin{aligned}&E[C_{jn_j}(Q)]-E[C_{in_i}(Q')]\\ &=E(T_0)+s_ir^{a}+E\!\left(\sum_{k=1}^{n_i}p_{ik}k^{-a_{ik}}\right)+s_j(r+1)^{a}+E\!\left(\sum_{k=1}^{n_j}p_{jk}k^{-a_{jk}}\right)\\ &\quad-E(T_0)-s_jr^{a}-E\!\left(\sum_{k=1}^{n_j}p_{jk}k^{-a_{jk}}\right)-s_i(r+1)^{a}-E\!\left(\sum_{k=1}^{n_i}p_{ik}k^{-a_{ik}}\right)\\ &=(s_i-s_j)\left[r^{a}-(r+1)^{a}\right]\le 0. \end{aligned}$$

    Repeating this exchange yields the sequencing rule for the groups.

    The scheduling of the jobs within a group reduces to the problem

    $1\mid p_{ij}^k=p_{ij}k^{-a_{ij}},\ a_{ij}\sim U(0,\lambda_{ij})\mid E(C_{\max}),$

    which has already been solved in Theorem 2.1. This completes the proof of Theorem 3.1.

    Theorem 3.2. For $1\mid p_{ij}^k=p_{ij}k^{-a_{ij}},\ a_{ij}\sim U(0,\lambda_{ij}),\ G,\ s_i^r=s_ir^{a}\mid E(\sum C_j)$, if $s_i/n_i\le s_j/n_j$, the optimal schedule is obtained by sequencing the jobs within each group in non-decreasing order of $\lambda_{ij}$ and the groups in non-decreasing order of the index $\frac{1}{n_i}\sum_{k=1}^{n_i}E[p_{ik}^k]$, where $E[p_{ik}^k]=p_{ik}\frac{1}{\ln k^{\lambda_{ik}}}\left(1-\frac{1}{k^{\lambda_{ik}}}\right)$ for $k\ge 2$ and $E[p_{i1}^1]=p_{i1}$.

    Proof. The assumptions are the same as in Theorem 3.1, with in addition $s_i/n_i\le s_j/n_j$ and $\frac{1}{n_i}\sum_{k=1}^{n_i}E[p_{ik}^k]\le\frac{1}{n_j}\sum_{k=1}^{n_j}E[p_{jk}^k]$. Then

    $$\begin{aligned}&\left\{E\!\left[\sum_{l=1}^{n_i}C_{il}(Q)\right]+E\!\left[\sum_{l=1}^{n_j}C_{jl}(Q)\right]\right\}-\left\{E\!\left[\sum_{l=1}^{n_i}C_{il}(Q')\right]+E\!\left[\sum_{l=1}^{n_j}C_{jl}(Q')\right]\right\}\\ &=(n_i+n_j)(s_i-s_j)\left[r^{a}-(r+1)^{a}\right]+(n_js_i-n_is_j)(r+1)^{a}+\left(n_j\sum_{k=1}^{n_i}E[p_{ik}^k]-n_i\sum_{k=1}^{n_j}E[p_{jk}^k]\right)\le 0. \end{aligned}$$

    Repeating the previous exchange argument completes the proof of the group sequencing rule.

    Second, the scheduling of the jobs within a group reduces to the problem $1\mid p_j^r=p_jr^{-a_{ij}},\ a_{ij}\sim U(0,\lambda_{ij})\mid E(\sum\omega_jC_j)$ with $\omega_j=1$, which has already been solved in Theorem 2.2.

    Therefore, the proof is complete.

    The following are heuristic algorithms and examples for Theorems 3.1 and 3.2.

    Algorithm 1

    Step 1: When $p_{ik}\le p_{il}\Leftrightarrow\lambda_{ik}\le\lambda_{il}$, sequence the jobs within each group in non-decreasing order of $\lambda_{ij}$;

    Step 2: Sequence the groups in non-decreasing order of $s_i$.

    Sorting group $G_i$ takes $O(n_i\log n_i)$ time, so the total time complexity of Step 1 is $\sum_{i=1}^{m}O(n_i\log n_i)$; the time complexity of Step 2 is $O(m\log m)$. The overall time complexity of Algorithm 1 is $O(n\log n)$.

    An example of Algorithm 1:

    Example 1. $m=2$, $G_1=\{J_{11},J_{12},J_{13}\}$, $G_2=\{J_{21},J_{22}\}$, $s_1=2$, $s_2=3$, $a=-2$, $a_{11}\sim U(0,2)$, $a_{12}\sim U(0,4)$, $a_{13}\sim U(0,3)$, $a_{21}\sim U(0,2)$, $a_{22}\sim U(0,4)$, $p_{11}=1$, $p_{12}=3$, $p_{13}=2$, $p_{21}=4$, $p_{22}=5$.

    Solution. Step 1: In $G_1$, because $\lambda_{11}=2<\lambda_{13}=3<\lambda_{12}=4$ and $p_{11}=1<p_{13}=2<p_{12}=3$, the consistency assumption is satisfied and the order is $J_{11}\to J_{13}\to J_{12}$; likewise $\lambda_{21}=2<\lambda_{22}=4$ and $p_{21}=4<p_{22}=5$, so the order is $J_{21}\to J_{22}$;

    Step 2: Because $s_1=2<s_2=3$, the group order is $G_1\to G_2$.

    The solution of this example is $E(C_{\max})=13.07$.
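    The expected makespan of a given group schedule can be evaluated directly from the closed-form expectations (a sketch; it adopts the setup convention $s_ir^{a}$ with $a<0$ from Table 2, and the value it returns depends on that convention, so it is meant for comparing schedules rather than reproducing the figure reported above; the data are those of Example 1):

```python
import math

def exp_factor(k, lam):
    """E[k**(-a)] for a ~ U(0, lam); the position-1 job carries no learning effect."""
    return 1.0 if k == 1 else (1 - k ** (-lam)) / (lam * math.log(k))

def expected_group_makespan(groups, setups, a):
    """groups: list of groups in processing order, each a list of (p, lam) in
    intra-group order; setups: matching list of s_i; the setup of the group at
    group position r is s_i * r**a (a < 0, Table 2).  Returns E[Cmax]."""
    t = 0.0
    for r, (group, s) in enumerate(zip(groups, setups), start=1):
        t += s * r ** a                       # expected group setup time
        for k, (p, lam) in enumerate(group, start=1):
            t += p * exp_factor(k, lam)       # expected job time at intra-group position k
    return t

# Data of Example 1 (a = -2); non-decreasing-lambda intra-group order vs reversed
g1 = [(1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]
g2 = [(4.0, 2.0), (5.0, 4.0)]
asc_val = expected_group_makespan([g1, g2], [2.0, 3.0], -2.0)
desc_val = expected_group_makespan([g1[::-1], g2[::-1]], [2.0, 3.0], -2.0)
print(asc_val, desc_val)  # the non-decreasing-lambda order gives the smaller value here
```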

    Algorithm 2

    Step 1: Sequence the jobs within each group in non-decreasing order of $\lambda_{ij}$;

    Step 2: When $s_i/n_i\le s_j/n_j$, sequence the groups in non-decreasing order of $\frac{1}{n_i}\sum_{k=1}^{n_i}E[p_{ik}^k]$.

    The complexity analysis of Algorithm 2 is the same as that of Algorithm 1.

    We give an example of Algorithm 2.

    Example 2. Same as Example 1.

    Solution. Step 1: Comparing the values of $\lambda_{ij}$, we get the intra-group orders $J_{11}\to J_{13}\to J_{12}$ and $J_{21}\to J_{22}$;

    Step 2: Because $s_1/n_1=2/3<s_2/n_2=3/2$ and $\frac{1}{n_1}\sum_{k}E[p_{1k}^k]=0.866<\frac{1}{n_2}\sum_{k}E[p_{2k}^k]=2.844$, the group order is $G_1\to G_2$.

    After calculation, we have $E(\sum C_j)=50.15$.

    For the single machine stochastic scheduling and group stochastic scheduling problems established in this paper, we treat the learning rate as a random variable, a setting not found in the previous literature. Such problems are quite general, and examples can be found in real life: for instance, when workers receive training for a new project, the time they need to master the new technology may be random and can be represented by a random variable. After transforming the problem, we give the corresponding theoretical assumptions for the proposed problems and then derive and prove the optimal orders. For the case of group technology, we give numerical examples to verify the theoretical results.

    In future work, the model can be combined with intelligent algorithms to handle large numbers of jobs, machine maintenance, waiting times for job processing, and multi-stage processing of jobs.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was funded by the Shanghai Philosophy and Social Sciences General Project (2022BGL010); the Post-funded Project of the National Social Science Fund of China (22FGLB109); the Laboratory of Philosophy and Social Sciences for Data Intelligence and Rural Revitalization of Dabie Mountains, Lu'an, Anhui 237012, China; and the Key Projects of Humanities and Social Sciences in Colleges and Universities of Anhui Province (SK2018A0397).

    The authors declare no conflicts of interest.



    [1] D. Biskup, Single-machines scheduling with learning considerations, Eur. J. Oper. Res., 115 (1999), 173–178. https://doi.org/10.1016/S0377-2217(98)00246-X doi: 10.1016/S0377-2217(98)00246-X
    [2] G. Moshiov, Scheduling problems with a learning effect, Eur. J. Oper. Res., 132 (2001), 687–693. https://doi.org/10.1016/S0377-2217(00)00175-2 doi: 10.1016/S0377-2217(00)00175-2
    [3] G. Moshiov, Parallel machine scheduling with a learning effect, J. Oper. Res. Soc., 52 (2001), 1165–1169. https://doi.org/10.1057/palgrave.jors.2601215 doi: 10.1057/palgrave.jors.2601215
    [4] G. Mosheiov, J. B. Sidney, Scheduling with general job-dependent learning curves, Eur. J. Oper. Res., 147 (2003), 665–670. https://doi.org/10.1016/S0377-2217(02)00358-2 doi: 10.1016/S0377-2217(02)00358-2
    [5] G. Mosheiov, J. B. Sidney, Note on scheduling with general learning curves to minimize the number of tardy jobs, J. Oper. Res. Soc., 56 (2005), 110–112. https://doi.org/10.1057/palgrave.jors.2601809 doi: 10.1057/palgrave.jors.2601809
    [6] A. Bachman, A. Janiak, Scheduling jobs with position-dependent processing times, J. Oper. Res. Soc., 55 (2004), 257–264. https://doi.org/10.1057/palgrave.jors.2601689 doi: 10.1057/palgrave.jors.2601689
    [7] W. C. Lee, C. C. Wu, H. J. Sung, A bi-criterion single-machine scheduling problem with learning considerations, Acta Inform., 40 (2004), 303–315. https://doi.org/10.1007/s00236-003-0132-9 doi: 10.1007/s00236-003-0132-9
    [8] W. C. Lee, C. C. Wu, Minimizing total completion time in a two-machine flowshop with a learning effect, Int. J. Prod. Econ., 88 (2004), 85–93. https://doi.org/10.1016/S0925-5273(03)00179-8 doi: 10.1016/S0925-5273(03)00179-8
    [9] A. Janiak, R. Rudek, Complexity results for single-machine scheduling with positional learning effects, J. Oper. Res. Soc., 59 (2008), 1430. https://doi.org/10.1057/palgrave.jors.2602622 doi: 10.1057/palgrave.jors.2602622
    [10] C. L. Zhao, Q. L. Zhang, H. Y. Tang, Machine scheduling problems with a learning effect, Dyn. Contin. Discrete Impuls. Syst. Ser. A Math. Anal., 11 (2004), 741–750.
    [11] M. B. Cheng, S. J. Sun, Y. Yu, A note on flow shop scheduling problems with a learning effect on no-idle dominant machines, Appl. Math. Comput., 184 (2007), 945–949. https://doi.org/10.1016/j.amc.2006.05.206 doi: 10.1016/j.amc.2006.05.206
    [12] T. Eren, E. Güner, Minimizing total tardiness in a scheduling problem with a learning effect, Appl. Math. Model., 31 (2007), 1351–1361. https://doi.org/10.1016/j.apm.2006.03.030 doi: 10.1016/j.apm.2006.03.030
    [13] X. G. Zhang, G. L. Yan, Single-machine scheduling problems with a sum-of-processing-time-based learning function, Int. J. Comb., 2009 (2009), 1–8. https://doi.org/10.1155/2009/624108 doi: 10.1155/2009/624108
    [14] X. G. Zhang, G. L. Yan, W. Z. Huang, G. C. Tang, A note on machine scheduling with sum-of-logarithm-processing-time-based and position-based learning effects, Inform. Sci., 187 (2012), 298–304. https://doi.org/10.1016/j.ins.2011.11.001 doi: 10.1016/j.ins.2011.11.001
    [15] J. B. Liu, X. F. Pan, Minimizing Kirchhoff index among graphs with a given vertex bipartiteness, Appl. Math. Comput., 291 (2016), 84–88. https://doi.org/10.1016/j.amc.2016.06.017 doi: 10.1016/j.amc.2016.06.017
    [16] J. B. Liu, Y. Bao, W. T. Zheng, Analyses of some structural properties on a class of hierarchical scale-free networks, Fractals, 30 (2022), 2250136. https://doi.org/10.1142/S0218348X22501365 doi: 10.1142/S0218348X22501365
    [17] M. L. Pinedo, Scheduling: theory, algorithms and systems, Cham: Springer, 2016. https://doi.org/10.1007/978-3-319-26580-3
    [18] M. Pinedo, E. Rammouz, A note on stochastic machine scheduling subject to breakdown and repair, Probab. Eng. Inform. Sci., 2 (1988), 41–49. https://doi.org/10.1017/S0269964800000619 doi: 10.1017/S0269964800000619
    [19] J. B. G. Frenk, A general framework for stochastic one-machine scheduling problems with zero release times and no partial ordering, Probab. Eng. Inform. Sci., 5 (1991), 297–315. https://doi.org/10.1017/S0269964800002102 doi: 10.1017/S0269964800002102
    [20] Y. B. Zhang, X. Y. Wu, X. Zhou, Stochastic scheduling problems with general position-based learning effects and stochastic breakdowns, J. Sched., 16 (2013), 331–336. https://doi.org/10.1007/s10951-012-0306-9 doi: 10.1007/s10951-012-0306-9
    [21] M. Ji, X. Y. Tang, X. Zhang, T. C. E. Cheng, Machine scheduling with deteriorating jobs and DeJong's learning effect, Comput. Indust. Eng., 91 (2016), 42–47. https://doi.org/10.1016/j.cie.2015.10.015 doi: 10.1016/j.cie.2015.10.015
    [22] S. J. Qin, S. X. Liu, H. B. Kuang, Piecewise linear model for multiskilled workforce scheduling problems considering learning effect and project quality, Math. Probl. Eng., 2016 (2016), 1–11. https://doi.org/10.1155/2016/3728934 doi: 10.1155/2016/3728934
    [23] X. Zhang, Y. Wang, S. K. Bai, Single-machine group scheduling problems with deteriorating and learning effect, Int. J. Syst. Sci., 47 (2016), 2402–2410. https://doi.org/10.1080/00207721.2014.998739 doi: 10.1080/00207721.2014.998739
    [24] J. Y. Xu, C. C. Wu, Y. Q. Yin, C. L. Zhao, Y. T. Chiou, W. C. Lin, An order scheduling problem with position-based learning effect, Int. J. Syst. Sci., 74 (2016), 175–186. https://doi.org/10.1016/j.cor.2016.04.021 doi: 10.1016/j.cor.2016.04.021
    [25] Y. B. Wu, J. J. Wang, Single-machine scheduling with truncated sum-of-processing-times-based learning effect including proportional delivery times, Neural Comput. Appl., 27 (2016), 937–943. https://doi.org/10.1007/s00521-015-1910-3 doi: 10.1007/s00521-015-1910-3
    [26] J. L. Vile, J. W. Gillard, P. R. Harper, V. A. Knight, Time-dependent stochastic methods for managing and scheduling emergency medical services, Oper. Res. Health Care, 8 (2016), 42–52. https://doi.org/10.1016/j.orhc.2015.07.002 doi: 10.1016/j.orhc.2015.07.002
    [27] O. Souissi, R. Benmansour, A. Artiba, An accelerated MIP model for the single machine scheduling with preventive maintenance, IFAC-PapersOnLine, 49 (2016), 1945–1949. https://doi.org/10.1016/j.ifacol.2016.07.915 doi: 10.1016/j.ifacol.2016.07.915
    [28] J. B. Liu, J. Zhao, J. Min, J. D. Cao, The Hosoya index of graphs formed by a fractal graph, Fractals, 27 (2019), 1950135. https://doi.org/10.1142/S0218348X19501354 doi: 10.1142/S0218348X19501354
    [29] J. B. Liu, C. X. Wang, S. H. Wang, B. Wei, Zagreb indices and multiplicative Zagreb indices of Eulerian graphs, Bull. Malays. Math. Sci. Soc., 42 (2019), 67–78. https://doi.org/10.1007/s40840-017-0463-2 doi: 10.1007/s40840-017-0463-2
    [30] H. T. Li, Stochastic single-machine scheduling with learning effect, IEEE Trans. Eng. Manag., 64 (2017), 94–102. https://doi.org/10.1109/TEM.2016.2618764 doi: 10.1109/TEM.2016.2618764
    [31] M. D. Toksari, B. Atalay, Some scheduling problems with job rejection and a learning effect, Comput. J., 66 (2023), 866–872. https://doi.org/10.1093/comjnl/bxab201 doi: 10.1093/comjnl/bxab201
    [32] J. C. Chen, Y. Y. Chen, T. L. Chen, Y. H. Lin, Multi-project scheduling with multi-skilled workforce assignment considering uncertainty and learning effect for large-scale equipment manufacturer, Comput. Indust. Eng., 169 (2022), 108240. https://doi.org/10.1016/j.cie.2022.108240 doi: 10.1016/j.cie.2022.108240
    [33] Y. Shi, S. Mahdian, J. Blanchet, P. Glynn, A. Y. Shin, D. Scheinker, Surgical scheduling via optimization and machine learning with long-tailed data, Health Care Manag. Sci., 2022, In press.
    [34] J. B. Wang, X. Jia, J. X. Yan, S. H. Wang, J. Qian, Single machine group scheduling problem with makespan objective and a proportional linear shortening, RAIRO Oper. Res., 56 (2022), 1523–1532. https://doi.org/10.1051/ro/2022078 doi: 10.1051/ro/2022078
    [35] I. Ham, K. Hitomi, T. Yoshida, Group technology: applications to production management, Kluwer-Nijhoff Publishing, 1985. https://doi.org/10.1007/978-94-009-4976-8
    [36] W. C. Lee, C. C. Wu, A note on single-machine group scheduling problems with position-based learning effect, Appl. Math. Model., 33 (2009), 2159–2163. https://doi.org/10.1016/j.apm.2008.05.020 doi: 10.1016/j.apm.2008.05.020
    [37] W. H. Yang, S. Chand, Learning and forgetting effects on a group scheduling problem, Eur. J. Oper. Res., 187 (2008), 1033–1044. https://doi.org/10.1016/j.ejor.2006.03.065 doi: 10.1016/j.ejor.2006.03.065
    [38] X. G. Zhang, G. L. Yan, Single-machine group scheduling problems with deteriorated and learning effect, Appl. Math. Comput., 216 (2010), 1259–1266. https://doi.org/10.1016/j.amc.2010.02.018 doi: 10.1016/j.amc.2010.02.018
    [39] W. M. Ma, L. Sun, L. Ning, N. N. Lin, Group scheduling with deterioration and exponential learning effect processing times, Syst. Eng. Theory Pract., 37 (2017), 205–211.
    [40] L. Sun, L. Ning, J. Z. Huo, Group scheduling problems with time-dependent and position-dependent Dejong's learning effect, Math. Probl. Eng., 2020 (2020), 1–8. https://doi.org/10.1155/2020/5161872 doi: 10.1155/2020/5161872
    [41] J. B. Liu, Y. Bao, W. T. Zheng, S. Hayat, Network coherence analysis on a family of nested weighted n-polygon networks, Fractals, 29 (2021), 2150260. https://doi.org/10.1142/S0218348X21502601 doi: 10.1142/S0218348X21502601
    [42] W. X. Li, C. L. Zhao, Single machine scheduling problem with multiple due windows assignment in a group technology, J. Appl. Math. Comput., 48 (2015), 477–494. https://doi.org/10.1007/s12190-014-0814-1 doi: 10.1007/s12190-014-0814-1
    [43] X. X. Liang, M. Q. Liu, Y. B. Feng, J. B. Wang, L. S. Wen, Solution algorithms for single-machine resource allocation scheduling with deteriorating jobs and group technology, Eng. Optim., 52 (2020), 1184–1197. https://doi.org/10.1080/0305215X.2019.1638920 doi: 10.1080/0305215X.2019.1638920
    [44] L. Y. Wang, M. Q. Liu, J. B. Wang, Y. Y. Lu, W. W. Liu, Optimization for due-date assignment single-machine scheduling under group technology, Complexity, 2021 (2021), 1–9. https://doi.org/10.1155/2021/6656261 doi: 10.1155/2021/6656261
    [45] D. Y. Wang, C. M. Ye, Group scheduling with learning effect and random processing time, J. Math., 2021 (2021), 1–6. https://doi.org/10.1155/2021/6685149 doi: 10.1155/2021/6685149
    [46] A. Jain, A. Jain, An approach for optimisation of flexible flow shop group scheduling with sequence dependent set-up time and preventive maintenance, Int. J. Comput. Aided Eng. Technol., 16 (2022), 40–66. https://doi.org/10.1504/IJCAET.2022.119537 doi: 10.1504/IJCAET.2022.119537
© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)