

European Journal of Operational Research (journal homepage: www.elsevier.com/locate/ejor)

Discrete Optimization

List scheduling and beam search methods for the flexible job shop scheduling problem with sequencing flexibility

E. G. Birgin a,∗, J. E. Ferreira b, D. P. Ronconi b

a Department of Computer Science, Institute of Mathematics and Statistics, University of São Paulo, Rua do Matão, 1010, Cidade Universitária, 05508-090, São Paulo, SP, Brazil
b Department of Production Engineering, Polytechnic School, University of São Paulo, Av. Prof. Almeida Prado, 128, Cidade Universitária, 05508-900, São Paulo, SP, Brazil

Article info

Article history: Received 8 October 2014; Accepted 10 June 2015; Available online 25 June 2015

Keywords: Scheduling; Flexible job shop; Makespan; List scheduling; Beam search

Abstract

An extended version of the flexible job shop problem is tackled in this work. The considered extension to the classical flexible job shop problem allows the precedences between the operations to be given by an arbitrary directed acyclic graph instead of a linear order. Therefore, the problem consists of allocating the operations to the machines and sequencing them in compliance with the given precedences. The goal in the present work is the minimization of the makespan. A list scheduling algorithm is introduced and its natural extension to a beam search method is proposed. Numerical experiments assess the efficiency of the proposed approaches. © 2015 Elsevier B.V. and Association of European Operational Research Societies (EURO) within the International Federation of Operational Research Societies (IFORS). All rights reserved.

∗ Corresponding author. Tel.: +55 11 3091 6135. E-mail addresses: [email protected] (E.G. Birgin), [email protected] (J.E. Ferreira), [email protected] (D.P. Ronconi).

1. Introduction

The classical job shop (JS) problem consists of scheduling n jobs in an environment with m machines. Each job is composed of several operations with a linear precedence structure and has a predetermined route through the machines. The flexible job shop scheduling (FJS) problem is a generalization of the JS problem in which there may be several machines, not necessarily identical, capable of processing each operation. The processing time of each operation on each machine is known and no preemption is allowed. The objective is to decide on which machine each operation will be processed, and in what order the operations will be processed on each machine, so that a certain criterion is optimized. This paper considers the extended version of the FJS problem that allows the precedences between the operations to be given by an arbitrary directed acyclic graph instead of a linear order. Therefore, the problem consists of allocating the operations to the machines and sequencing them in compliance with all given precedences. An example of a job with this general type of precedences is presented in Fig. 1. This problem appears in practical and industrial environments, such as the printing industry (Zeng, Jackson, Lin, Gustafson, Hoarau, & Mitchell, 2010), where assembling and disassembling operations are part of the production process. Printing processes can be divided into three major tasks: prepress steps, printing, and postpress steps (Printers National Environmental Assistance Center, 2015). Prepress steps include composition and typesetting, graphic arts photography, image assembly, color separation, and image carrier preparation. Printing can be performed by six separate and distinct processes: lithography, letterpress, flexography, gravure, screen printing, and plate-less technologies. Postpress operations consist of four major processes: cutting, folding, assembling, and binding. There are many additional lesser postpress finishing processes such as varnishing, perforating, drilling, etc. In-line finishing may also be considered a final step of the postpress operations. These three major steps of the printing process have an obvious precedence constraint. However, within each major step, there are operations with no precedence constraint among them. Pages of a book are divided into signatures (batches of 8, 16, or 32 individual pages) that can be printed, cut, and folded separately. The book cover is also an element that can be prepared separately. Then, all printed and non-printed elements need to be gathered in order to continue the process. See Fig. 2. This arbitrary-precedence structure of the printing process can be found in many other practical industrial settings, giving the considered problem a potentially wide range of applications. The scheduling performance measure considered in the present work is the minimization of the makespan. The flexibility of representing the precedences between the operations of a job with an arbitrary directed acyclic graph instead of a linear order is known as sequencing flexibility, while routing flexibility refers to the possibility of an operation being performed by a

http://dx.doi.org/10.1016/j.ejor.2015.06.023


E.G. Birgin et al. / European Journal of Operational Research 247 (2015) 421–440

Fig. 1. Example of the precedence constraints of a single job of an instance of the extended version of the FJS problem, in which precedences between operations are given by an arbitrary directed acyclic graph.

subset of machines instead of a single machine (this is the flexibility that transforms a JS problem into an FJS problem). Other types of flexibility exist, like producing the same manufacturing feature with alternative operations or sequences of operations, known as processing flexibility. The effects of sequencing flexibility on the performance of dispatching rules used to schedule operations in manufacturing systems were analyzed in Lin and Solberg (1991) and Rachamadugu, Nandkeolyar, and Schriber (1993) (see also the references therein). In Sabuncuoglu and Karabuk (1998), a flexible manufacturing system with finite buffer capacities that considers automated guided vehicles is tackled. Different performance criteria are considered (mean flow time, mean tardiness, and makespan) and an ad hoc filtered beam search method is developed. The results of the method are analyzed in order to investigate the effects of incorporating different types of flexibility on the performance of the manufacturing system. A more recent study can be found in Joseph and Sridharan (2011). The extended FJS problem considered in the present work is NP-hard, since it has the JS problem (known to be NP-hard; Garey, Johnson, & Sethi, 1976) as a particular case. Due to its complexity, the number of publications concerned with the exact solution of the FJS problem is very small. Fattahi, Mehrabad, and Jolai (2007) proposed a mixed integer linear

programming (MILP) model for the FJS problem and used it to solve small and medium-sized instances with commercial software. A more concise MILP model, which modifies an earlier one presented in Manne (1960) in order to incorporate routing flexibility, was introduced in Özgüven, Özbakır, and Yavuz (2010). More recently, a new MILP model for the extended version of the FJS problem considered in the present work was presented in Birgin, Feofiloff, Fernandes, de Melo, Oshiro, and Ronconi (2014). This model was analyzed using instances from the literature and instances inspired by the printing industry. According to the numerical experiments, the software CPLEX produced better results with the new model than with the one presented in Özgüven et al. (2010). Several works from the literature have proposed heuristic methods to address makespan minimization in the classical FJS problem. Brandimarte (1993), one of the pioneers of this approach, applied dispatching rules to assign each operation of each job to a machine and, in a second phase, employed a tabu search heuristic to define the sequence of the operations on each machine. This kind of strategy is known as the hierarchical approach. Tabu search (TS) based heuristics to solve this problem, but in an integrated way (i.e. considering simultaneously the assignment and scheduling of the operations), were also developed in Dauzère-Pérès and Paulli (1997) and Mastrolilli and Gambardella (2000). Recent literature also includes genetic algorithms (GA) that deal with the FJS problem in an integrated approach. The learnable genetic architecture (LEGA), proposed in Ho, Tai, and Lai (2006), provides an integration between evolution and learning methodologies within a random search framework. In this context, the learning module is used to influence the diversity and quality of offspring. On the other hand, a traditional GA with improved components selected from the literature and a new mutation assignment operator named intelligent mutation was introduced in Pezzella, Morganti, and Ciaschetti (2008). According to the presented computational tests, the proposed GA outperformed other known GAs from the literature and obtained solutions comparable with the ones

Fig. 2. Illustrative scheme including operations with no precedence constraints among them (the different signatures and the cover) in the printing-industry task of producing a book.


obtained by the TS described in Mastrolilli and Gambardella (2000). A hybrid GA that follows the hierarchical approach can be found in Gutiérrez and García-Magariño (2011). First the algorithm uses a GA to find the assignments and crude schedules (feasibility is not guaranteed), and next it applies repairing heuristics. In Gutiérrez and García-Magariño (2011), a quantitative comparison with recent works from the literature, including Mastrolilli and Gambardella (2000) and considering benchmark problems from Brandimarte (1993), was presented to illustrate the performance of the proposed method. Aiming to join the best characteristics of different approaches, Yuan and Xu (2013) proposed an integrated search heuristic composed of a hybrid harmony search (HHS) and a large neighborhood search (LNS). First, the HHS algorithm runs until it reaches a solution from which no significant improvement can be made. Then, the operation-machine assignment information from the elite solutions is extracted. Finally, the LNS method is executed on the reduced space to further improve the best solution obtained by the HHS. The authors presented experiments with four different benchmarks from the literature and concluded that the HHS/LNS integrated search shows a competitive performance with respect to the state-of-the-art methods. Additionally, several works have considered the FJS problem with multiple objectives that include the makespan criterion. See, for example, Kacem, Hammadi, and Borne (2002), Li, Pan, and Liang (2010), Shao, Liu, Liu, and Zhang (2013), Tay and Ho (2008), Zhang, Shao, Li, and Gao (2009), and the references therein. A few works, most of them inspired by practical applications, deal with the extension of the FJS problem in which precedences are given by an arbitrary directed acyclic graph.
An environment coming from a glass factory, which requires an even more general variant of the FJS problem including, for example, no-wait constraints, is described in Alvarez-Valdés, Fuertes, Tamarit, Giménez, and Ramos (2005). The authors proposed heuristic methods based on priority rules and a local search to minimize a criterion based on earliness and tardiness penalties. A class of instances that includes arbitrary precedence relations among operations (with the constraint of having an ending assembling operation in each job) of a problem from the printing and boarding industry is also tackled in Vilcot and Billaut (2008). Considering the makespan and the maximum lateness criteria, TS and GA are applied with the aim of building an approximation of the Pareto frontier. A symbiotic evolutionary algorithm that considers routing, sequencing, and processing flexibility in a job shop scheduling problem was introduced in Kim, Park, and Ko (2003). Another extended version of the FJS problem, inspired by real manufacturing environments, in which precedence constraints are arbitrary and can be of AND/OR type, is studied in Lee, Moon, Bae, and Kim (2012). A real scheduling problem in a mould manufacturing shop with several kinds of flexibility is described in Gan and Lee (2002). The considered problem possesses process planning flexibility, which includes sequencing flexibility. In the proposed method, process planning and scheduling are tackled in an integrated way. Due to the limited number of works that approach the extended version of the FJS problem and its practical applicability, the purpose of this paper is to contribute to the development of heuristic techniques able to produce reasonable results in acceptable time. First, a list scheduling algorithm is proposed, motivated by its simplicity and applicability to job scheduling in production environments (see, for example, Kim, 1995; Lee, Kim, & Choi, 2004; Mainieri & Ronconi, 2013).
The natural extension of the list scheduling algorithm to a filtered beam search method is also investigated. The filtered beam search is a technique for searching decision trees that involves systematically developing a small number of solutions in parallel so as to attempt to maximize the probability of finding a good solution with minimal search effort (Ow & Morton, 1988). The evaluation process that determines which partial solutions are the promising ones is a crucial component of this method (Pinedo, 2008). The main idea of the presented filtered beam search method is to apply


a customized version of the list scheduling algorithm to locally and globally evaluate partial solutions in an effective way. After the pioneering work of Ow and Morton (1988), other authors have addressed scheduling environments with this approach. Sabuncuoglu and Bayiz (1999) tackled the JS problem minimizing the makespan and conducted several numerical experiments to analyze the influence of some of the filtered beam search parameters on its performance. Moreover, a comparison considering metaheuristic algorithms and dispatching rules was also conducted to expose the competitive performance of the proposed method. More recently, a filtered beam search approach for the flexible job shop minimizing the weighted sum of three objectives (the makespan, the total workload of the machines, and the workload of the most loaded machine) was introduced in Shi-Jin, Bing-Hai, and Li-Feng (2008). Considering the component-based view of metaheuristics suggested in Sörensen (2015), it can be said that the beam search method is a matheuristic that combines the search-tree strategy of branch-and-bound methods with a constructive heuristic used for potentially pruning apparently fruitless nodes of the search tree. Among the highlights of the introduced methods are that they both have finite termination and that they depend on a very small set of parameters whose meanings are simple to interpret. In fact, the list scheduling algorithm has no parameters, while the beam search method has only three parameters. Besides the promising numerical results, additional main features of the heuristic methods presented in this work are (a) their precise description (all possible ties are decided with explicitly given rules) and (b) the availability of their C/C++ implementation. Moreover, together with a complete description of the obtained solutions, these facts allow full reproducibility of the presented results and open the possibility of using them as a benchmark for future developments.

The remainder of this work is organized as follows. Section 2 describes the tackled problem by presenting its mixed-integer linear programming formulation. Section 3 presents a list scheduling algorithm, while Section 4 introduces the proposed beam search method. Section 5 is devoted to numerical experiments. Section 6 presents some final remarks and lines for future research.

2. Extended flexible job shop scheduling problem

A precise description of the considered problem can be given by the MILP formulation introduced in Birgin et al. (2014), which we reproduce in this section for completeness and to introduce the notation that will be used in the present work. Let n, o, and m be the number of jobs, operations, and machines, respectively. The number of jobs will not play any explicit role in the problem formulation. For each operation i (i = 1, . . . , o), let Fi ⊆ {1, 2, . . . , m} (Fi ≠ ∅) be the subset of machines that can process operation i and let pik (i = 1, . . . , o, k ∈ Fi) be the corresponding processing times. Moreover, let A be a set of pairs (i, j) with i, j ∈ {1, . . . , o} such that, if (i, j) belongs to A, then operation i precedes operation j, i.e. operation j cannot start being processed until operation i has been completed. This set of precedences A is where jobs are implicitly defined, since precedences are expected to exist between operations of the same job but not between operations of different jobs. Moreover, it is assumed that if the elements of A represent the arcs of a directed graph with vertices labeled from 1 to o, then this graph is a directed acyclic graph, i.e. there are no cyclic precedences and, in consequence, the problem is well defined. The problem consists of assigning each operation i to a machine k ∈ Fi and determining a starting processing time si such that the precedences are satisfied. Of course, a machine cannot process more than one operation at a time and preemption is not allowed. The objective is to minimize the makespan, i.e. the completion time of the last operation (or, equivalently, of the last job).
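To make the problem data concrete, the following sketch (a hypothetical encoding, not taken from the paper) represents a tiny instance with sequencing flexibility and computes its optimal makespan by brute force: it enumerates machine assignments and precedence-compatible operation orders, building for each the corresponding semi-active schedule (every operation starts as early as its predecessors and its machine allow). All names and data are illustrative.

```python
from itertools import permutations, product

# Hypothetical tiny instance: o = 4 operations, m = 2 machines.
# Operation 1 precedes 2 and 3 (unordered between them); both precede 4,
# so the precedences form a DAG that is not a chain (sequencing flexibility).
o, m = 4, 2
F = {1: [1, 2], 2: [1], 3: [2], 4: [1, 2]}           # capable machines F_i
p = {(1, 1): 3, (1, 2): 2, (2, 1): 4, (3, 2): 1,
     (4, 1): 2, (4, 2): 5}                            # processing times p_ik
A = {(1, 2), (1, 3), (2, 4), (3, 4)}                  # precedence arcs

def makespan(order, assign):
    """Semi-active schedule for a given operation order and machine
    assignment: s_i = max(machine available, predecessors completed)."""
    s, C, free = {}, {}, {k: 0 for k in range(1, m + 1)}
    for i in order:
        k = assign[i]
        s[i] = max([free[k]] + [C[j] for (j, j2) in A if j2 == i])
        C[i] = s[i] + p[(i, k)]
        free[k] = C[i]
    return max(C.values())

# The optimum is attained by some semi-active schedule, so enumerating
# topological orders times machine assignments is exact (for tiny instances).
best = float("inf")
for order in permutations(range(1, o + 1)):
    if any(order.index(i) > order.index(j) for (i, j) in A):
        continue  # not a topological order of the precedence DAG
    for choice in product(*(F[i] for i in range(1, o + 1))):
        assign = dict(zip(range(1, o + 1), choice))
        best = min(best, makespan(order, assign))

print(best)  # optimal makespan of this toy instance
```

This brute force is only a semantic illustration of the problem statement; its cost grows factorially with o, which is precisely why the MILP model below and the heuristics of Sections 3 and 4 are needed.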



The model uses binary variables xik (i = 1, . . . , o, k ∈ Fi) to indicate whether operation i is allocated to be processed by machine k (in this case xik = 1) or not (in this case xik = 0). It also considers binary variables yij (i, j = 1, . . . , o, Fi ∩ Fj ≠ ∅) to indicate, whenever two operations are allocated to the same machine, which one is processed first. Assume, for example, that there are two operations i and j such that k ∈ Fi ∩ Fj and that both operations are allocated to machine k, i.e. xik = xjk = 1. In this case, exactly one of yij and yji must be equal to 1. If yij = 1 and yji = 0, then operation i is processed on machine k before operation j. On the other hand, if yij = 0 and yji = 1, then operation i is processed on machine k after operation j. Finally, the model uses variables si (i = 1, . . . , o) to represent the starting time of operation i (on the machine to which it was allocated) and a variable Cmax to represent the makespan. With these variables, the MILP model of the extended flexible job shop problem introduced in Birgin et al. (2014) can be written as

Minimize Cmax (1)

subject to

∑_{k ∈ Fi} xik = 1, i = 1, . . . , o, (2)

pi = ∑_{k ∈ Fi} xik pik, i = 1, . . . , o, (3)

Cmax ≥ si + pi, i = 1, . . . , o, (4)

si + pi ≤ sj, i, j = 1, . . . , o such that (i, j) ∈ A, (5)

yij + yji ≥ xik + xjk − 1, i, j = 1, . . . , o, i ≠ j, and k ∈ Fi ∩ Fj, (6)

si + pi − (1 − yij)L ≤ sj, i, j = 1, . . . , o and i ≠ j such that Fi ∩ Fj ≠ ∅, (7)

si ≥ 0, i = 1, . . . , o. (8)

Constraint (2) says that each operation i must be assigned to exactly one machine k ∈ Fi. Constraint (3) defines the actual processing time pi of each operation i (which depends on the machine to which it was assigned). In fact, there is no need to consider pi in (3) as a variable of the model. It can be seen as an auxiliary value that simplifies the model presentation; it can be avoided by removing constraint (3) and replacing each appearance of pi in the other constraints with the expression on the right-hand side of (3). Constraint (4), together with the minimization of the objective function in (1), defines Cmax as the makespan. Constraint (5) represents the precedence constraints. For every pair of operations assigned to the same machine, constraints (6) and (7) say that both operations cannot be processed at the same time and determine which one is processed first. If two operations i and j that could have been both assigned to a machine k (i.e. k ∈ Fi ∩ Fj ≠ ∅) are not, then at most one of xik and xjk is equal to 1. In this case, constraints (6) and (7) are trivially satisfied with yij = yji = 0. In (7), L represents a sufficiently large positive constant (see Birgin et al., 2014 for a suggested value that may be used in practice). Finally, constraint (8) says that the starting times of the operations must be no smaller than the beginning of the considered planning horizon, which, without loss of generality, was set to 0. Model (1)–(8) was introduced in Birgin et al. (2014), where a comparison with a simple extension of the model presented in Özgüven et al. (2010) was given. To the best of the authors' knowledge, these two models are the only published ones that allow the precedences between the operations to be given by an arbitrary directed acyclic graph.

3. List scheduling algorithm

In this section we describe a non-hierarchical list scheduling algorithm. It is non-hierarchical in the sense that, at each iteration, it selects an operation, assigns it to a machine, and determines a starting time. This is in contrast with hierarchical methods that in a first phase assign the operations to the machines and in a second phase determine the starting times. A similar but simpler heuristic (named EST) was very briefly described in (Birgin et al., 2014, p. 1426), where it was used, in the context of evaluating an MILP model, to provide an upper bound on the solution (i.e. an initial feasible solution) to the exact solver CPLEX. The proposed list scheduling algorithm iterates o times and, at each iteration, selects an operation i, assigns it to a machine k ∈ Fi, and determines the starting time st of operation i on machine k. This process is guided by rules customized for the extended version of the FJS problem. The decision is recorded by setting s[i] ← st (starting time of operation i) and w[i] ← k (machine to which operation i was assigned). In the context of the algorithm, if the values of s[i] and w[i] have been defined, we say that operation i was already handled (by the algorithm) or scheduled. Otherwise, if s[i] and w[i] are undefined, we say that operation i is still unhandled. At a given iteration, the operations that are candidates to be handled are those that are still unhandled and have no unhandled predecessors. Parameters n, m, o, Fi (i = 1, . . . , o), pik (i = 1, . . . , o, k ∈ Fi), and A, which determine an instance of the FJS problem, are the input data of the list scheduling algorithm. From them, some auxiliary instance data that help apply the algorithm can be computed. Those quantities, defined for each operation i (i = 1, . . . , o), are

(a) The predecessor and successor sets Pi and Si given by

Pi = { j ∈ {1, . . . , o} | (j, i) ∈ A } and Si = { j ∈ {1, . . . , o} | (i, j) ∈ A }.

(b) The average processing times p̄i given by

p̄i = (1/|Fi|) ∑_{k ∈ Fi} pik.

(c) The remaining (or blocked) work RWi given by the longest (directed) path from i to any other operation j in the digraph with set of arcs A, set of nodes {1, . . . , o}, and node weights p̄1, . . . , p̄o.

3.1. Characterization of partial solutions

To support, at each iteration, the selection of an operation (plus the machine that will process it and its corresponding starting time), the algorithm keeps track of the following information:

s[i] that saves, for each handled operation i, its processing starting time. The value is undetermined if operation i is still unhandled.

w[i] that saves, for each handled operation i, the machine to which it was assigned. The value is undetermined if operation i is still unhandled.

Cmax the maximum among the completion times of the handled operations, i.e. the makespan of the partial solution (where by "partial solution" we mean the solution, not yet complete, composed of the already handled operations).

u[i] that saves, for each operation i, the maximum among the completion times of its handled predecessors.

v[k] that saves, for each machine k, the maximum among the completion times of the handled operations that were assigned to it.


L[k] that saves, for each machine k, the sum of the processing times of the unhandled operations that can potentially be assigned to it (this is an upper bound on the future load of the machine).

η[i] that says, for each operation i, how many unhandled predecessors it has.

Ω the set of unhandled operations that have no unhandled predecessors (i.e. operations that are natural candidates to be handled).

A partial solution with ℓ handled operations is characterized by the 9-tuple (ℓ, s, w, Cmax, u, v, L, η, Ω) that carries all the information needed (by the list scheduling algorithm presented below) to generate a new partial solution with ℓ + 1 handled operations. For the particular case of the partial solution with ℓ = 0 scheduled operations, we have that

s[i] undetermined for i = 1, . . . , o,
w[i] undetermined for i = 1, . . . , o,
Cmax = 0,
u[i] = 0 for i = 1, . . . , o,
v[k] = 0 for k = 1, . . . , m,
L[k] = ∑_{i | k ∈ Fi} pik for k = 1, . . . , m,
η[i] = |Pi| for i = 1, . . . , o,
Ω = {i ∈ {1, . . . , o} | Pi = ∅}.

Let (ℓ, s, w, Cmax, u, v, L, η, Ω) be a partial solution with 0 ≤ ℓ < o. If an operation i ∈ Ω is assigned to a machine k ∈ Fi and scheduled to start its processing at time st, the 9-tuple that characterizes the new partial solution with ℓ + 1 handled operations is given by (ℓ + 1, s′, w′, C′max, u′, v′, L′, η′, Ω′), where s′, w′, C′max, u′, v′, L′, η′, and Ω′ receive, respectively, the values of s, w, Cmax, u, v, L, η, and Ω and then are updated as follows:

s′[i] ← st,
w′[i] ← k,
C′max ← max{C′max, st + pik},
u′[j] ← max{u′[j], st + pik} for all j ∈ Si,
v′[k] ← max{v′[k], st + pik},
L′[r] ← L′[r] − pir for all r ∈ Fi,
η′[j] ← η′[j] − 1 for all j ∈ Si,
Ω′ ← Ω′ \ {i} ∪ {j ∈ Si | η′[j] = 0}.

3.2. Selecting rules

The list scheduling algorithm introduced in this section starts with the partial solution with ℓ = 0 handled operations, handles a single operation per iteration, and ends after o iterations with a feasible solution to the given instance of the FJS problem. We now describe the rules used to choose, at each iteration, the operation to be scheduled, the machine to which the operation is assigned, and the operation's starting time. At each iteration, the set of candidate operation/machine pairs Ω0 is given by

Ω0 = {(i, k) | i ∈ Ω and k ∈ Fi}. (9)

The three rules below are applied sequentially until a single operation/machine pair (i, k) ∈ Ω0 is selected and a processing starting time st̂ is determined for the schedule of operation i on machine k.

Rule 1: Pairs operation/machine with the earliest starting time. In accordance with the minimization of the makespan, the first attribute used to select a pair (i, k) from Ω0 is based on the earliest starting time stik for operation i on machine k, given by the maximum of two quantities: (a) the maximum among the completion times of the (already handled) predecessors of operation i, given by u[i], and (b) the time at which machine k finishes processing all the already handled operations assigned to it, given by v[k]. Thus, we have that

stik = max{u[i], v[k]} (10)

for all (i, k) ∈ Ω0. Let

st̂ = min{stik | (i, k) ∈ Ω0} (11)

be the smallest possible starting time among the candidate pairs (i, k) ∈ Ω0. Only pairs (i, k) such that stik = st̂ remain as a possibility for the iteration assignment (with the natural choice s[i] = st̂ and w[i] = k), while the other pairs are discarded. Thus, let

Ω1 = {(i, k) | (i, k) ∈ Ω0 and stik = st̂}. (12)

Rule 2: Fastest machine for each operation. Let

Ω1i = {(j, k) | (j, k) ∈ Ω1 and j = i} for i = 1, . . . , o,

i.e. Ω1i is the subset of pairs in Ω1 that correspond to the same operation i (associated with different machines in Fi). The focus of this rule is on the operations i such that |Ω1i| > 1, and the objective is to select, for each one of those operations, a single machine among the several possibilities. Let i be such that |Ω1i| > 1 and let (i, k1), (i, k2), . . . be the elements of Ω1i. Each machine k1, k2, . . . is associated with the processing time pik1, pik2, . . . and the upper bound on the machine load L[k1], L[k2], . . . Consider the triplets

(pik1, L[k1], k1), (pik2, L[k2], k2), . . .

and let Ω1.5i = {(i, kν)} be the singleton such that (pikν, L[kν], kν) is the smallest triplet in the lexicographical order. This means that, among all possible machines that can process operation i, we select the one with the smallest processing time. In the case of a tie, the smallest upper bound on the machine load is used as a tie-break and, in the case of a second tie, the machine with the smallest index is chosen with no purpose other than defining a deterministic rule. Define

Ω2 = ∪{Ω1i : |Ω1i| ≤ 1} ∪ ∪{Ω1.5i : |Ω1i| > 1}.

We have that Ω2 ⊆ Ω1 contains no more than one operation/machine pair for each operation and, for the operations that had more than one possible machine, it preserves only the most promising one (the fastest and, in case of a tie, the least loaded).

Rule 3: Operation with the largest remaining or blocked work. The next attribute used to reduce the number of candidate pairs (i, k) ∈ Ω2 is based on an estimate of the remaining (or blocked) work RWi associated with an operation i. Recall that RWi is defined as the longest path from i to any other operation j in the digraph with set of arcs A, set of nodes {1, . . . , o}, and node weights p̄1, . . . , p̄o. Note that if the longest path starting at operation i ends at an operation j, then Sj = ∅ and i and j belong to the same job t. Moreover, stik + RWi may be seen as an estimate of the completion time of job t, computed from the perspective of the candidate pair (i, k). However, since stik = st̂ for all (i, k) ∈ Ω2, we simply consider the value of RWi as an estimate of the remaining or blocked work associated with operation i. For the pairs (i1, k1), (i2, k2), . . . ∈ Ω2 consider the triplets

(−RWi1, −L[k1], i1), (−RWi2, −L[k2], i2), . . .

and let (−RWiν, −L[kν], iν) be the smallest triplet in the lexicographical order, associated with the pair (iν, kν). This means that a final selection was made: operation iν is assigned to machine kν with starting time st̂. In this way, we selected the operation with the largest remaining or blocked work as the one to be scheduled. The idea behind this choice is to rapidly handle operations that impair the processing of a large amount of work. In the case of a tie, the upper bound on the future load of the machines is used as a tie-break, in order to assign operations as soon as possible to a machine that has a large expected future load (trying to minimize its idle time). Finally, in the case of a second tie, the operation with the smallest index is chosen with no purpose other than defining a deterministic rule.

3.3. Pseudo-code and complexity

The computation of the auxiliary instance data (items (a)–(c) at the beginning of Section 3), the initialization of the empty partial solution (Section 3.1), and the iterative application of Rules 1–3 (Section 3.2), together with the update of the partial solution characterization (Section 3.1), define the method introduced in this section, which is fully described in Algorithm 1. Sets Pi and Si are computed in lines 2–3, the average processing times p̄i are computed in lines 4–7, and the remaining work estimates RWi are computed in lines 8–16 by Dijkstra's shortest path method (Cormen, Leiserson, Rivest, & Stein, 2009, pp. 655–658), adapted to compute the longest path and to handle the case in which there may be several sources and several targets. The values of Cmax, u, v, L, η, and the set Ω are initialized in lines 17–21. The main loop, from line 22 to line 41, executes Rules 1–3 o times. The pair selected at each iteration is named (λ, θ) and the determined starting time is named st̂. The starting time s[λ] for operation λ is set at line 36, and the machine θ to which operation λ is assigned is recorded, in the same line, in w[λ]. The quantities Cmax, u, v, L, η, and the set Ω are updated in lines 36–41.
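Since Algorithm 1 itself is not reproduced in this excerpt, the following compact Python sketch (an independent reconstruction of the same steps, not the authors' C/C++ code; variable names are illustrative) shows the auxiliary data (Pi, Si, p̄i, RWi), the partial-solution state, and the iterative application of Rules 1–3 with their tie-breaks:

```python
def list_scheduling(o, m, F, p, A):
    """Sketch of the list scheduling algorithm. F[i]: machines able to
    process operation i; p[(i, k)]: processing times; A: precedence arcs
    of a DAG over operations 1..o. Returns (Cmax, s, w)."""
    ops, machines = range(1, o + 1), range(1, m + 1)
    P = {i: {j for (j, i2) in A if i2 == i} for i in ops}   # predecessors
    S = {i: {j for (i2, j) in A if i2 == i} for i in ops}   # successors
    pbar = {i: sum(p[(i, k)] for k in F[i]) / len(F[i]) for i in ops}

    # RW[i]: longest node-weighted (by pbar) path starting at i, computed
    # in reverse topological order of the DAG.
    topo, deg = [], {i: len(P[i]) for i in ops}
    ready = [i for i in ops if deg[i] == 0]
    while ready:
        i = ready.pop()
        topo.append(i)
        for j in S[i]:
            deg[j] -= 1
            if deg[j] == 0:
                ready.append(j)
    RW = {}
    for i in reversed(topo):
        RW[i] = pbar[i] + max((RW[j] for j in S[i]), default=0.0)

    # Partial-solution state for ell = 0 handled operations.
    s, w, Cmax = {}, {}, 0
    u = {i: 0 for i in ops}
    v = {k: 0 for k in machines}
    L = {k: sum(p[(i, k)] for i in ops if k in F[i]) for k in machines}
    eta = {i: len(P[i]) for i in ops}
    Omega = {i for i in ops if not P[i]}

    for _ in ops:  # o iterations, one operation handled per iteration
        pairs = [(i, k) for i in Omega for k in F[i]]            # Omega_0
        st_hat = min(max(u[i], v[k]) for (i, k) in pairs)        # Rule 1
        pairs = [(i, k) for (i, k) in pairs
                 if max(u[i], v[k]) == st_hat]                   # Omega_1
        best_k = {}                                              # Rule 2
        for (i, k) in pairs:
            cand = (p[(i, k)], L[k], k)
            if i not in best_k or cand < (p[(i, best_k[i])],
                                          L[best_k[i]], best_k[i]):
                best_k[i] = k
        # Rule 3: largest remaining work; ties by load bound, then index.
        lam = min(best_k, key=lambda i: (-RW[i], -L[best_k[i]], i))
        theta = best_k[lam]
        s[lam], w[lam] = st_hat, theta
        C = st_hat + p[(lam, theta)]
        Cmax = max(Cmax, C)
        v[theta] = max(v[theta], C)
        for r in F[lam]:
            L[r] -= p[(lam, r)]
        Omega.discard(lam)
        for j in S[lam]:
            u[j] = max(u[j], C)
            eta[j] -= 1
            if eta[j] == 0:
                Omega.add(j)
    return Cmax, s, w
```

On the four-operation toy instance used in Section 2 (operation 1 preceding 2 and 3, both preceding 4), this sketch returns a feasible schedule; as a heuristic, its makespan is not guaranteed to be optimal in general, although on that tiny instance it happens to match the brute-force optimum.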
A few words regarding the worst-case time complexity of Algorithm 1 are in order. The initializations in lines 2–7 and 17–21 are O(|A| + m + Σ_{i=1}^{o} |Fi|). Dijkstra's algorithm, implemented in lines 8–16, has complexity O(|A| + o). It remains to analyze the main loop that goes from line 22 to line 41 and is executed o times. If we consider that at each iteration |Ω| ≤ o, then the loop is O(o Σ_{i=1}^{o} |Fi| + |A|). Thus, summing up, Algorithm 1 is

O(|A| + m + o Σ_{i=1}^{o} |Fi|).

On the other hand, a better bound for |Ω| can be given. Consider the directed acyclic graph D = ({1, ..., o}, A). An antichain in D is a set of nodes, no two of which are included in any path of D. Noting that the operations in Ω have no precedence constraints among each other, it is not hard to see that the number of elements in Ω is limited by the size of a maximum antichain in D (which can be computed in polynomial time; Fulkerson, 1956). Let w be the size of a maximum antichain in D and let

q = |F_{i1}| + |F_{i2}| + ··· + |F_{iw}|,   (13)

where F_{i1}, F_{i2}, ..., F_{iw} are the w largest sets among the sets Fi, i = 1, ..., o. We can say that the main loop (from line 22 to line 41) is O(oq + |A|). Then, Algorithm 1 is O(|A| + m + oq), which provides a tighter bound on the worst-case time complexity of the algorithm.

4. Beam search method

At each iteration of the list scheduling algorithm described in the previous section, an operation is selected, assigned to a machine, and

its starting time is determined. On the one hand, all decisions are made based on heuristic rules. On the other hand, even if we were able to make a decision based on a locally optimal strategy, it is well-known that greedy algorithms do not necessarily provide good quality solutions. Therefore, at the price of increasing the complexity of the method, it makes sense to select, at each iteration, not a single operation to be handled, but a small set of operations to be handled. In this way, a partial solution can be split into several partial solutions with one more handled operation. This strategy gives rise to a search tree and fits into the framework of a beam search method. In this sense, we say that the presented beam search method is a "natural extension" of the list scheduling method described in the previous section. The introduced beam search method is a filtered beam search method with approximately f(β) nodes at each level of the search tree, where β ∈ (0, 1] is a given real parameter and f: IR → IN is a function to be defined later. Each node at a given level ℓ of the search tree represents a partial solution with ℓ handled operations. From the point of view of the beam search method, we can say that the (simple, thus inexpensive) rules 1–3 described in the previous section play the role of the local evaluation used to choose the g(α) most promising operations that are considered to add one more handled operation to a given partial solution (where α ∈ (0, 1] is a given real parameter and g: IR → IN is a function to be defined later). Therefore, a partial solution may be split into up to g(α) partial solutions with one more handled operation. Then, a global (and more time consuming) evaluation is considered in order to select one partial solution among the set of g(α) partial solutions. As expected, the global evaluation consists in completing each partial solution by running to the end the list scheduling algorithm described in the previous section.
The one with the smallest makespan is chosen as the child of the parent being considered and, in this way, a new level of the search tree is built. In this work, we adopted the strategy of keeping a single child for each node, as suggested in Sabuncuoglu and Bayiz (1999).

4.1. Local and global strategies

We now describe how rules 1–3 from the list scheduling algorithm are used to generate the children of a given node or partial solution. Consider a partial solution with ℓ < o scheduled operations, or, equivalently, a node at level ℓ of the search tree, given by (s, w, u, v, L, Cmax, η, Ω). Rules 1–3 can be applied to select a pair (i1, k1) ∈ Ω0 and a starting time st_{i1 k1} for operation i1 on machine k1; and the pair (i1, k1) with the starting time st_{i1 k1} can be used to generate a partial solution with an additional handled operation. We will say that the generated child is the most promising child from the point of view of the local strategy. Assume now that the pair (i1, k1) is forbidden and that rules 1–3 are applied again (to the node at level ℓ). Then, another pair (i2, k2) ∈ Ω0 is selected and a starting time st_{i2 k2} for operation i2 on machine k2 is determined. This pair can also be used to generate a second child. We will say that this second child is the second most promising child from the point of view of the local strategy. In the way described in the previous paragraph, several children may be generated. On the one hand, since a relatively expensive global strategy must be applied to all children in order to keep a single child, a limit on the number of generated children is imposed. This limit is a function of the input parameter α ∈ (0, 1] and, ideally, should be independent of the size of the instance. However, to avoid a degradation of the method's performance for instances of increasing size, we considered a limit given by α̂1 ≡ α|Ω0|.
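The child-generation scheme just described — re-apply the selection rules while forbidding the pairs already chosen, so that successive children differ — can be sketched as follows. Here select is a hypothetical stand-in for rules 1–3 and is assumed to return None when no pair remains:

```python
def most_promising_children(candidates, select, limit):
    """Generate up to `limit` children of a node by re-applying the
    local selection rule, forbidding each chosen pair in turn.

    candidates : set of (operation, machine) pairs
    select     : select(candidates, forbidden) -> pair or None,
                 a stand-in for rules 1-3
    limit      : maximum number of children
    """
    forbidden = set()
    children = []
    while len(children) < limit:
        pair = select(candidates, forbidden)
        if pair is None:
            break
        children.append(pair)
        forbidden.add(pair)  # siblings must come from different pairs
    return children
```

The first element of the returned list is the most promising child, the second element the second most promising one, and so on.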
Note that Ω0 (as defined in (9)) depends on the partial solution being considered and, therefore, α̂1 may vary from node to node. On the other hand, a second limit, which depends on the input parameter ξ > 0 and aims to preserve the children's quality, is also imposed. Note that rule 1 constructs the set Ω1 (see (12)) by picking up from Ω0 (see (9)) the pairs (i, k) such that st_{ik} = s̄t, where st_{ik} and s̄t are given by (10) and (11),


Algorithm 1 List Scheduling.


respectively. This means that (in the list scheduling algorithm) only operation/machine pairs with the earliest possible starting time are candidates for producing a new partial solution with an additional handled operation. However, this may not be the case when rules 1–3 are applied iteratively to pick several pairs from Ω0. Therefore, it makes sense to impose an upper bound on the starting time of the selected pairs. Consider the set

Ω_{0.5} = {(i, k) | (i, k) ∈ Ω0 and st_{ik} ≤ s̄t + ξ p̂} ⊆ Ω0,   (14)

where

s̄t = min{ st_{ik} | (i, k) ∈ Ω0 } and p̂ = max{ p_{ik} | (i, k) ∈ Ω0 }.   (15)

By imposing that no more than α̂2 ≡ |Ω_{0.5}| children can be generated, we guarantee that the children's starting time will not be larger than s̄t + ξ p̂. This means that the bound on the number of children is given by α̂ ≡ min{α̂1, α̂2}. This defines the function g(α), which depends on the node and determines how many children will be chosen by the local strategy. We aim at generating a single child for each node (or, in other words, keeping a single child open and closing or bounding all the others). Given the node (s, w, u, v, L, Cmax, η, Ω), a natural way to do that would be to complete the partial solution that it represents in different ways by selecting every possible pair (i, k) ∈ Ω0 (associated with the earliest starting time of operation i on machine k) and then completing all those partial solutions, which have ℓ + 1 scheduled operations, using the list scheduling algorithm. Since applying the list scheduling algorithm to all the |Ω0| children may be too expensive, the strategy described in the previous paragraph filters the children, preserving only the α̂ most promising ones. After having selected a small set of promising children, the so-called global strategy is used to pick a single child from the set. As already suggested, the global strategy consists of computing the makespan of a solution that can be obtained by completing each child using the list scheduling algorithm. Note that we are not interested in the completed solution, but only in its makespan. The completed solution has an associated makespan that we will name C_max^est. Among all children, the one with the smallest C_max^est is the chosen one, while all the others are discarded, closed, or bounded. A few technical details related to tie-breaks between children and some other minor details will be given below, when describing the pseudo-code of the introduced beam search method.

4.2. Pseudo-code and complexity

The whole beam search method is described in Algorithms 2, 3, 4, 5.
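The two limits α̂1 ≡ α|Ω0| and α̂2 ≡ |Ω_{0.5}| can be sketched as below. The rounding of α̂1 and the names child_limit, start, and proc are our assumptions, not the paper's:

```python
import math

def child_limit(pairs, start, proc, alpha, xi):
    """Bound alpha-hat = min(alpha_1, alpha_2) on the number of children.

    pairs : candidate (operation, machine) pairs (the set Omega_0)
    start : dict pair -> earliest starting time of the pair
    proc  : dict pair -> processing time of the pair
    alpha : fraction in (0, 1]
    xi    : tolerance > 0 on the starting time
    """
    st_min = min(start[p] for p in pairs)       # earliest start overall
    p_hat = max(proc[p] for p in pairs)         # largest processing time
    # pairs whose earliest start is within xi * p_hat of the minimum
    near = sum(1 for p in pairs if start[p] <= st_min + xi * p_hat)
    a1 = max(1, math.ceil(alpha * len(pairs)))  # rounding is our assumption
    return min(a1, near)
```

The second bound dominates whenever only a few pairs can start close to the earliest possible time, which is what preserves the children's quality.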
Algorithm 2 corresponds to the initializations and to the construction of the first level of the search tree, which is slightly different from the construction of the other levels. Algorithm 5 is the core of the beam search method; it iterates o − 1 times, generating the successive levels of the search tree. Algorithm 4 consists of subroutine Select, which receives a partial solution and implements rules 1–3 to select an operation/machine pair that can be used to expand the given partial solution. Algorithm 3 consists of subroutine CmaxEst, which receives a partial solution and, by completing it using the list scheduling algorithm, computes an estimate of the completion time that could be obtained by completing the given partial solution. Algorithm 3 is a re-implementation of Algorithm 1 in the form of a subroutine that does not perform the initializations and receives a partial solution. Subroutine Select plays the role of the local evaluation, while subroutine CmaxEst plays the role of the global evaluation. The input parameters of Algorithm 2 are the data that describe the instance (n, m, o, Fi (i = 1, ..., o), pik (i = 1, ..., o, k ∈ Fi), A) plus the scalar real values ξ > 0 and α, β ∈ (0, 1]. Parameter ξ is related to the tolerance for the earliest starting time st_{ik} of a candidate operation/machine pair (i, k) with respect to the minimum earliest starting time s̄t among all pairs in Ω0, as described in (14) and (15).

Parameter α is related to the number of children of a given node that are selected by the local strategy and to which the global strategy is applied in order to keep a single child per node. Parameter β determines the number of nodes that remain open at each level of the search tree. Lines 2 to 16 of Algorithm 2 initialize (in the same way as in Algorithm 1) Pi, Si, p̄i, and RWi (i = 1, ..., o). The quantities C'max, u', v', L', η', and the set Ω' associated with the partial solution with no scheduled operations (i.e., the root node of the search tree) are initialized in lines 17–21. Since the "quality" of the partial solutions at the first level of the search tree has a strong influence on the overall performance of the method, no filter is considered to construct the first level. This means that all possible operation/machine pairs are evaluated by the global strategy and the most promising ones constitute the first level of the search tree (partial solutions with a single scheduled or handled operation). Lines 22 to 34 are devoted to this task. All possible operation/machine pairs (i.e., the ones in Ω0 as defined in (9)) are considered, the partial solutions are constructed, their estimated completion times are computed (see the call to subroutine CmaxEst at line 33), and the partial solutions are saved in the set N'. In lines 35 and 36, the fraction β of the most promising partial solutions in N' is saved into the set N, which turns out to be the first level of the search tree. Note that, due to possible ties, the cardinality of N may be larger than β̂ ≡ β|N'|. In line 37, subroutine BeamSearch (implemented in Algorithm 5) is called. It receives the first level of the search tree, constructs the remainder of the tree, and returns the best leaf. Algorithm 5 implements subroutine BeamSearch. It iterates o − 1 times (see the main loop in line 2). Each iteration starts with the set N composed of the nodes of the current level of the search tree.
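The β-filter of lines 35 and 36 — keep the fraction β of the most promising partial solutions, with ties possibly enlarging the set beyond β̂ — might look as follows; the rounding and the tie policy shown here are our assumptions:

```python
import math

def keep_fraction(solutions, beta, score):
    """Keep the fraction beta of the best-scoring solutions, retaining
    ties with the cut-off score, so the result may exceed
    beta * len(solutions) as noted in the text."""
    ranked = sorted(solutions, key=score)
    cut = max(1, math.ceil(beta * len(ranked)))  # beta-hat
    if cut >= len(ranked):
        return ranked
    cutoff = score(ranked[cut - 1])
    # everything scoring no worse than the cut-off survives
    return [s for s in ranked if score(s) <= cutoff]
```

With five candidates and β = 0.4, β̂ = 2, but a tie at the cut-off score lets a third candidate through.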
Nodes in N are temporarily labeled (numbered from 1 to |N|) with the identifier nid (which stands for node identifier) (see lines 3 and 5), and a set of children Dnid for each node is constructed. The limit on the number of children of each node was already described in the previous subsection and is implemented in lines 6–16. Then, a single child for each node is selected and saved into the set N'. The iteration finishes by replacing N with N'. This means that no tree structure is in fact built, since only the nodes of the current level are needed, as each node carries full information on the partial solution it represents. The set of children Dnid of a given node identified by nid is built in lines 17–28. The set Dnid is initialized as the empty set in line 17. Exactly min{α̂1, α̂2} children are generated, where α̂1 is computed in line 6 and α̂2 is computed in lines 12–16. A copy (s', w', C'max, u', v', L', η', Ω') of the current solution (s, w, Cmax, u, v, L, η, Ω) is made at line 19. An operation/machine pair (λ, θ) is selected (at line 20) by calling subroutine Select. The new partial solution with an extra handled operation is built (lines 21–26) and its estimated completion time C_max^est is computed by calling subroutine CmaxEst at line 27. The new partial solution, together with λ, θ, and C_max^est, is saved in Dnid. To guarantee that a different operation/machine pair will be selected in the forthcoming calls to subroutine Select (to generate the siblings of the just generated partial solution), the selected pair (λ, θ) is added to the set F, which stands for "forbidden pairs". Subroutine Select (see Algorithm 4) implements rules 1–3 with the only difference that the most promising pair is chosen from Ω0 \ F. It remains to explain how a single child for node nid is chosen from the set Dnid, for nid = 1, ..., |N|. This selection is based on the value of C_max^est and it uses (λ, θ) to solve tie-breaks. Other than that, there is an additional task that requires inspecting all Dnid simultaneously: avoiding identical children of different partial solutions in N. Note that all nodes in the first level of the search tree are different (since they consist of partial solutions with a single handled operation coming from different operation/machine pairs). However, when expanded by considering an additional handled operation, two different partial solutions may become identical. Moreover, if some level of the search tree has identical solutions, their expansions will continue

Algorithm 2 Beam Search initialization and construction of the search tree’s ﬁrst level.

being identical to the end, affecting the diversity of the leaves (complete solutions). To avoid this situation, before choosing a single child per node, identical partial solutions are removed from ∪Dnid. If identical partial solutions exist, the one with the smallest λ is preserved and, in the case of ties, the one with the smallest θ. (This strategy solves all possible ties, since identical partial solutions cannot have identical λ and θ; otherwise, there would be identical parents, which is not the case.) This elimination of identical partial solutions is done in the

loop that goes from line 30 to line 36, marking the solutions to be removed by setting C_max^est equal to ∞. Solutions are in fact removed in line 37. Finally, the best child of each parent is chosen in lines 38 and 39 and saved in the set N' that, at the end of the iteration, substitutes the current set of nodes N (see line 40). A few words regarding the complexity of the beam search method described by Algorithms 2, 3, 4, 5 are in order. Algorithm 3 is O(oq + |A|), while Algorithm 4 is O(q), with q given by (13), as

Algorithm 3 Completion time estimation procedure (global evaluation).

Algorithm 4 Selection procedure (local evaluation).

already analyzed in Section 3.3. Algorithm 2 (disregarding line 37, where subroutine BeamSearch, described by Algorithm 5, is called, i.e., considering the initializations and the construction of the first level of the search tree only) has a worst-case time complexity

O(Σ_{i=1}^{o} |Fi| + r + oq² + q|A|),

where

r = |F_{i1}||S_{i1}| + |F_{i2}||S_{i2}| + ··· + |F_{iw}||S_{iw}|,

w is the size of a maximum antichain in D = ({1, ..., o}, A), and the indices i1, i2, ..., io are such that |F_{i1}||S_{i1}| ≥ |F_{i2}||S_{i2}| ≥ ··· ≥ |F_{io}||S_{io}|. To analyze Algorithm 5, we will first assume that α̂ and β̂ are constants that do not depend on the instance size. This implies that each level of the search tree has |N| ≤ β̂ nodes and that |Dnid| ≤ α̂ for nid = 1, ..., |N|. Thus, the worst-case time complexity of Algorithm 5 is given by

O(α̂ β̂ (o|A| + o²q) + α̂² o).   (16)

Algorithm 5 Beam Search body.
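The per-level flow of the beam search body — generate the children of each node, drop duplicate partial solutions across parents, keep the best child of each parent — can be sketched as follows. The helper names expand, estimate, and key are hypothetical stand-ins for the local strategy, subroutine CmaxEst, and a canonical encoding of a partial solution, respectively; this is a sketch, not the paper's pseudo-code.

```python
def beam_level(nodes, expand, estimate, key):
    """Build the next level of the beam from the current one.

    nodes    : current level, a list of partial solutions
    expand   : expand(node) -> list of children (local strategy)
    estimate : estimate(child) -> estimated makespan (global strategy)
    key      : key(child) -> hashable identity used to detect identical
               partial solutions coming from different parents
    """
    all_children = [(nid, c) for nid, node in enumerate(nodes)
                    for c in expand(node)]
    # drop duplicates: the first occurrence is kept, later ones discarded
    seen, kept = set(), []
    for nid, c in all_children:
        k = key(c)
        if k not in seen:
            seen.add(k)
            kept.append((nid, c))
    # keep a single best child per parent
    best = {}
    for nid, c in kept:
        if nid not in best or estimate(c) < estimate(best[nid]):
            best[nid] = c
    return list(best.values())
```

Note that, as in the text, a parent whose every child was eliminated as a duplicate contributes nothing to the next level.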

Table 1
Description of the instances with Y-jobs.

Name      n    m    # of operations       |Fi|                # of precedences
                    min   max   total     min   max   total   min   max   total
YFJS01    4    7    10    10    40        1     3     104     0     2     36
YFJS02    4    7    10    10    40        1     3     104     0     2     36
YFJS03    6    7    4     4     24        2     3     63      0     2     18
YFJS04    7    7    4     4     28        2     3     71      0     2     21
YFJS05    8    7    4     4     32        2     3     81      0     2     24
YFJS06    9    7    4     4     36        2     3     95      0     2     27
YFJS07    9    7    4     4     36        2     3     93      0     2     27
YFJS08    9    12   4     4     36        2     3     100     0     2     27
YFJS09    9    12   4     4     36        4     8     219     0     2     27
YFJS10    10   12   4     4     40        1     3     113     0     2     30
YFJS11    10   10   5     5     50        1     3     134     0     2     40
YFJS12    10   10   5     5     50        2     3     133     0     2     40
YFJS13    10   10   5     5     50        1     3     137     0     2     40
YFJS14    13   26   17    17    221       2     3     641     0     2     208
YFJS15    13   26   17    17    221       2     3     648     0     2     208
YFJS16    13   26   17    17    221       2     3     633     0     2     208
YFJS17    17   26   17    17    289       3     5     1328    0     2     272
YFJS18    17   26   17    17    289       3     5     1362    0     2     272
YFJS19    17   26   17    17    289       3     5     1347    0     2     272
YFJS20    17   26   17    17    289       3     5     1343    0     2     272


Fig. 3. Graphical representation of the operations’ precedence constraints of instance YFJS02.

However, in practice, we consider α̂ = O(q) and β̂ = O(q). Therefore, (16) becomes

O(oq²|A| + o²q³).

Summing up, the overall worst-case time complexity of the introduced beam search method is

O(r + q|A| + α̂ β̂ (o|A| + o²q) + α̂² o),

if α̂ and β̂ are given constants, or

O(r + oq²|A| + o²q³),

if α̂ and β̂ are O(q).

5. Numerical experiments

We implemented the list scheduling algorithm (Algorithm 1) and the beam search method (Algorithms 2, 3, 4, 5) in order to be able to evaluate their efficiency and efficacy. The codes, fully written in C/C++ and available for download at http://www.ime.usp.br/~egbirgin/, were compiled with the g++ compiler of GCC (version 4.7.2, Debian 4.7.2-5) using the optimization option "-O3". All tests were conducted on a 2.40 GHz Intel(R) Xeon(R) E5645 with 132 GB of RAM running the GNU/Linux operating system (Debian 7, kernel 3.7.6 SMP x86_64). Detailed descriptions of the numerical results (including a complete description of the obtained solutions) are also

available for download together with the source codes of the implemented methods. In order to have a way to evaluate the behavior of the introduced methods when applied to small and medium-sized instances, we applied an exact solver to the 50 instances of model (1–8) introduced in Birgin et al. (2014). Instances YFJS01–YFJS20 are composed of jobs with two independent sequences of operations followed by an assembling operation that joins the previously processed components (named Y-jobs in Birgin et al., 2014), while instances DAFJS01–DAFJS30 are composed of jobs whose precedences are given by arbitrary directed acyclic graphs. Tables 1 and 2 summarize the main characteristics of each instance. Figs. 3 and 4 illustrate the kind of operations' precedence constraints that are present in each set of instances. It is worth noting that other complicating features of the instances (like the routing flexibility given by the fact that each operation can be performed by a subset of the machines) are not displayed in the pictures. As the exact solver, we used the IBM ILOG CPLEX 12.1 solver with the following settings: 1 for the maximum number of threads and 2048 MB for the working memory. All other parameters were left at their default values. In a first run, the CPU time limit was set to 1 hour and, in a second run, considering only the instances for which the optimal solution was not found in the first run, the CPU time limit was

Table 2 Description of the instances for which the precedence relations between the operations are given by an arbitrary directed acyclic graph. Name

DAFJS01 DAFJS02 DAFJS03 DAFJS04 DAFJS05 DAFJS06 DAFJS07 DAFJS08 DAFJS09 DAFJS10 DAFJS11 DAFJS12 DAFJS13 DAFJS14 DAFJS15 DAFJS16 DAFJS17 DAFJS18 DAFJS19 DAFJS20 DAFJS21 DAFJS22 DAFJS23 DAFJS24 DAFJS25 DAFJS26 DAFJS27 DAFJS28 DAFJS29 DAFJS30

n

4 4 4 4 6 6 6 6 8 8 8 8 10 10 10 10 12 12 8 10 12 12 8 8 10 10 12 8 8 10

m

5 5 10 10 5 5 10 10 5 5 10 10 5 5 10 10 5 5 7 7 7 7 9 9 9 9 9 10 10 10

# of operations

|Fi |

max

min

max

min

max

5 5 10 9 5 5 7 6 4 4 10 9 5 4 8 6 4 5 7 6 5 5 6 6 9 8 7 8 7 8

9 7 17 14 13 13 23 23 9 11 23 22 11 10 19 20 11 9 13 17 16 17 17 25 19 17 22 15 19 19

26 25 55 43 39 44 85 85 45 58 113 117 62 69 120 120 82 74 70 92 107 116 76 92 123 119 127 91 95 98

2 2 3 3 2 2 3 3 2 2 3 3 2 2 3 3 2 2 3 3 3 3 4 4 4 4 4 3 3 3

4 4 7 7 4 4 7 7 4 4 7 7 4 4 7 7 4 4 5 5 5 5 6 6 6 6 6 7 7 7

82 79 279 220 104 136 431 403 135 168 534 603 193 206 595 602 246 231 283 361 425 450 367 463 619 606 625 457 468 509

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

3 3 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 2 3 3 3 3 3 2 3 3 3 3 3 3

26 23 52 40 34 41 82 82 42 52 108 114 55 62 117 114 77 64 66 87 102 109 71 87 119 116 118 89 94 94

Table 3
Numerical results of applying an exact MIP solver (CPLEX) to instances YFJS01–YFJS20 of model (1–8). Following Birgin et al. (2014), an initial feasible solution computed with the heuristic EST (Birgin et al., 2014) was given to CPLEX.

Instance   CPLEX with 1 hour of time limit       CPLEX with 10 hours of time limit
           mks                   CPU(s)          mks                   CPU(s)
YFJS01     773                   10.83           –                     –
YFJS02     825                   9.25            –                     –
YFJS03     347                   3.50            –                     –
YFJS04     390                   7.31            –                     –
YFJS05     445                   341.38          –                     –
YFJS06     [427;447] 4.47%       3600            446                   15072.02
YFJS07     444                   1277.35         –                     –
YFJS08     353                   0.59            –                     –
YFJS09     242                   13.09           –                     –
YFJS10     399                   3.83            –                     –
YFJS11     526                   168.72          –                     –
YFJS12     512                   2843.79         –                     –
YFJS13     405                   1427.47         –                     –
YFJS14     1317                  3378.27         –                     –
YFJS15     [1239;1244] 0.40%     3600            [1239;1244] 0.40%     36000
YFJS16     [1222;1243] 1.69%     3600            [1222;1231] 0.73%     36000
YFJS17     [1133;1622] 30.15%    3600            [1133;1290] 12.17%    36000
YFJS18     [1220;2082] 41.40%    3600            [1220;1499] 18.61%    36000
YFJS19     [926;1525] 39.28%     3600            [926;1333] 30.53%     36000
YFJS20     [968;2020] 52.08%     3600            [968;1279] 24.32%     36000

set to 10 hours. Tables 3 and 4 show the results for the sets of instances YFJS01–YFJS20 and DAFJS01–DAFJS30, respectively. In the tables, "mks" stands for makespan and "CPU(s)" stands for the elapsed CPU time in seconds. In the cases in which the exact solver reached the CPU time limit without finding an optimal solution, the tables report the obtained lower and upper bounds and the relative gap (given by the difference between the bounds divided by the upper bound). It is worth noting (in Table 3) that the exact solver was able to find the optimal solution in 14 (YFJS01–YFJS14) out of the 20 instances

Table 4
Numerical results of applying an exact MIP solver (CPLEX) to instances DAFJS01–DAFJS30 of model (1–8). Following Birgin et al. (2014), an initial feasible solution computed with the heuristic EST (Birgin et al., 2014) was given to CPLEX.

Instance   CPLEX with 1 hour of time limit       CPLEX with 10 hours of time limit
           mks                    CPU(s)         mks                    CPU(s)
DAFJS01    257                    50.17          –                      –
DAFJS02    289                    794.66         –                      –
DAFJS03    576                    10.22          –                      –
DAFJS04    606                    0.76           –                      –
DAFJS05    [351.57;402] 12.54%    3600           [381;394] 3.30%        36000
DAFJS06    [326;431] 24.36%       3600           [326;410] 20.49%       36000
DAFJS07    [497.90;565] 11.88%    3600           [498.22;547] 8.92%     36000
DAFJS08    [628;631] 0.48%        3600           [628;631] 0.48%        36000
DAFJS09    [315;484] 34.92%       3600           [316.74;471] 32.75%    36000
DAFJS10    [336;569] 40.95%       3600           [336;538] 37.55%       36000
DAFJS11    [658;708] 7.06%        3600           [658;701] 6.13%        36000
DAFJS12    [530;720] 26.39%       3600           [530;720] 26.39%       36000
DAFJS13    [304;710] 57.18%       3600           [304;683] 55.49%       36000
DAFJS14    [358.95;838] 57.17%    3600           [358.95;775] 53.68%    36000
DAFJS15    [512;818] 37.41%       3600           [512;796] 35.68%       36000
DAFJS16    [640;831] 22.98%       3600           [640;798] 19.80%       36000
DAFJS17    [300;904] 66.81%       3600           [300;902] 66.74%       36000
DAFJS18    [322;951] 66.14%       3600           [322;878] 63.33%       36000
DAFJS19    [512;595] 13.95%       3600           [512;585] 12.48%       36000
DAFJS20    [434;815] 46.75%       3600           [434;810] 46.42%       36000
DAFJS21    [504;965] 47.77%       3600           [504;959] 47.45%       36000
DAFJS22    [464;902] 48.56%       3600           [464;896] 48.21%       36000
DAFJS23    [450;541] 16.82%       3600           [450;537] 16.20%       36000
DAFJS24    [476;660] 27.88%       3600           [476;648] 26.54%       36000
DAFJS25    [584;897] 34.89%       3600           [584;879] 33.56%       36000
DAFJS26    [565;903] 37.43%       3600           [565;898] 37.08%       36000
DAFJS27    [503;981] 48.73%       3600           [503;981] 48.73%       36000
DAFJS28    [535;662] 19.18%       3600           [535;572] 6.47%        36000
DAFJS29    [609;720] 15.42%       3600           [609;710] 14.23%       36000
DAFJS30    [467;637] 26.69%       3600           [467;615] 24.07%       36000

Fig. 4. Graphical representation of the operations’ precedence constraints of instance DAFJS28.

YFJS01–YFJS20, while a gap smaller than 1% was obtained in two other instances (YFJS15 and YFJS16). On the other hand, Table 4 shows that the exact solver found the optimal solution in only four instances out of 30, and a gap smaller than 1% in a single instance, when considering the set of instances DAFJS01–DAFJS30. These results explain the "improvements" obtained by the introduced heuristic methods when compared against the solutions obtained by the exact solver (with a CPU time limit). In a first set of experiments, we aim to evaluate the numerical performance of the list scheduling algorithm. Tables 5 and 6 show the results of applying the list scheduling algorithm to the sets of instances YFJS01–YFJS20 and DAFJS01–DAFJS30, respectively. For each instance, the tables display the makespan of the solution obtained by the list scheduling algorithm. A comparison with the makespan obtained by the EST heuristic presented in Birgin et al. (2014) and the makespan obtained by the exact solver with CPU time limits of 1 and 10 hours is also presented. In each case, if v1 is the value

of the makespan found by the list scheduling algorithm and v2 is the value of the makespan found by the other method, "diff", which stands for relative difference, is given by (v1 − v2)/v2. This means that negative values of "diff" indicate that the list scheduling algorithm found a solution of better quality. Both tables show that the introduced list scheduling method improves on the solutions obtained by the heuristic EST introduced in Birgin et al. (2014). When compared against the exact solver, we must consider the two sets of instances separately. In the set of instances YFJS01–YFJS20, for which the exact solver found optimal solutions in most of the cases, the difference is approximately 35%. In the set of instances DAFJS01–DAFJS30, in which the exact solver was unable to find optimal solutions in most of the cases, the difference is approximately 7%. Needless to say, the list scheduling algorithm runs, for every instance, in a fraction of a second. In any case, it is not our intention to compare our heuristic method against the exact solver but simply to evaluate the quality of the obtained solutions.
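For reference, the "diff" column is a plain relative difference, reported as a percentage. Taking YFJS03 from Table 5: the list scheduling makespan 575 against the EST makespan 439 gives 30.98%, and against the CPLEX makespan 347 gives 65.71%.

```python
def rel_diff(v1, v2):
    """Relative difference (v1 - v2) / v2, in percent; negative values
    mean the list scheduling makespan v1 beats the reference value v2."""
    return 100.0 * (v1 - v2) / v2

print(round(rel_diff(575, 439), 2))   # 30.98, as in Table 5
print(round(rel_diff(575, 347), 2))   # 65.71
print(round(rel_diff(1130, 1318), 2)) # -14.26 (YFJS01 vs EST)
```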

Table 5
Results of applying the list scheduling algorithm to the instances YFJS01–YFJS20.

Instance   mks     EST heuristic          CPLEX              CPLEX
                   (Birgin et al., 2014)  (1 hour limit)     (10 hours limit)
                   mks      diff          mks     diff       mks     diff
YFJS01     1130    1318     −14.26        773     46.18      773     46.18
YFJS02     1133    1243     −8.85         825     37.33      825     37.33
YFJS03     575     439      30.98         347     65.71      347     65.71
YFJS04     576     569      1.23          390     47.69      390     47.69
YFJS05     608     566      7.42          445     36.63      445     36.63
YFJS06     633     633      0.00          447     41.61      446     41.93
YFJS07     628     628      0.00          444     41.44      444     41.44
YFJS08     485     531      −8.66         353     37.39      353     37.39
YFJS09     402     506      −20.55        242     66.12      242     66.12
YFJS10     513     541      −5.18         399     28.57      399     28.57
YFJS11     745     740      0.68          526     41.63      526     41.63
YFJS12     744     813      −8.49         512     45.31      512     45.31
YFJS13     553     717      −22.87        405     36.54      405     36.54
YFJS14     1555    2055     −24.33        1317    18.07      1317    18.07
YFJS15     1690    2296     −26.39        1244    35.85      1244    35.85
YFJS16     1769    2006     −11.81        1243    42.32      1231    43.70
YFJS17     1734    2408     −27.99        1622    6.91       1290    34.42
YFJS18     1735    2082     −16.67        2082    −16.67     1499    15.74
YFJS19     1604    2038     −21.30        1525    5.18       1333    20.33
YFJS20     1700    2369     −28.24        2020    −15.84     1279    32.92
Average                     −10.26                32.40              38.68

Table 6
Results of applying the list scheduling algorithm to the instances DAFJS01–DAFJS30.

Instance   mks    EST heuristic          CPLEX              CPLEX
                  (Birgin et al., 2014)  (1 hour limit)     (10 hours limit)
                  mks     diff           mks     diff       mks     diff
DAFJS01    321    327     −1.83          257     24.90      257     24.90
DAFJS02    350    382     −8.38          289     21.11      289     21.11
DAFJS03    631    710     −11.13         576     9.55       576     9.55
DAFJS04    607    653     −7.04          606     0.17       606     0.17
DAFJS05    505    482     4.77           402     25.62      394     28.17
DAFJS06    497    489     1.64           431     15.31      410     21.22
DAFJS07    632    717     −11.85         565     11.86      547     15.54
DAFJS08    706    847     −16.65         631     11.89      631     11.89
DAFJS09    533    535     −0.37          484     10.12      471     13.16
DAFJS10    621    629     −1.27          569     9.14       538     15.43
DAFJS11    767    708     8.33           708     8.33       701     9.42
DAFJS12    727    720     0.97           720     0.97       720     0.97
DAFJS13    768    766     0.26           710     8.17       683     12.45
DAFJS14    888    871     1.95           838     5.97       775     14.58
DAFJS15    788    818     −3.67          818     −3.67      796     −1.01
DAFJS16    808    831     −2.77          831     −2.77      798     1.25
DAFJS17    935    910     2.75           904     3.43       902     3.66
DAFJS18    939    951     −1.26          951     −1.26      878     6.95
DAFJS19    598    601     −0.50          595     0.50       585     2.22
DAFJS20    854    815     4.79           815     4.79       810     5.43
DAFJS21    937    965     −2.90          965     −2.90      959     −2.29
DAFJS22    826    902     −8.43          902     −8.43      896     −7.81
DAFJS23    548    632     −13.29         541     1.29       537     2.05
DAFJS24    687    674     1.93           660     4.09       648     6.02
DAFJS25    885    897     −1.34          897     −1.34      879     0.68
DAFJS26    915    903     1.33           903     1.33       898     1.89
DAFJS27    982    981     0.10           981     0.10       981     0.10
DAFJS28    633    703     −9.96          662     −4.38      572     10.66
DAFJS29    800    760     5.26           720     11.11      710     12.68
DAFJS30    640    657     −2.59          637     0.47       615     4.07
Average                   −2.20                  5.73               8.40

In a second set of experiments, we aim to evaluate the numerical performance of the introduced beam search method. As with every heuristic method, it is also relevant to evaluate the influence of the method's parameters on its performance. With that purpose, we considered all 80 combinations of α ∈ {0.25, 0.5, 0.75, 1}, β ∈ {0.25, 0.5, 0.75, 1}, and ξ ∈ {0, 0.25, 0.5, 0.75, 1}. For each combination of parameters, we ran the beam search method and computed the average difference against the solutions obtained by the exact solver running with a CPU time limit of 1 hour and 10 hours. Tables 7 and 8 show these average differences for the sets of instances YFJS01–YFJS20 and DAFJS01–DAFJS30, respectively. The same information is graphically presented in Figs. 5a and 5b, respectively. Additionally, Figs. 6a and 6b present the average elapsed CPU time (in seconds) of the beam search method for each combination of parameters. A few comments are in order. We will focus on the comparison against the solutions obtained by the exact solver with a CPU time limit of 1 hour (the other comparison is similar). Table 7 shows that the beam search method with the combination (α, β, ξ) =


E.G. Birgin et al. / European Journal of Operational Research 247 (2015) 421–440

Table 7
Comparison of applying the beam search method, with different choices for parameters α, β, and ξ, to instances YFJS01–YFJS20 against the CPLEX solver with two different CPU time limits (1 hour and 10 hours).

            Diff versus CPLEX (1 hour limit)   Diff versus CPLEX (10 hours limit)
α     ξ     β=0.25  β=0.50  β=0.75  β=1.00     β=0.25  β=0.50  β=0.75  β=1.00
0.25  0.00   8.86    8.87    8.42    8.28      13.94   13.91   13.47   13.32
0.25  0.25   7.73    7.17    6.51    6.25      12.72   12.15   11.45   11.18
0.25  0.50   7.17    6.42    5.70    5.68      12.09   11.32   10.59   10.56
0.25  0.75   7.76    7.02    6.54    6.22      12.74   11.98   11.49   11.17
0.25  1.00   8.32    7.68    6.80    6.46      13.31   12.65   11.75   11.41
0.50  0.00   8.76    7.45    7.41    7.24      13.80   12.46   12.41   12.24
0.50  0.25   7.14    6.00    5.22    5.05      12.05   10.88   10.07    9.90
0.50  0.50   6.97    5.80    5.69    5.23      11.93   10.74   10.62   10.17
0.50  0.75   6.82    6.11    5.65    5.21      11.72   11.00   10.54   10.09
0.50  1.00   6.51    5.80    5.46    5.50      11.46   10.75   10.40   10.43
0.75  0.00   9.03    6.86    6.87    6.51      14.04   11.87   11.88   11.54
0.75  0.25   6.44    4.80    4.24    4.19      11.35    9.69    9.09    9.04
0.75  0.50   6.01    5.11    4.80    4.48      10.91    9.99    9.68    9.35
0.75  0.75   5.50    4.53    4.32    3.81      10.44    9.45    9.24    8.73
0.75  1.00   4.99    4.08    4.00    3.82       9.93    8.99    8.91    8.73
1.00  0.00   9.03    6.85    7.11    6.86      14.05   11.86   12.09   11.87
1.00  0.25   6.53    4.63    4.59    4.42      11.43    9.48    9.44    9.27
1.00  0.50   5.41    4.66    4.43    4.00      10.30    9.54    9.31    8.86
1.00  0.75   5.12    4.32    4.01    3.87       9.99    9.19    8.84    8.71
1.00  1.00   4.56    3.79    3.92    3.50       9.49    8.70    8.80    8.38

Table 8
Comparison of applying the beam search method, with different choices for parameters α, β, and ξ, to instances DAFJS01–DAFJS30 against the CPLEX solver with two different CPU time limits (1 hour and 10 hours).

            Diff versus CPLEX (1 hour limit)   Diff versus CPLEX (10 hours limit)
α     ξ     β=0.25  β=0.50  β=0.75  β=1.00     β=0.25  β=0.50  β=0.75  β=1.00
0.25  0.00  −5.07   −5.57   −5.75   −5.72      −2.69   −3.20   −3.38   −3.35
0.25  0.25  −5.18   −5.74   −5.95   −6.01      −2.80   −3.37   −3.60   −3.65
0.25  0.50  −5.24   −5.70   −5.81   −5.91      −2.86   −3.33   −3.44   −3.55
0.25  0.75  −5.20   −5.67   −5.85   −5.90      −2.81   −3.30   −3.49   −3.54
0.25  1.00  −5.19   −5.66   −5.87   −6.03      −2.80   −3.28   −3.50   −3.68
0.50  0.00  −4.94   −5.56   −5.75   −5.77      −2.55   −3.19   −3.39   −3.41
0.50  0.25  −5.53   −6.01   −6.14   −6.19      −3.16   −3.65   −3.79   −3.84
0.50  0.50  −5.71   −6.07   −6.13   −6.19      −3.36   −3.72   −3.78   −3.85
0.50  0.75  −5.43   −5.91   −6.06   −6.11      −3.05   −3.54   −3.70   −3.75
0.50  1.00  −5.11   −5.86   −5.98   −5.96      −2.72   −3.49   −3.62   −3.60
0.75  0.00  −5.19   −5.62   −5.70   −5.77      −2.81   −3.25   −3.33   −3.40
0.75  0.25  −5.88   −6.02   −6.07   −6.25      −3.52   −3.66   −3.72   −3.90
0.75  0.50  −5.82   −6.06   −6.27   −6.32      −3.46   −3.71   −3.93   −3.98
0.75  0.75  −5.68   −5.93   −6.16   −6.25      −3.32   −3.58   −3.81   −3.90
0.75  1.00  −5.74   −6.07   −6.19   −6.34      −3.37   −3.71   −3.83   −3.99
1.00  0.00  −5.19   −5.65   −5.70   −5.77      −2.81   −3.27   −3.33   −3.40
1.00  0.25  −5.90   −6.10   −6.30   −6.33      −3.53   −3.74   −3.95   −3.98
1.00  0.50  −6.01   −6.28   −6.36   −6.36      −3.65   −3.94   −4.02   −4.02
1.00  0.75  −5.72   −6.04   −6.19   −6.29      −3.36   −3.69   −3.84   −3.95
1.00  1.00  −5.65   −6.01   −6.13   −6.22      −3.28   −3.65   −3.77   −3.86

(1, 1, 1) achieves an average distance of 3.50% on the set of instances YFJS01–YFJS20. This is an interesting result if we recall that, for that set of instances, the exact solver found optimal solutions in most of the cases. At the other extreme of the table, with the choice of parameters (α, β, ξ) = (0.25, 0.25, 0), the difference is 8.86%. On the other hand, Fig. 6a shows that with the former combination of parameters the beam search method requires almost 2650 seconds per instance on average, while with the latter combination it requires approximately 5 seconds per instance on average. Moreover, the table shows that, considering all 80 combinations of parameters, the differences range from 3.50% to 9.03%, revealing a nice feature: the performance of the method does not strongly depend on a precise choice of parameters. The relation between the choice of parameters and the performance of the method can be easily seen in Figs. 5a and 5b. For fixed values of α and β, the optimal value of ξ is always greater than


or equal to 0.5. Moreover, as expected, the larger the values of α and β, the larger the search space and, in consequence, the better the solutions' quality and the larger the required CPU time. If we now focus on Table 8, we can see that, for the set of instances DAFJS01–DAFJS30, for which the exact solver was unable to find optimal solutions in most of the cases, the beam search method improves on the exact solver's solutions, displaying differences that range from −4.94% (for the combination of parameters (α, β, ξ) = (0.5, 0.25, 0)) down to −6.36% (for the combination of parameters (α, β, ξ) = (1, 1, 0.5)). In the former case, considering instances DAFJS01–DAFJS30 individually, differences range from 12.94% down to −17.18%, with an average CPU time of 1.18 seconds per instance, while in the latter case differences range from 8.95% down to −17.52%, with an average CPU time of 115.04 seconds per instance. Recalling that this comparison is being made with the exact solver that ran with


Fig. 5. Graphical representation of the performance of the beam search method for different choices of parameters α , β , and ξ (when compared against the results obtained by running the exact solver with a CPU time limit of 1 hour). (a) Instances YFJS01–YFJS20. (b) Instances DAFJS01–DAFJS30.
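The parameter sweep discussed above (running the beam search for all 80 combinations of α, β, and ξ and scoring each one by its average percentage difference against the exact solver's makespans) can be sketched as follows. This is a minimal illustration, not the authors' code; the `beam_search` callable and the function names are hypothetical placeholders.

```python
from itertools import product

def average_percent_diff(heuristic_mks, reference_mks):
    """Average relative difference (in %) of heuristic makespans against
    the reference (exact-solver) makespans, instance by instance."""
    diffs = [100.0 * (h - r) / r for h, r in zip(heuristic_mks, reference_mks)]
    return sum(diffs) / len(diffs)

# The 4 x 4 x 5 = 80 parameter combinations considered in the experiments.
alphas = [0.25, 0.5, 0.75, 1.0]
betas = [0.25, 0.5, 0.75, 1.0]
xis = [0.0, 0.25, 0.5, 0.75, 1.0]

def sweep(instances, beam_search, reference_mks):
    """Hypothetical driver: score every (alpha, beta, xi) combination by its
    average difference against the reference makespans."""
    results = {}
    for alpha, beta, xi in product(alphas, betas, xis):
        mks = [beam_search(inst, alpha, beta, xi) for inst in instances]
        results[(alpha, beta, xi)] = average_percent_diff(mks, reference_mks)
    return results
```

With this layout, the entries of Tables 7 and 8 are exactly the values stored in `results`, one per parameter combination.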


Fig. 6. Graphical representation of the elapsed CPU time required by the beam search method for different choices of parameters α , β , and ξ . (a) Instances YFJS01–YFJS20. (b) Instances DAFJS01–DAFJS30.
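Figs. 5 and 6 together describe a quality/time trade-off: the combination with the smallest average difference can be orders of magnitude more expensive than a slightly worse one. Choosing a combination under a CPU-time budget can be sketched as below; `best_combination` is a hypothetical helper, not part of the paper's method.

```python
def best_combination(avg_diff, avg_time, time_budget=None):
    """Pick the parameter combination with the smallest average difference,
    optionally restricted to combinations whose average CPU time fits a
    given budget. Both arguments map (alpha, beta, xi) tuples to floats."""
    candidates = avg_diff
    if time_budget is not None:
        candidates = {p: d for p, d in avg_diff.items()
                      if avg_time[p] <= time_budget}
    return min(candidates, key=candidates.get)
```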

a CPU time limit of 1 hour (which was reached in most of the cases; see Table 4), this means that the beam search method is able to find high-quality solutions in a small elapsed time. In a final experiment, we evaluate the performance of the introduced beam search method (with the combination of parameters (α, β, ξ) = (1, 1, 1)) when applied to the classical FJS problem (without sequencing flexibility). We considered the sets of instances introduced in Barnes and Chambers (1996), Brandimarte (1993), Dauzère-Pérès and Paulli (1997), and Hurink, Jurish, and Thole (1994). Table 9 reports

the relative error (RE) of the makespan mks found by our beam search method with respect to the lower bounds mks_LB available at http://people.idsia.ch/˜monaldo/fjspresults/fjsp_result.ps (accessed in 2015), given by

RE = 100% × (mks − mks_LB)/mks_LB. In the table, "# inst." is the number of instances in each set. If these results are compared to those associated with the state-of-the-art GA

Table 9
Performance of the beam search method when applied to all well-known instances of the classical FJSP.

Data set                     # inst.   RE
Brandimarte                  10        26.54
Dauzère-Pérès and Paulli     18         6.21
Barnes and Chambers          21        32.09
Hurink EData                 43         8.52
Hurink RData                 43         4.75
Hurink VData                 43         0.26
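The relative errors reported in Table 9 follow directly from the definition RE = 100% × (mks − mks_LB)/mks_LB; a minimal sketch:

```python
def relative_error(mks, mks_lb):
    """RE (in %) of a makespan with respect to its lower bound:
    RE = 100% * (mks - mks_LB) / mks_LB."""
    return 100.0 * (mks - mks_lb) / mks_lb
```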

method introduced in Pezzella, Morganti, and Ciaschetti (2008, see Table 7 on p. 3210), it can be seen that the beam search method obtains competitive results, outperforming all three GA methods whose results are reported in Pezzella et al. (2008) on the set "Dauzère-Pérès and Paulli" and on the set "Hurink VData", which is the set whose instances present the largest machine flexibility. However, all possible caveats apply to this comparison: (a) the beam search method is a deterministic method, with no randomness and with finite termination, while the other methods are run five times and the best makespan is used in the comparison; (b) in order to obtain a comparison at a glance, we ran the beam search method fixing its parameters α, β, and ξ all equal to 1 (while 80 different combinations were evaluated in the numerical experiments with the FJS problem with sequencing flexibility); and (c) the methods being compared were run on different machines, and the CPU times of the GA methods considered in Pezzella et al. (2008) were not reported.

6. Conclusions

An extension of the classical flexible job shop problem, in which precedences between the operations can be given by an arbitrary directed acyclic graph instead of a linear order, was considered in this work. A list scheduling algorithm that fully exploits the characteristics of the problem was introduced. Then, by substituting the choice of a single operation to be scheduled and sequenced at each step of the method with a set of operations, a beam search method was naturally developed. Numerical results assessed the effectiveness and efficiency of both approaches. The precise description and full availability of methods and results allow them to be used as benchmarks for future developments. There is a vast range of extensions of the classical flexible job shop problem that might be considered in order to deal with more realistic situations. Redesigning the presented methods to deal with those extensions will be the subject of future research.
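The relation between the two methods described above (beam search as list scheduling with a set of partial schedules kept at each step) can be sketched generically. This is an illustrative skeleton under stated assumptions, not the paper's actual procedure: `successors` and `score` stand in for the problem-specific candidate generation and evaluation, and with `width=1` the loop degenerates to a greedy list scheduling construction.

```python
import heapq

def beam_search(initial, successors, score, width):
    """Generic beam search skeleton. `successors(state)` yields the partial
    schedules obtained by scheduling one more operation, and returns an empty
    list when the schedule is complete; `score` estimates schedule quality
    (lower is better); `width` is the beam size. With width=1 this reduces to
    greedy list scheduling."""
    beam = [initial]
    while True:
        children = [c for s in beam for c in successors(s)]
        if not children:  # every state in the beam is a complete schedule
            return min(beam, key=score)
        # keep only the `width` most promising partial schedules
        beam = heapq.nsmallest(width, children, key=score)
```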
Acknowledgements

This work was supported by PRONEX-CNPq/FAPERJ E26/111.449/2010-APQ1, FAPESP (grants 2010/10133-0, 2013/03447-6, 2013/05475-7, and 2013/07375-0), and CNPq. The authors are thankful to Marcio T. I. Oshiro, who made the numerical comparisons included in the present work possible by re-running the numerical experiments in Birgin et al. (2014) on the same computational environment considered in the present work. The authors are also thankful to Jun Zeng (Hewlett-Packard Laboratories, Hewlett-Packard Co., Palo Alto, California, USA) for fruitful discussions and suggestions, and to three anonymous referees whose comments helped to improve the quality of this work.

References

Alvarez-Valdés, R., Fuertes, A., Tamarit, J. M., Giménez, G., & Ramos, R. (2005). A heuristic to schedule flexible job-shop in a glass factory. European Journal of Operational Research, 165, 525–534.


Barnes, J. W., & Chambers, J. B. (1996). Flexible job shop scheduling with tabu search. Technical Report, Graduate Program in Operations Research and Industrial Engineering, University of Texas, Austin.
Birgin, E. G., Feofiloff, P., Fernandes, C. G., de Melo, E. L., Oshiro, M. T. I., & Ronconi, D. P. (2014). A MILP model for an extended version of the flexible job shop problem. Optimization Letters, 8, 1417–1431.
Brandimarte, P. (1993). Routing and scheduling in a flexible job shop by tabu search. Annals of Operations Research, 41, 157–183.
Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to algorithms (3rd ed.). Cambridge, MA: The MIT Press.
Dauzère-Pérès, S., & Paulli, J. (1997). An integrated approach for modeling and solving the general multiprocessor job-shop scheduling problem using tabu search. Annals of Operations Research, 70, 281–306.
Fattahi, P., Mehrabad, M., & Jolai, F. (2007). Mathematical modeling and heuristic approaches to flexible job shop scheduling problems. Journal of Intelligent Manufacturing, 18, 331–342.
Fulkerson, D. R. (1956). Note on Dilworth's decomposition theorem for partially ordered sets. Proceedings of the American Mathematical Society, 7, 701–702.
Gan, P. Y., & Lee, K. S. (2002). Scheduling of flexible-sequenced process plans in a mould manufacturing shop. International Journal of Advanced Manufacturing Technology, 20, 214–222.
Garey, M., Johnson, D., & Sethi, R. (1976). The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research, 1, 117–129.
Gutiérrez, C., & García-Magariño, I. (2011). Modular design of a hybrid genetic algorithm for a flexible job-shop scheduling problem. Knowledge-Based Systems, 24, 102–112.
Ho, N. B., Tai, J. C., & Lai, E. M.-K. (2006). An effective architecture for learning and evolving flexible job-shop schedules. European Journal of Operational Research, 179, 316–333.
Hurink, J., Jurish, B., & Thole, M. (1994). Tabu search for the job shop scheduling problem with multi-purpose machines. OR-Spektrum, 15, 205–215.
Joseph, O. A., & Sridharan, R. (2011). Effects of routing flexibility, sequencing flexibility and scheduling decision rules on the performance of a flexible manufacturing system. International Journal of Advanced Manufacturing Technology, 56, 291–306.
Kacem, I., Hammadi, S., & Borne, P. (2002). Pareto-optimality approach for flexible job-shop scheduling problems: hybridization of evolutionary algorithms and fuzzy logic. Mathematics and Computers in Simulation, 60, 245–276.
Kim, Y.-D. (1995). A backward approach in list scheduling algorithms for multi-machine tardiness problems. Computers & Operations Research, 22, 307–319.
Kim, Y. K., Park, K., & Ko, J. (2003). A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Computers & Operations Research, 30, 1151–1171.
Lee, G.-C., Kim, Y.-D., & Choi, S.-W. (2004). Bottleneck-focused scheduling for a hybrid flowshop. International Journal of Production Research, 42, 165–181.
Lee, S., Moon, I., Bae, H., & Kim, J. (2012). Flexible job-shop scheduling problems with AND/OR precedence constraints. International Journal of Production Research, 50, 1979–2001.
Li, J., Pan, Q., & Liang, Y. (2010). An effective hybrid tabu search algorithm for multi-objective flexible job-shop scheduling problems. Computers & Industrial Engineering, 59, 647–662.
Lin, G. Y.-J., & Solberg, J. J. (1991). Effectiveness of flexible routing control. International Journal of Flexible Manufacturing Systems, 3, 189–211.
Mainieri, G., & Ronconi, D. P. (2013). New heuristics for total tardiness minimization in a flexible flowshop. Optimization Letters, 7, 665–684.
Manne, A. (1960). On the job-shop scheduling problem. Operations Research, 8, 219–223.
Mastrolilli, M., & Gambardella, L. M. (2000). Effective neighbourhood functions for the flexible job shop problem. Journal of Scheduling, 3, 3–20.
Ow, P. S., & Morton, T. E. (1988). Filtered beam search in scheduling. International Journal of Production Research, 26, 35–62.
Özgüven, C., Özbakır, L., & Yavuz, Y. (2010). Mathematical models for job-shop scheduling problems with routing and process plan flexibility. Applied Mathematical Modelling, 34, 1539–1548.
Pezzella, F., Morganti, G., & Ciaschetti, G. (2008). A genetic algorithm for the flexible job-shop scheduling problem. Computers & Operations Research, 35, 3202–3212.
Pinedo, M. (2008). Scheduling: theory, algorithms, and systems (3rd ed.). New York, NY: Springer.
Rachamadugu, R., Nandkeolyar, U., & Schriber, T. (1993). Scheduling with sequencing flexibility. Decision Sciences, 24, 315–342.
Sabuncuoglu, I., & Bayiz, M. (1999). Job shop scheduling with beam search. European Journal of Operational Research, 118, 390–412.
Sabuncuoglu, I., & Karabuk, S. (1998). A beam search-based algorithm and evaluation of scheduling approaches for flexible manufacturing systems. IIE Transactions, 30, 179–191.
Shao, X., Liu, W., Liu, Q., & Zhang, C. (2013). Hybrid discrete particle swarm optimization for multi-objective flexible job-shop scheduling problem. International Journal of Advanced Manufacturing Technology, 67, 2885–2901.
Shi-Jin, W., Bing-Hai, Z., & Li-Feng, X. (2008). A filtered-beam-search-based heuristic algorithm for flexible job-shop scheduling problem. International Journal of Production Research, 46, 3027–3058.
Sörensen, K. (2015). Metaheuristics – the metaphor exposed. International Transactions in Operational Research, 22, 3–18.
Tay, J. C., & Ho, N. B. (2008). Evolving dispatching rules using genetic programming for solving multi-objective flexible job-shop problems. Computers & Industrial Engineering, 54, 453–473.


Vilcot, G., & Billaut, J.-C. (2008). A tabu search and a genetic algorithm for solving a bicriteria general job shop scheduling problem. European Journal of Operational Research, 190, 398–411.
Yuan, Y., & Xu, H. (2013). An integrated search heuristic for large-scale flexible job shop scheduling problems. Computers & Operations Research, 40, 2864–2877.
Zeng, J., Jackson, S., Lin, I.-J., Gustafson, M., Hoarau, E., & Mitchell, R. (2010). On-demand digital print operations: a simulation based case study. Technical Report. Hewlett-Packard.

Zhang, G., Shao, X., Li, P., & Gao, L. (2009). An effective hybrid particle swarm optimization algorithm for multi-objective flexible job-shop scheduling problem. Computers & Industrial Engineering, 56, 1309–1318.
Printers National Environmental Assistance Center (PNEAC). http://www.pneac.org. Accessed on February 9, 2015.
http://people.idsia.ch/˜monaldo/fjspresults/fjsp_result.ps. Accessed on February 9, 2015.
