Grouping Techniques For Scheduling Problems · cgi.csc.liv.ac.uk/~ctag/seminars/tim-harnack.pdf · Tim Hartnack
TRANSCRIPT
Grouping Techniques For Scheduling Problems
Tim Hartnack
Theory of Parallelism, Institute of Computer Science
Christian-Albrechts-University of Kiel
October 11, 2007
October 11, 2007 Tim Hartnack Grouping Techniques For Scheduling Problems 1 of 26
Introduction

Overview

1. Introduction: Overview
2. Unrelated parallel machines with costs
   - Basic ideas
   - Rounding and profiling jobs
   - Grouping jobs
   - Dynamic programming
3. Outlook and discussion
Unrelated parallel machines with costs

Problem

0 < ε < 1 fixed, m ≥ 2 fixed.

Given:
- n independent jobs
- m unrelated parallel machines
- jobs run without interruption
- each machine processes at most one job at a time
- job J_j on machine i requires processing time p_ij ≥ 0 and incurs cost c_ij ≥ 0, for i = 1, …, m and j = 1, …, n
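As a concrete illustration, an instance of this problem can be held in two m×n matrices; the following Python sketch (all numbers made up) also computes the makespan of a given assignment:

```python
# Toy instance: m = 2 unrelated machines, n = 3 jobs (illustrative numbers).
# p[i][j] = processing time of job J_j on machine i; c[i][j] = its cost.
m, n = 2, 3
p = [[4.0, 2.0, 6.0],
     [1.0, 5.0, 3.0]]
c = [[1.0, 3.0, 0.5],
     [2.0, 1.0, 4.0]]

# A (non-preemptive) schedule is just an assignment job -> machine; the
# order of jobs on a machine does not matter for makespan or cost.
def makespan(assign):
    load = [0.0] * m
    for j, i in enumerate(assign):
        load[i] += p[i][j]
    return max(load)

print(makespan([1, 0, 1]))  # loads are [2.0, 4.0], so the makespan is 4.0
```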
Unrelated parallel machines with costs

Objective function of unrelated parallel machines with costs

Objective function:

    T + µ · ∑_{j=1}^{n} ∑_{i=1}^{m} x_ij c_ij    (1)

with

    x_ij = 1 if job J_j runs on machine i, and x_ij = 0 otherwise,

where T is the makespan and µ ≥ 0. By multiplying each cost value by µ we may assume w.l.o.g. that µ = 1.
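Objective (1) with µ = 1 can be evaluated directly from an assignment; a minimal sketch on illustrative data:

```python
# Evaluate objective (1) with mu = 1: makespan T plus total assignment cost.
# p, c are m x n matrices of times and costs (illustrative numbers).
p = [[4.0, 2.0, 6.0],
     [1.0, 5.0, 3.0]]
c = [[1.0, 3.0, 0.5],
     [2.0, 1.0, 4.0]]

def objective(assign, p, c):
    load = [0.0] * len(p)      # processing time accumulated per machine
    cost = 0.0                 # sum of x_ij * c_ij over the chosen pairs
    for j, i in enumerate(assign):
        load[i] += p[i][j]
        cost += c[i][j]
    return max(load) + cost    # T + sum_j sum_i x_ij c_ij

print(objective([1, 0, 1], p, c))  # makespan 4.0 + costs 9.0 = 13.0
```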
Unrelated parallel machines with costs

Notation and scaling factors

Definition (scaling factor)
For each job J_j ∈ J define:
1. d_j = min_{i=1,…,m} (p_ij + c_ij)
2. D = ∑_{j=1}^{n} d_j
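The scaling factors are straightforward to compute; a sketch on toy data:

```python
# Scaling factors: d_j is the cheapest "time + cost" option of job J_j,
# and D sums these over all jobs (illustrative numbers).
m, n = 2, 3
p = [[4.0, 2.0, 6.0],
     [1.0, 5.0, 3.0]]
c = [[1.0, 3.0, 0.5],
     [2.0, 1.0, 4.0]]

d = [min(p[i][j] + c[i][j] for i in range(m)) for j in range(n)]
D = sum(d)
print(d, D)  # [3.0, 5.0, 6.5] and D = 14.5
```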
Unrelated parallel machines with costs

Upper and lower bound of the objective function

Lemma
For the objective function, the following inequality holds: D/m ≤ OPT ≤ D

Proof.
Let x*_ij describe an optimal schedule with makespan T* and total cost C*. Since d_j = min_i (p_ij + c_ij),

    D = ∑_{j=1}^{n} d_j ≤ ∑_{i=1}^{m} ∑_{j=1}^{n} x*_ij c_ij + ∑_{i=1}^{m} ∑_{j=1}^{n} x*_ij p_ij ≤ C* + m·T* ≤ m·(C* + T*) = m · OPT,

so OPT ≥ D/m. The upper bound OPT ≤ D is shown next.
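The lemma can be sanity-checked numerically on a tiny instance by brute-forcing OPT over all m^n assignments (data illustrative):

```python
from itertools import product

# Brute-force sanity check of D/m <= OPT <= D on a tiny instance:
# enumerate all m^n assignments and take the best objective value.
m, n = 2, 3
p = [[4.0, 2.0, 6.0],
     [1.0, 5.0, 3.0]]
c = [[1.0, 3.0, 0.5],
     [2.0, 1.0, 4.0]]

def objective(assign):
    load = [0.0] * m
    cost = 0.0
    for j, i in enumerate(assign):
        load[i] += p[i][j]
        cost += c[i][j]
    return max(load) + cost

d = [min(p[i][j] + c[i][j] for i in range(m)) for j in range(n)]
D = sum(d)
OPT = min(objective(a) for a in product(range(m), repeat=n))
assert D / m <= OPT <= D
print(D, OPT)  # 14.5 and 9.5 here, and indeed 7.25 <= 9.5 <= 14.5
```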
Unrelated parallel machines with costs

Upper and lower bound of the objective function

Let m_j denote a machine with d_j = p_{m_j,j} + c_{m_j,j}, and assign each job J_j to machine m_j. The objective function value of this schedule is bounded by

    ∑_{J_j ∈ J} c_{m_j,j} + ∑_{J_j ∈ J} p_{m_j,j} = D

Hence OPT ∈ [D/m, D]. By dividing all times and costs by D/m we get

    1 ≤ OPT ≤ m
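The normalization step can be sketched as follows: after dividing every p_ij and c_ij by D/m, the brute-force optimum of an illustrative instance lands in [1, m]:

```python
from itertools import product

# Normalization sketch: scale all times and costs by D/m, then verify that
# the brute-force optimum of this illustrative instance lies in [1, m].
m, n = 2, 3
p = [[4.0, 2.0, 6.0],
     [1.0, 5.0, 3.0]]
c = [[1.0, 3.0, 0.5],
     [2.0, 1.0, 4.0]]

d = [min(p[i][j] + c[i][j] for i in range(m)) for j in range(n)]
scale = sum(d) / m                                   # = D / m
p = [[v / scale for v in row] for row in p]
c = [[v / scale for v in row] for row in c]

def objective(assign):
    load = [0.0] * m
    cost = 0.0
    for j, i in enumerate(assign):
        load[i] += p[i][j]
        cost += c[i][j]
    return max(load) + cost

OPT = min(objective(a) for a in product(range(m), repeat=n))
assert 1 <= OPT <= m
print(OPT)
```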
Unrelated parallel machines with costs: Basic ideas

Overview of the algorithm

1. Rounding and profiling of jobs creates profiles (only a constant number of profiles)
2. Grouping of jobs (only a constant number of jobs remains)
3. Schedule the constant number of jobs with dynamic programming

Observation (Transformation)
We say that a transformation produces 1 + O(ε) loss if it increases the objective function value by at most a factor of 1 + O(ε).
Unrelated parallel machines with costs: Rounding and profiling jobs

Sets of machines

For every job J_j define:
- fast machines: p_ij ≤ (ε/m) d_j
- cheap machines: c_ij ≤ (ε/m) d_j
- slow machines: p_ij ≥ (m/ε) d_j
- expensive machines: c_ij ≥ d_j/ε
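For a single job, the four machine sets can be computed directly from the thresholds above; a sketch with illustrative values (a machine may belong to several sets, or to none):

```python
# Classify the machines of one job J_j into the four sets above; thresholds
# follow the slide, numbers are illustrative. Machine 0 turns out to be both
# cheap and slow; machine 1 belongs to no set.
eps, m = 0.5, 2
p_j = [40.0, 1.0]   # p_ij for machines i = 0, 1
c_j = [0.5, 2.0]    # c_ij for machines i = 0, 1
d_j = min(pi + ci for pi, ci in zip(p_j, c_j))   # = min(40.5, 3.0) = 3.0

def classes(i):
    out = set()
    if p_j[i] <= (eps / m) * d_j:
        out.add("fast")
    if c_j[i] <= (eps / m) * d_j:
        out.add("cheap")
    if p_j[i] >= (m / eps) * d_j:
        out.add("slow")
    if c_j[i] >= d_j / eps:
        out.add("expensive")
    return out

print(classes(0), classes(1))
```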
Unrelated parallel machines with costs: Rounding and profiling jobs

Rounding jobs

- fast machine i of J_j: set p_ij := 0
- cheap machine i of J_j: set c_ij := 0
- slow machine i of J_j: set p_ij := +∞
- expensive machine i of J_j: set c_ij := +∞
- any other machine i of J_j: round p_ij and c_ij down to the nearest value (ε/m) d_j (1+ε)^h, for some h ∈ ℕ

Observation
For each job J_j ∈ J there is always a machine which is neither expensive nor slow.
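The whole rounding step for one job can be sketched as below; `round_job` is a hypothetical helper name and the data is made up, with the grid exponent h found by rounding a logarithm down:

```python
import math

# One-job rounding sketch: zero out values on fast/cheap machines, forbid
# slow/expensive machines with +infinity, and round everything else down
# onto the geometric grid (eps/m) * d_j * (1+eps)^h.
def round_job(p_j, c_j, eps, m):
    d_j = min(pi + ci for pi, ci in zip(p_j, c_j))
    unit = (eps / m) * d_j                  # grid base value (eps/m) d_j

    def snap(v, forbid_at):
        if v <= unit:
            return 0.0                      # fast (for p) or cheap (for c)
        if v >= forbid_at:
            return math.inf                 # slow (for p) or expensive (for c)
        h = math.floor(math.log(v / unit, 1 + eps))
        return unit * (1 + eps) ** h        # nearest lower grid value

    p_r = [snap(v, (m / eps) * d_j) for v in p_j]
    c_r = [snap(v, d_j / eps) for v in c_j]
    return p_r, c_r

print(round_job([40.0, 1.0], [0.5, 2.0], eps=0.5, m=2))
```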
Unrelated parallel machines with costs: Rounding and profiling jobs

Results of rounding

Lemma
Rounding produces 1 + 4ε loss.

Proof.
Start by rounding to zero the times and costs of jobs on fast and cheap machines, respectively. Let A be an optimal schedule of this transformed instance; the objective function value of A is at most OPT, since we only reduced times and costs. Let F and C denote the sets of jobs which are processed on fast and cheap machines, respectively, according to A. Replacing the times and costs of the transformed instance by the originals increases the objective function value by at most

    ∑_{J_j ∈ F} (ε/m) d_j + ∑_{J_j ∈ C} (ε/m) d_j ≤ 2 ∑_{j=1}^{n} (ε/m) d_j = 2ε · D/m = 2ε,

using D/m = 1 after scaling.
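The 2ε bound can be verified numerically in the worst case, where every job lies in both F and C (the d_j values are illustrative but scaled so that D = m, i.e. D/m = 1):

```python
# Worst-case numeric check of the 2*eps bound: every job is counted in both
# F and C, each contributing (eps/m) d_j in time and again in cost.
eps, m = 0.25, 2
d = [0.5, 0.5, 0.5, 0.5]            # illustrative d_j, chosen so D = m
D = sum(d)
assert D == m                       # normalized instance: D/m = 1

added = 2 * sum((eps / m) * dj for dj in d)
assert added <= 2 * eps
print(added, 2 * eps)  # both equal 0.5: the bound is tight in this worst case
```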
![Page 44: Grouping Techniques For Scheduling Problemscgi.csc.liv.ac.uk/~ctag/seminars/tim-harnack.pdf · Tim Hartnack Theory of Parallelism Institute of Computer Science Christian-Albrechts-University](https://reader036.vdocuments.site/reader036/viewer/2022071213/603fc3668f5ea8256c301b6a/html5/thumbnails/44.jpg)
Unrelated parallel machines with costs Rounding and profiling jobs
Results of rounding
LemmaRounding produces 1+4ε loss
Proof.Start by considering rounding to zero the times and costs of jobs on fastand cheap machines, respectively
Let A be an optimal schedule of thisThe objective function value of A≤ OPT
we just reduced times and costs
F and C denote sets of jobs, which are processed on fast and cheapmachines according to AReplace times and costs of the transformed instance by the originals
∑Jj∈F
ε
mdj + ∑
Jj∈C
ε
mdj ≤ 2
n
∑j=1
ε
mdj = 2ε
Dm
= 2ε
October 11, 2007 Tim Hartnack Grouping Techniques For Scheduling Problems 11 of 26
![Page 45: Grouping Techniques For Scheduling Problemscgi.csc.liv.ac.uk/~ctag/seminars/tim-harnack.pdf · Tim Hartnack Theory of Parallelism Institute of Computer Science Christian-Albrechts-University](https://reader036.vdocuments.site/reader036/viewer/2022071213/603fc3668f5ea8256c301b6a/html5/thumbnails/45.jpg)
Results of rounding II

Proof.
Show: there exists an approximate schedule in which no job is scheduled on a slow or an expensive machine. Set p_{ij}, c_{ij} := +∞ for these machine–job pairs. Let A be an optimal schedule with makespan T* and total costs C*, and let S and E be the sets of jobs running on slow and expensive machines, respectively. Assign each J_j ∈ S ∪ E to the machine m_j. This may increase the objective function value by at most

\[
\sum_{J_j \in S \cup E} d_j
\le \frac{\varepsilon}{m} \sum_{J_j \in S} p_{A(j),j} + \varepsilon \sum_{J_j \in E} c_{A(j),j}
\le \varepsilon T^* + \varepsilon C^*,
\]

since p_{A(j),j} ≥ (m/ε) d_j for J_j ∈ S and c_{A(j),j} ≥ d_j/ε for J_j ∈ E.
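The two rounding steps above can be sketched in Python. This is a minimal sketch, not the authors' implementation: the thresholds are inferred from the two proofs (fast/cheap entries are those bounded by (ε/m)·d_j, the bound used in the first lemma; slow/expensive entries are those of size at least (m/ε)·d_j resp. d_j/ε, the bounds used in the second lemma), and the data layout is our own choice.

```python
import math

def round_instance(p, c, d, eps):
    """Apply both rounding steps from the lemmas above.

    p[i][j], c[i][j]: time/cost of job j on machine i; d[j]: d-value of job j.
    Fast/cheap entries are rounded to 0 (first lemma, loss at most 2*eps);
    slow/expensive entries are set to +inf, i.e. forbidden (second lemma,
    loss at most eps*T* + eps*C*). Thresholds are inferred from the proofs.
    """
    m, n = len(p), len(p[0])
    for i in range(m):
        for j in range(n):
            if p[i][j] <= (eps / m) * d[j]:      # fast machine: time -> 0
                p[i][j] = 0.0
            elif p[i][j] >= (m / eps) * d[j]:    # slow machine: forbid
                p[i][j] = math.inf
            if c[i][j] <= (eps / m) * d[j]:      # cheap machine: cost -> 0
                c[i][j] = 0.0
            elif c[i][j] >= d[j] / eps:          # expensive machine: forbid
                c[i][j] = math.inf
    return p, c
```

After this transformation every remaining finite entry lies within a polynomially bounded range of d_j, which is what makes the profiles of the next step possible.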
Summary & Outlook

Up to now: all jobs rounded.
Next: create profiles of the jobs.
Profiles for jobs

Definition (Execution profile)
The execution profile of a job J_j is an m-tuple ⟨Π_{1,j}, …, Π_{m,j}⟩ such that
\[
p_{ij} = \frac{\varepsilon}{m}\, d_j\, (1+\varepsilon)^{\Pi_{i,j}}.
\]

Definition (Cost profile)
The cost profile of a job J_j is an m-tuple ⟨Γ_{1,j}, …, Γ_{m,j}⟩ such that
\[
c_{ij} = \frac{\varepsilon}{m}\, d_j\, (1+\varepsilon)^{\Gamma_{i,j}}.
\]
Special cases in the profile

For p_{ij} = +∞ put Π_{i,j} := +∞.
For p_{ij} = 0 put Π_{i,j} := −∞.
For c_{ij} = +∞ put Γ_{i,j} := +∞.
For c_{ij} = 0 put Γ_{i,j} := −∞.

Observation
Two jobs have the same profile if they have the same execution profile as well as the same cost profile.
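Given these definitions, a job's profile can be read off by taking base-(1+ε) logarithms. The sketch below assumes the rounded values are exact powers of 1+ε times (ε/m)·d_j, as the definitions state; the function names are our own.

```python
import math

def job_profile(p_j, c_j, d_j, eps, m):
    """Return the execution profile (Pi) and cost profile (Gamma) of one job:
    exponents with p_ij = (eps/m) * d_j * (1+eps)**Pi_i, including the
    special cases +inf -> +inf (forbidden machine) and 0 -> -inf (rounded
    to zero)."""
    def exponent(x):
        if math.isinf(x):
            return math.inf        # forbidden machine
        if x == 0:
            return -math.inf       # rounded to zero
        return round(math.log(x / ((eps / m) * d_j), 1 + eps))
    return [exponent(x) for x in p_j], [exponent(x) for x in c_j]
```

Two jobs are placed in the same group exactly when both returned tuples coincide.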
Number of profiles

Lemma
The number of different profiles is at most
\[
l := \left( 3 + 2 \log_{1+\varepsilon} \frac{m}{\varepsilon} \right)^{2m}
\]
(each of the 2m exponents takes one of at most 2 log_{1+ε}(m/ε) + 1 finite values, plus the two values ±∞).
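For concrete m and ε the bound l can be evaluated directly; a one-liner (the function name is our own):

```python
import math

def num_profiles_bound(m, eps):
    """Upper bound l = (3 + 2*log_{1+eps}(m/eps))**(2*m) on the number of
    distinct profiles, as in the lemma above."""
    return (3 + 2 * math.log(m / eps, 1 + eps)) ** (2 * m)
```

The point is that this quantity depends only on the fixed constants m and ε, never on the number of jobs n.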
Unrelated parallel machines with costs: Grouping jobs

Summary & Outlook

Up to now: all jobs rounded; every job has a profile; the number of profiles is constant.
Next: group jobs ⟹ number of jobs constant.
Grouping Jobs

1. Partition the jobs into
   L = {J_j : d_j > ε/m}  and  S = {J_j : d_j ≤ ε/m}.
2. L is the set of big jobs.
3. S is the set of small jobs.
4. Partition S into subsets S_i, i = 1, …, l, based on the profile. While there are two jobs J_a, J_b ∈ S_i with d_a, d_b ≤ ε/(2m), create a new job J_c from J_a and J_b. Continue this step until there is only one job J_j ∈ S_i with d_j ≤ ε/(2m) left.
5. Use the above grouping on all subsets S_i of S.
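The grouping procedure can be sketched as follows. This is a simplified sketch under assumptions: merging J_a and J_b is taken to add their d-values, times, and costs componentwise, the merge threshold ε/(2m) is the one inferred from the job-count bound, and the `profile` argument stands in for the profile computation of the previous slides.

```python
def group_small_jobs(jobs, eps, m, profile):
    """Group small jobs (d_j <= eps/m) with equal profile: repeatedly merge
    two jobs whose d-value is at most eps/(2m) until at most one such job
    per profile class remains. `jobs` is a list of dicts with keys 'd'
    (d-value), 'p' (times per machine), 'c' (costs per machine)."""
    big = [J for J in jobs if J['d'] > eps / m]      # step 1: big jobs L
    small = [J for J in jobs if J['d'] <= eps / m]   # step 1: small jobs S
    classes = {}                                      # step 4: S_i by profile
    for J in small:
        classes.setdefault(profile(J), []).append(J)
    grouped = []
    for cls in classes.values():
        tiny = [J for J in cls if J['d'] <= eps / (2 * m)]
        rest = [J for J in cls if J['d'] > eps / (2 * m)]
        while len(tiny) >= 2:                         # merge until one left
            a, b = tiny.pop(), tiny.pop()
            Jc = {'d': a['d'] + b['d'],
                  'p': [x + y for x, y in zip(a['p'], b['p'])],
                  'c': [x + y for x, y in zip(a['c'], b['c'])]}
            (tiny if Jc['d'] <= eps / (2 * m) else rest).append(Jc)
        grouped += rest + tiny
    return big + grouped
```

Each merge reduces the job count by one while preserving the total d-value, which is what drives the bound on the next slide.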
Results of grouping

Lemma
With a loss of 1 + ε the number of jobs can be reduced to
\[
k := \min\left\{ n,\ \left( \log \frac{m}{\varepsilon} \right)^{O(m)} \right\}.
\]

Proof.
After the grouping there are at most l jobs, one from each subset S_i, with d_j ≤ ε/(2m). Therefore the number of jobs is bounded by
\[
\frac{2D}{\varepsilon/m} + l \le \frac{2m^2}{\varepsilon} + l = \left( \log \frac{m}{\varepsilon} \right)^{O(m)}.
\]
The proof of the loss will be omitted.
Unrelated parallel machines with costs: Dynamic programming

Summary & Outlook

Up to now: all jobs rounded; every job has a profile; number of profiles constant; grouping ⟹ number of jobs constant.
Next: create a schedule with dynamic programming.
Dynamic Programming

1. J_1, …, J_k are the jobs of the transformed instance.
2. A schedule configuration s = (t_1, …, t_m, c) is an (m+1)-tuple, where t_i is the completion time of machine i and c is the total cost.
3. V_j is a set of such tuples (for all j = 1, …, k): for every i = 1, …, m there is a tuple v ∈ V_j whose entries are all 0, except for the ith component, which is p_{ij}, and the (m+1)th component, which is c_{ij}.
4. Let T(j,s) denote the truth value of: there is a schedule for J_1, …, J_j for which s is the corresponding configuration. Calculate all T(j,s):
\[
T(1,v) = \begin{cases} \text{true}, & \text{if } v \in V_1, \\ \text{false}, & \text{if } v \notin V_1, \end{cases}
\qquad
T(j,s) = \bigvee_{v \in V_j,\ v \le s} T(j-1,\, s-v) \quad \text{for } j = 2, \cdots, k.
\]
October 11, 2007 Tim Hartnack Grouping Techniques For Scheduling Problems 21 of 26
Unrelated parallel machines with costs Dynamic programming

Summary

Up to now:
  All jobs are rounded
  Every job has a profile
  The number of profiles is constant
  Grouping =⇒ the number of jobs is constant
  A schedule is found via dynamic programming
October 11, 2007 Tim Hartnack Grouping Techniques For Scheduling Problems 22 of 26
Unrelated parallel machines with costs Dynamic programming

Unrelated Parallel Machines with Costs

Lemma
For the problem Unrelated Parallel Machines with Costs there is an FPTAS
that runs in time O(n) + ((log m)/ε)^{O(m²)}.
(Without proof.)
October 11, 2007 Tim Hartnack Grouping Techniques For Scheduling Problems 23 of 26
Outlook and discussion

Outlook and Discussion

Implementing the algorithm in Java (quite slow)

For which other problems would this algorithm be suitable?

Could the running time be improved?
October 11, 2007 Tim Hartnack Grouping Techniques For Scheduling Problems 24 of 26
Outlook and discussion
Literature
Aleksei V. Fishkin, Klaus Jansen, Monaldo Mastrolilli. Grouping Techniques for Scheduling Problems: Simpler and Faster.
October 11, 2007 Tim Hartnack Grouping Techniques For Scheduling Problems 25 of 26
Outlook and discussion
END
Thanks for your attention
October 11, 2007 Tim Hartnack Grouping Techniques For Scheduling Problems 26 of 26