TRANSCRIPT
COSC 3101A - Design and Analysis of Algorithms
8
Elements of DP
Memoization
Longest Common Subsequence
Greedy Algorithms
Many of these slides are taken from Monica Nicolescu, Univ. of Nevada, Reno, [email protected]
6/22/2004 Lecture 8 COSC3101A 2
Elements of Dynamic Programming
• Optimal Substructure
– An optimal solution to a problem contains within it an
optimal solution to subproblems
– Optimal solution to the entire problem is built in a
bottom-up manner from optimal solutions to
subproblems
• Overlapping Subproblems
– If a recursive algorithm revisits the same subproblems
over and over, the problem has overlapping
subproblems
Optimal Substructure - Examples
• Assembly line
– Fastest way of going through a station j contains:
the fastest way of going through station j-1 on either
line
• Matrix multiplication
– Optimal parenthesization of Ai Ai+1 … Aj that splits
the product between Ak and Ak+1 contains:
an optimal solution to the problem of parenthesizing
Ai..k and Ak+1..j
Discovering Optimal Substructure
1. Show that a solution to a problem consists of making a
choice that leaves one or more similar problems to be
solved
2. Suppose that for a given problem you are given the
choice that leads to an optimal solution
3. Given this choice, determine which subproblems result
4. Show that the solutions to the subproblems used within
the optimal solution must themselves be optimal
• Cut-and-paste approach
Parameters of Optimal Substructure
• How many subproblems are used in an optimal
solution for the original problem?
– Assembly line: one subproblem (the line that gives the best time)
– Matrix multiplication: two subproblems (subproducts Ai..k, Ak+1..j)
• How many choices do we have in determining
which subproblems to use in an optimal solution?
– Assembly line: two choices (line 1 or line 2)
– Matrix multiplication: j – i choices for k (splitting the product)
Parameters of Optimal Substructure
• Intuitively, the running time of a dynamic
programming algorithm depends on two factors:
– Number of subproblems overall
– How many choices we look at for each subproblem
• Assembly line: Θ(n) subproblems (n stations)
– 2 choices for each subproblem
⇒ Θ(n) overall
• Matrix multiplication: Θ(n²) subproblems (1 ≤ i ≤ j ≤ n)
– At most n – 1 choices for each subproblem
⇒ Θ(n³) overall
Memoization
• Top-down approach with the efficiency of the typical
bottom-up dynamic programming approach
• Maintain an entry in a table for the solution to each
subproblem
– memoize the inefficient recursive algorithm
• When a subproblem is first encountered, its solution is
computed and stored in the table
• Subsequent “calls” to the subproblem simply look up that
value
Memoized Matrix-Chain
Alg.: MEMOIZED-MATRIX-CHAIN(p)
1. n ← length[p] – 1
2. for i ← 1 to n
3.   do for j ← i to n
4.     do m[i, j] ← ∞
5. return LOOKUP-CHAIN(p, 1, n)
Initialize the m table with large values (∞) that indicate that the values of m[i, j] have not yet been computed.
Top-down approach
Memoized Matrix-Chain
Alg.: LOOKUP-CHAIN(p, i, j)
1. if m[i, j] < ∞
2.   then return m[i, j]
3. if i = j
4.   then m[i, j] ← 0
5.   else for k ← i to j – 1
6.     do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + pi-1·pk·pj
7.       if q < m[i, j]
8.         then m[i, j] ← q
9. return m[i, j]
Running time is O(n³)
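The memoized recursion above can be sketched in Python (an assumed implementation language; the lecture itself uses pseudocode):

```python
import sys

def memoized_matrix_chain(p):
    """Minimum number of scalar multiplications to compute A1*A2*...*An,
    where matrix Ai has dimensions p[i-1] x p[i]."""
    n = len(p) - 1
    INF = sys.maxsize
    m = [[INF] * (n + 1) for _ in range(n + 1)]  # "infinity" marks "not yet computed"

    def lookup_chain(i, j):
        if m[i][j] < INF:          # already computed: just look it up
            return m[i][j]
        if i == j:
            m[i][j] = 0            # a single matrix needs no multiplications
        else:
            for k in range(i, j):  # try every split point between Ak and Ak+1
                q = (lookup_chain(i, k) + lookup_chain(k + 1, j)
                     + p[i - 1] * p[k] * p[j])
                if q < m[i][j]:
                    m[i][j] = q
        return m[i][j]

    return lookup_chain(1, n)
```

For the classic CLRS chain with dimensions p = ⟨30, 35, 15, 5, 10, 20, 25⟩ this returns 15125.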
Dynamic Programming vs. Memoization
• Advantages of dynamic programming vs.
memoized algorithms
– No overhead for recursion, less overhead for
maintaining the table
– The regular pattern of table accesses may be used to
reduce time or space requirements
• Advantages of memoized algorithms vs.
dynamic programming
– Some subproblems do not need to be solved
Matrix-Chain Multiplication - Summary
• Both the dynamic programming approach
and the memoized algorithm can solve the
matrix-chain multiplication problem in O(n3)
• Both methods take advantage of the overlapping
subproblems property
• There are only Θ(n²) different subproblems
– Solutions to these problems are computed only once
• Without memoization the natural recursive
algorithm runs in exponential time
Longest Common Subsequence
• Given two sequences
X = ⟨x1, x2, …, xm⟩
Y = ⟨y1, y2, …, yn⟩
find a maximum-length common subsequence (LCS) of X and Y
• E.g.: X = ⟨A, B, C, B, D, A, B⟩
• Subsequences of X:
– A subset of the elements of the sequence, taken in order
– e.g. ⟨A, B, D⟩, ⟨B, C, D, B⟩, etc.
Example
X = ⟨A, B, C, B, D, A, B⟩
Y = ⟨B, D, C, A, B, A⟩
⟨B, C, B, A⟩ and ⟨B, D, A, B⟩ are longest common subsequences of X and Y (length = 4)
⟨B, C, A⟩, however, is not an LCS of X and Y
Brute-Force Solution
• For every subsequence of X, check whether it’s
a subsequence of Y
• There are 2^m subsequences of X to check
• Each subsequence takes Θ(n) time to check
– scan Y for the first letter, from there scan for the second, and
so on
• Running time: Θ(n·2^m)
Notations
• Given a sequence X = ⟨x1, x2, …, xm⟩ we define
the i-th prefix of X, for i = 0, 1, 2, …, m:
Xi = ⟨x1, x2, …, xi⟩
• c[i, j] = the length of an LCS of the sequences
Xi = ⟨x1, x2, …, xi⟩ and Yj = ⟨y1, y2, …, yj⟩
A Recursive Solution
Case 1: xi = yj
e.g.: Xi = ⟨A, B, D, E⟩
Yj = ⟨Z, B, E⟩
– Append xi = yj to the LCS of Xi-1 and Yj-1
– Must find an LCS of Xi-1 and Yj-1 ⇒ optimal solution to a
problem includes optimal solutions to subproblems
c[i, j] = c[i - 1, j - 1] + 1
A Recursive Solution
Case 2: xi ≠ yj
e.g.: Xi = ⟨A, B, D, G⟩
Yj = ⟨Z, B, D⟩
– Must solve two problems
• find an LCS of Xi-1 and Yj: Xi-1 = ⟨A, B, D⟩ and Yj = ⟨Z, B, D⟩
• find an LCS of Xi and Yj-1: Xi = ⟨A, B, D, G⟩ and Yj-1 = ⟨Z, B⟩
• Optimal solution to a problem includes optimal
solutions to subproblems
c[i, j] = max { c[i - 1, j], c[i, j - 1] }
Overlapping Subproblems
• To find a LCS of X and Y
– we may need to find the LCS between X and Yn-1 and
that of Xm-1 and Y
– Both of the above subproblems have the subproblem of
finding the LCS of Xm-1 and Yn-1
• Subproblems share subsubproblems
3. Computing the Length of the LCS
c[i, j] = 0                            if i = 0 or j = 0
          c[i-1, j-1] + 1              if xi = yj
          max(c[i, j-1], c[i-1, j])    if xi ≠ yj

(Figure: the (m+1) × (n+1) table c; row 0 and column 0 are initialized to 0, and the table is filled row by row: first index i, then index j.)
Additional Information
c[i, j] = 0                            if i = 0 or j = 0
          c[i-1, j-1] + 1              if xi = yj
          max(c[i, j-1], c[i-1, j])    if xi ≠ yj

A matrix b[i, j]:
• For a subproblem [i, j] it tells us what choice was made to obtain the optimal value
• If xi = yj
b[i, j] = “↖”
• Else, if c[i - 1, j] ≥ c[i, j - 1]
b[i, j] = “↑”
else
b[i, j] = “←”

(Figure: the b and c tables side by side; entry c[i, j] is computed from c[i-1, j-1], c[i-1, j], and c[i, j-1].)
Alg.: LCS-LENGTH(X, Y, m, n)
1. for i ← 1 to m
2.   do c[i, 0] ← 0
3. for j ← 0 to n
4.   do c[0, j] ← 0
5. for i ← 1 to m
6.   do for j ← 1 to n
7.     do if xi = yj
8.       then c[i, j] ← c[i - 1, j - 1] + 1
9.            b[i, j] ← “↖”
10.      else if c[i - 1, j] ≥ c[i, j - 1]
11.        then c[i, j] ← c[i - 1, j]
12.             b[i, j] ← “↑”
13.        else c[i, j] ← c[i, j - 1]
14.             b[i, j] ← “←”
15. return c and b
The length of the LCS is zero if one of the sequences is empty
Case 1: xi = yj
Case 2: xi ≠ yj
Running time: Θ(mn)
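A direct transcription of LCS-LENGTH into Python (assumed language for these sketches), using string labels in place of the arrows:

```python
def lcs_length(X, Y):
    """Fill c[i][j] = length of an LCS of the prefixes X[:i] and Y[:j],
    plus a table b recording which choice produced each value."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]    # row 0 / column 0 stay 0
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:             # case 1: x_i = y_j
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = 'diag'
            elif c[i - 1][j] >= c[i][j - 1]:     # case 2: take the larger neighbor
                c[i][j] = c[i - 1][j]
                b[i][j] = 'up'
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = 'left'
    return c, b
```

Both tables are (m+1) × (n+1); the answer is c[m][n].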
Example
X = ⟨B, D, C, A, B, A⟩
Y = ⟨A, B, C, B, D, A⟩

c[i, j] = 0                            if i = 0 or j = 0
          c[i-1, j-1] + 1              if xi = yj
          max(c[i, j-1], c[i-1, j])    if xi ≠ yj

If xi = yj, b[i, j] = “↖”; else if c[i - 1, j] ≥ c[i, j - 1], b[i, j] = “↑”; else b[i, j] = “←”

(Figure: the filled c table with its b arrows; row 0 and column 0 are all zeros, and the bottom-right entry is 4, the length of the LCS.)
4. Constructing an LCS
• Start at b[m, n] and follow the arrows
• When we encounter a “↖” in b[i, j] ⇒ xi = yj is an element of the LCS

(Figure: the same filled c table with arrows; following them back from the bottom-right entry traces one LCS of length 4.)
Alg.: PRINT-LCS(b, X, i, j)
1. if i = 0 or j = 0
2.   then return
3. if b[i, j] = “↖”
4.   then PRINT-LCS(b, X, i - 1, j - 1)
5.        print xi
6. elseif b[i, j] = “↑”
7.   then PRINT-LCS(b, X, i - 1, j)
8. else PRINT-LCS(b, X, i, j - 1)
Initial call: PRINT-LCS(b, X, length[X], length[Y])
Running time: Θ(m + n)
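Putting LCS-LENGTH and PRINT-LCS together, a Python sketch (assumed language) that returns one LCS; it recovers the arrow directions from the c table instead of storing b:

```python
def lcs(X, Y):
    """Return one longest common subsequence of X and Y.
    Builds the c table, then backtracks from c[m][n] (the PRINT-LCS walk)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Backtrack: the comparisons below recover the arrows,
    # so no separate b table is needed.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:            # "diagonal": part of the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:    # "up"
            i -= 1
        else:                               # "left"
            j -= 1
    return ''.join(reversed(out))
```

On the earlier example, lcs("ABCBDAB", "BDCABA") returns "BCBA", one of the length-4 LCSs.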
Improving the Code
• What can we say about how each entry c[i, j] is computed?
– It depends only on c[i-1, j-1], c[i-1, j], and c[i, j-1]
– We can eliminate table b and compute in O(1) which of the three values was used to compute c[i, j]
– We save Θ(mn) space from table b
– However, we do not asymptotically decrease the auxiliary space requirements: we still need table c
• If we only need the length of the LCS
– LCS-LENGTH works on only two rows of c at a time
• The row being computed and the previous row
– We can reduce the asymptotic space requirements by storing only these two rows
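The two-row space optimization can be sketched as follows (Python assumed; it returns the length only, since the b information needed for reconstruction is discarded):

```python
def lcs_length_two_rows(X, Y):
    """Length-only LCS using two rows of the c table at a time."""
    m, n = len(X), len(Y)
    prev = [0] * (n + 1)           # row i-1 of c (row 0 is all zeros)
    for i in range(1, m + 1):
        curr = [0] * (n + 1)       # row i; entry 0 stays 0
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr                # current row becomes the previous row
    return prev[n]
```

Auxiliary space drops from Θ(mn) to Θ(n).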
Greedy Algorithms
• Similar to dynamic programming, but simpler approach
– Also used for optimization problems
• Idea: When we have a choice to make, make the one
that looks best right now
– Make a locally optimal choice in hope of getting a globally
optimal solution
• Greedy algorithms don’t always yield an optimal solution
• When the problem has certain general characteristics,
greedy algorithms give optimal solutions
Activity Selection
• Schedule n activities that require exclusive use of a common resource
S = {a1, …, an} – set of activities
• ai needs the resource during period [si, fi)
– si = start time and fi = finish time of activity ai
– 0 ≤ si < fi < ∞
• Activities ai and aj are compatible if the intervals [si, fi) and [sj, fj) do not overlap
(Figure: ai and aj are compatible when fi ≤ sj or fj ≤ si.)
Activity Selection Problem
Select the largest possible set of nonoverlapping (mutually compatible) activities.
E.g.:
i   1  2  3  4  5  6  7   8   9   10  11
si  1  3  0  5  3  5  6   8   8   2   12
fi  4  5  6  7  8  9  10  11  12  13  14
• Activities are sorted in increasing order of finish times
• A subset of mutually compatible activities: {a3, a9, a11}
• Maximum-size sets of mutually compatible activities:
{a1, a4, a8, a11} and {a2, a4, a9, a11}
Optimal Substructure
• Define the space of subproblems:
Sij = { ak ∈ S : fi ≤ sk < fk ≤ sj }
– activities that start after ai finishes and finish before aj starts
• Activities that are compatible with the ones in Sij:
– All activities that finish by fi
– All activities that start no earlier than sj
Representing the Problem
• Add fictitious activities
– a0 = [-∞, 0)
– an+1 = [∞, “∞ + 1”)
– Range for Sij is 0 ≤ i, j ≤ n + 1
• In a set Sij we assume that activities are sorted in increasing order of finish times:
f0 ≤ f1 ≤ f2 ≤ … ≤ fn < fn+1
• What happens if i ≥ j?
– For an activity ak ∈ Sij: fi ≤ sk < fk ≤ sj < fj ⇒ fi < fj,
a contradiction with fi ≥ fj!
⇒ Sij = ∅ (the set Sij must be empty!)
• We only need to consider sets Sij with 0 ≤ i < j ≤ n + 1
S = S0,n+1 – the entire space of activities
Optimal Substructure
• Subproblem:
– Select a maximum-size subset of mutually compatible
activities from set Sij
• Assume that a solution to the above subproblem includes activity ak (Sij is non-empty)
Solution to Sij = (Solution to Sik) ∪ {ak} ∪ (Solution to Skj)
|Solution to Sij| = |Solution to Sik| + 1 + |Solution to Skj|
(Figure: ak splits Sij into Sik and Skj.)
Optimal Substructure (cont.)
Aij = Optimal solution to Sij
• Claim: sets Aik and Akj must be optimal solutions
• Assume there exists Aik’ that includes more activities than Aik
Size[Aij’] = Size[Aik’] + 1 + Size[Akj] > Size[Aij]
⇒ Contradiction: we assumed that Aij has the maximum # of
activities taken from Sij
(Figure: Aij is composed of Aik, {ak}, and Akj.)
Recursive Solution
• Any optimal solution (associated with a set Sij) contains within it optimal solutions to subproblems Sik and Skj
• c[i, j] = size of a maximum-size subset of mutually compatible activities in Sij
• If Sij = ∅ ⇒ c[i, j] = 0 (i ≥ j)
Recursive Solution
If Sij ≠ ∅ and we consider that ak is used in an
optimal solution (maximum-size subset of
mutually compatible activities of Sij), then:
c[i, j] = c[i, k] + c[k, j] + 1
(Figure: ak splits Sij into Sik and Skj.)
Recursive Solution
c[i, j] = 0                                         if Sij = ∅
          max {c[i, k] + c[k, j] + 1}               if Sij ≠ ∅
          i < k < j, ak ∈ Sij
• There are j – i – 1 possible values for k
– k = i+1, …, j – 1
– ak cannot be ai or aj (from the definition of
Sij = { ak ∈ S : fi ≤ sk < fk ≤ sj })
– We check all the values and take the best one
We could now write a dynamic programming algorithm
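Such a dynamic programming algorithm might look like this in Python (a sketch under the lecture's conventions: activities sorted by finish time, fictitious activities a0 and a_{n+1} added; the slides state the recurrence but do not develop this code):

```python
import math

def activity_selection_dp(s, f):
    """O(n^3) DP over the subproblems S_ij; returns the maximum number of
    mutually compatible activities. Input is sorted by finish time."""
    n = len(s)
    # Fictitious activities: a0 = [-inf, 0) and a_{n+1} starting at +inf.
    S = [-math.inf] + list(s) + [math.inf]
    F = [0] + list(f) + [math.inf]
    c = [[0] * (n + 2) for _ in range(n + 2)]
    for span in range(2, n + 2):             # solve by increasing j - i
        for i in range(0, n + 2 - span):
            j = i + span
            for k in range(i + 1, j):
                if F[i] <= S[k] and F[k] <= S[j]:   # a_k belongs to S_ij
                    q = c[i][k] + c[k][j] + 1
                    if q > c[i][j]:
                        c[i][j] = q
    return c[0][n + 1]                        # S_{0,n+1} is the whole problem
```

On the lecture's 11-activity example this returns 4, matching the maximum-size sets shown earlier.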
Theorem
Let Sij ≠ ∅, and let am be the activity in Sij with the
earliest finish time:
fm = min { fk : ak ∈ Sij }
Then:
1. am is used in some maximum-size subset of
mutually compatible activities of Sij
– There exists some optimal solution that contains am
2. Sim = ∅
– Choosing am leaves Smj as the only nonempty subproblem
Proof
2. Assume ∃ ak ∈ Sim
⇒ fi ≤ sk < fk ≤ sm < fm
⇒ fk < fm ⇒ contradiction!
am would not have the earliest finish time
⇒ There is no ak ∈ Sim ⇒ Sim = ∅
(Figure: am at [sm, fm) sits between Sim and Smj inside Sij; any ak in Sim would have to finish before fm.)
Proof
1. am is used in some maximum-size subset of
mutually compatible activities of Sij
• Aij = optimal solution for activity selection from Sij
– Order activities in Aij in increasing order of finish time
– Let ak be the first activity in Aij = {ak, …}
• If ak = am: done!
• Otherwise, replace ak with am (resulting in a set Aij’)
– since fm ≤ fk, the activities in Aij’ will continue to be compatible
– Aij’ will have the same size as Aij ⇒ am is used in some maximum-size subset
⇒ Greedy Choice Property
Why is the Theorem Useful?
• Making the greedy choice (the activity with the earliest finish time in Sij)
– Reduces the number of subproblems and choices
– Solves each subproblem in a top-down fashion

Dynamic programming:
– Number of subproblems in the optimal solution: 2 subproblems, Sik and Skj
– Number of choices to consider: j – i – 1 choices
Using the theorem:
– Number of subproblems in the optimal solution: 1 subproblem, Smj (since Sim = ∅)
– Number of choices to consider: 1 choice, the activity with the earliest finish time in Sij
Greedy Approach
• To select a maximum-size subset of mutually
compatible activities from set Sij:
– Choose am ∈ Sij with the earliest finish time (greedy choice)
– Add am to the set of activities used in the optimal solution
– Solve the same problem for the set Smj
• From the theorem
– By choosing am we are guaranteed to have used an activity
included in an optimal solution
⇒ We do not need to solve the subproblem Smj before making the
choice!
– The problem has the GREEDY CHOICE property
Characterizing the Subproblems
• The original problem: find the maximum subset of mutually compatible activities for S = S0, n+1
• Activities are sorted by increasing finish time
a0, a1, a2, a3, …, an+1
• We always choose the activity with the earliest finish time
– Greedy choice maximizes the amount of unscheduled time
remaining
– Finish times of the selected activities are strictly increasing
A Recursive Greedy Algorithm
Alg.: REC-ACT-SEL(s, f, i, j)
1. m ← i + 1
2. while m < j and sm < fi   ► Find first activity in Sij
3.   do m ← m + 1
4. if m < j
5.   then return {am} ∪ REC-ACT-SEL(s, f, m, j)
6.   else return ∅
• Activities are ordered in increasing order of finish time
• Running time: Θ(n) – each activity is examined only once
• Initial call: REC-ACT-SEL(s, f, 0, n+1)
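REC-ACT-SEL in Python (assumed language); s and f are padded at index 0 with the fictitious activity a0 (with f0 = 0), so activity indices match the slides:

```python
def rec_act_sel(s, f, i, j):
    """Return the indices of a maximum-size compatible subset of S_ij.
    Expects s[0], f[0] to describe the fictitious activity a0 (f[0] = 0)."""
    m = i + 1
    while m < j and s[m] < f[i]:   # find the first activity in S_ij
        m += 1
    if m < j:
        return [m] + rec_act_sel(s, f, m, j)
    return []

# Activities from the lecture's example, padded with a0 at index 0:
s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(rec_act_sel(s, f, 0, len(s)))   # initial call REC-ACT-SEL(s, f, 0, n+1)
```

The call selects activities a1, a4, a8, a11, mirroring the trace on the next slide.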
Example
k   sk  fk
0   –   0
1   1   4
2   3   5
3   0   6
4   5   7
5   3   8
6   5   9
7   6   10
8   8   11
9   8   12
10  2   13
11  12  14
12  –
(Figure: a timeline from 0 to 14 tracing the recursive calls; the algorithm fixes m = 1 and selects a1, then m = 4 and selects a4, then m = 8 and selects a8, then m = 11 and selects a11.)
An Iterative Greedy Algorithm
Alg.: GREEDY-ACTIVITY-SELECTOR(s, f)
1. n ← length[s]
2. A ← {a1}
3. i ← 1
4. for m ← 2 to n
5.   do if sm ≥ fi   ► activity am is compatible with ai
6.     then A ← A ∪ {am}
7.          i ← m   ► ai is the most recent addition to A
8. return A
• Assumes that activities are ordered in increasing order of finish time
• Running time: Θ(n) – each activity is examined only once
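The iterative version in Python (assumed language), 0-indexed, so activity a_k of the slides is index k-1 here:

```python
def greedy_activity_selector(s, f):
    """Activities 0..n-1, sorted by finish time; returns selected indices."""
    n = len(s)
    A = [0]                 # the first (earliest-finishing) activity is always taken
    i = 0                   # index of the most recent addition to A
    for m in range(1, n):
        if s[m] >= f[i]:    # a_m is compatible with the last selected activity
            A.append(m)
            i = m
    return A
```

On the running example it selects indices 0, 3, 7, 10, i.e. {a1, a4, a8, a11}.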
Steps Toward Our Greedy Solution
1. Determine the optimal substructure of the problem
2. Develop a recursive solution
3. Prove that one of the optimal choices is the greedy
choice
4. Show that all but one of the subproblems that result from
making the greedy choice are empty
5. Develop a recursive algorithm that implements the
greedy strategy
6. Convert the recursive algorithm to an iterative one
Designing Greedy Algorithms
1. Cast the optimization problem as one for which:
we make a choice and are left with only one
subproblem to solve
2. Prove that there is always an optimal solution to
the original problem that makes the greedy choice
– Making the greedy choice is always safe
3. Demonstrate that after making the greedy choice:
the greedy choice + an optimal solution to the
resulting subproblem leads to an optimal solution
Correctness of Greedy Algorithms
1. Greedy Choice Property
– A globally optimal solution can be arrived at by
making a locally optimal (greedy) choice
2. Optimal Substructure Property
– We know that we have arrived at a subproblem by
making a greedy choice
– Optimal solution to subproblem + greedy choice ⇒
optimal solution for the original problem
Activity Selection
• Greedy Choice Property
There exists an optimal solution that includes the greedy
choice:
– The activity am with the earliest finish time in Sij
• Optimal Substructure:
If an optimal solution to subproblem Sij includes activity
ak ⇒ it must contain optimal solutions to Sik and Skj
Similarly, am + optimal solution to Smj ⇒ optimal solution to Sij
Dynamic Programming vs. Greedy Algorithms
• Dynamic programming
– We make a choice at each step
– The choice depends on solutions to subproblems
– Bottom up solution, from smaller to larger subproblems
• Greedy algorithm
– Make the greedy choice and THEN
– Solve the subproblem arising after the choice is made
– The choice we make may depend on previous choices,
but not on solutions to subproblems
– Top down solution, problems decrease in size
The Knapsack Problem
• The 0-1 knapsack problem
– A thief robbing a store finds n items: the i-th item is
worth vi dollars and weighs wi pounds (vi, wi
integers)
– The thief can only carry W pounds in his knapsack
– Items must be taken entirely or left behind
– Which items should the thief take to maximize the
value of his load?
• The fractional knapsack problem
– Similar to above
– The thief can take fractions of items
Fractional Knapsack Problem
• Knapsack capacity: W
• There are n items: the i-th item has value vi and
weight wi
• Goal:
– find xi such that 0 ≤ xi ≤ 1 for all i = 1, 2, …, n,
Σ wi·xi ≤ W, and
Σ xi·vi is maximum
Fractional Knapsack Problem
• Greedy strategy 1:– Pick the item with the maximum value
• E.g.:– W = 1
– w1 = 100, v1 = 2
– w2 = 1, v2 = 1
– Taking from the item with the maximum value:
Total value taken = v1/w1 = 2/100
– Smaller than what the thief can take by choosing the other item:
Total value (choose item 2) = v2/w2 = 1
Fractional Knapsack Problem
Greedy strategy 2:
• Pick the item with the maximum value per pound vi/wi
• If the supply of that item is exhausted and the thief can
carry more: take as much as possible from the item with the
next greatest value per pound
• It is good to order items based on their value per pound:
v1/w1 ≥ v2/w2 ≥ … ≥ vn/wn
Fractional Knapsack Problem
Alg.: FRACTIONAL-KNAPSACK(W, v[n], w[n])
1. while w > 0 and there are items remaining
2.   do pick the item i with maximum vi/wi
3.      xi ← min(1, w/wi)
4.      remove item i from the list
5.      w ← w – xi·wi
• w – the amount of space remaining in the knapsack (initially w = W)
• Running time: Θ(n) if items are already ordered; else Θ(n lg n)
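A Python sketch of this greedy strategy (assumed language; the item representation as (value, weight) pairs is our own convention):

```python
def fractional_knapsack(W, items):
    """items: list of (value, weight) pairs. Returns the maximum total value,
    taking fractions, by repeatedly picking the best value-per-pound item."""
    total = 0.0
    remaining = W
    # Sort once by value per pound, best first (the Theta(n lg n) step).
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if remaining <= 0:
            break
        x = min(1.0, remaining / w)   # fraction of this item taken
        total += x * v
        remaining -= x * w
    return total
```

On the example that follows (W = 50, items worth $60/10 lb, $100/20 lb, $120/30 lb) the result is $240.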
Fractional Knapsack - Example
• E.g.: knapsack capacity = 50
Item 1: 10 pounds, $60 ($6/pound)
Item 2: 20 pounds, $100 ($5/pound)
Item 3: 30 pounds, $120 ($4/pound)
• Greedy solution: take all of item 1 ($60), all of item 2 ($100), and 20/30 of item 3 ($80) ⇒ total value $240
Greedy Choice
Items:            1   2   3   …   j   …   n
Optimal solution: x1  x2  x3      xj      xn
Greedy solution:  x1’ x2’ x3’     xj’     xn’
• We know that: x1’ ≥ x1
– greedy choice takes as much as possible from item 1
• Modify the optimal solution to take x1’ of item 1
– We have to decrease the quantity taken from some item j: the new
xj is decreased by (x1’ – x1)·w1/wj
• Increase in profit: (x1’ – x1)·v1
• Decrease in profit: (x1’ – x1)·w1·vj/wj
(x1’ – x1)·v1 ≥ (x1’ – x1)·w1·vj/wj ⇔ v1/w1 ≥ vj/wj
True, since x1 had the best value/pound ratio
Optimal Substructure
• Consider the most valuable load that weighs at
most W pounds
• If we remove a weight w of item j from the
optimal load
⇒ The remaining load must be the most valuable
load weighing at most W – w that can be taken
from the remaining n – 1 items plus wj – w
pounds of item j
The 0-1 Knapsack Problem
• Thief has a knapsack of capacity W
• There are n items: the i-th item has value vi and
weight wi
• Goal:
– find xi such that xi ∈ {0, 1} for all i = 1, 2, …, n,
Σ wi·xi ≤ W, and
Σ xi·vi is maximum
0-1 Knapsack - Greedy Strategy
• E.g.: knapsack capacity = 50
Item 1: 10 pounds, $60 ($6/pound)
Item 2: 20 pounds, $100 ($5/pound)
Item 3: 30 pounds, $120 ($4/pound)
• Greedy by value per pound takes item 1, then item 2: 30 pounds, $160
• The optimal solution takes items 2 and 3: 50 pounds, $220
• None of the solutions involving the greedy choice (item 1) leads to an optimal solution
– The greedy choice property does not hold
0-1 Knapsack - Dynamic Programming
• P(i, w) – the maximum profit that can be
obtained from items 1 to i, if the
knapsack has size w
• Case 1: thief takes item i
P(i, w) = vi + P(i - 1, w - wi)
• Case 2: thief does not take item i
P(i, w) = P(i - 1, w)
0-1 Knapsack - Dynamic Programming
P(i, w) = max {vi + P(i - 1, w - wi), P(i - 1, w)}
(first term: item i was taken; second term: item i was not taken)
(Figure: the table P with rows i = 0..n and columns w = 0..W; row 0 and column 0 are all zeros, and the table is filled row by row: first index i, then index w. Entry P(i, w) depends on row i-1 at columns w - wi and w.)
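The recurrence translates directly to Python (assumed language for these sketches):

```python
def knapsack_01(values, weights, W):
    """P[i][w] = max profit using items 1..i with knapsack capacity w.
    values/weights are 0-indexed lists; item i of the slides is index i-1."""
    n = len(values)
    P = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 and column 0 are zeros
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]
        for w in range(1, W + 1):
            if wi <= w:
                # max of: taking item i, not taking item i
                P[i][w] = max(vi + P[i - 1][w - wi], P[i - 1][w])
            else:
                P[i][w] = P[i - 1][w]           # item i does not fit
    return P
```

On the example that follows (weights 2, 1, 3, 2; values 12, 10, 20, 15; W = 5), P[4][5] = 37.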
Example:
W = 5
Item  Weight  Value
1     2       12
2     1       10
3     3       20
4     2       15

P(i, w) = max {vi + P(i - 1, w - wi), P(i - 1, w)}

The filled table (rows i = 0..4, columns w = 0..5):
i\w  0  1   2   3   4   5
0    0  0   0   0   0   0
1    0  0   12  12  12  12
2    0  10  12  22  22  22
3    0  10  12  22  30  32
4    0  10  15  25  30  37

Sample computations:
P(1, 1) = 0 (item 1 does not fit); P(1, 2) = … = P(1, 5) = max{12 + 0, 0} = 12
P(2, 1) = max{10 + 0, 0} = 10; P(2, 2) = max{10 + 0, 12} = 12; P(2, 3) = P(2, 4) = P(2, 5) = max{10 + 12, 12} = 22
P(3, 1) = 10; P(3, 2) = 12; P(3, 3) = max{20 + 0, 22} = 22; P(3, 4) = max{20 + 10, 22} = 30; P(3, 5) = max{20 + 12, 22} = 32
P(4, 1) = 10; P(4, 2) = max{15 + 0, 12} = 15; P(4, 3) = max{15 + 10, 22} = 25; P(4, 4) = max{15 + 12, 30} = 30; P(4, 5) = max{15 + 22, 32} = 37
Reconstructing the Optimal Solution
• Start at P(n, W)
• When you go left-up ⇒ item i has been taken
• When you go straight up ⇒ item i has not been taken
Tracing the example from P(4, 5) = 37:
• Item 4 was taken (37 ≠ P(3, 5) = 32); continue from P(3, 3) = 22
• Item 3 was not taken (P(3, 3) = P(2, 3) = 22)
• Item 2 was taken (22 ≠ P(1, 3) = 12); continue from P(1, 2) = 12
• Item 1 was taken (12 ≠ P(0, 2) = 0)
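The walk described above, sketched in Python (assumed language); it rebuilds P and then backtracks from P[n][W]:

```python
def knapsack_with_items(values, weights, W):
    """Return (max profit, sorted 1-based indices of the items taken)."""
    n = len(values)
    P = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            P[i][w] = P[i - 1][w]
            if weights[i - 1] <= w:
                P[i][w] = max(P[i][w], values[i - 1] + P[i - 1][w - weights[i - 1]])
    # Backtrack: a change from the row above ("left-up") means item i was taken.
    taken, w = [], W
    for i in range(n, 0, -1):
        if P[i][w] != P[i - 1][w]:
            taken.append(i)          # 1-based item index, matching the slides
            w -= weights[i - 1]
    return P[n][W], sorted(taken)
```

For the example (W = 5) it returns (37, [1, 2, 4]): items 1, 2, and 4 are taken, matching the trace above.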
Optimal Substructure
• Consider the most valuable load that weighs at
most W pounds
• If we remove item j from this load
⇒ The remaining load must be the most valuable
load weighing at most W – wj that can be taken
from the remaining n – 1 items
Overlapping Subproblems
P(i, w) = max {vi + P(i - 1, w - wi), P(i - 1, w)}
(Figure: the P table with zero row 0 and column 0; all the subproblems shown in grey may depend on the single entry P(i-1, w).)
Readings
• Chapter 15
• Chapter 16