Design and Analysis of Algorithms Chapter 3
3-0Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 3
More Information
Textbook
• Introduction to Algorithms, 2nd ed., Cormen, Leiserson, Rivest, and Stein, The MIT Press, 2001.
Others
• Computer Algorithms: Introduction to Design and Analysis, 3rd ed., Sara Baase, Allen Van Gelder, Addison-Wesley, 2000.
• Algorithms, Richard Johnsonbaugh, Marcus Schaefer, Prentice
Hall, 2004.
• Introduction to The Design and Analysis of Algorithms, 2nd ed., Anany Levitin, Addison-Wesley, 2007.
Design and Analysis of Computer Algorithm
Recurrences
Solving Recurrences
A recurrence is an equation or inequality that describes a function in terms of itself by using smaller inputs
The expression

T(n) = c                if n = 1
T(n) = 2T(n/2) + c·n    if n > 1

is a recurrence.
Solving Recurrences
Examples:
• T(n) = 2T(n/2) + Θ(n)
  – T(n) = Θ(n lg n)
• T(n) = 2T(n/2) + n
  – T(n) = Θ(n lg n)
• T(n) = 2(T(n/2) + 17) + n
  – T(n) = Θ(n lg n)
Three methods for solving recurrences
• Substitution method
• Iteration method
• Master method
Recurrence Examples
T(n) = 0                if n = 0
T(n) = c + T(n-1)       if n > 0

T(n) = 0                if n = 0
T(n) = n + T(n-1)       if n > 0

T(n) = c                if n = 1
T(n) = 2T(n/2) + c·n    if n > 1

T(n) = c                if n = 1
T(n) = aT(n/b) + c·n    if n > 1
Substitution Method
The substitution method
• “making a good guess method”
• Guess the form of the answer, then
• use induction to find the constants and show that solution works
Our goal: show that
T(n) = 2T(n/2) + n = O(n lg n)
Substitution Method: T(n) = 2T(n/2) + n = O(n lg n)
We need to show that T(n) ≤ c·n lg n with an appropriate choice of c
• Inductive hypothesis: assume T(n/2) ≤ c(n/2) lg(n/2)
• Substitute back into the recurrence to show that T(n) ≤ c·n lg n follows, when c ≥ 1
  – T(n) = 2T(n/2) + n
         ≤ 2(c(n/2) lg(n/2)) + n
         = cn lg(n/2) + n
         = cn lg n − cn lg 2 + n
         = cn lg n − cn + n
         ≤ cn lg n for c ≥ 1
         = O(n lg n)
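The bound above can be sanity-checked numerically; a minimal sketch (the base case T(1) = 1 is an assumption, since the slides leave it unspecified):

```python
import math

def T(n):
    """Recurrence T(n) = 2*T(n/2) + n with assumed base case T(1) = 1."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# Empirically check T(n) <= c*n*lg(n) for c = 2 on powers of two.
for k in range(1, 15):
    n = 2 ** k
    assert T(n) <= 2 * n * math.log2(n)
```

The check passes with c = 2, matching the induction's conclusion that any c ≥ 1 works once the base case is absorbed.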
Substitution Method
Consider T(n) = 2T(√n) + lg n
• Simplify it by letting m = lg n, so that n = 2^m:
  T(2^m) = 2T(2^(m/2)) + m
• Rename S(m) = T(2^m):
  S(m) = 2S(m/2) + m = O(m lg m)
• Changing back from S(m) to T(n), we obtain
  T(n) = T(2^m) = S(m) = O(m lg m) = O(lg n lg lg n)
Iteration Method
Iteration method:
• Expand the recurrence k times
• Work some algebra to express as a summation
• Evaluate the summation
T(n) = c + T(n-1)
     = c + c + T(n-2) = 2c + T(n-2)
     = 2c + c + T(n-3) = 3c + T(n-3)
     = …
     = kc + T(n-k)
So far for n ≥ k we have
• T(n) = kc + T(n-k)
To stop the recursion, we should have
• n - k = 0, i.e., k = n
• T(n) = cn + T(0) = cn + 0 = cn
Thus in general T(n) = O(n)
for the recurrence
T(n) = 0             if n = 0
T(n) = c + T(n-1)    if n > 0
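The unrolling above can be mirrored in code; a small sketch that accumulates one c per expansion step instead of recursing:

```python
def T(n, c=1):
    """Unroll T(n) = c + T(n-1), T(0) = 0 by iteration instead of recursion."""
    total = 0
    while n > 0:        # each unrolling step contributes one c
        total += c
        n -= 1
    return total        # after n steps: T(n) = c*n

assert T(10, c=3) == 30   # matches the closed form c*n
```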
T(n) = n + T(n-1)
= n + n-1 + T(n-2)
= n + n-1 + n-2 + T(n-3)
= n + n-1 + n-2 + n-3 + T(n-4)
= …
= n + n-1 + n-2 + n-3 + … + (n-k+1) + T(n-k)
= Σ_{i = n-k+1}^{n} i + T(n-k)   for n ≥ k
To stop the recursion, we should have n - k = 0, i.e., k = n
T(n) = Σ_{i=1}^{n} i + T(0) = n(n+1)/2 + 0 = Θ(n^2)

for the recurrence
T(n) = 0             if n = 0
T(n) = n + T(n-1)    if n > 0
T(n) = 2T(n/2) + c
     = 2(2T(n/2^2) + c) + c
     = 2^2·T(n/2^2) + 2c + c = 2^2·T(n/2^2) + (2^2 − 1)c
     = 2^2(2T(n/2^3) + c) + (2^2 − 1)c
     = 2^3·T(n/2^3) + 4c + 3c = 2^3·T(n/2^3) + (2^3 − 1)c
     = 2^3(2T(n/2^4) + c) + 7c
     = 2^4·T(n/2^4) + (2^4 − 1)c
     = …
     = 2^k·T(n/2^k) + (2^k − 1)c

for the recurrence
T(n) = c              if n = 1
T(n) = 2T(n/2) + c    if n > 1
So far for n ≥ k we have
• T(n) = 2^k·T(n/2^k) + (2^k − 1)c
To stop the recursion, we should have
• n/2^k = 1, i.e., k = lg n
• T(n) = 2^(lg n)·T(n/2^(lg n)) + (2^(lg n) − 1)c
       = n·T(1) + (n − 1)c
       = nc + (n − 1)c
       = 2cn − c
       = O(n)
T(n) = aT(n/b) + cn
     = a(aT(n/b^2) + cn/b) + cn
     = a^2·T(n/b^2) + cn·(a/b) + cn = a^2·T(n/b^2) + cn(a/b + 1)
     = a^2(aT(n/b^3) + cn/b^2) + cn(a/b + 1)
     = a^3·T(n/b^3) + cn(a^2/b^2) + cn(a/b + 1) = a^3·T(n/b^3) + cn(a^2/b^2 + a/b + 1)
     = …
     = a^k·T(n/b^k) + cn(a^(k-1)/b^(k-1) + a^(k-2)/b^(k-2) + … + a^2/b^2 + a/b + 1)

for the recurrence
T(n) = c              if n = 1
T(n) = aT(n/b) + cn   if n > 1
So we have
• T(n) = a^k·T(n/b^k) + cn(a^(k-1)/b^(k-1) + … + a^2/b^2 + a/b + 1)
To stop the recursion, we should have
• n/b^k = 1, i.e., n = b^k, so k = log_b n
• T(n) = a^k·T(1) + cn(a^(k-1)/b^(k-1) + … + a^2/b^2 + a/b + 1)
       = c·a^k + cn(a^(k-1)/b^(k-1) + … + a^2/b^2 + a/b + 1)
       = cn·a^k/b^k + cn(a^(k-1)/b^(k-1) + … + a^2/b^2 + a/b + 1)   [since b^k = n, c·a^k = cn·a^k/b^k]
       = cn(a^k/b^k + a^(k-1)/b^(k-1) + … + a^2/b^2 + a/b + 1)
So with k = log_b n:
• T(n) = cn(a^k/b^k + … + a^2/b^2 + a/b + 1)
What if a = b?
• T(n) = cn(1 + 1 + … + 1 + 1)   // k+1 terms
       = cn(k + 1)
       = cn(log_b n + 1)
       = Θ(n log_b n)
So with k = log_b n:
• T(n) = cn(a^k/b^k + … + a^2/b^2 + a/b + 1)
What if a < b?
• Recall the geometric series: x^k + x^(k-1) + … + x + 1 = (x^(k+1) − 1)/(x − 1)
• With x = a/b < 1:
  (a/b)^k + … + a/b + 1 = ((a/b)^(k+1) − 1)/((a/b) − 1) < 1/(1 − a/b) = Θ(1)   since a < b
• So T(n) = cn · Θ(1) = Θ(n)
So with k = log_b n:
• T(n) = cn(a^k/b^k + … + a^2/b^2 + a/b + 1)
What if a > b?
• With x = a/b > 1, the geometric series is dominated by its largest term:
  (a/b)^k + … + a/b + 1 = ((a/b)^(k+1) − 1)/((a/b) − 1) = Θ((a/b)^k) = Θ(a^k/b^k)
• T(n) = cn · Θ(a^k/b^k) = cn · Θ(a^(log_b n)/b^(log_b n)) = cn · Θ(a^(log_b n)/n)
  recall the logarithm fact: a^(log_b n) = n^(log_b a)
  = cn · Θ(n^(log_b a)/n) = Θ(n^(log_b a))
So, for the recurrence
T(n) = c              if n = 1
T(n) = aT(n/b) + cn   if n > 1

we have
T(n) = Θ(n)            if a < b
T(n) = Θ(n log_b n)    if a = b
T(n) = Θ(n^(log_b a))  if a > b
The Master Theorem
Given: a divide and conquer algorithm
• An algorithm that divides the problem of size n into a
subproblems, each of size n/b
• Let the cost of each stage (i.e., the work to divide the problem +
combine solved subproblems) be described by the function f(n)
Then, the Master Theorem gives us a cookbook for the algorithm’s running time:
General Divide-and-Conquer Recurrence
T(n) = aT(n/b) + f(n)  where f(n) ∈ Θ(n^d), d ≥ 0

Master Theorem: If a < b^d, T(n) ∈ Θ(n^d)
                If a = b^d, T(n) ∈ Θ(n^d log n)
                If a > b^d, T(n) ∈ Θ(n^(log_b a))

Note: The same results hold with O instead of Θ.

Examples: T(n) = 4T(n/2) + n   ⇒ T(n) ∈ Θ(n^2)
          T(n) = 4T(n/2) + n^2 ⇒ T(n) ∈ Θ(n^2 log n)
          T(n) = 4T(n/2) + n^3 ⇒ T(n) ∈ Θ(n^3)
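The three cases can be checked mechanically; a small helper (the function name and string output are illustrative, not from the slides) that classifies T(n) = aT(n/b) + Θ(n^d):

```python
import math

def master_theorem(a, b, d):
    """Solve T(n) = a*T(n/b) + Theta(n^d), d >= 0, by the simplified Master Theorem."""
    if a < b ** d:
        return f"Theta(n^{d})"
    if a == b ** d:
        return f"Theta(n^{d} log n)"
    return f"Theta(n^{math.log(a, b):.3g})"   # exponent is log_b(a)

print(master_theorem(4, 2, 1))   # case a > b^d
print(master_theorem(4, 2, 2))   # case a = b^d
print(master_theorem(4, 2, 3))   # case a < b^d
```

Running it on the three example recurrences reproduces the answers above.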
The Master Theorem
if T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1, then

T(n) = Θ(n^(log_b a))        if f(n) = O(n^(log_b a − ε)) for some ε > 0
T(n) = Θ(n^(log_b a) lg n)   if f(n) = Θ(n^(log_b a))
T(n) = Θ(f(n))               if f(n) = Ω(n^(log_b a + ε)) for some ε > 0,
                             and a·f(n/b) ≤ c·f(n) for some c < 1 and large n
Using The Master Method Case 1
T(n) = 9T(n/3) + n
• a = 9, b = 3, f(n) = n
• n^(log_b a) = n^(log_3 9) = n^2
• since f(n) = O(n^(log_3 9 − ε)) = O(n^(2 − 0.5)) = O(n^1.5), where ε = 0.5
• case 1 applies: T(n) = Θ(n^(log_b a)) when f(n) = O(n^(log_b a − ε))
• Thus the solution is T(n) = Θ(n^2)
Using The Master Method Case 2
T(n) = T(2n/3) + 1
• a = 1, b = 3/2, f(n) = 1
• n^(log_b a) = n^(log_{3/2} 1) = n^0 = 1
• since f(n) = Θ(n^(log_b a)) = Θ(1)
• case 2 applies: T(n) = Θ(n^(log_b a) lg n) when f(n) = Θ(n^(log_b a))
• Thus the solution is T(n) = Θ(lg n)
Using The Master Method Case 3
T(n) = 3T(n/4) + n lg n
• a = 3, b = 4, f(n) = n lg n
• n^(log_b a) = n^(log_4 3) = n^0.793 ≈ n^0.8
• since f(n) = Ω(n^(log_4 3 + ε)) = Ω(n^(0.8 + 0.2)) = Ω(n), where ε ≈ 0.2
• and for sufficiently large n, a·f(n/b) = 3(n/4) lg(n/4) ≤ (3/4)·n lg n, so c = 3/4
• case 3 applies: T(n) = Θ(f(n)) when f(n) = Ω(n^(log_b a + ε))
• Thus the solution is T(n) = Θ(n lg n)
Approaches to Algorithm Design
• Brute force
• Divide-and-Conquer
• Dynamic Programming
• Greedy method
• Backtracking
• Branch and Bound
• Local search
• Hill climbing, Simulated annealing
• Genetic algorithms, Genetic programming
Thursday, January 09, 2014
Brute Force
Brute Force
A straightforward approach, usually based directly on the problem’s statement and definitions of the concepts involved
Examples:
1. Computing a^n (a > 0, n a nonnegative integer)
2. Computing n!
3. Multiplying two matrices
4. Searching for a key of a given value in a list
Brute-Force Sorting Algorithm
Selection Sort: Scan the array to find its smallest element and swap it with the first element. Then, starting with the second element, scan the elements to the right of it to find the smallest among them and swap it with the second element. Generally, on pass i (0 ≤ i ≤ n-2), find the smallest element in A[i..n-1] and swap it with A[i]:
A[0] ≤ . . . ≤ A[i-1] | A[i], . . . , A[min], . . ., A[n-1]
in their final positions
Example: 7 3 2 5
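The scan-and-swap passes described above can be sketched as:

```python
def selection_sort(a):
    """In-place selection sort: on pass i, swap the smallest of a[i..n-1] into a[i]."""
    n = len(a)
    for i in range(n - 1):
        m = i                          # index of smallest element seen so far
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return a

print(selection_sort([7, 3, 2, 5]))    # the slide's example input
```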
Analysis of Selection Sort
Time efficiency:
Space efficiency:
Stability:
Θ(n^2)
Θ(1), so in place
yes
Brute-Force String Matching
pattern: a string of m characters to search for
text: a (longer) string of n characters to search in
problem: find a substring in the text that matches the pattern
Brute-force algorithm
Step 1 Align pattern at beginning of text
Step 2 Moving from left to right, compare each character of the pattern to the corresponding character in the text until
– all characters are found to match (successful search); or
– a mismatch is detected
Step 3 While the pattern is not found and the text is not yet exhausted, realign the pattern one position to the right and repeat Step 2
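Steps 1-3 translate directly into two nested loops; a sketch:

```python
def brute_force_match(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1.
    Worst case: about (n - m + 1) * m character comparisons."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):          # Step 3: each alignment of the pattern
        j = 0
        while j < m and text[i + j] == pattern[j]:   # Step 2: left-to-right compare
            j += 1
        if j == m:
            return i                    # full match at shift i
    return -1

assert brute_force_match("abracadabra", "cad") == 4
```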
Examples of Brute-Force String Matching
1. Pattern: 001011
   Text:    10010101101001100101111010
2. Pattern: happy
   Text:    It is never too late to have a happy childhood.
Pseudocode and Efficiency
Time efficiency: Θ(mn) comparisons (in the worst case)
Why?
Brute-Force Polynomial Evaluation
Problem: Find the value of polynomial
p(x) = a[n]x^n + a[n-1]x^(n-1) + … + a[1]x + a[0]
at a point x = x0
Brute-force algorithm
p ← 0.0
for i ← n downto 0 do
    power ← 1
    for j ← 1 to i do        // compute x^i
        power ← power ∗ x
    p ← p + a[i] ∗ power
return p

Efficiency: Σ_{0 ≤ i ≤ n} i = n(n+1)/2 = Θ(n^2) multiplications
Polynomial Evaluation: Improvement
We can do better by evaluating from right to left:
Better brute-force algorithm
p ← a[0]
power ← 1
for i ← 1 to n do
    power ← power ∗ x
    p ← p + a[i] ∗ power
return p

Efficiency: Θ(n) multiplications

Horner's Rule is another linear-time method.
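The same right-to-left scheme in runnable form (storing the coefficient of x^i in coeffs[i] is an assumption about the input layout):

```python
def eval_poly(coeffs, x):
    """Evaluate a[n]x^n + ... + a[1]x + a[0] right to left, Theta(n) multiplications.
    coeffs[i] holds the coefficient of x^i."""
    p = coeffs[0]
    power = 1
    for i in range(1, len(coeffs)):
        power *= x              # invariant: power == x^i
        p += coeffs[i] * power
    return p

assert eval_poly([5, 0, 3], 2) == 17   # 3*x^2 + 5 at x = 2
```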
Closest-Pair Problem
Find the two closest points in a set of n points (in the two-dimensional Cartesian plane).
Brute-force algorithm
Compute the distance between every pair of distinct points
and return the indexes of the points for which the distance is the smallest.
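A sketch of the brute-force pair scan (math.dist needs Python 3.8+; here it returns the closest points themselves rather than their indexes):

```python
from itertools import combinations
from math import dist, inf

def closest_pair(points):
    """Brute force: check all C(n,2) pairs; Theta(n^2) distance computations."""
    best = (inf, None, None)
    for p, q in combinations(points, 2):   # every pair of distinct points
        d = dist(p, q)
        if d < best[0]:
            best = (d, p, q)
    return best

d, p, q = closest_pair([(0, 0), (3, 4), (1, 1), (7, 2)])
```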
Closest-Pair Brute-Force Algorithm (cont.)
Efficiency: Θ(n^2) multiplications (or sqrt computations)
How to make it faster? Using divide-and-conquer!
Brute-Force Strengths and Weaknesses
Strengths
• wide applicability
• simplicity
• yields reasonable algorithms for some important problems (e.g., matrix multiplication, sorting, searching, string matching)
Weaknesses
• rarely yields efficient algorithms
• some brute-force algorithms are unacceptably slow
• not as constructive as some other design techniques
Exhaustive Search
A brute force solution to a problem involving search for an element with a special property, usually among combinatorial objects such as permutations, combinations, or subsets of a set.
Method:
• generate a list of all potential solutions to the problem in a systematic manner (see algorithms in Sec. 5.4)
• evaluate potential solutions one by one, disqualifying infeasible ones and, for an optimization problem, keeping track of the best one found so far
• when search ends, announce the solution(s) found
Example 1: Traveling Salesman Problem
Given n cities with known distances between each pair, find the shortest tour that passes through all the cities exactly once before returning to the starting city
Alternatively: Find shortest Hamiltonian circuit in a weighted connected graph
Example: 4 cities a, b, c, d with distances
d(a,b) = 2, d(a,c) = 8, d(a,d) = 5, d(b,c) = 3, d(b,d) = 4, d(c,d) = 7
(weighted-graph figure omitted)
How do we represent a solution (Hamiltonian circuit)?
TSP by Exhaustive Search
Tour Cost
a→b→c→d→a 2+3+7+5 = 17
a→b→d→c→a 2+4+7+8 = 21
a→c→b→d→a 8+3+4+5 = 20
a→c→d→b→a 8+7+4+2 = 21
a→d→b→c→a 5+4+3+8 = 20
a→d→c→b→a 5+7+3+2 = 17
Efficiency: Θ((n-1)!)
Chapter 5 discusses how to generate permutations fast.
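The tour table above can be reproduced by enumerating all (n-1)! permutations; a sketch using the example's distances:

```python
from itertools import permutations

# Distances of the 4-city example
D = {('a', 'b'): 2, ('a', 'c'): 8, ('a', 'd'): 5,
     ('b', 'c'): 3, ('b', 'd'): 4, ('c', 'd'): 7}

def d(u, v):
    return D.get((u, v)) or D[(v, u)]       # distances are symmetric

def tsp(cities, start='a'):
    """Try all (n-1)! tours starting and ending at `start`; return (cost, tour)."""
    rest = [c for c in cities if c != start]
    best = None
    for perm in permutations(rest):
        tour = (start,) + perm + (start,)
        cost = sum(d(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
        if best is None or cost < best[0]:
            best = (cost, tour)
    return best

cost, tour = tsp(['a', 'b', 'c', 'd'])      # cost == 17, as in the table
```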
Example 2: Knapsack Problem
Given n items:
• weights: w1 w2 … wn
• values: v1 v2 … vn
• a knapsack of capacity W
Find most valuable subset of the items that fit into the knapsack
Example: Knapsack capacity W=16
item weight value
1 2 $20
2 5 $30
3 10 $50
4 5 $10
Knapsack Problem by Exhaustive Search
Subset   Total weight   Total value
1        2              $20
2 5 $30
3 10 $50
4 5 $10
1,2 7 $50
1,3 12 $70
1,4 7 $30
2,3 15 $80
2,4 10 $40
3,4 15 $60
1,2,3 17 not feasible
1,2,4 12 $60
1,3,4 17 not feasible
2,3,4 20 not feasible
1,2,3,4   22   not feasible

Efficiency: Θ(2^n)
Each subset can be represented by a binary string (bit vector, Ch. 5).
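A sketch of the subset enumeration (item indexes are 0-based here, so the winning subset {2, 3} from the table appears as (1, 2)):

```python
from itertools import combinations

def knapsack_exhaustive(weights, values, W):
    """Try all 2^n subsets of item indexes; return (best value, best subset)."""
    n = len(weights)
    best = (0, ())
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            w = sum(weights[i] for i in subset)
            if w <= W:                        # skip infeasible subsets
                v = sum(values[i] for i in subset)
                if v > best[0]:
                    best = (v, subset)
    return best

# The slide's instance: W = 16, items (2, $20), (5, $30), (10, $50), (5, $10)
best_value, best_subset = knapsack_exhaustive([2, 5, 10, 5], [20, 30, 50, 10], 16)
```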
Example 3: The Assignment Problem
There are n people who need to be assigned to n jobs, one person per job. The cost of assigning person i to job j is C[i,j]. Find an assignment that minimizes the total cost.
Job 0 Job 1 Job 2 Job 3
Person 0 9 2 7 8
Person 1 6 4 3 7
Person 2 5 8 1 8
Person 3 7 6 9 4
Algorithmic Plan: Generate all legitimate assignments, compute their costs, and select the cheapest one.
Pose the problem as one about a cost matrix.
How many assignments are there? n! (each assignment corresponds to a cycle cover in a graph)
Assignment Problem by Exhaustive Search

C = | 9 2 7 8 |
    | 6 4 3 7 |
    | 5 8 1 8 |
    | 7 6 9 4 |

Assignment (col.#s)   Total Cost
1, 2, 3, 4            9+4+1+4 = 18
1, 2, 4, 3            9+4+8+9 = 30
1, 3, 2, 4            9+3+8+4 = 24
1, 3, 4, 2            9+3+8+6 = 26
1, 4, 2, 3            9+7+8+9 = 33
1, 4, 3, 2            9+7+1+6 = 23
etc.

(For this particular instance, the optimal assignment can be found by exploiting the specific features of the numbers given. It is: 2, 1, 3, 4.)
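The exhaustive plan is a minimum over all n! permutations; a sketch on the cost matrix above (0-based, so the optimal columns 2, 1, 3, 4 appear as (1, 0, 2, 3)):

```python
from itertools import permutations

C = [[9, 2, 7, 8],
     [6, 4, 3, 7],
     [5, 8, 1, 8],
     [7, 6, 9, 4]]

def assign(C):
    """Try all n! person-to-job assignments; return (min cost, job of each person)."""
    n = len(C)
    return min((sum(C[i][p[i]] for i in range(n)), p)
               for p in permutations(range(n)))

cost, jobs = assign(C)   # cost == 9? no: the minimum over all 24 permutations
```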
Final Comments on Exhaustive Search
Exhaustive-search algorithms run in a realistic amount of time only on very small instances
In some cases, there are much better alternatives!
• Euler circuits
• shortest paths
• minimum spanning tree
• assignment problem
In many cases, exhaustive search or its variation is the only known way to get exact solution
The Hungarian method runs in O(n^3) time.
Divide-and-Conquer
Divide-and-Conquer
The most-well known algorithm design strategy:
1. Divide instance of problem into two or more smaller instances
2. Solve smaller instances recursively
3. Obtain solution to original (larger) instance by combining these solutions
Divide-and-Conquer Technique (cont.)
a problem of size n (instance)
→ subproblem 1 of size n/2 and subproblem 2 of size n/2
→ a solution to subproblem 1 and a solution to subproblem 2
→ a solution to the original problem
It generally leads to a recursive algorithm!
Divide-and-Conquer Examples
Sorting: mergesort and quicksort
Binary tree traversals
Binary search (?)
Multiplication of large integers
Matrix multiplication: Strassen’s algorithm
Closest-pair and convex-hull algorithms
General Divide-and-Conquer Recurrence
T(n) = aT(n/b) + f(n)  where f(n) ∈ Θ(n^d), d ≥ 0

Master Theorem: If a < b^d, T(n) ∈ Θ(n^d)
                If a = b^d, T(n) ∈ Θ(n^d log n)
                If a > b^d, T(n) ∈ Θ(n^(log_b a))

Note: The same results hold with O instead of Θ.

Examples: T(n) = 4T(n/2) + n   ⇒ T(n) ∈ Θ(n^2)
          T(n) = 4T(n/2) + n^2 ⇒ T(n) ∈ Θ(n^2 log n)
          T(n) = 4T(n/2) + n^3 ⇒ T(n) ∈ Θ(n^3)
Mergesort
Split array A[0..n-1] into about equal halves and make copies of each half in arrays B and C
Sort arrays B and C recursively
Merge sorted arrays B and C into array A as follows:
• Repeat the following until no elements remain in one of the arrays:
– compare the first elements in the remaining unprocessed portions of the arrays
– copy the smaller of the two into A, while incrementing the index indicating the unprocessed portion of that array
• Once all elements in one of the arrays are processed, copy the remaining unprocessed elements from the other array into A.
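The split-sort-merge steps above can be sketched as:

```python
def mergesort(a):
    """Sort a list by splitting it in half, sorting the halves recursively, merging."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b, c = mergesort(a[:mid]), mergesort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(b) and j < len(c):       # compare the front unprocessed elements
        if b[i] <= c[j]:
            merged.append(b[i]); i += 1
        else:
            merged.append(c[j]); j += 1
    merged.extend(b[i:]); merged.extend(c[j:])   # copy the leftover run
    return merged

print(mergesort([8, 3, 2, 9, 7, 1, 5, 4]))   # the slide's example input
```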
Pseudocode of Mergesort
Pseudocode of Merge
Time complexity: Θ(p+q) = Θ(n) comparisons
Mergesort Example
8 3 2 9 7 1 5 4
8 3 2 9 7 1 5 4
8 3 2 9 7 1 5 4
8 3 2 9 7 1 5 4
3 8 2 9 1 7 4 5
2 3 8 9 1 4 5 7
1 2 3 4 5 7 8 9
The non-recursive
version of Mergesort
starts from merging
single elements into
sorted pairs.
Analysis of Mergesort
All cases have same efficiency: Θ(n log n)
Number of comparisons in the worst case is close to theoretical minimum for comparison-based sorting:
log2 n! ≈ n log2 n - 1.44n
Space requirement: Θ(n) (not in-place)
Can be implemented without recursion (bottom-up)
T(n) = 2T(n/2) + Θ(n), T(1) = 0
Quicksort
Select a pivot (partitioning element) – here, the first element
Rearrange the list so that all the elements in the first s
positions are smaller than or equal to the pivot and all the elements in the remaining n-s positions are larger than or equal to the pivot (see next slide for an algorithm)
Exchange the pivot with the last element in the first (i.e., ≤) subarray; the pivot is now in its final position
Sort the two subarrays recursively
Partition layout:  A[i] ≤ p  |  p  |  A[i] ≥ p
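A sketch with the first element as pivot and a Hoare-style partition, as described above:

```python
def quicksort(a, l=0, r=None):
    """In-place quicksort: first element as pivot, Hoare-style partition."""
    if r is None:
        r = len(a) - 1
    if l >= r:
        return a
    p = a[l]                           # pivot = first element of the subarray
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:     # advance left scan
            i += 1
        j -= 1
        while a[j] > p:                # advance right scan
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]            # pivot lands in its final position j
    quicksort(a, l, j - 1)             # sort the two subarrays recursively
    quicksort(a, j + 1, r)
    return a
```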
Partitioning Algorithm
Time complexity: Θ(r − l) comparisons
Quicksort Example
5 3 1 9 8 2 4 7
2 3 1 4 5 8 9 7
1 2 3 4 5 7 8 9
1 2 3 4 5 7 8 9
1 2 3 4 5 7 8 9
1 2 3 4 5 7 8 9
Analysis of Quicksort
Best case: split in the middle — Θ(n log n)
Worst case: sorted array! Θ(n^2)
Average case: random arrays — Θ(n log n)
Improvements:
• better pivot selection: median of three partitioning
• switch to insertion sort on small subfiles
• elimination of recursion
These combine to 20-25% improvement
Considered the method of choice for internal sorting of large files (n ≥ 10000)
T(n) = T(n-1) + Θ(n)
Binary Search
Very efficient algorithm for searching in a sorted array:
K  vs.  A[0] . . . A[m] . . . A[n-1]
If K = A[m], stop (successful search); otherwise, continue searching by the same method in A[0..m-1] if K < A[m] and in A[m+1..n-1] if K > A[m]

l ← 0; r ← n-1
while l ≤ r do
    m ← ⌊(l+r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m-1
    else l ← m+1
return -1
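The pseudocode translates directly; a sketch returning -1 on an unsuccessful search:

```python
def binary_search(a, key):
    """Iterative binary search in a sorted list; returns an index of key or -1."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2            # middle of the remaining range
        if key == a[m]:
            return m
        elif key < a[m]:
            r = m - 1               # continue in a[l..m-1]
        else:
            l = m + 1               # continue in a[m+1..r]
    return -1

assert binary_search([1, 3, 5, 7, 9, 11], 7) == 3
```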
Analysis of Binary Search
Time efficiency
• worst-case recurrence: Cw(n) = 1 + Cw(⌊n/2⌋), Cw(1) = 1
  solution: Cw(n) = ⌈log_2(n+1)⌉
This is VERY fast: e.g., Cw(10^6) = 20
Optimal for searching a sorted array
Limitations: must be a sorted array (not linked list)
Bad (degenerate) example of divide-and-conquer
because only one of the sub-instances is solved
Has a continuous counterpart called bisection method for solving
equations in one unknown f(x) = 0 (see Sec. 12.4)
Binary Tree Algorithms
Binary tree is a divide-and-conquer ready structure!
Ex. 1: Classic traversals (preorder, inorder, postorder)
Algorithm Inorder(T)
    if T ≠ ∅
        Inorder(T_left)
        print(root of T)
        Inorder(T_right)
(example trees omitted)
Efficiency: Θ(n). Why? Each node is visited/printed once.
Binary Tree Algorithms (cont.)
Ex. 2: Computing the height of a binary tree
h(T) = max{h(T_L), h(T_R)} + 1 if T ≠ ∅, and h(∅) = -1
Efficiency: Θ(n). Why?
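The height formula in runnable form (trees as nested (left, right) tuples is an illustrative encoding, not from the slides):

```python
def height(t):
    """Height of a binary tree given as nested (left, right) tuples, None for empty.
    h(empty) = -1; h(T) = max(h(T_L), h(T_R)) + 1. Visits each node once: Theta(n)."""
    if t is None:
        return -1
    left, right = t
    return max(height(left), height(right)) + 1

leaf = (None, None)
assert height(None) == -1
assert height((leaf, (leaf, None))) == 2   # root -> right child -> leaf
```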
Multiplication of Large Integers
Consider the problem of multiplying two (large) n-digit integers represented by arrays of their digits such as:
A = 12345678901357986429 B = 87654321284820912836
The grade-school algorithm:
      a1 a2 … an
    × b1 b2 … bn
    (d10) d11 d12 … d1n
    (d20) d21 d22 … d2n
    …
    (dn0) dn1 dn2 … dnn
Efficiency: Θ(n2) single-digit multiplications
First Divide-and-Conquer Algorithm
A small example: A ∗ B where A = 2135 and B = 4014
A = (21·10^2 + 35), B = (40·10^2 + 14)
So, A ∗ B = (21·10^2 + 35) ∗ (40·10^2 + 14)
          = 21 ∗ 40·10^4 + (21 ∗ 14 + 35 ∗ 40)·10^2 + 35 ∗ 14

In general, if A = A1A2 and B = B1B2 (where A and B are n-digit numbers and A1, A2, B1, B2 are n/2-digit numbers),
A ∗ B = A1 ∗ B1·10^n + (A1 ∗ B2 + A2 ∗ B1)·10^(n/2) + A2 ∗ B2

Recurrence for the number of one-digit multiplications M(n):
M(n) = 4M(n/2), M(1) = 1
Solution: M(n) = n^2
Second Divide-and-Conquer Algorithm
A ∗ B = A1 ∗ B1·10^n + (A1 ∗ B2 + A2 ∗ B1)·10^(n/2) + A2 ∗ B2
The idea is to decrease the number of multiplications from 4 to 3:
(A1 + A2) ∗ (B1 + B2) = A1 ∗ B1 + (A1 ∗ B2 + A2 ∗ B1) + A2 ∗ B2,
i.e., (A1 ∗ B2 + A2 ∗ B1) = (A1 + A2) ∗ (B1 + B2) − A1 ∗ B1 − A2 ∗ B2,
which requires only 3 multiplications at the expense of (4 − 1) extra additions/subtractions.
Recurrence for the number of multiplications M(n):
M(n) = 3M(n/2), M(1) = 1
Solution: M(n) = 3^(log_2 n) = n^(log_2 3) ≈ n^1.585
What if we count both multiplications and additions?
Example of Large-Integer Multiplication
2135 ∗ 4014
= (21*10^2 + 35) * (40*10^2 + 14)
= (21*40)*10^4 + c1*10^2 + 35*14
where c1 = (21+35)*(40+14) - 21*40 - 35*14, and
21*40 = (2*10 + 1) * (4*10 + 0)
= (2*4)*10^2 + c2*10 + 1*0
where c2 = (2+1)*(4+0) - 2*4 - 1*0, etc.
This process requires 9 digit multiplications as opposed to 16.
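The three-multiplication recursion in runnable form (a sketch on Python ints; the single-digit base case and decimal splitting are assumptions consistent with the example):

```python
def karatsuba(a, b):
    """Multiply nonnegative ints with 3 recursive multiplications instead of 4."""
    if a < 10 or b < 10:                 # single-digit base case
        return a * b
    n = max(len(str(a)), len(str(b)))
    half = n // 2
    m = 10 ** half
    a1, a2 = divmod(a, m)                # a = a1*10^half + a2
    b1, b2 = divmod(b, m)                # b = b1*10^half + b2
    p1 = karatsuba(a1, b1)
    p2 = karatsuba(a2, b2)
    p3 = karatsuba(a1 + a2, b1 + b2) - p1 - p2   # cross terms via one multiply
    return p1 * m * m + p3 * m + p2

assert karatsuba(2135, 4014) == 2135 * 4014
```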
Conventional Matrix Multiplication
Brute-force algorithm
c00 c01 a00 a01 b00 b01
= *
c10 c11 a10 a11 b10 b11
a00 * b00 + a01 * b10 a00 * b01 + a01 * b11
=
a10 * b00 + a11 * b10 a10 * b01 + a11 * b11
8 multiplications
4 additions
Efficiency class in general: Θ(n^3)
Strassen’s Matrix Multiplication
Strassen’s algorithm for two 2x2 matrices (1969):
c00 c01 a00 a01 b00 b01
= *
c10 c11 a10 a11 b10 b11
m1 + m4 - m5 + m7 m3 + m5
=
m2 + m4 m1 + m3 - m2 + m6
m1 = (a00 + a11) * (b00 + b11)
m2 = (a10 + a11) * b00
m3 = a00 * (b01 - b11)
m4 = a11 * (b10 - b00)
m5 = (a00 + a01) * b11
m6 = (a10 - a00) * (b00 + b01)
m7 = (a01 - a11) * (b10 + b11)
7 multiplications
18 additions
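The seven formulas can be verified directly on a 2×2 example:

```python
def strassen_2x2(A, B):
    """Strassen's 7-multiplication product of two 2x2 matrices (lists of lists)."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 + m3 - m2 + m6]]

# Agrees with the ordinary 8-multiplication product:
assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```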
Strassen’s Matrix Multiplication
Strassen observed [1969] that the product of two matrices can be computed in general as follows:
C00 C01 A00 A01 B00 B01
= *
C10 C11 A10 A11 B10 B11
M1 + M4 - M5 + M7 M3 + M5
=
M2 + M4 M1 + M3 - M2 + M6
Formulas for Strassen’s Algorithm
M1 = (A00 + A11) ∗ (B00 + B11)
M2 = (A10 + A11) ∗ B00
M3 = A00 ∗ (B01 - B11)
M4 = A11 ∗ (B10 - B00)
M5 = (A00 + A01) ∗ B11
M6 = (A10 - A00) ∗ (B00 + B01)
M7 = (A01 - A11) ∗ (B10 + B11)
Analysis of Strassen’s Algorithm
If n is not a power of 2, matrices can be padded with zeros.
Number of multiplications:
M(n) = 7M(n/2), M(1) = 1
Solution: M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807 vs. n^3 of brute-force alg.
Algorithms with better asymptotic efficiency are known, but they are even more complex and not used in practice.
What if we count both
multiplications and additions?
Closest-Pair Problem by Divide-and-Conquer
Step 0 Sort the points by x (list one) and then by y (list two).
Step 1 Divide the points given into two subsets S1 and S2 by a vertical line x = c so that half the points lie to the left or on the line and half the points lie to the right or on the line.
Closest Pair by Divide-and-Conquer (cont.)
Step 2 Find recursively the closest pairs for the left and right subsets.
Step 3 Set d = min{d1, d2}.
We can limit our attention to the points in the symmetric vertical strip of width 2d as possible closest pair. Let C1 and C2 be the subsets of points in the left subset S1 and of the right subset S2, respectively, that lie in this vertical strip. The points in C1 and C2 are stored in increasing order of their y coordinates, taken from the second list.
Step 4 For every point P(x,y) in C1, we inspect points in C2 that may be closer to P than d. There can be no more than 6 such points (because d ≤ d2)!
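The four steps above can be sketched in Python. This is a sketch under stated assumptions (at least two points, all distinct; the names `closest_pair` and `_closest` are my own), and it returns only the minimum distance:

```python
from math import dist

def closest_pair(points):
    px = sorted(points)                      # Step 0: sort by x ...
    py = sorted(points, key=lambda p: p[1])  # ... and by y
    return _closest(px, py)

def _closest(px, py):
    n = len(px)
    if n <= 3:  # base case: brute force over all pairs
        return min(dist(px[i], px[j]) for i in range(n) for j in range(i + 1, n))
    mid = n // 2
    c = px[mid][0]                           # Step 1: dividing line x = c
    left_set = set(px[:mid])
    pyl = [p for p in py if p in left_set]       # y-sorted left half
    pyr = [p for p in py if p not in left_set]   # y-sorted right half
    d = min(_closest(px[:mid], pyl), _closest(px[mid:], pyr))  # Steps 2-3
    strip = [p for p in py if abs(p[0] - c) < d]               # Step 4
    best = d
    for i, p in enumerate(strip):
        for q in strip[i + 1:]:
            if q[1] - p[1] >= best:          # at most a few candidates per point
                break
            best = min(best, dist(p, q))
    return best
```

The inner loop examines only strip points whose y distance is below the current best, which is what bounds the work per point by a constant.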
Closest Pair by Divide-and-Conquer: Worst Case
The worst case scenario is depicted below:
Efficiency of the Closest-Pair Algorithm
Running time of the algorithm (without sorting) is:
T(n) = 2T(n/2) + M(n), where M(n) ∈ Θ(n)
By the Master Theorem (with a = 2, b = 2, d = 1)
T(n) ∈ Θ(n log n)
So the total time is Θ(n log n).
Quickhull Algorithm
Convex hull: the smallest convex set that includes the given points. An O(n^3) brute-force algorithm is given in Levitin, Ch. 3.
Assume points are sorted by x-coordinate values
Identify extreme points P1 and P2 (leftmost and rightmost)
Compute upper hull recursively:
• find point Pmax that is farthest away from line P1P2
• compute the upper hull of the points to the left of line P1Pmax
• compute the upper hull of the points to the left of line PmaxP2
Compute lower hull in a similar manner
[Figure: upper-hull construction with extreme points P1 and P2 and the farthest point Pmax.]
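The recursive steps above can be sketched with a signed-area test. A minimal Python sketch (the helpers `cross` and `hull_side` are my own; collinear points on a hull edge are dropped):

```python
def cross(o, a, b):
    # twice the signed area of triangle oab; > 0 when b lies left of line o->a
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def quickhull(points):
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    p1, p2 = pts[0], pts[-1]                 # leftmost and rightmost points
    upper = [p for p in pts if cross(p1, p2, p) > 0]
    lower = [p for p in pts if cross(p2, p1, p) > 0]
    return [p1] + hull_side(p1, p2, upper) + [p2] + hull_side(p2, p1, lower)

def hull_side(a, b, pts):
    if not pts:
        return []
    # |cross| is proportional to distance from line ab, so max cross = Pmax
    pmax = max(pts, key=lambda p: cross(a, b, p))
    left1 = [p for p in pts if cross(a, pmax, p) > 0]   # left of line a->Pmax
    left2 = [p for p in pts if cross(pmax, b, p) > 0]   # left of line Pmax->b
    return hull_side(a, pmax, left1) + [pmax] + hull_side(pmax, b, left2)
```

Points strictly inside the triangle (a, Pmax, b) are discarded without further work, which is where the good average case comes from.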
Efficiency of Quickhull Algorithm
Finding point farthest away from line P1P2 can be done in linear time
Time efficiency: T(n) = T(x) + T(y) + T(z) + T(v) + O(n), where x + y + z + v ≤ n.
• worst case: Θ(n^2), when T(n) = T(n-1) + O(n)
• average case: Θ(n) (under reasonable assumptions about the distribution of the given points)
If points are not initially sorted by x-coordinate value, this can be accomplished in O(n log n) time
Several O(n log n) algorithms for convex hull are known
Dynamic Programming
Dynamic Programming
Dynamic Programming is a general algorithm design technique for solving problems defined by or formulated as recurrences with overlapping subinstances
• Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems and later assimilated by CS
• “Programming” here means “planning”
• Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract solution to the initial instance from that table
Example: Fibonacci numbers
• Recall definition of Fibonacci numbers:
F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1
• Computing the nth Fibonacci number recursively (top-down):
F(n)
F(n-1) + F(n-2)
F(n-2) + F(n-3) F(n-3) + F(n-4)
...
Example: Fibonacci numbers (cont.)
Computing the nth Fibonacci number using bottom-up iteration and recording results:
F(0) = 0
F(1) = 1
F(2) = 1 + 0 = 1
…
F(n-2) =
F(n-1) =
F(n) = F(n-1) + F(n-2)
Efficiency: Θ(n) time, Θ(n) space.
Table: 0, 1, 1, …, F(n-2), F(n-1), F(n)
What if we solve it recursively?
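The bottom-up table fill above is a few lines of Python (the name `fib` is my own):

```python
def fib(n):
    # bottom-up: record F(0..n) in a table; Theta(n) time and Theta(n) space
    f = [0, 1] + [0] * (n - 1)
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]
```

Keeping only the last two entries would cut the space to Θ(1), whereas the naive recursive version repeats subproblems and takes exponential time.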
Examples of DP algorithms
• Computing a binomial coefficient
• Longest common subsequence
• Warshall’s algorithm for transitive closure
• Floyd’s algorithm for all-pairs shortest paths
• Constructing an optimal binary search tree
• Some instances of difficult discrete optimization problems:- traveling salesman- knapsack
Computing a binomial coefficient by DP
Binomial coefficients are coefficients of the binomial formula:
(a + b)^n = C(n,0) a^n b^0 + . . . + C(n,k) a^(n-k) b^k + . . . + C(n,n) a^0 b^n
Recurrence: C(n,k) = C(n-1,k) + C(n-1,k-1) for n > k > 0
C(n,0) = 1, C(n,n) = 1 for n ≥ 0
Value of C(n,k) can be computed by filling a table:
0 1 2 . . . k-1 k
0 1
1 1 1
. 1 2 1
. 1 3 3 1
. 1 4 6 4 1
n-1 C(n-1,k-1) C(n-1,k)
n C(n,k)
Computing C(n,k): pseudocode and analysis
Time efficiency: Θ(nk)
Space efficiency: Θ(nk)
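The pseudocode itself did not survive extraction; a minimal Python version of the table fill it describes (the name `binomial` is my own):

```python
def binomial(n, k):
    # fill the table row by row using C(i,j) = C(i-1,j-1) + C(i-1,j)
    c = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            c[i][j] = 1 if j == 0 or j == i else c[i - 1][j - 1] + c[i - 1][j]
    return c[n][k]
```

Since each entry needs only the previous row, the Θ(nk) space can be reduced to Θ(k) by keeping one row.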
Knapsack Problem by DP
Given n items of
integer weights: w1 w2 … wn
values: v1 v2 … vn
a knapsack of integer capacity W
find most valuable subset of the items that fit into the knapsack
Consider instance defined by first i items and capacity j (j ≤ W).
Let V[i,j] be optimal value of such an instance. Then
V[i,j] = max{ V[i-1,j], vi + V[i-1,j-wi] } if j - wi ≥ 0
V[i,j] = V[i-1,j] if j - wi < 0
Initial conditions: V[0,j] = 0 and V[i,0] = 0
Knapsack Problem by DP (example)
Example: Knapsack of capacity W = 5
item weight value
1    2      $12
2    1      $10
3    3      $20
4    2      $15

V[i,j] for capacities j = 0..5:
i=0:                0  0  0  0  0  0
i=1 (w1=2, v1=12):  0  0 12 12 12 12
i=2 (w2=1, v2=10):  0 10 12 22 22 22
i=3 (w3=3, v3=20):  0 10 12 22 30 32
i=4 (w4=2, v4=15):  0 10 15 25 30 37

Backtracing finds the actual optimal subset, i.e. the solution.
Knapsack Problem by DP (pseudocode)
Algorithm DPKnapsack(w[1..n], v[1..n], W)
var V[0..n,0..W], P[1..n,1..W]: int
for j := 0 to W do
V[0,j] := 0
for i := 0 to n do
V[i,0] := 0
for i := 1 to n do
for j := 1 to W do
if w[i] ≤ j and v[i] + V[i-1,j-w[i]] > V[i-1,j] then
V[i,j] := v[i] + V[i-1,j-w[i]]; P[i,j] := j-w[i]
else
V[i,j] := V[i-1,j]; P[i,j] := j
return V[n,W] and the optimal subset by backtracing
Running time and space:
O(nW).
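A runnable Python sketch of the pseudocode above (the name `dp_knapsack` is my own; instead of the P table, the optimal subset is recovered by comparing V entries during backtracing):

```python
def dp_knapsack(w, v, W):
    # w, v: weights and values of items 1..n (0-indexed lists); W: capacity
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            V[i][j] = V[i - 1][j]
            if w[i - 1] <= j:
                V[i][j] = max(V[i][j], v[i - 1] + V[i - 1][j - w[i - 1]])
    # backtrace: item i was taken iff it changed the optimal value
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            items.append(i)          # 1-based item numbers
            j -= w[i - 1]
    return V[n][W], sorted(items)
```

On the example instance above (W = 5) this returns value 37 with items {1, 2, 4}, matching the last table row.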
Longest Common Subsequence (LCS)
A subsequence of a sequence/string S is obtained by deleting zero or more symbols from S. For example, the following are some subsequences of “president”: pred, sdn, predent. In other words, the letters of a subsequence of S appear in order in S, but they are not required to be consecutive.
The longest common subsequence problem is to find a maximum length common subsequence between two sequences.
LCS
For instance,
Sequence 1: president
Sequence 2: providence
Its LCS is priden.
president
providence
LCS
Another example:
Sequence 1: algorithm
Sequence 2: alignment
One of its LCSs is algm.
a l g o r i t h m
a l i g n m e n t
How to compute LCS?
Let A=a1a2…am and B=b1b2…bn .
len(i, j): the length of an LCS between
a1a2…ai and b1b2…bj
With proper initializations, len(i, j) can be computed as follows.
len(i, j) = 0 if i = 0 or j = 0
len(i, j) = len(i-1, j-1) + 1 if i, j > 0 and ai = bj
len(i, j) = max( len(i, j-1), len(i-1, j) ) if i, j > 0 and ai ≠ bj
procedure LCS-Length(A, B)
1. for i ← 0 to m do len(i,0) = 0
2. for j ← 1 to n do len(0,j) = 0
3. for i ← 1 to m do
4.   for j ← 1 to n do
5.     if ai = bj then len(i,j) = len(i-1,j-1) + 1; prev(i,j) = "↖"
6.     else if len(i-1,j) ≥ len(i,j-1)
7.       then len(i,j) = len(i-1,j); prev(i,j) = "↑"
8.     else len(i,j) = len(i,j-1); prev(i,j) = "←"
9. return len and prev
 i\j    0  1  2  3  4  5  6  7  8  9 10
           p  r  o  v  i  d  e  n  c  e
 0      0  0  0  0  0  0  0  0  0  0  0
 1  p   0  1  1  1  1  1  1  1  1  1  1
 2  r   0  1  2  2  2  2  2  2  2  2  2
 3  e   0  1  2  2  2  2  2  3  3  3  3
 4  s   0  1  2  2  2  2  2  3  3  3  3
 5  i   0  1  2  2  2  3  3  3  3  3  3
 6  d   0  1  2  2  2  3  4  4  4  4  4
 7  e   0  1  2  2  2  3  4  5  5  5  5
 8  n   0  1  2  2  2  3  4  5  6  6  6
 9  t   0  1  2  2  2  3  4  5  6  6  6
Running time and memory: O(mn) and O(mn).
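The table fill and backtrace above fit in a short Python sketch (the name `lcs` is my own; the backtrace compares table entries instead of storing prev arrows, with the same tie-breaking as the pseudocode):

```python
def lcs(a, b):
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # backtrace from L[m][n] to recover one LCS
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```

On the running example it recovers priden, the LCS read off the table above.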
procedure Output-LCS(A, prev, i, j)
1 if i = 0 or j = 0 then return
2 if prev(i, j) = "↖" then Output-LCS(A, prev, i-1, j-1); print ai
3 else if prev(i, j) = "↑" then Output-LCS(A, prev, i-1, j)
4 else Output-LCS(A, prev, i, j-1)
The backtracing algorithm
(The same len table as above, traced back from len(9,10) = 6.)
Output: priden
Warshall’s Algorithm: Transitive Closure
• Computes the transitive closure of a relation
• Alternatively: existence of all nontrivial paths in a digraph
• Example of transitive closure:
Digraph on vertices 1, 2, 3, 4 with adjacency matrix
0 0 1 0
1 0 0 1
0 0 0 0
0 1 0 0
and transitive closure
0 0 1 0
1 1 1 1
0 0 0 0
1 1 1 1
Warshall’s Algorithm
Constructs transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), …, R(k), …, R(n), where R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate. Note that R(0) = A (adjacency matrix) and R(n) = T (transitive closure).
R(0)
0 0 1 0
1 0 0 1
0 0 0 0
0 1 0 0
R(1)
0 0 1 0
1 0 1 1
0 0 0 0
0 1 0 0
R(2)
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1
R(3)
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1
R(4)
0 0 1 0
1 1 1 1
0 0 0 0
1 1 1 1
Warshall’s Algorithm (recurrence)
On the k-th iteration, the algorithm determines for every pair of vertices i, j whether a path exists from i to j with just vertices 1,…,k allowed as intermediate
R(k)[i,j] = R(k-1)[i,j] (path using just 1,…,k-1)
            or
            (R(k-1)[i,k] and R(k-1)[k,j]) (path from i to k and from k to j using just 1,…,k-1)
Initial condition?
Warshall’s Algorithm (matrix generation)
Recurrence relating elements R(k) to elements of R(k-1) is:
R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
It implies the following rules for generating R(k) from R(k-1):
Rule 1 If an element in row i and column j is 1 in R(k-1), it remains 1 in R(k)
Rule 2 If an element in row i and column j is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1's in R(k-1)
Warshall’s Algorithm (example)
0 0 1 0
1 0 0 1
0 0 0 0
0 1 0 0
R(0) =
0 0 1 0
1 0 1 1
0 0 0 0
0 1 0 0
R(1) =
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1
R(2) =
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1
R(3) =
0 0 1 0
1 1 1 1
0 0 0 0
1 1 1 1
R(4) =
Warshall’s Algorithm (pseudocode and analysis)
Time efficiency: Θ(n3)
Space efficiency: Matrices can be written over their predecessors (with some care), so it's Θ(n^2).
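The recurrence and the in-place space saving fit in a few lines of Python (the name `warshall` is my own):

```python
def warshall(A):
    # A: 0/1 adjacency matrix; returns the transitive closure matrix
    n = len(A)
    R = [row[:] for row in A]   # R(0) = A
    for k in range(n):          # R(k) overwrites R(k-1) in place
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R
```

Overwriting in place is safe here because row k and column k do not change during iteration k, which is exactly the "some care" the slide alludes to.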
Floyd’s Algorithm: All pairs shortest paths
Problem: In a weighted (di)graph, find shortest paths between every pair of vertices
Same idea: construct solution through series of matrices D(0), …, D(n) using increasing subsets of the vertices allowed as intermediate
Example: [Figure: a weighted digraph on vertices 1-4] with the matrix
0 ∞ 4 ∞
1 0 4 3
∞ ∞ 0 ∞
6 5 1 0
Floyd’s Algorithm (matrix generation)
On the k-th iteration, the algorithm determines shortest paths between every pair of vertices i, j that use only vertices among 1,…,k as intermediate
D(k)[i,j] = min{ D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j] }
Initial condition?
Floyd’s Algorithm (example)
0 ∞ 3 ∞
2 0 ∞ ∞
∞ 7 0 1
6 ∞ ∞ 0
D(0) =
0 ∞ 3 ∞
2 0 5 ∞
∞ 7 0 1
6 ∞ 9 0
D(1) =
0 ∞ 3 ∞
2 0 5 ∞
9 7 0 1
6 ∞ 9 0
D(2) =
0 10 3 4
2 0 5 6
9 7 0 1
6 16 9 0
D(3) =
0 10 3 4
2 0 5 6
7 7 0 1
6 16 9 0
D(4) =
Floyd’s Algorithm (pseudocode and analysis)
Time efficiency: Θ(n3)
Space efficiency: Matrices can be written over their predecessors
Note: Works on graphs with negative edges but without negative cycles.
Shortest paths themselves can be found, too. How? If D[i,k] + D[k,j] < D[i,j], then set P[i,j] ← k; the superscripts k or k-1 make no difference to D[i,k] and D[k,j].
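A Python sketch of Floyd's algorithm with the predecessor table just described (the names `floyd`, `D`, `P` are my own):

```python
INF = float('inf')

def floyd(W):
    # W: weight matrix with INF for missing edges, 0 on the diagonal
    n = len(W)
    D = [row[:] for row in W]              # D(0) = W
    P = [[None] * n for _ in range(n)]     # P[i][j]: intermediate vertex on a
    for k in range(n):                     # shortest i->j path, if any
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k
    return D, P
```

Running it on the example's D(0) reproduces the final matrix D(4); expanding P[i][j] recursively yields the paths themselves.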
Optimal Binary Search Trees
Problem: Given n keys a1 < … < an and probabilities p1, …, pn of searching for them, find a BST with a minimum average number of comparisons in successful search.
Since the total number of BSTs with n nodes is given by C(2n,n)/(n+1), which grows exponentially, brute force is hopeless.
Example: What is an optimal BST for keys A, B, C, and D with search probabilities 0.1, 0.2, 0.4, and 0.3, respectively?
[Figure: BST with root C, children B and D, and A as the left child of B.]
Average # of comparisons
= 1*0.4 + 2*(0.2+0.3) + 3*0.1
= 1.7
DP for Optimal BST Problem
Let C[i,j] be minimum average number of comparisons made in T[i,j], optimal BST for keys ai < …< aj , where 1 ≤ i ≤ j ≤ n.
Consider optimal BST among all BSTs with some ak (i ≤ k ≤ j ) as their root; T[i,j] is the best among them.
[Figure: BST with root ak; its left subtree is an optimal BST for ai, …, ak-1 and its right subtree is an optimal BST for ak+1, …, aj.]

C[i,j] = min over i ≤ k ≤ j of { pk · 1
         + Σ s=i..k-1 ps · (level of as in T[i,k-1] + 1)
         + Σ s=k+1..j ps · (level of as in T[k+1,j] + 1) }
[Figure: the C table is filled diagonal by diagonal, from C[i,i] = pi on the main diagonal toward the goal entry C[1,n].]
DP for Optimal BST Problem (cont.)
After simplifications, we obtain the recurrence for C[i,j]:
C[i,j] = min over i ≤ k ≤ j of { C[i,k-1] + C[k+1,j] } + Σ s=i..j ps for 1 ≤ i ≤ j ≤ n
C[i,i] = pi for 1 ≤ i ≤ n
Example: key A B C D
probability 0.1 0.2 0.4 0.3
The tables below are filled diagonal by diagonal: the left one is filled using the recurrence
C[i,j] = min over i ≤ k ≤ j of { C[i,k-1] + C[k+1,j] } + Σ s=i..j ps , C[i,i] = pi ;
the right one, for trees' roots, records the k's values giving the minima
0 1 2 3 4
1 0 .1 .4 1.1 1.7
2 0 .2 .8 1.4
3 0 .4 1.0
4 0 .3
5 0
0 1 2 3 4
1 1 2 3 3
2 2 3 3
3 3 3
4 4
5
Optimal BST: [Figure: C is the root, with B (whose left child is A) as its left subtree and D as its right child.]
Analysis DP for Optimal BST Problem
Time efficiency: Θ(n^3), but can be reduced to Θ(n^2) by taking advantage of monotonicity of entries in the root table, i.e., R[i,j] is always in the range between R[i,j-1] and R[i+1,j]
Space efficiency: Θ(n2)
Method can be expanded to include unsuccessful searches
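The simplified recurrence can be filled diagonal by diagonal in Python. A sketch (the name `optimal_bst` is my own; p[0] is a placeholder so keys are 1-indexed as on the slides):

```python
def optimal_bst(p):
    # p[1..n]: success probabilities of the sorted keys; p[0] unused
    n = len(p) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]   # C[i][i-1] = 0 borders
    R = [[0] * (n + 2) for _ in range(n + 2)]     # root table
    for i in range(1, n + 1):
        C[i][i] = p[i]
        R[i][i] = i
    for d in range(1, n):                          # diagonal by diagonal
        for i in range(1, n - d + 1):
            j = i + d
            total = sum(p[i:j + 1])
            C[i][j], R[i][j] = min(
                (C[i][k - 1] + C[k + 1][j] + total, k)
                for k in range(i, j + 1))
    return C[1][n], R
```

On the example probabilities (0.1, 0.2, 0.4, 0.3) it returns cost 1.7 with R[1][4] = 3, i.e. key C at the root, matching the tables above.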
Greedy Technique
Greedy Technique
Constructs a solution to an optimization problem piece by
piece through a sequence of choices that are:
feasible, i.e. satisfying the constraints
locally optimal (with respect to some neighborhood definition)
greedy (in terms of some measure), and irrevocable
For some problems, it yields a globally optimal solution for every instance. For most, it does not, but it can be useful for fast approximations. We are mostly interested in the former case in this class.
Defined by an objective function and a set of constraints
Applications of the Greedy Strategy
Optimal solutions:
• change making for “normal” coin denominations
• minimum spanning tree (MST)
• single-source shortest paths
• simple scheduling problems
• Huffman codes
Approximations/heuristics:
• traveling salesman problem (TSP)
• knapsack problem
• other combinatorial optimization problems
Change-Making Problem
Given unlimited amounts of coins of denominations d1 > … > dm ,
give change for amount n with the least number of coins
Example: d1 = 25c, d2 =10c, d3 = 5c, d4 = 1c and n = 48c
Greedy solution: <1, 2, 0, 3>, i.e., one quarter, two dimes, no nickels, and three pennies.
Greedy solution is
optimal for any amount and "normal" set of denominations
may not be optimal for arbitrary coin denominations
For example, with d1 = 25c, d2 = 10c, d3 = 1c and n = 30c, greedy gives 25+1+1+1+1+1 (six coins), while 10+10+10 uses only three.
Ex: Prove the greedy algorithm is optimal for the above denominations.
Q: What are the objective function and constraints?
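The greedy rule is a one-liner per denomination. A minimal Python sketch (the name `greedy_change` is my own):

```python
def greedy_change(denoms, n):
    # denoms: denominations in decreasing order; returns count per denomination
    counts = []
    for d in denoms:
        counts.append(n // d)   # take as many of the largest coin as fit
        n %= d
    return counts
```

For 48c with (25, 10, 5, 1) it returns <1, 2, 0, 3>; for 30c with the "abnormal" set (25, 10, 1) it uses six coins where three dimes would do, illustrating non-optimality.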
Minimum Spanning Tree (MST)
Spanning tree of a connected graph G: a connected acyclic subgraph of G that includes all of G’s vertices
Minimum spanning tree of a weighted, connected graph G: a spanning tree of G of the minimum total weight
Example:
[Figure: graph on vertices a, b, c, d with edge weights 1, 2, 3, 4, 6; its minimum spanning tree uses the edges of weights 1, 2, 3, while another spanning tree uses the edges of weights 1, 4, 6.]
Prim’s MST algorithm
Start with tree T1 consisting of one (any) vertex and “grow” tree one vertex at a time to produce MST through a series of expanding subtrees T1, T2, …, Tn
On each iteration, construct Ti+1 from Ti by adding vertex not in Ti that is closest to those already in Ti (this is a “greedy” step!)
Stop when all vertices are included
Example
[Figure: five snapshots of Prim's algorithm growing the MST of a graph on vertices a, b, c, d with edge weights 1, 2, 3, 4, 6.]
Notes about Prim’s algorithm
Proof by induction that this construction actually yields an MST (CLRS, Ch. 23.1). The main property is given on the next slide.
Needs priority queue for locating closest fringe vertex. The detailed algorithm can be found in Levitin, p. 310.
Efficiency
• O(n2) for weight matrix representation of graph and array implementation of priority queue
• O(m log n) for adjacency lists representation of graph with n vertices and m edges and min-heap implementation of the priority queue
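A Python sketch of the min-heap version (the name `prim` is my own; this is a "lazy" variant that pushes fringe edges and skips stale ones rather than decreasing keys):

```python
import heapq

def prim(graph, start):
    # graph: {vertex: [(weight, neighbor), ...]}; returns list of MST edges
    visited = {start}
    fringe = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(fringe)
    mst = []
    while fringe:
        w, u, v = heapq.heappop(fringe)   # cheapest edge leaving the tree
        if v in visited:
            continue                      # stale entry: both ends in tree
        visited.add(v)
        mst.append((u, v, w))             # greedy step
        for w2, x in graph[v]:
            if x not in visited:
                heapq.heappush(fringe, (w2, v, x))
    return mst
```

Each edge is pushed and popped at most once per direction, giving the O(m log n) bound quoted above (up to the lazy-deletion constant).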
The Crucial Property behind Prim’s Algorithm
Claim: Let G = (V,E) be a weighted graph and (X,Y) be a partition of V (called a cut). Suppose e = (x,y) is an edge of E across the cut, where x is in X and y is in Y, and e has the minimum weight among all such crossing edges (called a light edge). Then there is an MST containing e.
y
x
X
Y
Another greedy algorithm for MST: Kruskal’s
Sort the edges in nondecreasing order of lengths
“Grow” tree one edge at a time to produce MST through a series of expanding forests F1, F2, …, Fn-1
On each iteration, add the next edge on the sorted list unless this would create a cycle. (If it would, skip the edge.)
Example
[Figure: six snapshots of Kruskal's algorithm building the MST of a graph on vertices a, b, c, d with edge weights 1, 2, 3, 4, 6; the final tree uses the edges of weights 1, 2, 3.]
Notes about Kruskal’s algorithm
Algorithm looks easier than Prim’s but is harder to implement (checking for cycles!)
Cycle checking: a cycle is created iff added edge connects vertices in the same connected component
Union-find algorithms – see section 9.2
Runs in O(m log m) time, with m = |E|. The time is mostly spent on sorting.
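The cycle check via union-find is the only subtle part. A Python sketch (the name `kruskal` is my own; union-find with path halving, no rank, which is enough for a sketch):

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v) over vertices 0..n-1; returns MST edges
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):           # dominant cost: the sort
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components: no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```

The sorted pass is O(m log m); the union-find operations are effectively constant per edge, so sorting dominates, as noted above.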
Minimum spanning tree vs. Steiner tree
[Figure: four points a, b, c, d joined by unit-weight edges (an MST) vs. a Steiner tree through an extra point s.]
In general, a Steiner minimal tree (SMT) can be
much shorter than a minimum spanning tree
(MST), but SMTs are hard to compute.
Shortest paths – Dijkstra’s algorithm
Single Source Shortest Paths Problem: Given a weighted
connected (directed) graph G, find shortest paths from source vertex s
to each of the other vertices
Dijkstra’s algorithm: Similar to Prim’s MST algorithm, with
a different way of computing numerical labels: Among vertices
not already in the tree, it finds vertex u with the smallest sum
dv + w(v,u)
where
v is a vertex for which shortest path has been already found
on preceding iterations (such vertices form a tree rooted at s)
dv is the length of the shortest path from source s to v
w(v,u) is the length (weight) of edge from v to u
Example
[Figure: weighted graph on vertices a, b, c, d, e.]
Tree vertices | Remaining vertices
a(-,0)        | b(a,3) c(-,∞) d(a,7) e(-,∞)
b(a,3)        | c(b,3+4) d(b,3+2) e(-,∞)
d(b,5)        | c(b,7) e(d,5+4)
c(b,7)        | e(d,9)
e(d,9)        |
Notes on Dijkstra’s algorithm
Correctness can be proven by induction on the number of vertices.
Doesn’t work for graphs with negative weights (whereas Floyd’s algorithm does, as long as there is no negative cycle). Can you find a counterexample for Dijkstra’s algorithm?
Applicable to both undirected and directed graphs
Efficiency
• O(|V|2) for graphs represented by weight matrix and array implementation of priority queue
• O(|E|log|V|) for graphs represented by adj. lists and min-heap implementation of priority queue
Don’t mix up Dijkstra’s algorithm with Prim’s algorithm! More details of the algorithm are in the text and ref books.
We prove the invariants: (i) when a vertex is added to the tree, its correct distance is calculated, and (ii) its distance is at least as large as those of the previously added vertices.
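A Python sketch of Dijkstra's algorithm with a min-heap (the name `dijkstra` is my own; the test graph is reconstructed from the distance labels in the example above, and the c-e weight of 6 is an assumption that does not affect the shortest paths):

```python
import heapq

def dijkstra(graph, s):
    # graph: {u: [(weight, v), ...]}; returns shortest distances from s
    dist = {s: 0}
    pq = [(0, s)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)     # closest not-yet-finalized vertex
        if u in done:
            continue                 # stale heap entry
        done.add(u)
        for w, v in graph[u]:
            if v not in done and d + w < dist.get(v, float('inf')):
                dist[v] = d + w      # relax edge u -> v
                heapq.heappush(pq, (d + w, v))
    return dist
```

With nonnegative weights, the vertex popped with the smallest label is final, which is invariant (i) above; a negative edge breaks that argument.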
Coding Problem
Coding: assignment of bit strings to alphabet characters
Codewords: bit strings assigned for characters of alphabet
Two types of codes:
fixed-length encoding (e.g., ASCII)
variable-length encoding (e.g., Morse code)
Prefix-free codes (or prefix-codes): no codeword is a prefix of another codeword
Problem: If frequencies of the character occurrences areknown, what is the best binary prefix-free code?
It allows for efficient (online) decoding!
E.g. consider the encoded string (msg) 10010110…
E.g. We can code a,b,c,d as 00,01,10,11 or 0,10,110,111 or 0,01,10,101.
The one with the shortest average code length. The average code length represents how many bits are required on average to transmit or store a character. E.g., if P(a) = 0.4, P(b) = 0.3, P(c) = 0.2, P(d) = 0.1, then the average length of code #2 is 1*0.4 + 2*0.3 + 3*0.2 + 3*0.1 = 1.9 bits.
Huffman codes
Any binary tree with edges labeled with 0’s and 1’s yields a prefix-free code of characters assigned to its leaves
Optimal binary tree minimizing the average length of a codeword can be constructed as follows:
Huffman’s algorithm
Initialize n one-node trees with alphabet characters and the tree weights with
their frequencies.
Repeat the following step n-1 times: join two binary trees with smallest
weights into one (as left and right subtrees) and make its weight equal the
sum of the weights of the two trees.
Mark edges leading to left and right subtrees with 0’s and 1’s, respectively.
[Figure: a code tree with edges labeled 0 and 1 whose three leaves represent the codewords 00, 011, 1.]
Example
character A B C D _
frequency 0.35 0.1 0.2 0.2 0.15
codeword 11 100 00 01 101
average bits per character: 2.25
for fixed-length encoding: 3
compression ratio: (3-2.25)/3*100% = 25%
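Huffman's algorithm is a natural fit for a min-heap of trees. A Python sketch (the name `huffman` is my own; an insertion counter breaks weight ties so the heap never compares trees):

```python
import heapq

def huffman(freqs):
    # freqs: {char: frequency}; returns {char: codeword}
    heap = [(f, i, c) for i, (c, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:                     # n-1 joins of the two lightest
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}

    def walk(t, code):
        if isinstance(t, tuple):             # internal node: (left, right)
            walk(t[0], code + '0')
            walk(t[1], code + '1')
        else:
            codes[t] = code or '0'           # single-symbol edge case
        return codes

    return walk(heap[0][2], '')
```

Different tie-breaking can yield a different tree, but every Huffman tree achieves the same minimal average length, 2.25 bits per character on the example above.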
[Figure: Huffman tree construction. First merge B (0.1) and _ (0.15) into a subtree of weight 0.25; then merge C (0.2) and D (0.2) into 0.4; then merge the 0.25 subtree and A (0.35) into 0.6; finally merge 0.4 and 0.6 into the root of weight 1.0, labeling left and right edges with 0's and 1's.]
Backtracking
&
Branch and Bound
Busby, Dodge, Fleming, and Negrusa
Backtracking Algorithm
Is used to solve problems for which a sequence of objects is to be selected from a set such that the sequence satisfies some constraint
Traverses the state space using a depth-first search with pruning
Backtracking
Performs a depth-first traversal of a tree
Continues until it reaches a node that is non-viable or non-promising
Prunes the sub tree rooted at this node and continues the depth-first traversal of the tree
Example: N-Queens Problem
Given an N x N chessboard
Objective: Place N queens on the board so that no queens are in danger
One option would be to generate a tree of every possible board layout
This would be an expensive way to find a solution
Backtracking
Backtracking prunes entire sub trees if their root node is not a viable solution
The algorithm will “backtrack” up the tree to search for other possible solutions
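The N-Queens search described above can be sketched in Python (the name `queens` is my own). A placement is pruned the moment a column or diagonal is reused, so whole subtrees are never visited:

```python
def queens(n):
    # count N-Queens solutions, placing one queen per row and pruning
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1                       # all n queens placed safely
        total = 0
        for c in range(n):
            # prune: column or either diagonal already attacked
            if c not in cols and row - c not in diag1 and row + c not in diag2:
                total += place(row + 1, cols | {c},
                               diag1 | {row - c}, diag2 | {row + c})
        return total
    return place(0, set(), set(), set())
```

For the classic 8x8 board this counts the well-known 92 solutions while exploring only a tiny fraction of the 8^8 full tree.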
Efficiency of Backtracking
This gives backtracking a significant advantage over an exhaustive search of the tree for the average problem
Worst case: Algorithm tries every path, traversing the entire search space as in an exhaustive search
Branch and Bound
Where backtracking uses a depth-first search with pruning, the branch and bound algorithm uses a breadth-first search with pruning
Branch and bound uses a queue as an auxiliary data structure
The Branch and Bound Algorithm
Start by considering the root node and applying a lower-bounding and upper-bounding procedure to it
If the bounds match, then an optimal solution has been found and the algorithm is finished
If they do not match, then the algorithm runs on the child nodes
Example: The Traveling Salesman Problem
Branch and bound can be used to solve the TSP using a priority queue as an auxiliary data structure
An example is the problem with a directed graph given by this adjacency matrix:
Traveling Salesman Problem
The problem starts at vertex 1
The initial bound for the minimum tour is the sum of the minimum outgoing edges from each vertex
Vertex 1 min (14, 4, 10, 20) = 4
Vertex 2 min (14, 7, 8, 7) = 7
Vertex 3 min (4, 5, 7, 16) = 4
Vertex 4 min (11, 7, 9, 2) = 2
Vertex 5 min (18, 7, 17, 4) = 4
Bound = 21
Traveling Salesman Problem
Next, the bound for the node for the partial tour from 1 to 2 is calculated using the formula:
Bound = Length from 1 to 2 + sum of min outgoing edges for vertices 2 to 5
= 14 + (7 + 4 + 2 + 4) = 31
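This bound computation can be sketched directly. The slide does not reproduce the adjacency matrix, so the matrix below is an assumption reconstructed from the per-vertex minima listed above; the two bounds it produces match the slide's 21 and 31.

```python
INF = float("inf")

# Outgoing edge costs, reconstructed (assumed) from the slide's row minima:
# vertex 1: (14, 4, 10, 20), vertex 2: (14, 7, 8, 7), etc.
cost = [
    [INF, 14,  4, 10, 20],   # vertex 1
    [14, INF,  7,  8,  7],   # vertex 2
    [ 4,  5, INF,  7, 16],   # vertex 3
    [11,  7,  9, INF,  2],   # vertex 4
    [18,  7, 17,  4, INF],   # vertex 5
]

def min_out(v):
    return min(cost[v])      # cheapest edge leaving vertex v

# Initial lower bound: sum of the minimum outgoing edge of every vertex
initial_bound = sum(min_out(v) for v in range(5))   # 4 + 7 + 4 + 2 + 4 = 21

# Bound for the partial tour 1 -> 2: the committed edge plus the minimum
# outgoing edges of the vertices that still choose freely (2 through 5)
partial_bound = cost[0][1] + sum(min_out(v) for v in range(1, 5))   # 14 + 17 = 31
```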
Traveling Salesman Problem
The node is added to the priority queue
The node with the lowest bound is then removed
This bound calculation is then repeated for the partial tours extending from this node
The process ends when the priority queue is empty
Traveling Salesman Problem
The final results of this example are in this tree:
The accompanying number for each node is the order in which it was removed
Efficiency of Branch and Bound
In many types of problems, branch and bound is faster than backtracking, due to the use of a breadth-first search instead of a depth-first search
The worst case scenario is the same, as it will still visit every node in the tree
CS 4700:
Foundations of Artificial Intelligence
Carla P. Gomes
Module:
Local Search
(Reading R&N: Chapter 4:3 and 5:3)
So far: methods that systematically explore the search
space, possibly using principled pruning (e.g., A*)
Current best such algorithms can handle search spaces of up to 10^100 states / around 500 binary variables (“ballpark” number only!)
What if we have much larger search spaces? Search spaces for some real-world problems might
be much larger - e.g. 10^30,000 states.
A completely different kind of method is called for:
Local search methods or Iterative Improvement Methods (or Metaheuristics, OR)
Local Search
A quite general approach, used in:
planning, scheduling, routing, configuration, protein folding, etc.
In fact, many (most?) large Operations Research problems are
solved using local search methods (e.g., airline scheduling).
Local Search
Key idea (surprisingly simple):
1. Select (random) initial state (initial guess at solution)
2. Make local modification to improve current state
3. Repeat step 2 until goal state found (or out of time)
Requirements:
– generate an initial (often random; probably-not-optimal or not even valid) guess
– evaluate quality of guess
– move to other states (well-defined neighborhood function)
. . . and do these operations quickly
. . . and don't save paths followed
Hill-climbing search
"Like climbing Everest in thick fog with amnesia”
Move to a better “neighbor”
Note: “successor” normally called neighbor.
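The "move to a better neighbor" loop can be sketched generically. This is a minimal steepest-ascent variant; the helper names (`neighbors`, `score`) are illustrative, not from the slides.

```python
# Steepest-ascent hill climbing: repeatedly move to the best neighbor
# until no neighbor improves the current state.

def hill_climb(state, neighbors, score):
    while True:
        best = max(neighbors(state), key=score, default=None)
        if best is None or score(best) <= score(state):
            return state        # local optimum (possibly not global)
        state = best

# Toy example: maximize -(x - 3)^2 over the integers, stepping by +/- 1
result = hill_climb(0,
                    neighbors=lambda x: [x - 1, x + 1],
                    score=lambda x: -(x - 3) ** 2)
```

Here the score surface has a single peak, so the climb reaches the global optimum x = 3; on a surface with foothills it would stop at whatever local optimum it first reaches.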
Local search for CSPs
Hill-climbing and simulated annealing typically work with "complete" states, i.e., all variables assigned
To apply to CSPs:
• allow states with unsatisfied constraints
• operators reassign variable values
Variable selection: randomly select any conflicted variable
Value selection by min-conflicts heuristic:
• choose value that violates the fewest constraints
• i.e., hill-climb with h(n) = total number of violated constraints
Queens
States: 4 queens in 4 columns (256 states)
Neighborhood Operators: move queen in column
Goal test: no attacks
Evaluation: h(n) = number of attacks
8 Queens
Min-Conflicts Heuristic
[Figure: 8-queens board annotated with the number of conflicts each candidate square would create; the square marked 0 is the min-conflicts choice]
Pick initial complete assignment (at random)
Repeat:
• Pick a conflicted variable var (at random)
• Set the new value of var to minimize the number of conflicts
• If the new assignment is not conflicting then return it
(The min-conflicts heuristic inspired GSAT and Walksat)
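The loop above can be sketched for n-queens as follows. This is a minimal sketch, not the slides' code; `queens[c]` is the row of the queen in column c, and the step cap and tie-breaking are arbitrary choices.

```python
import random

# Min-conflicts for n-queens: queens[c] = row of the queen in column c.

def conflicts(queens, col, row):
    # number of other queens attacking square (col, row)
    return sum(1 for c, r in enumerate(queens)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n, max_steps=10000, seed=0):
    rng = random.Random(seed)
    queens = [rng.randrange(n) for _ in range(n)]   # random complete assignment
    for _ in range(max_steps):
        conflicted = [c for c in range(n)
                      if conflicts(queens, c, queens[c]) > 0]
        if not conflicted:
            return queens                           # no attacks: a solution
        col = rng.choice(conflicted)                # random conflicted variable
        # value selection: the row that minimizes the number of conflicts
        queens[col] = min(range(n), key=lambda r: conflicts(queens, col, r))
    return None                                     # gave up (can plateau)
```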
Remark
Local search with min-conflict heuristic works extremely well for million-queen problems
The reason: solutions are densely distributed in the O(n^n) space, which means that on average a solution is a few steps away from a randomly picked assignment
Stochastic Hill-climbing search: 8-queens problem
h = number of pairs of queens that are attacking each other, either directly or indirectly
h = 17 for the above state
Operators: move queen in column - best move leads to h=12 (marked with square)
Stochastic hill climbing: pick next move randomly from among uphill moves.
(There are many other variations of hill climbing.)
Problems with Hill Climbing
Foothills / Local Optima: No neighbor is better, but we are not at the global optimum.
• (Maze: may have to move AWAY from goal to find (best) solution)
Plateaus: All neighbors look the same.
• (8-puzzle: perhaps no action will change # of tiles out of place)
Ridges: sequence of local maxima
Ignorance of the peak: Am I done?
Hill-climbing search: 8-queens problem
A local minimum. What's the value of the evaluation function h (number of attacking queens)?
h = 1
Find a move to improve it.
Hill-climbing search
Problem: depending on initial state, can get stuck in local optimum (here maximum)
How to overcome local optima and plateaus?
Random-restart hill climbing
Variations on Hill-Climbing
Local Beam Search: run k random starting points in parallel, always keeping the k most promising states
Random restarts: Simply restart at a new random state after a pre-defined number of steps.
Local beam search
• Start with k randomly generated states
• Keep track of k states rather than just one
• At each iteration, all the successors of all k states are generated
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
Is this equivalent to k random-restart hill-climbing?
No: this is different from k random restarts, since information is shared between the k search points: some search points may contribute none of the best successors, while one search point may contribute all k successors ("Come over here, the grass is greener", Russell and Norvig)
Improvements to Basic Local Search
Issues:
• How to move more quickly to successively better plateaus?
• How to avoid "getting stuck" / local maxima?
Idea: Introduce downhill moves (“noise”) to escape from plateaus/local maxima
Noise strategies: 1. Simulated Annealing
– Kirkpatrick et al. 1982; Metropolis et al. 1953
2. Mixed Random Walk (Satisfiability)
– Selman and Kautz 1993
Simulated Annealing
Idea:Use conventional hill-climbing style techniques, but
occasionally take a step in a direction other than that in which
there is improvement (downhill moves).
As time passes, the probability that a down-hill step is taken is
gradually reduced and the size of any down-hill step taken is
decreased.
Kirkpatrick et al. 1982; Metropolis et al.1953.
Another Approach:
Simulated annealing search
Idea: escape local maxima by allowing some "bad" moves but gradually
decrease their frequency
In case of improvement, make the move (similar to hill climbing, but with a random move instead of the best move)
Otherwise, choose the move with probability that decreases exponentially with the "badness" of the move
What's the probability when T → ∞? When T → 0? When ∆ = 0? When ∆ → −∞?
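A sketch of the standard acceptance rule that those questions refer to. The slide does not show the formula, so the exp(∆/T) rule below (the usual Metropolis-style criterion for maximization) is an assumption; its limits answer the four questions.

```python
import math
import random

# Simulated-annealing acceptance rule (maximization), probability exp(delta/T)
# for downhill moves:
#   T -> infinity : exp(delta/T) -> 1   (almost any move accepted)
#   T -> 0        : exp(delta/T) -> 0   (only uphill moves, for delta < 0)
#   delta = 0     : probability 1       (sideways moves accepted)
#   delta -> -inf : probability -> 0    (very bad moves essentially never taken)

def accept(delta, T, rng=random):
    if delta > 0:
        return True                          # improvement: always take it
    return rng.random() < math.exp(delta / T)
```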
Notes on Simulated Annealing
Noise model based on statistical mechanics • . . . introduced as analogue to physical process of growing crystals
• Kirkpatrick et al. 1982; Metropolis et al. 1953
Convergence:
• 1. With an exponential schedule, will converge to the global optimum.
One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1
• 2. No more precise convergence rate is known (recent work on rapidly mixing Markov chains)
Key aspect: downwards / sideways moves • Expensive, but (if have enough time) can be best
Hundreds of papers/year; many applications: VLSI layout, factory scheduling, protein folding, . . .
Genetic Algorithms
Another class of iterative improvement algorithms
• A genetic algorithm maintains a population of candidate solutions for the problem at hand, and makes it evolve by iteratively applying a set of stochastic operators
Inspired by the biological evolution process
Uses concepts of “Natural Selection” and “Genetic Inheritance” (Darwin 1859)
Originally developed by John Holland (1975)
High-level Algorithm
1. Randomly generate an initial population.
2. Evaluate the fitness of the population
3. Select parents and “reproduce” the next generation
4. Replace the old generation with the new generation
5. Repeat steps 2 through 4 until iteration N
Stochastic Operators
Cross-over
• decomposes two distinct solutions and then
• randomly mixes their parts to form novel solutions
Mutation
• randomly perturbs a candidate solution
Genetic Algorithm Operators: Mutation and Crossover
Parent 1: 1 0 1 0 1 1 1
Parent 2: 1 1 0 0 0 1 1
Child 1:  1 0 1 0 0 1 1
Child 2:  1 1 0 0 1 1 0  (the final 0 is a mutation)
How are the children produced?
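The two operators can be sketched directly on these bit strings. A minimal sketch: the crossover point (after position 4) and the mutated final bit of child 2 are read off the figure.

```python
# Single-point crossover and point mutation on bit strings.

def crossover(p1, p2, point):
    # swap tails of the two parents after the crossover point
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(child, pos):
    # flip the bit at position pos
    flipped = '1' if child[pos] == '0' else '0'
    return child[:pos] + flipped + child[pos + 1:]

c1, c2 = crossover("1010111", "1100011", point=4)
c2 = mutate(c2, pos=6)      # the mutated final bit shown on the slide
```

This reproduces the figure: child 1 is "1010011", and child 2 is "1100110" after the mutation.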
Genetic algorithms
A successor state is generated by combining two parent states
Start with k randomly generated states (population)
A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
Evaluation function (fitness function). Higher values for better states.
Produce the next generation of states by selection, crossover, and mutation
Genetic algorithms
Production of the next generation.
Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28; the higher the better)
Probability of a given pair's selection is proportional to fitness, e.g.:
24/(24+23+20+11) = 31%
23/(24+23+20+11) = 29%, etc.
Crossover point randomly generated; random mutation
Genetic algorithms
Lots of variants of genetic algorithms with different selection, crossover, and mutation rules.
GA have a wide application in optimization – e.g., circuit layout and job shop scheduling
Much work remains to be done to formally understand GAs and to identify the conditions under which they perform well.
Summary
Local search algorithms
• Hill-climbing search
• Local beam search
• Simulated annealing search
• Genetic algorithms
Local Search Summary
Surprisingly efficient search technique
Wide range of applications
Formal properties elusive
Intuitive explanation:
• Search spaces are too large for systematic search anyway. . .
Area will most likely continue to thrive
The End
Kurt Schmidt, Drexel University
Solving Recurrence Relations
So what does
T(n) = T(n-1) +n
look like anyway?
Recurrence Relations
Can easily describe the runtime of recursive algorithms
Can then be expressed in a closed form (not defined in terms of itself)
Consider the linear search:
Eg. 1 - Linear Search
Recursively
Look at an element (constant work, c), then search the remaining elements…
• T(n) = T( n-1 ) + c
• “The cost of searching n elements is the cost of
looking at 1 element, plus the cost of searching n-1
elements”
Linear Search (cont.)
Caveat:
You need to convince yourself (and others) that the single step, examining an element, *is* done in constant time.
Can I get to the ith element in constant time, either directly, or from the (i-1)th element?
Look at the code
Methods of Solving Recurrence Relations
Substitution (we’ll work on this one in this lecture)
Accounting method
Draw the recursion tree, think about it
The Master Theorem*
Guess at an upper bound, prove it
* See Cormen, Leiserson, & Rivest, Introduction to Algorithms
Linear Search (cont.)
We’ll “unwind” a few of these
T(n) = T(n-1) + c (1)
But T(n-1) = T(n-2) + c, from above. Substituting back in:
T(n) = T(n-2) + c + c
Gathering like terms
T(n) = T(n-2) + 2c (2)
Linear Search (cont.)
Keep going:
T(n) = T(n-2) + 2c
T(n-2) = T(n-3) + c
T(n) = T(n-3) + c + 2c
T(n) = T(n-3) + 3c (3)
One more:
T(n) = T(n-4) + 4c (4)
Looking for Patterns
Note, the intermediate results are enumerated
We need to pull out patterns, to write a general expression for the kth unwinding
• This requires practice; it is a bit of an art. The brain learns patterns over time. Practice.
Be careful while simplifying after substitution
Eg. 1 – list of intermediates
Result at ith unwinding i
T(n) = T(n-1) + 1c 1
T(n) = T(n-2) + 2c 2
T(n) = T(n-3) + 3c 3
T(n) = T(n-4) + 4c 4
Linear Search (cont.)
An expression for the kth unwinding:
T(n) = T(n-k) + kc
We have 2 variables, k and n, but we have a relation
T(d) is constant (can be determined) for some constant d (we know the algorithm)
Choose any convenient # to stop.
Linear Search (cont.)
Let’s decide to stop at T(0). When the list to search is empty, you’re done…
0 is convenient, in this example…
Let n-k = 0 => n = k
Now, substitute n in everywhere for k:
T(n) = T(n-n) + nc
T(n) = T(0) + nc = nc + c0 = O(n)
( T(0) is some constant, c0 )
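The closed form can be checked numerically by evaluating the recurrence directly; the constants c = 3, c0 = 5 below are arbitrary choices for the check.

```python
def T(n, c=3, c0=5):
    # direct evaluation of the recurrence T(n) = T(n-1) + c, T(0) = c0
    return c0 if n == 0 else T(n - 1, c, c0) + c

def closed(n, c=3, c0=5):
    # the derived closed form: T(n) = n*c + c0, i.e., O(n)
    return n * c + c0
```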
Binary Search
Algorithm – “check middle, then search lower ½ or upper ½”
T(n) = T(n/2) + c
where c is some constant, the cost of checking the middle…
Can we really find the middle in constant time? (Make sure.)
Binary Search (cont)
Let’s do some quick substitutions:
T(n) = T(n/2) + c (1)
but T(n/2) = T(n/4) + c, so
T(n) = T(n/4) + c + c
T(n) = T(n/4) + 2c (2)
T(n/4) = T(n/8) + c
T(n) = T(n/8) + c + 2c
T(n) = T(n/8) + 3c (3)
Binary Search (cont.)
Result at ith unwinding i
T(n) = T(n/2) + c 1
T(n) = T(n/4) + 2c 2
T(n) = T(n/8) + 3c 3
T(n) = T(n/16) + 4c 4
Binary Search (cont)
We need to write an expression for the kth unwinding (in n & k)
• Must find patterns, changes, as i=1, 2, …, k
• This can be the hard part
• Do not get discouraged! Try something else…
• We’ll re-write those equations…
We will then need to relate n and k
Binary Search (cont)
Result at ith unwinding i
T(n) = T(n/2) + c = T(n/2^1) + 1c    1
T(n) = T(n/4) + 2c = T(n/2^2) + 2c    2
T(n) = T(n/8) + 3c = T(n/2^3) + 3c    3
T(n) = T(n/16) + 4c = T(n/2^4) + 4c    4
Binary Search (cont)
After k unwindings:
T(n) = T(n/2^k) + kc
Need a convenient place to stop unwinding – need to relate k & n
Let's pick T(0) = c0. So,
n/2^k = 0 => n = 0
Hmm. Easy, but not real useful…
Binary Search (cont)
Okay, let’s consider T(1) = c0
So, let:
n/2^k = 1 =>
n = 2^k =>
k = log2(n) = lg n
Binary Search (cont.)
Substituting back in (getting rid of k):
T(n) = T(1) + c lg(n)
= c lg(n) + c0
= O( lg(n) )
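This result can be checked numerically the same way as the linear-search recurrence, for n a power of 2 (the constants c = 3, c0 = 5 are again arbitrary).

```python
import math

def T(n, c=3, c0=5):
    # the recurrence T(n) = T(n/2) + c with T(1) = c0, for n a power of 2
    return c0 if n == 1 else T(n // 2, c, c0) + c

def closed(n, c=3, c0=5):
    # the derived closed form: T(n) = c*lg(n) + c0, i.e., O(lg n)
    return c * math.log2(n) + c0
```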