1
More On Intractability &
Beyond CS161: Online Algorithms
Monday, August 11th
2
Announcements
1. PS#6 due Wednesday at midnight
2. Project evaluation/competition results on Wednesday
3. Final exam information (later this lecture)
4. Course evaluations are open on Axess
3
Outline For Today
1. Approximate Set Cover
2. Approximate Vertex Cover
3. Final Exam Information
4. Beyond CS 161: Online Algorithms
Recap: Knapsack FPTAS
4
Knapsack Algorithm
Input: (wi, vi, W) and an accuracy parameter ε (say 0.01)
Output: a solution of value ≥ (1-ε)*OPT, i.e., a (1-ε)-approximation
Key Takeaway: We are approximating an NP-complete problem to arbitrary precision.
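The value-scaling scheme behind a Knapsack FPTAS can be sketched as follows. This is an illustrative sketch, not the lecture's exact pseudocode: the function name, the scaling factor mu = ε·vmax/n, and the DP over scaled values are standard choices assumed here for concreteness.

```python
# Sketch of a Knapsack FPTAS via value scaling (illustrative; names and the
# exact scaling are assumptions, not the lecture's own pseudocode).

def knapsack_fptas(weights, values, W, eps):
    """Return (chosen item indices, total value) with value >= (1-eps)*OPT."""
    n = len(values)
    vmax = max(values)
    mu = eps * vmax / n                  # scaling factor: larger eps => coarser
    scaled = [int(v // mu) for v in values]
    V = sum(scaled)                      # upper bound on achievable scaled value
    INF = float("inf")
    # dp[v] = minimum weight needed to reach scaled value exactly v
    dp = [0.0] + [INF] * V
    choice = [frozenset()] + [None] * V  # an item set achieving dp[v]
    for i in range(n):
        for v in range(V, scaled[i] - 1, -1):  # downward: 0/1 knapsack DP
            if dp[v - scaled[i]] + weights[i] < dp[v]:
                dp[v] = dp[v - scaled[i]] + weights[i]
                choice[v] = choice[v - scaled[i]] | {i}
    best = max(v for v in range(V + 1) if dp[v] <= W)
    items = choice[best]
    return items, sum(values[i] for i in items)
```

The DP runs in O(n·V) time with V ≤ n²/ε, so the whole thing is polynomial in n and 1/ε, and the returned set is guaranteed to have value at least (1-ε)·OPT.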
Note On Approximating NP-complete Problems
5
Knapsack is NP-complete: all NP problems (e.g., TSP) reduce to it.
If we solved Knapsack exactly, we would solve all NP problems exactly:
[Diagram: Π1 ∈ TSP → poly-time TSP-to-KNPS converter → Π2 ∈ KNPS → exact algorithm for KNPS → solution to KNPS → poly-time KNPS-to-TSP solution converter → solution to TSP]
Note On Approximating NP-complete Problems
6
But there are NP problems (e.g., TSP) that can't be approximated to any constant factor!
[Diagram: Π1 ∈ TSP → poly-time TSP-to-KNPS converter → Π2 ∈ KNPS → poly-time approximation algorithm for KNPS → approximate solution to KNPS → ✗ → no poly-time solution converter turns this into an approximate solution to TSP]
Note On Approximating NP-complete Problems
7
Key Takeaway: Although exact solutions can be maintained through reductions, approximate solutions cannot be maintained in general.
In other words: the approximation guarantee can be lost across a reduction, even though exactness is preserved!
8
Outline For Today
1. Approximate Set Cover
2. Randomized Approximate Vertex Cover
3. Final Exam Information
4. Beyond CS 161: Online Algorithms
9
Set Cover Problem (Sec 11.3)
Input: U = {1, …, n} items and sets S1, …, Sm s.t. S1 ∪ S2 ∪ … ∪ Sm = U
Output: minimum # of sets required to cover U
Fact: Set Cover is NP-complete: one of Karp's 21 NP-complete problems (Vertex Cover ≤p Set Cover)
10
Set Cover Example
[Figure: universe U = {1, …, 12} covered by six sets S1–S6.]
Copt: S1 ∪ S2 ∪ S3, an optimal cover of size 3.
14
Greedy Set-Cover Algorithm
Idea: Iteratively pick the set that covers
the most “uncovered” elements. procedure Greedy-SetCover(U, S1,…,Sn):
C = ∅while U is not empty
pick Si that maximizes |Si ∩ U|C = C + Si
U = U \ Si
return Cmin(|U|, n) iterations, each iteration O(n*|
U|)
Total: O(n*|U|*min(|U|, n)) time.
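The procedure translates directly into Python; a minimal sketch (function and variable names are mine, not the slides'):

```python
# Greedy Set Cover: repeatedly take the set covering the most uncovered elements.

def greedy_set_cover(universe, sets):
    """Return a list of sets (drawn from `sets`) whose union covers `universe`."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # pick Si maximizing |Si ∩ U| (ties broken by first occurrence)
        best = max(sets, key=lambda s: len(s & uncovered))
        cover.append(best)
        uncovered -= best       # U = U \ Si
    return cover
```

Note the precondition S1 ∪ … ∪ Sn = U matters: if the sets do not cover U, the chosen `best` eventually covers nothing new and the loop never terminates.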
15
Greedy Algorithm Simulation
[Animation: the greedy algorithm run on the example instance.]
Greedy picks S4 first (it covers the most uncovered elements), then S2, then S3, and finally S1.
Cgreedy: S4, S2, S3, S1. Size is 4, not optimal (the optimal cover has size 3).
23
Thought Experiment
Cost of each set in the output is 1.
Distribute the cost of each set Si over the
new elements that Si covers when it’s
picked.
24
Thought Experiment Simulation
[Animation: the greedy run with each set's cost of 1 split over its newly covered elements.]
S4 covers 6 new elements (1, 2, 5, 6, 9, 10): each is charged 1/6.
S2 covers 3 new elements (3, 7, 8): each is charged 1/3.
S3 covers 2 new elements (11, 12): each is charged 1/2.
S1 covers 1 new element (4): it is charged 1/1 = 1.
Q1: What is the total "cost of the universe" U?
A: |Cgreedy|, b/c each time the greedy algorithm picks a new set, it distributes a total cost of exactly 1 to the newly covered elements.
Q2: What is the sum of the "costs of the sets" in Copt?
Cost of S1: 1/6 + 1/6 + 1/3 + 1/1
Cost of S2: 1/6 + 1/6 + 1/3 + 1/3
Cost of S3: 1/6 + 1/6 + 1/2 + 1/2
A: The sum is ≥ the cost of the universe, i.e., ≥ |Cgreedy|, b/c Copt is a set cover: every element's cost is counted at least once.
Goal
Bound the cost of each set in Copt; then we can get a bound on |Cgreedy| in terms of |Copt|, i.e., show that Cgreedy is not much larger than Copt.
Claim: The cost of any set S (not just the ones in Copt) is ≤ H(|S|) = O(ln(|S|)), where H(k) = 1 + 1/2 + … + 1/k.
If the claim is true, then: |Cgreedy| = cost of U ≤ sum over S in Copt of cost(S) ≤ |Copt| · H(n) = O(ln(n)) · |Copt|.
41
Proof of Claim: by picture
[Figure: a set S = {a, b, c, d, e, f, g} (|S| = 7), drawn against the harmonic series 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7.]
Q: Suppose the first time S's elements are covered, 3 are covered: e, f, g. What can you assert about the costs they get?
A: ≤ 1/7 each (b/c S had 7 uncovered elements but S was not picked, so the set that was picked must have covered at least 7 new elements).
Q: Suppose the 2nd time S's elements are covered, 1 is covered: d. What can you assert about the cost of d?
A: ≤ 1/4 (by the same argument: S had 4 uncovered elements left).
Q: Suppose the 3rd time S's elements are covered, 2 are covered: b and c. What can you assert about their costs?
A: ≤ 1/3 each (by the same argument).
Q: What can you assert about the cost of a?
A: ≤ 1 (by the same argument).
Conclusion: cost(a) + cost(b) + … + cost(g) ≤ 1 + 1/2 + 1/3 + … + 1/7 = H(7) = O(ln(7)).
Q.E.D.
51
A More Formal Proof: Induction Template
Let k = |S|. List the elements of S in the reverse order in which they were covered: e1, e2, …, ek.
Claim: cost(ei) ≤ 1/i.
Proof by (reverse) induction:
Base case is i = k (argue that it holds).
Assume the claim holds for k, k-1, …, i.
Show it holds for i-1 by the same argument as in the proof by picture.
52
Summary: Greedy is an O(log n)-approximation.
1. Assigned a cost to each element by distributing the cost 1 of each new set S added by greedy equally among the new elements covered by S.
2. By construction: the sum of the elements' costs equals |Cgreedy|.
3. B/c Copt covers all elements: the sum over S in Copt of cost(S) is at least the sum of the elements' costs.
4. Hence the sum over S in Copt of cost(S) ≥ |Cgreedy|.
5. Put an H(|S|) = O(log(|S|)) bound on the "cost of each set S".
6. Concluded: |Cgreedy| ≤ O(log n) · |Copt|.
53
Key Takeaway
For NP-complete Problems
the algorithmic tools in our toolbox
can be used as is.
But we have to give up something:
(1) generality, (2) exactness, or
(3) efficiency.
54
Outline For Today
1. Approximate Set Cover
2. Approximate Vertex Cover
3. Final Exam Information
4. Beyond CS 161: Online Algorithms
Recap: Vertex Cover
55
Input: Undirected Graph G(V, E)
Output: Minimum Vertex Cover of G
Vertex Cover: S ⊆ V, s.t. for each (u, v) ∈ E:
either u ∈ S, or v ∈ S.
Fact: Vertex Cover is NP-complete:
3-SAT≤pCLIQUE≤pVERTEX-COVER
Vertex Cover Example
[Figure: a graph on vertices A, B, C, D, E, F.]
Min Vertex Cover: {A, B}
2-Approximation VC
59
procedure 2-Approx-VC(G(V, E)):
    VC-OUT = ∅
    for (u, v) ∈ E:
        if neither u nor v is in VC-OUT:
            VC-OUT = VC-OUT ∪ {u, v}
    return VC-OUT

Run-time: Can be done in O(n + m) time (exercise).
Claim 1: 2-Approx-VC returns a Vertex Cover.
Proof: By construction, each edge (u, v) is either already covered when we loop over it, or we cover it by adding both u and v.
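A direct Python rendering of 2-Approx-VC (a sketch; the edge-list representation is an assumption):

```python
# 2-approximation for Vertex Cover: greedily take BOTH endpoints of any
# not-yet-covered edge (equivalently, the endpoints of a maximal matching).

def two_approx_vc(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered
            cover.update((u, v))                # add both endpoints
    return cover
```

On the path a-b-c this outputs {a, b} (size 2) while the optimal cover {b} has size 1, so the factor of 2 is tight.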
2-Approximation VC Example
[Animation: the algorithm run on the example graph.]
Output: {A, C, B, F}, 4 vertices
Output In Terms of Disjoint Edges
[Figure: the example graph with edges (A, C) and (B, F) highlighted.]
Output: {(A, C), (B, F)}, 2 disjoint edges.
Proof idea: For each of these edges, any VC has to contain at least one endpoint.
Claim 2: 2-Approx-VC is a 2 approximation
64
Proof: The pairs we add form a set of "disjoint edges" (ui, vi), i.e., no two of the edges we pick share a common vertex.
Since any VC has to contain either ui or vi for each such edge, any VC must have at least |VC-OUT|/2 vertices. In particular, the optimal VC must have size ≥ |VC-OUT|/2.
Fact: 2 is the best approximation ratio known for Vertex Cover. Whether a better one exists is open.
65
Randomized 2-Approx Vertex Cover
procedure Rand-2-Approx-VC(G(V, E)):
    VC-OUT = ∅
    for (u, v) ∈ E:
        if neither u nor v is in VC-OUT:
            put either u or v into VC-OUT uniformly at random
    return VC-OUT

Again this algorithm outputs a VC by construction.
Exercise: Show that E[|VC-OUT|] ≤ 2|VCopt|.
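The randomized variant adds only one endpoint per uncovered edge, chosen uniformly at random. A sketch (the seeded RNG default is my addition, for reproducibility):

```python
import random

# Randomized 2-approx Vertex Cover: cover each not-yet-covered edge with
# ONE endpoint chosen uniformly at random.
def rand_two_approx_vc(edges, rng=None):
    rng = rng or random.Random(0)           # seeded default (an assumption)
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(rng.choice((u, v)))   # pick either endpoint uniformly
    return cover
```

The output is a vertex cover regardless of the random choices: each edge, when reached, either is already covered or gets one endpoint added. Only the size of the output is random.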
66
Outline For Today
1. Approximate Set Cover
2. Randomized Approximate Vertex Cover
3. Final Exam Information
4. Beyond CS 161: Online Algorithms
67
Final Exam Information
This Saturday at 3:30pm, at Gates B01.
Closed book/notes, etc. One double-sided A4 cheat-sheet is allowed.
140 points.
1 problem consisting of 10 T/F questions (no proofs required): +2 points for each correct, -2 for each incorrect answer.
1 problem testing the mathematical tools we've used.
4 or 5 questions on designing and analyzing algorithms.
You can use any algorithm we have covered as a subroutine without re-proving its run-time and correctness claims. But you have to know the run-times of the algorithms we covered.
68
Topics Covered
Cumulative until the first half of today’s lecture.
8 Categories of Topics/Algorithms
1. Mathematical Tools: Big-oh Notation, Master
Theorem, Substitution Method, Linearity of
Expectation, Independence
2. Data Structures: Heaps, Union-Find, Hash Tables,
Bloom Filters
3. Fund. Graph Primitives: BFS/DFS, Topological Sort
of DAGs, Undirected Conn. Comp., Directed
(Strongly) Connected Components
69
Topics Covered
4. DC & Algs: MergeSort, Strassen
5. Greedy & Algs: Dijkstra, Prim, Kruskal (for MST), Cut
Property and Lemmas for MST, Huffman, Scheduling Problems
(and others in PSs), Greedy proof techniques: Greedy Stays
Ahead, Exchange Arguments
6. Randomized Algs: QuickSort/QuickSelect, Karger,
Approximate Max-Cut/Vertex Cover
7. DP & Algs: DP Recipe, Linear Ind. Set, Sequence Alignment,
Bellman-Ford, Floyd-Warshall, Pseudo-polynomial Knapsack
Algorithms
8. Intractability: P, NP, NP-complete, reductions, Options for
Confronting NP-complete Problems, Knapsack Greedy Approx,
Knapsack FPTAS, Set Cover, TSP with Triangle Inequality
70
A Final Note About The Final
For most problems, we will give you a computational
problem and ask you to solve it, just as in PSs.
71
Outline For Today
1. Approximate Set Cover
2. Randomized Approximate Vertex Cover
3. Final Exam Information
4. Beyond CS 161: Online Algorithms
72
CS 161's Computational Model Assumptions
1. Inputs to computational problems are of a fixed size n.
2. Input is correct/error-free.
3. Computation is performed on a serial machine (single processor).
4. Computation is performed on a classical machine, i.e., each bit stores a 0 or a 1 (vs. quantum machines with qubits).
And others, such as the Random Access Memory model.
Different computational models drop one or more of these assumptions.
73
Streaming Applications
Input is a possibly infinite stream. At each point in time, the application needs to make an algorithmic decision.
E.g., caching in OSs: an infinite stream of disk lookup requests.
[Figure: OS cache receiving a stream of requests … R3 R2 R1]
Algorithmic Decision: If there is a miss, what do we evict?
Question: How optimal is the algorithm's eviction strategy?
74
Streaming Applications
News feeds: FB, Twitter receive continuous tweets/user updates. At each point, these apps need to decide which news/update should appear in whose news feed.
CS161 tools cannot analyze the algorithmic decisions these apps make.
[Figure: FB/Twitter/Google receiving a stream of updates: new friendships, news, user updates.]
75
Online Algorithms
Takes as input a possibly infinite stream. At each point in time t, it makes a decision based on what has been seen so far, but without knowing the rest of the input.
Type of optimality analysis: Competitive Ratio, the worst ratio (cost of online algorithm)/(cost of OPT) over all possible input streams, where OPT is the best solution possible if we knew the entire input in advance.
76
Example 1: Skiing in Tahoe
Buying equipment costs $500. Renting costs $50.
Q: Should we buy or rent?
A: If we will go 9 times or fewer, rent; otherwise buy.
An online algorithm for this problem decides whether to buy or not each time we go to Tahoe.
Once the algorithm buys, there is no other decision to make: every subsequent trip is free.
77
Example 1: Skiing in Tahoe
[Diagram: an online skiing algorithm rents at t = 1, rents at t = 2, …, and buys at t = k.]
Observation: Any online algorithm is completely described by the time k at which it buys the equipment.
Q: What's the optimal choice of k?
78
Competitive Ratio If We Pick k = 1
The algorithm buys at t = 1.
Q1: What's the cost of this algorithm?
A1: $500
Q2: What's the competitive ratio of the algorithm that picks k = 1, i.e., what's the worst-case input for this algorithm?
A: Going only once. Then the optimal solution would be to just rent, for $50.
=> CR = $500/$50 = 10
79
Competitive Ratio If We Pick k = 2
The algorithm rents at t = 1 and buys at t = 2.
Q1: What's the cost of this algorithm?
A1: $550
Q2: What's the CR?
80
Competitive Ratio If We Pick k = 2
Case 1: If we go once: we pay $50, opt is $50, ratio = 1.
Case 2: If we go twice: we pay $550, opt is $100, ratio = 5.5.
Case 3: If we go three times: we pay $550, opt is $150, ratio ≈ 3.67.
Case 4: If we go four times: we pay $550, opt is $200, ratio = 2.75.
…
A: CR is 5.5 (much better than the k = 1 algorithm).
81
Competitive Ratio If We Pick k < 10
The algorithm rents for t ≤ k-1 and buys at t = k.
Q1: What's the cost of this algorithm (if we go at least k times)?
A1: (k-1)·50 + 500
Q2: What's the CR?
A: If we go ≤ k-1 times, we're optimal.
If we go k or more times, the worst ratio is ((k-1)·50 + 500)/(50k).
82
Competitive Ratio If We Pick k > 10
Q1: What's the cost of this algorithm (if we go at least k times)?
A1: (k-1)·50 + 500
Q2: What's the CR?
A: If we go < 10 times, we're optimal.
What if we go ≥ 10 times? Then OPT is $500.
If we go t times, 10 < t < k, the ratio is 50t/500 = t/10 (so it increases in steps of 0.1).
If we go exactly k times, the ratio is ((k-1)·50 + 500)/500, the worst case.
84
Optimal k
Case 1: k < 10, CR: ((k-1)·50 + 500)/(50k)
Case 2: k = 10, CR: (9·50 + 500)/500 = 1.9
Case 3: k > 10, CR: ((k-1)·50 + 500)/500
Optimal k = 10 => CR: 1.9
Best online strategy: rent until the 10th trip, then buy the equipment.
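The case analysis can be checked numerically. A small sketch with the slides' prices ($50 rent, $500 buy); the helper names are my own:

```python
RENT, BUY = 50, 500

def online_cost(k, t):
    """Cost of the buy-on-trip-k strategy if we end up going t times."""
    return RENT * t if t < k else RENT * (k - 1) + BUY

def opt_cost(t):
    """Offline optimum: rent every time, or buy up front."""
    return min(RENT * t, BUY)

def competitive_ratio(k, horizon=1000):
    """Worst-case ratio over all trip counts up to `horizon`."""
    return max(online_cost(k, t) / opt_cost(t) for t in range(1, horizon + 1))
```

Evaluating this reproduces the slides' numbers: competitive_ratio(1) is 10, competitive_ratio(2) is 5.5, and k = 10 minimizes the ratio at 1.9.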
85
Caching
[Figure: a slow disk behind a fast cache holding Page1, …, Pagek, serving a request stream … R3 R2 R1.]
If the page is in the cache (hit), reply directly from the cache.
Otherwise (miss), send the request to the disk and put the page into the cache.
Q: Which page to evict?
86
Caching
Input: N pages on disk, and an infinite stream of page requests.
Online Algorithm: Decide which page to evict from the cache when it's full and there's a miss.
Goal: minimize the number of misses.
Idea: LRU: remove the Least Recently Used page.
87
LRU with k = 3
Request stream: 4 1 2 1 5 3 4 4 1 1 3 2 4 5 1
[Animation, step by step:]
4: miss, cache {4}
1: miss, cache {4, 1}
2: miss, cache {4, 1, 2}
1: hit
5: miss, evict 4, cache {1, 2, 5}
3: miss, evict 2, cache {1, 5, 3}
4: miss, evict 1, cache {5, 3, 4}
4: hit
1: miss, evict 5, cache {3, 4, 1}
so on and so forth…
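The trace can be reproduced in a few lines. An illustrative sketch using an `OrderedDict` to keep pages in recency order:

```python
from collections import OrderedDict

def lru_misses(requests, k):
    """Simulate LRU with a size-k cache; return the number of misses."""
    cache = OrderedDict()              # keys kept in recency order, oldest first
    misses = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)    # hit: mark as most recently used
        else:
            misses += 1
            if len(cache) == k:
                cache.popitem(last=False)   # evict the least recently used page
            cache[page] = True
    return misses
```

On the slide's stream with k = 3 this counts 11 misses and 4 hits.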
103
Competitive Ratio Claim
Claim: If the optimal sequence of choices for a size-h cache causes m misses, then, for the same sequence of requests, LRU with a size-k cache (k > h) causes at most (k/(k-h))·m misses.
Interpretation: If LRU had twice as much cache as an algorithm OPT that knew the future (k = 2h), it would have at most twice the misses of OPT.
Note: we will prove the claim for the case k > h.
104
Proof of Competitive Ratio
Recursively break the sequence of inputs into phases.
Let t be the time when we see the (k+1)st different
request.
Phase 1: a1 … at-1
Let t` be the time we see the (k+1)st different element
starting from at
Phase 2: at … at’-1
4 1 2 1 5 3 4 4 1 1 3 2 4 5 1
105
Proof of Competitive Ratio
4 1 2 1 5 3 4 4 1 1 3 2 4 5 1   (k = 3)
Phase 1: 4 1 2 1 | Phase 2: 5 3 4 4 | Phase 3: 1 1 3 2 | Phase 4: 4 5 1
By construction, each phase has k distinct requests.
Q: At most how many misses does LRU have in each phase?
A: k, b/c a phase contains only k distinct requests, so even if LRU missed on every one of them, it would have at most k misses.
106
Proof of Competitive Ratio
4 1 2 1 5 3 4 4 1 1 3 2 4 5 1
Phase 1 Phase 2 Phase 3 Phase 4
Q: What's the minimum number of misses that any size-h cache must have in each phase?
A: k-h, b/c k distinct items are requested during the phase, and at most h of them can be in the cache at the start of the phase, so at least k-h of them must trigger misses.
Therefore the CR is k/(k-h).
Q.E.D.
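The phase decomposition in the proof is easy to compute. A sketch (the helper name is mine) that reproduces the four phases on the example stream:

```python
def phases(requests, k):
    """Split `requests` into maximal phases of at most k distinct pages each;
    a new phase starts at every (k+1)-st distinct request."""
    out, cur, distinct = [], [], set()
    for r in requests:
        if r not in distinct and len(distinct) == k:
            out.append(cur)            # phase complete: flush it
            cur, distinct = [], set()
        cur.append(r)
        distinct.add(r)
    if cur:
        out.append(cur)                # final (possibly partial) phase
    return out
```

On the slide's stream with k = 3 this yields exactly the four phases shown, each with at most k distinct requests, which is what the per-phase miss bounds rest on.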
107
Wednesday
More on Beyond CS 161
Parallel Algorithms