Analysis of Algorithms
The Greedy Approach
Greedy Algorithms
Greedy algorithms work in stages, considering one input at a time.
At each stage a decision is made
regarding whether or not a particular input is in an optimal solution.
Inputs are considered to be in an order determined by some selection procedure.
If the inclusion of an input into a partially constructed optimal solution will result in an infeasible solution, then
this input is not added to the partial solution.
Greedy Algorithms
A greedy algorithm obtains an optimal solution to a problem by
making a sequence of choices. At each decision point in the algorithm,
the choice that seems best at the moment is chosen. This heuristic strategy does not always produce an optimal solution. How can one tell whether a greedy algorithm will solve a particular
optimization problem? There is no way in general, but there are some key ingredients exhibited by most problems
that lend themselves to a greedy strategy.
Greedy Algorithm Example
Sales clerks often encounter the problem of giving change for a purchase.
Customers usually don't want to receive a lot of coins. The goal of the sales clerk is not only to give the correct
change, but to do so with as few coins as possible. A solution to an instance of the change problem is a set of
coins that adds up to the required amount. An optimal solution is such a set of
minimum size.
Greedy Algorithm Example
A greedy approach to the problem could proceed as follows.
Initially there are no coins in the change. The sales clerk starts by looking for the largest coin (in
value) he can find. That is, his criterion for deciding which coin is best (locally
optimal) is the value of the coin. This is called the selection procedure of the greedy
algorithm.
Greedy Algorithm Example
Next he sees if adding this coin to the change would make the total value of the change exceed the amount required.
This is called the feasibility check in a greedy algorithm.
If adding the coin would not make the change exceed the amount required, he adds the coin to the change.
Next he checks to see if the value of the change is now equal to the amount required.
This is the solution check in the greedy algorithm.
Greedy Algorithm Example
If they are not equal, he gets another coin using his selection procedure, and repeats the process.
He does this until the value of the change equals the amount required or he runs out of coins.
In the latter case, he is not able to return the exact amount required.
Greedy Algorithm Example
while there are more coins and the instance is not solved do
    grab the largest remaining coin                                       // selection procedure
    if adding the coin makes the change exceed the amount required then   // feasibility check
        reject the coin
    else
        add the coin to the change
    if the total value of the change equals the amount required then      // solution check
        the instance is solved
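To make the steps concrete, here is a minimal Python sketch of the procedure above (the function name and inputs are illustrative, not from the original slides; coins is the multiset of coins the clerk holds and amount is the change owed):

def greedy_change(coins, amount):
    change = []
    total = 0
    for coin in sorted(coins, reverse=True):   # selection procedure: largest coin first
        if total + coin <= amount:             # feasibility check
            change.append(coin)
            total += coin
        if total == amount:                    # solution check
            return change
    return None                                # ran out of coins: exact change impossible

# Example: change for 36 cents from US denominations -> [25, 10, 1]
print(greedy_change([25, 10, 10, 5, 1, 1, 1], 36))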
Greedy Algorithm Example
In the feasibility check, when we determine that adding a coin would make the change exceed the amount required, we learn that
the set obtained by adding that coin cannot be completed to give a solution to the instance.
Therefore that set is infeasible and is rejected.
Greedy Algorithms
I. Greedy Choice Property: A globally optimal solution can be arrived at by making a
locally optimal (greedy) choice. In dynamic programming,
we also make a choice at each step, but the
choice may depend on the solutions to subproblems.
Greedy Algorithms
In a greedy algorithm,
we make whatever choice seems best at the moment and then solve the subproblems arising after the choice is made.
The choice made by a greedy algorithm may depend on the choices made so far, but
it cannot depend on any future choices or on the solutions to subproblems.
A greedy algorithm starts with a locally optimal choice, and continues making locally optimal choices until a solution is found.
Greedy Algorithms
II. Optimal Substructure: An optimal solution to the problem contains
within it optimal solutions to subproblems. This is a key ingredient for assessing the
applicability of dynamic programming as well as greedy algorithms. For example, if an optimal set of coins for 36 cents contains a quarter, the remaining coins must be an optimal solution for 11 cents.
Minimum Spanning Tree
A Spanning Tree for a connected, undirected graph, G = (V, E), is a subgraph of G that is an undirected tree and contains all the vertices of G.
In a weighted graph G = (V, E, W), the weight of a subgraph is the sum of the weights of the edges in the subgraph.
A minimum spanning tree (MST) for a weighted graph is a spanning tree with minimum weight.
Minimum Spanning Tree
Consider the following graph on the vertices A, B, C, D and its possible spanning trees.
[Figure: a weighted graph and three of its spanning trees; the first two spanning trees have weight 6 (the minimum) and the third has weight 7.]
Minimum Spanning Tree
Minimum spanning trees are useful when we want to find the cheapest way to connect a
set of cities by roads,
set of electrical terminals or computers by wires or telephone lines,
etc.
Prim's Algorithm for Minimum Spanning Tree
Prim's algorithm begins by selecting an arbitrary starting vertex, and then "branches out" from the part of the tree constructed so far by choosing a new vertex and edge at each iteration.
The new edge connects the new vertex to the previous tree. During the course of the algorithm, the vertices may be thought
of as divided into three (disjoint) categories:
1. Tree vertices: in the tree constructed so far.
2. Fringe vertices: not in the tree, but adjacent to some vertex in the tree.
3. Unseen vertices: all others.
Prim's Algorithm for Minimum Spanning Tree
The key step in the algorithm is the selection of a vertex from the fringe and an incident edge.
Prim’s algorithm always chooses an edge of minimum weight from a tree vertex to a fringe vertex.
The general algorithm structure is
Prim's Algorithm for Minimum Spanning Tree
Prim-MST(G, n)
    Initialize all the vertices as unseen.
    Select an arbitrary vertex s to start the tree; reclassify it as tree.
    Reclassify all the vertices adjacent to s as fringe.
    While there are fringe vertices:
        Select an edge of minimum weight between a tree vertex t and a fringe vertex v.
        Reclassify v as tree; add edge tv to the tree.
        Reclassify all unseen vertices adjacent to v as fringe.
Prim's Algorithm for Minimum Spanning Tree
[Figure: the tree and fringe after the starting vertex A is selected. Tree so far: A; fringe vertices: B, F, G.]
Prim's Algorithm for Minimum Spanning Tree
[Figure: the tree and fringe after selecting an edge and vertex. Edge BG is not shown because AG is a better choice to reach G.]
Prim's Algorithm for Minimum Spanning Tree
[Figure: the tree and fringe after selecting edge AG. Edge GB is not shown because vertex B is already included in the tree.]
Prim's Algorithm for Minimum Spanning Tree
[Figure: the final minimum spanning tree produced by Prim's algorithm.]
Prim's Algorithm

Algorithm prim(G)
    F = ∅
    for i = 2 to n
        nearest[i] = 1
        distance[i] = W[1, i]
    repeat n − 1 times
        min = ∞
        for i = 2 to n
            if 0 ≤ distance[i] < min
                min = distance[i]; near = i
        e = the edge connecting the vertices indexed by near and nearest[near]
        add e to F
        distance[near] = −1            // near is now a tree vertex
        for i = 2 to n
            if W[i, near] < distance[i]
                distance[i] = W[i, near]; nearest[i] = near
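As an illustrative sketch (not part of the original slides), the same array-based algorithm can be written in Python as follows, assuming W is an n × n adjacency matrix with math.inf for missing edges, the graph is connected, and vertices are numbered 0 through n − 1 (so vertex 0 plays the role of vertex 1 above):

import math

def prim(W):
    n = len(W)
    nearest = [0] * n                         # nearest[i]: tree vertex closest to i
    distance = [W[0][i] for i in range(n)]    # distance[i]: weight of that edge
    distance[0] = -1                          # -1 marks a vertex already in the tree
    F = []                                    # edges of the spanning tree
    for _ in range(n - 1):
        # selection procedure: fringe vertex with the cheapest edge to the tree
        near = min((i for i in range(n) if distance[i] >= 0),
                   key=lambda i: distance[i])
        F.append((nearest[near], near, distance[near]))
        distance[near] = -1                   # near is now a tree vertex
        for i in range(n):                    # update fringe distances through near
            if W[i][near] < distance[i]:
                distance[i] = W[i][near]
                nearest[i] = near
    return F

INF = math.inf
W = [[INF, 1, 3, INF],
     [1, INF, 3, 6],
     [3, 3, INF, 4],
     [INF, 6, 4, INF]]
print(prim(W))    # [(0, 1, 1), (0, 2, 3), (2, 3, 4)]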
Analysis
Each of the n − 1 iterations scans the vertices twice, so T(n) = 2(n − 1)(n − 1) ∈ Θ(n²) using an adjacency matrix; this may change if the data structure is changed. If implemented via a min-heap, the complexity
would be O((V − 1 + E) log V) = O(E log V).
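For comparison, a min-heap version achieving the O(E log V) bound might look like the following Python sketch (an assumed input format: graph is an adjacency-list dict mapping each vertex to a list of (neighbor, weight) pairs):

import heapq

def prim_heap(graph, start):
    in_tree = {start}
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    tree = []
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)          # cheapest edge leaving the tree
        if v in in_tree:
            continue                           # stale entry: v already joined the tree
        in_tree.add(v)
        tree.append((u, v, w))
        for x, wx in graph[v]:                 # new fringe edges through v
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return tree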
Kruskal's Algorithm
Kruskal's algorithm is edge based: add the edges one at a time, in increasing
weight order. The algorithm maintains A, a forest of
trees. An edge is accepted if it connects vertices of distinct trees.
We need a data structure that maintains a partition, i.e., a collection S of disjoint sets:
MakeSet(S, x): S ← S ∪ {{x}}
Union(Si, Sj): S ← S − {Si, Sj} ∪ {Si ∪ Sj}
FindSet(S, x): returns the unique Si ∈ S such that x ∈ Si
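One standard realization of this partition ADT is a disjoint-set forest with path compression and union by rank; the Python class below is an illustrative sketch (the names are ours, not from the slides):

class DisjointSets:
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):                     # MakeSet: add the singleton {x}
        self.parent[x] = x
        self.rank[x] = 0

    def find_set(self, x):                     # FindSet: return x's representative
        if self.parent[x] != x:
            self.parent[x] = self.find_set(self.parent[x])   # path compression
        return self.parent[x]

    def union(self, x, y):                     # Union: merge the two sets
        rx, ry = self.find_set(x), self.find_set(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx                   # attach the shallower tree
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1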
Kruskal's Algorithm
The algorithm adds the cheapest edge that connects two trees of the forest.

MST-Kruskal(G, w)
    A ← ∅
    for each vertex v ∈ V[G] do
        Make-Set(v)
    sort the edges of E by non-decreasing weight w
    for each edge (u, v) ∈ E, in order by non-decreasing weight do
        if Find-Set(u) ≠ Find-Set(v) then
            A ← A ∪ {(u, v)}
            Union(u, v)
    return A
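Putting the pieces together, here is a self-contained Python sketch of MST-Kruskal (a compact inline union-find is used so the function runs on its own; edges are assumed to be (u, v, w) triples):

def kruskal(vertices, edges):
    parent = {v: v for v in vertices}          # Make-Set for every vertex

    def find(x):                               # Find-Set with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    A = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):   # non-decreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                           # endpoints lie in distinct trees
            A.append((u, v, w))
            parent[rv] = ru                    # Union the two trees
    return A

# Example:
print(kruskal(['a', 'b', 'c'],
              [('a', 'b', 1), ('b', 'c', 2), ('a', 'c', 3)]))
# -> [('a', 'b', 1), ('b', 'c', 2)]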
Kruskal Example
[Figures: four slides stepping through Kruskal's algorithm on an example graph.]
Kruskal Running Time
A detailed analysis shows O(V) + O(E log E) + O(E log V).
We need O(V) operations to build the initial forest
with |V| trees, each containing one node.
The edges are stored in a priority queue and the smallest edge is retrieved each
time, hence we need O(E log E) operations to process the edges.
Finally, the disjoint-set operations are implemented by trees over the V nodes, giving O(E log V), since a comparison is performed for each edge in the worst case.
Disregarding the lower-order term O(V), we get O(E(log V + log E)). In the worst case E = O(V²), hence log E = O(log V²) = O(2 log V) = O(log V). Thus we get complexity O(E log V). On the other hand, V = O(E) for a connected graph, so the O(V) term is absorbed and the bound remains O(E log V).
Prim's vs. Kruskal's
For sparse graphs, Kruskal's algorithm is better, since it is guided by the edges.
For dense graphs, Prim's algorithm is better: the process is limited by the number of processed vertices.