Analysis and design of algorithms part 3


Page 1: Analysis and design of algorithms part 3

Graph Representation and Traversals

Deepak John, Department of Computer Applications, SJCET-Pala

Page 2: Analysis and design of algorithms part 3

Graph terminology - overview

 A graph consists of
◦ a set of vertices V = {v1, v2, ..., vn}
◦ a set of edges that connect the vertices, E = {e1, e2, ..., em}
 Two vertices in a graph are adjacent if there is an edge connecting the vertices.
 Two vertices are on a path if there is a sequence of vertices beginning with the first one and ending with the second one.
 Graphs with ordered edges are directed. For directed graphs, vertices have in-degrees and out-degrees.
 Weighted graphs have values associated with their edges.


Page 3: Analysis and design of algorithms part 3

Graph representation – undirected

(figure: an undirected graph, its adjacency list, and its adjacency matrix)

Graph representation – directed

(figure: a directed graph, its adjacency list, and its adjacency matrix)

Page 4: Analysis and design of algorithms part 3

Adjacency List Representation
 A graph of n nodes is represented by a one-dimensional array L of linked lists, where L[i] is the linked list containing all the nodes adjacent from node i.
 The nodes in the list L[i] are in no particular order.
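As a concrete illustration of the list and matrix representations described on these slides, here is a minimal Python sketch; the function names and the example graph are illustrative only, not part of the slides.

```python
def adjacency_list(n, edges):
    """Array L of lists: L[i] holds all nodes adjacent from node i."""
    L = [[] for _ in range(n)]
    for u, v in edges:
        L[u].append(v)          # for an undirected graph, also append u to L[v]
    return L

def adjacency_matrix(n, edges):
    """n x n matrix A with A[i][j] = 1 iff (i, j) is an edge."""
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = 1             # for an undirected graph, also set A[v][u] = 1
    return A

# Example: 4 nodes, directed edges
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(adjacency_list(4, edges))      # [[1, 2], [3], [3], []]
print(adjacency_matrix(4, edges))
```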


Page 5: Analysis and design of algorithms part 3

Pros and Cons of Adjacency Matrices
 Pros:
 Simple to implement.
 Easy and fast to tell if a pair (i, j) is an edge: simply check if A[i][j] is 1 or 0.
 Cons:
 No matter how few edges the graph has, the matrix takes O(n²) space in memory.

Pros and Cons of Adjacency Lists
 Pros:
 Saves on space (memory): the representation takes as many memory words as there are nodes and edges.
 Cons:
 It can take up to O(n) time to determine if a pair of nodes (i, j) is an edge: one would have to search the linked list L[i], which takes time proportional to the length of L[i].

Page 6: Analysis and design of algorithms part 3

Graph Traversal Techniques
 There are two standard graph traversal techniques:
 Depth-First Search (DFS)
 Breadth-First Search (BFS)
 In both DFS and BFS, the nodes of the undirected graph are visited in a systematic manner so that every node is visited exactly once.
 Both BFS and DFS give rise to a tree:
 When a node x is visited, it is labeled as visited, and it is added to the tree.
 If the traversal got to node x from node y, y is viewed as the parent of x, and x as a child of y.


Page 7: Analysis and design of algorithms part 3

Depth-First Search
DFS follows the following rules:
1. Select an unvisited node x, visit it, and treat it as the current node.
2. Find an unvisited neighbor of the current node, visit it, and make it the new current node.
3. If the current node has no unvisited neighbors, backtrack to its parent and make that parent the new current node.
4. Repeat steps 2 and 3 until no more nodes can be visited.
5. If there are still unvisited nodes, repeat from step 1.


Page 8: Analysis and design of algorithms part 3

• It searches ‘deeper’ into the graph when possible.
• Starts at the selected node and explores as far as possible along each branch before backtracking.
• Vertices go through white, gray and black stages of color:
  – White – initially
  – Gray – when first discovered
  – Black – when finished, i.e. the adjacency list of the vertex is completely examined.
• Also records timestamps for each vertex:
  – d[v] – when the vertex is first discovered
  – f[v] – when the vertex is finished


Page 9: Analysis and design of algorithms part 3

Depth-first search: Strategy (for a digraph)
 Choose a starting vertex; distance d = 0.
 Vertices are visited in order of increasing distance from the starting vertex:
 examine one edge leading from a vertex (at distance d) to an adjacent vertex (at distance d+1),
 then examine one edge leading from a vertex at distance d+1 to a vertex at distance d+2, and so on,
 until no new vertex is discovered or a dead end is reached,
 then backtrack one distance back up and try other edges, and so on,
 until finally backtracking to the starting vertex, with no more new vertices to be discovered.

Page 10: Analysis and design of algorithms part 3

DFS(G)
1  for each vertex u ∈ V[G]
2      do color[u] ← WHITE      // color all vertices white, set their parents NIL
3         π[u] ← NIL
4  time ← 0                     // zero out time
5  for each vertex u ∈ V[G]     // call only for unexplored vertices
6      do if color[u] = WHITE   // this may result in multiple sources
7         then DFS-VISIT(u)

DFS-VISIT(u)
1  color[u] ← GRAY              ▹ White vertex u has just been discovered.
2  time ← time + 1
3  d[u] ← time                  // record the discovery time
4  for each v ∈ Adj[u]          ▹ Explore edge (u, v).
5      do if color[v] = WHITE
6         then π[v] ← u         // set the parent value
7              DFS-VISIT(v)     // recursive call
8  color[u] ← BLACK             ▹ Blacken u; it is finished.
9  f[u] ← time ← time + 1
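A direct, runnable Python translation of the pseudocode above (a sketch, assuming the graph is given as an adjacency-list dictionary):

```python
import sys

WHITE, GRAY, BLACK = 0, 1, 2

def dfs(adj):
    """adj: dict mapping each vertex to a list of adjacent vertices."""
    color = {u: WHITE for u in adj}
    parent = {u: None for u in adj}          # pi[u]
    d, f = {}, {}                            # discovery / finishing times
    time = 0

    def visit(u):
        nonlocal time
        color[u] = GRAY                      # u has just been discovered
        time += 1
        d[u] = time
        for v in adj[u]:                     # explore edge (u, v)
            if color[v] == WHITE:
                parent[v] = u
                visit(v)                     # recursive call
        color[u] = BLACK                     # u is finished
        time += 1
        f[u] = time

    for u in adj:                            # may result in multiple sources
        if color[u] == WHITE:
            visit(u)
    return d, f, parent

# Example
sys.setrecursionlimit(10000)                 # deep graphs recurse deeply
g = {1: [2, 4], 2: [3], 3: [], 4: [3]}
print(dfs(g))
```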

Page 11: Analysis and design of algorithms part 3

 Forward edges – edges which point from a node of the tree to one of its descendants.
 Back edges – edges which point from a node to one of its ancestors.
 Cross edges – any other edge in graph G. A cross edge connects vertices in two different DFS trees, or two vertices in the same DFS tree neither of which is an ancestor of the other.
 Tree edges – edges which belong to the spanning tree itself; they are classified separately from forward edges.


Page 12: Analysis and design of algorithms part 3


Page 13: Analysis and design of algorithms part 3

Depth first search - analysis
 Lines 1-3 (initialization) take time Θ(V).
 Lines 5-7 take time Θ(V), excluding the time to call DFS-VISIT.
 DFS-VISIT is called only once for each node (since it is called only for white nodes, and the first step in it is to paint the node gray).
 The loop on lines 4-7 of DFS-VISIT is executed |Adj(v)| times. Since Σ_{v∈V} |Adj(v)| = Θ(E), the total cost of DFS-VISIT is Θ(E).
 The total cost of DFS is Θ(V + E).


Page 14: Analysis and design of algorithms part 3

Breadth-First Search
BFS follows the following rules:
1. Select an unvisited node x, visit it, and have it be the root in a BFS tree being formed. Its level is called the current level.
2. From each node z in the current level, in the order in which the level's nodes were visited, visit all the unvisited neighbors of z. The newly visited nodes from this level form a new level that becomes the next current level.
3. Repeat step 2 until no more nodes can be visited.
4. If there are still unvisited nodes, repeat from step 1.


Page 15: Analysis and design of algorithms part 3

Breadth first search - concepts
• To keep track of progress, it colors each vertex white, gray or black.
• All vertices start white.
• A vertex discovered for the first time during the search becomes nonwhite.
• All vertices adjacent to black ones have been discovered, whereas gray ones may have some white adjacent vertices.
• Gray vertices represent the frontier between discovered and undiscovered vertices.


Page 16: Analysis and design of algorithms part 3

Breadth-first search: Strategy (for a digraph)
 Choose a starting vertex; distance d = 0.
 Vertices are visited in order of increasing distance from the starting vertex:
 examine all edges leading from vertices (at distance d) to adjacent vertices (at distance d+1),
 then examine all edges leading from vertices at distance d+1 to vertices at distance d+2,
 and so on, until no new vertex is discovered.
 The predecessor of u is stored in the variable π[u].


Page 17: Analysis and design of algorithms part 3

BFS - algorithm

BFS(G, s)                        // G is the graph and s is the starting node
1   for each vertex u ∈ V[G] - {s}
2       do color[u] ← WHITE      // color of vertex u
3          d[u] ← ∞              // distance from source s to vertex u
4          π[u] ← NIL            // predecessor of u
5   color[s] ← GRAY
6   d[s] ← 0
7   π[s] ← NIL
8   Q ← Ø                        // Q is a FIFO queue
9   ENQUEUE(Q, s)
10  while Q ≠ Ø                  // lines 10-18 iterate as long as there are gray vertices
11      do u ← DEQUEUE(Q)
12         for each v ∈ Adj[u]
13             do if color[v] = WHITE   // discover the undiscovered adjacent vertices
14                then color[v] ← GRAY  // enqueued whenever painted gray
15                     d[v] ← d[u] + 1
16                     π[v] ← u
17                     ENQUEUE(Q, v)
18         color[u] ← BLACK      // painted black whenever dequeued
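The same procedure as a runnable Python sketch, using collections.deque as the FIFO queue Q; variable names follow the pseudocode, and the example graph is illustrative only:

```python
from collections import deque
import math

WHITE, GRAY, BLACK = 0, 1, 2

def bfs(adj, s):
    """adj: dict vertex -> list of adjacent vertices; s: source vertex."""
    color = {u: WHITE for u in adj}
    dist = {u: math.inf for u in adj}        # d[u]
    parent = {u: None for u in adj}          # pi[u]
    color[s], dist[s] = GRAY, 0

    q = deque([s])                           # FIFO queue Q
    while q:                                 # iterate while gray vertices remain
        u = q.popleft()                      # DEQUEUE
        for v in adj[u]:
            if color[v] == WHITE:            # discover undiscovered neighbors
                color[v] = GRAY              # gray when enqueued
                dist[v] = dist[u] + 1
                parent[v] = u
                q.append(v)                  # ENQUEUE
        color[u] = BLACK                     # black when dequeued
    return dist, parent

# Example
g = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(bfs(g, 1))                             # distances and BFS-tree parents
```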

Page 18: Analysis and design of algorithms part 3


Page 19: Analysis and design of algorithms part 3

Breadth first search - analysis

• The while-loop in breadth-first search is executed at most |V| times. The reason is that every vertex is enqueued at most once. So we have O(V).
• The for-loop inside the while-loop is executed at most |E| times if G is a directed graph, or 2|E| times if G is undirected. The reason is that every vertex is dequeued at most once, and we examine edge (u, v) only when u is dequeued. Therefore, every edge is examined at most once if directed, and at most twice if undirected. So we have O(E).
• Therefore, the total running time for breadth-first search traversal is O(V + E).


Page 20: Analysis and design of algorithms part 3

STRONGLY CONNECTED COMPONENTS OF A DIRECTED GRAPH
 A directed graph is called strongly connected if there is a path from each vertex in the graph to every other vertex.
 The strongly connected components of a directed graph G are its maximal strongly connected subgraphs.

(figure: graph with strongly connected components marked)

Page 21: Analysis and design of algorithms part 3

Properties
 Reflexive property: For all a, a # a. Any vertex is strongly connected to itself, by definition.
 Symmetric property: If a # b, then b # a. For strong connectivity, this follows from the symmetry of the definition. The same two paths (one from a to b and another from b to a) that show that a ~ b, looked at in the other order (one from b to a and another from a to b), show that b ~ a.
 Transitive property: If a # b and b # c, then a # c. Let's expand this out for strong connectivity: if a ~ b and b ~ c, we have four paths: a-b, b-a, b-c, and c-b. Concatenating them in pairs a-b-c and c-b-a produces two paths connecting a-c and c-a, so a ~ c, showing that the transitive property holds for strong connectivity.


Page 22: Analysis and design of algorithms part 3

Algorithm to Find Strongly Connected Components
Strategy:
 Phase 1: A standard depth-first search on G is performed, and the vertices are pushed onto a stack at their finishing times.
 Phase 2: A depth-first search is performed on GT, the transpose graph.
 To start a search, vertices are popped off the stack.
 A strongly connected component in the graph is identified by the name of its starting vertex (called the leader).
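The two-phase strategy above can be sketched in Python as follows (an illustrative sketch, not the slides' own code); it uses iterative DFS to record finishing order, builds the transpose graph, and then pops vertices off the stack to collect each component under its leader:

```python
def strongly_connected_components(adj):
    """Two-phase SCC: DFS on G recording finish order, then DFS on the transpose."""
    # Phase 1: DFS on G, push vertices in order of finishing time.
    visited, finish_stack = set(), []
    for s in adj:
        if s in visited:
            continue
        stack = [(s, iter(adj[s]))]
        visited.add(s)
        while stack:
            u, it = stack[-1]
            v = next(it, None)
            if v is None:
                finish_stack.append(u)        # u is finished
                stack.pop()
            elif v not in visited:
                visited.add(v)
                stack.append((v, iter(adj[v])))

    # Build the transpose graph GT.
    radj = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            radj[v].append(u)

    # Phase 2: DFS on GT, starting from vertices popped off the stack.
    assigned, components = set(), []
    while finish_stack:
        leader = finish_stack.pop()
        if leader in assigned:
            continue
        component, stack = [], [leader]
        assigned.add(leader)
        while stack:
            u = stack.pop()
            component.append(u)
            for v in radj[u]:
                if v not in assigned:
                    assigned.add(v)
                    stack.append(v)
        components.append(component)          # identified by its leader
    return components

# Example: two strongly connected components, {1, 2, 3} and {4}
g = {1: [2], 2: [3], 3: [1, 4], 4: []}
print(strongly_connected_components(g))
```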


Page 23: Analysis and design of algorithms part 3


Page 24: Analysis and design of algorithms part 3


Page 25: Analysis and design of algorithms part 3


Page 26: Analysis and design of algorithms part 3

Bi-connected components of an undirected graph

 Biconnected graph: A connected undirected graph G is said to be biconnected if it remains connected after removal of any one vertex and the edges that are incident upon that vertex.
 Biconnected component: A biconnected component of an undirected graph is a maximal biconnected subgraph, that is, a biconnected subgraph not contained in any larger biconnected subgraph.
 Articulation point: Articulation points are the points where the graph can be broken down into its biconnected components.

(figure: C is an articulation point)


Page 27: Analysis and design of algorithms part 3

Discovery of Biconnected Components via Articulation Points

If we can find articulation points, then we can compute biconnected components.
Idea:
• During DFS, use a stack to store visited edges.
• Each time we complete the DFS of a tree child of an articulation point, pop all edges currently in the stack.
• These popped-off edges form a biconnected component.
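This idea can be sketched with a recursive DFS that keeps discovery times and low values alongside the edge stack; the code below is an illustrative sketch (the example graph, with vertex 2 playing the role of the articulation point C, is made up):

```python
def biconnected_components(adj):
    """DFS with an edge stack: pop one component whenever a tree child v of u
    satisfies low[v] >= disc[u] (i.e. u is an articulation point or the root)."""
    disc, low = {}, {}
    edge_stack, components = [], []
    time = [0]

    def dfs(u, parent):
        disc[u] = low[u] = time[0]
        time[0] += 1
        for v in adj[u]:
            if v not in disc:                     # tree edge (u, v)
                edge_stack.append((u, v))
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] >= disc[u]:             # finished a child of an articulation point
                    comp = []
                    while True:                   # pop the edges of this component
                        e = edge_stack.pop()
                        comp.append(e)
                        if e == (u, v):
                            break
                    components.append(comp)
            elif v != parent and disc[v] < disc[u]:
                edge_stack.append((u, v))         # back edge
                low[u] = min(low[u], disc[v])

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return components

# Example: two triangles sharing vertex 2, so 2 is the articulation point
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
print(biconnected_components(g))
# -> two components: the edges among {2, 3, 4} and the edges among {0, 1, 2}
```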


Page 28: Analysis and design of algorithms part 3

(figure: an undirected graph and its biconnected components)

Page 29: Analysis and design of algorithms part 3


Page 30: Analysis and design of algorithms part 3


Page 31: Analysis and design of algorithms part 3


Page 32: Analysis and design of algorithms part 3

WHAT IS A BINARY RELATION
 A binary relation R from the set S to the set T is a subset of S×T, R ⊆ S×T. If S = T, we say that the relation is a binary relation on S.

 Properties of a Binary Relation
Let R be a binary relation on S. Then R may be
1. Reflexive
2. Symmetric
3. Anti-symmetric
4. Transitive
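As an illustrative sketch (not from the slides), these properties can be checked directly for a relation R given as a set of ordered pairs over S:

```python
def is_reflexive(S, R):
    return all((a, a) in R for a in S)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_antisymmetric(R):
    return all(not ((b, a) in R and a != b) for (a, b) in R)

def is_transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

# Example: "less than or equal" on S = {1, 2, 3}
S = {1, 2, 3}
R = {(a, b) for a in S for b in S if a <= b}
print(is_reflexive(S, R), is_symmetric(R), is_antisymmetric(R), is_transitive(R))
# -> True False True True
```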


Page 33: Analysis and design of algorithms part 3

Transitive closure of a graph

 The problem: Given a directed graph G = (V, E), find all of the vertices reachable from a given starting vertex v ∈ V.
 Transitive closure (definition): Let G = (V, E) be a graph where x R y and y R z (x, y, z ∈ V). Then we can add a new edge x R z. A graph containing all of the edges of this nature is called the transitive closure of the original graph.
 The best way to represent the transitive closure graph (TCG) is by means of an adjacency matrix.


Page 34: Analysis and design of algorithms part 3

Consider the following adjacency matrix on the left, representing a directed graph; the transitive closure is given on the right, illustrating which vertices can reach which other vertices.

    Adjacency matrix A        Transitive closure T
       a b c d e                 a b c d e
    a  0 1 0 0 1              a  1 1 1 1 1
    b  0 0 0 1 0              b  0 1 1 1 0
    c  0 1 0 0 0              c  0 1 1 1 0
    d  0 0 1 0 0              d  0 1 1 1 0
    e  0 0 0 1 0              e  0 1 1 1 1

• There is an edge from a to b and to e.
• b can reach d.
• d can reach c.
• a can reach all vertices, but b cannot reach a.


Page 35: Analysis and design of algorithms part 3

Strategy for Transitive Closure
 We noted earlier that if there is a path from a to b and from b to c, then there is a path from a to c.
 Our strategy for deriving a transitive closure matrix will be based on this simple idea.
 Start with a, compare it against each other vertex and see if there is an edge; if so, the corresponding matrix value is true.
 If not, see if there is already a path known from some vertex c to b and an edge from a to c; if so, then we know that there is a path from a to b.
 This will require the use of 3 nested for-loops: one for the starting vertex of a path, one for the destination vertex of a path, and one to see if a path already exists from the start to this point and from this point to the destination.

Page 36: Analysis and design of algorithms part 3

It implies the following rules for generating R(k) from R(k-1):

R(k)[i, j] = R(k-1)[i, j] or (R(k-1)[i, k] and R(k-1)[k, j])

 Rule 1: If an element in row i and column j is 1 in R(k-1), it remains 1 in R(k).
 Rule 2: If an element in row i and column j is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1's in R(k-1).

 The algorithm constructs the transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), …, R(k), …, R(n).
 Note that R(0) = A (the adjacency matrix) and R(n) = T (the transitive closure).
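A short Python sketch of this rule (Warshall's algorithm): starting from R(0) = A, each pass k applies the recurrence, and the final matrix is the transitive closure T. The function name is illustrative; the example matrix is the adjacency matrix used on the worked example slide.

```python
def warshall(A):
    """Transitive closure by Warshall's algorithm.
    A is an n x n 0/1 adjacency matrix; returns R(n) = T."""
    n = len(A)
    R = [row[:] for row in A]                # R(0) = A
    for k in range(n):                       # build R(k) from R(k-1)
        for i in range(n):
            for j in range(n):
                # R(k)[i][j] = R(k-1)[i][j] or (R(k-1)[i][k] and R(k-1)[k][j]);
                # updating in place is safe: row k and column k do not change in pass k
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# Example: the 4-vertex digraph from the example slide
A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
for row in warshall(A):
    print(row)
```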


Page 37: Analysis and design of algorithms part 3

Warshall’s Algorithm


Page 38: Analysis and design of algorithms part 3

 It should be obvious that the complexity is O(n³) because of the 3 nested for-loops.
 The result is an n×n matrix where entry R[i, j] is true if there is a path from vertex i to vertex j.
 The algorithm will work on either undirected or directed graphs.

Page 39: Analysis and design of algorithms part 3

Warshall’s Algorithm (example)

The digraph has vertices 1, 2, 3, 4 and edges 1→3, 2→1, 2→4, 4→2.

R(0) =  0 0 1 0
        1 0 0 1
        0 0 0 0
        0 1 0 0

R(1) =  0 0 1 0        (adds 2→3, via vertex 1)
        1 0 1 1
        0 0 0 0
        0 1 0 0

R(2) =  0 0 1 0        (adds 4→1, 4→3, 4→4, via vertex 2)
        1 0 1 1
        0 0 0 0
        1 1 1 1

R(3) =  0 0 1 0        (no change: row 3 of R(2) is all zeros)
        1 0 1 1
        0 0 0 0
        1 1 1 1

R(4) =  0 0 1 0        (adds 2→2, via vertex 4)
        1 1 1 1
        0 0 0 0
        1 1 1 1

Page 40: Analysis and design of algorithms part 3

All-Pairs Shortest Paths

 Given a weighted graph G(V, E, w), the all-pairs shortest paths problem is to find the shortest paths between all pairs of vertices vi, vj ∈ V.
 A number of algorithms are known for solving this problem.

FLOYD’S ALGORITHM: ALL-PAIRS SHORTEST PATHS
 Problem: In a weighted (di)graph, find the shortest paths between every pair of vertices.
 Same idea: construct the solution through a series of matrices D(0), …, D(n).


Page 41: Analysis and design of algorithms part 3

Time efficiency: Θ(n³)

Space efficiency: matrices can be written over their predecessors.
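A compact Python sketch of Floyd's algorithm as just described; D(0) is the weight matrix (math.inf where there is no edge) and the matrix is updated in place, which is why it can be written over its predecessor. The example uses the weighted digraph from the worked example on the next slide.

```python
import math

def floyd(W):
    """All-pairs shortest paths. W: n x n weight matrix, math.inf where no edge."""
    n = len(W)
    D = [row[:] for row in W]                # D(0) = W
    for k in range(n):                       # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Example: the 4-vertex weighted digraph from the example slide
INF = math.inf
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
for row in floyd(W):
    print(row)                               # final matrix D(4)
```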

Page 42: Analysis and design of algorithms part 3

Floyd’s Algorithm (example)

The weighted digraph has vertices 1, 2, 3, 4 and edges 1→3 (weight 3), 2→1 (weight 2), 3→2 (weight 7), 3→4 (weight 1), 4→1 (weight 6).

D(0) =  0  ∞  3  ∞
        2  0  ∞  ∞
        ∞  7  0  1
        6  ∞  ∞  0

D(1) =  0  ∞  3  ∞       (via vertex 1: 2→3 = 5, 4→3 = 9)
        2  0  5  ∞
        ∞  7  0  1
        6  ∞  9  0

D(2) =  0  ∞  3  ∞       (via vertex 2: 3→1 = 9)
        2  0  5  ∞
        9  7  0  1
        6  ∞  9  0

D(3) =  0  10 3  4       (via vertex 3: 1→2 = 10, 1→4 = 4, 2→4 = 6, 4→2 = 16)
        2  0  5  6
        9  7  0  1
        6  16 9  0

D(4) =  0  10 3  4       (via vertex 4: 3→1 = 7)
        2  0  5  6
        7  7  0  1
        6  16 9  0


Page 43: Analysis and design of algorithms part 3

Dynamic Programming
 Dynamic Programming is an algorithm design technique for optimization problems: often minimizing or maximizing.
 Like divide and conquer, DP solves problems by combining solutions to subproblems.
 Unlike divide and conquer, subproblems are not independent: subproblems may share subsubproblems. However, the solution to one subproblem may not affect the solutions to other subproblems of the same problem.
 DP reduces computation by
 solving subproblems in a bottom-up fashion,
 storing the solution to a subproblem the first time it is solved,
 looking up the solution when the subproblem is encountered again.
 Key: determine the structure of optimal solutions.
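A tiny illustrative sketch (not an example from the slides) of the two styles just described, bottom-up tabulation and "store and look up" memoization, using the Fibonacci numbers:

```python
from functools import lru_cache

def fib_bottom_up(n):
    """Solve subproblems in a bottom-up fashion, storing each result once."""
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # reuse stored subproblem solutions
    return table[n]

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down: look up the solution when a subproblem is encountered again."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_bottom_up(10), fib_memo(10))           # 55 55
```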

Page 44: Analysis and design of algorithms part 3

Steps in Dynamic Programming
1. Characterize the structure of an optimal solution.
2. Define the value of an optimal solution recursively.
3. Compute optimal solution values.
4. Construct an optimal solution from computed values.

Elements of Dynamic Programming
 Optimal substructure
 Overlapping subproblems


Page 45: Analysis and design of algorithms part 3

Optimal Binary Search Trees
 OBST is one special kind of advanced tree.
 It focuses on how to reduce the cost of searching the BST.
 A good example of a dynamic programming algorithm:
• solves all the small problems
• builds solutions to larger problems from them
• requires space to store the small problem results


Page 46: Analysis and design of algorithms part 3

Problem
 Given a sequence K = k1 < k2 < ··· < kn of n sorted keys, with a search probability pi for each key ki.
 Want to build a binary search tree (BST) with minimum expected search cost.
 Actual cost = number of items examined.
 For key ki, cost = depthT(ki) + 1, where depthT(ki) = depth of ki in BST T.

E[search cost in T] = Σ_{i=1..n} (depthT(ki) + 1) · pi

Page 47: Analysis and design of algorithms part 3

Consider 5 keys with these search probabilities:
p1 = 0.25, p2 = 0.2, p3 = 0.05, p4 = 0.2, p5 = 0.3.

(figure: BST with root k2; k2's children are k1 and k4, and k4's children are k3 and k5)

i   depthT(ki)   depthT(ki)·pi
1   1            0.25
2   0            0
3   2            0.1
4   1            0.2
5   2            0.6
                 Σ = 1.15

Therefore, E[search cost] = 1.15 + 1 = 2.15.
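A quick sketch that reproduces this computation, with the depths and probabilities taken from the table above (function and variable names are illustrative):

```python
def expected_search_cost(depth, p):
    """E[search cost] = sum over keys of (depth_T(ki) + 1) * pi."""
    return sum((depth[i] + 1) * p[i] for i in depth)

p = {1: 0.25, 2: 0.2, 3: 0.05, 4: 0.2, 5: 0.3}
depth = {1: 1, 2: 0, 3: 2, 4: 1, 5: 2}    # the tree rooted at k2 shown above
print(expected_search_cost(depth, p))      # 2.15
```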


Page 48: Analysis and design of algorithms part 3

p1 = 0.25, p2 = 0.2, p3 = 0.05, p4 = 0.2, p5 = 0.3.

(figure: BST with root k2; k2's children are k1 and k5, k5 has left child k4, and k4 has left child k3)

i   depthT(ki)   depthT(ki)·pi
1   1            0.25
2   0            0
3   3            0.15
4   2            0.4
5   1            0.3
                 Σ = 1.10

Therefore, E[search cost] = 1.10 + 1 = 2.10.

This tree turns out to be optimal for this set of keys.

Page 49: Analysis and design of algorithms part 3

Optimal Substructure

 Any subtree of a BST contains keys in a contiguous range ki, ..., kj.
 If T is an optimal BST and T contains a subtree T’ with keys ki, ..., kj, then T’ must be an optimal BST for the keys ki, ..., kj.

(figure: an optimal BST T containing a subtree T’)

Page 50: Analysis and design of algorithms part 3

Pseudo-code

OPTIMAL-BST(p, q, n)
1.  for i ← 1 to n + 1
2.      do e[i, i-1] ← 0
3.         w[i, i-1] ← 0
4.  for l ← 1 to n
5.      do for i ← 1 to n - l + 1
6.          do j ← i + l - 1
7.             e[i, j] ← ∞
8.             w[i, j] ← w[i, j-1] + pj
9.             for r ← i to j
10.                do t ← e[i, r-1] + e[r+1, j] + w[i, j]
11.                   if t < e[i, j]
12.                      then e[i, j] ← t
13.                           root[i, j] ← r
14. return e and root
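A runnable Python sketch of this dynamic program (the dummy-key probabilities q are omitted, matching the simplified w recurrence used above; array indices are kept 1-based to mirror the pseudocode):

```python
import math

def optimal_bst(p):
    """Expected-cost optimal BST for keys k1..kn with search probabilities p[1..n].
    Returns (e, root): e[i][j] = minimum expected cost of a BST on keys ki..kj,
    root[i][j] = index of the root of that optimal subtree."""
    n = len(p) - 1                           # p[0] is unused padding
    e = [[0.0] * (n + 2) for _ in range(n + 2)]
    w = [[0.0] * (n + 2) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(1, n + 1):                # l = number of keys in the range
        for i in range(1, n - l + 2):
            j = i + l - 1
            e[i][j] = math.inf
            w[i][j] = w[i][j - 1] + p[j]
            for r in range(i, j + 1):        # try each key kr as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

# Example from the slides: p1..p5 = 0.25, 0.2, 0.05, 0.2, 0.3
p = [None, 0.25, 0.2, 0.05, 0.2, 0.3]
e, root = optimal_bst(p)
print(e[1][5])        # expected search cost of the optimal tree (≈ 2.10)
print(root[1][5])     # root key index of the optimal tree (2, i.e. k2)
```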