Data Structures and Algorithms

UNIT – II

Snapshots
- Introduction
- The General Method (Divide and Conquer)
- Binary Search
- Finding the Maximum and Minimum
- Merge Sort
- Quick Sort
- Heap Sort
- Strassen's Matrix Multiplication
- The General Method (Greedy)
- Knapsack Problem
- Minimum Spanning Tree
- Optimal Storage on Tapes
- Single-Source Shortest Path

2.0 Introduction

This chapter deals with the concepts and different types of sorting. Sorting is one of the techniques used to arrange data in an order. There are basically two types of sorting - internal sort and external sort. In this chapter, various types of sorting techniques based on the Divide and Conquer method are dealt with in detail.

2.1 Objective

The objective of this lesson is to impart skills related to sorting. The lesson begins with sorting and deals with different sorting techniques and their algorithms. It gives a clear idea of internal sorting and external sorting. At the end of this lesson you will be able to sharpen your knowledge of Strassen's matrix multiplication, the Knapsack problem and the single-source shortest path problem.

2.2 Content

2.2.1 The General Method [Divide and Conquer]

The idea behind the Divide and Conquer (DAC) design paradigm is that it is often easier to solve several small instances of a problem than one large one. The algorithm divides the problem into several smaller instances, solves the smaller instances recursively, and finally combines the solutions to obtain a solution for the original input.

To sketch this algorithm, the subroutines implemented are directly solve, divide and combine. k is the number of smaller instances into which the input is divided. For an input of size n, let b(n) be the number of steps done by directly solve, let d(n) be the number of steps done by divide, and let c(n) be the number of steps done by combine. The general


form of the recurrence equation which specifies the amount of work done by the algorithm is:

    T(n) = D(n) + Σ (i=1 to k) T(size(Ii)) + C(n)      for n > smallsize

    solve(I)
    {
        n = size(I);
        if (n <= smallsize)
            solution = directlySolve(I);
        else
        {
            divide I into I1, ..., Ik;
            for each i in [1..k]
                Si = solve(Ii);
            solution = combine(S1, ..., Sk);
        }
        return solution;
    }

with the base case T(n) = B(n) for n <= smallsize. For many DAC algorithms either the divide step or the combine step is very simple, and the recurrence equation for T is simpler than the general form. First decide how to partition the list into sublists and, after they are sorted, how to combine the sublists into one. The various methods available are given below:

- Binary Search
- Straight MaxMin and Recursive MaxMin Algorithms
- Merge Sort
- Quick Sort
- Heap Sort
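To make the general method concrete before turning to these methods, here is a minimal runnable C sketch (illustrative, not from the text) that computes the sum of an array by divide and conquer; the divide and combine steps are trivial here, so the recurrence is T(n) = 2T(n/2) + c:

    #include <stdio.h>

    /* Divide-and-conquer sum of a[lo..hi]:
       divide: split the range at its midpoint,
       conquer: sum each half recursively,
       combine: add the two partial sums. */
    int dacSum(const int a[], int lo, int hi)
    {
        if (lo == hi)            /* small instance: solve directly */
            return a[lo];
        int mid = (lo + hi) / 2; /* divide */
        return dacSum(a, lo, mid) + dacSum(a, mid + 1, hi); /* conquer + combine */
    }

    int main(void)
    {
        int a[] = {3, 1, 4, 1, 5, 9, 2, 6};
        printf("%d\n", dacSum(a, 0, 7)); /* prints 31 */
        return 0;
    }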

2.2.2 Binary Search

A binary search tree is a binary tree in which the nodes are labeled with elements of a set. The major property of such a tree is that all elements stored in the left subtree of any node x are less than the element stored at x, and all elements stored in the right subtree of x are greater than the element stored at x.

Suppose the data in an array is sorted in increasing numerical order or, equivalently, alphabetically. Then there is an extremely efficient searching algorithm, called binary search, which can be used to find the location of a given data item. This approach requires that the entries in the list be of a scalar or similarly ordered type, and that the list already be in order.

Ordered list - a list in which each entry contains a key such that the keys are in order: if entry I comes before entry J in the list, then the key of entry I is less than or equal to the key of entry J.

The operations on an ordered list include all those for an ordinary list. For example, consider searching for an element in an array. Suppose one is searching for the value 30 in an array of size 170. One might probe somewhere towards the later half of the array. If the probed value is greater than 30, search to its left; if it is smaller, search to its right. Repeat the process until 30 is found. Binary search requires sorted data to operate on. Since it cannot be guessed which division has the required data item, divide the list into two


equal halves. Use binary search method to find the data ‘Scorpio’ in the following list of 11 zodiac signs.

     1  Aries
     2  Aquarius
     3  Cancer
     4  Capricorn
     5  Gemini
     6  Leo
     7  Libra
     8  Pisces
     9  Sagittarius
    10  Scorpio
    11  Taurus

Comparison 1: Leo < Scorpio
Comparison 2: Sagittarius < Scorpio
Comparison 3: list[10] = Scorpio

The above table has a sorted list of size 11. The first comparison is with the middle element, index 6, that is, Leo. This eliminates the first 5 data elements in the table. The second comparison is with the middle index from 7 to 11, that is, index 9, Sagittarius; this comparison eliminates index values 7 to 9. The third comparison is with the middle element from 10 to 11, that is, index 10, 'Scorpio'. The search value is found on the third comparison. If the same search were done by sequential search, it would take 10 comparisons.

The algorithm of binary search is given below.

Procedure binary search

This algorithm represents the binary search method to find a required item in a sorted list in ascending order.
Input: sorted list of size N, target value T
Output: position of T in the list = I

    Algorithm binsearch(T, n, I)
    var max, min, mid : integer;
    begin
        max := n; min := 1;
        found := false;
        repeat
            mid := (min + max) div 2;
            if (T < list[mid]) then
                max := mid - 1
            else if (T > list[mid]) then
                min := mid + 1
            else begin
                I := mid;
                found := true
            end
        until (found or (max < min))
    end  { binary search }

Assume that the keys in our list can be compared under the operations < and >; the algorithm is easily extended to handle character strings as keys. Binary search is a poor choice for a linked list, since it requires jumping back and forth from one end of the list to the middle. Such an action is easy within an array but slow for a linked list. The binary decision tree for binary search is shown in the following figure.

Figure 2.1 Binary decision tree for binary search, n=15

In the binary search method, the key in the middle of the list currently being examined is always used for comparison. The splitting of the list can be illustrated by a binary decision tree in which the value of a node is the index of the key being tested. Suppose there are 14 nodes. Then the first key is compared with the key at location 7. If the key is less, location 3 is tested next; if it is greater, location 11 is tested. The binary decision tree for a list of 14 nodes is shown in Figure 2.1.
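A minimal runnable C rendering of the binsearch procedure above (illustrative; the array is 0-indexed here, and -1 signals an absent key):

    #include <stdio.h>

    /* Iterative binary search on a sorted array a[0..n-1].
       Returns the index of t, or -1 if t is not present. */
    int binsearch(const int a[], int n, int t)
    {
        int min = 0, max = n - 1;
        while (min <= max) {
            int mid = (min + max) / 2;
            if (t < a[mid])
                max = mid - 1;      /* discard the upper half */
            else if (t > a[mid])
                min = mid + 1;      /* discard the lower half */
            else
                return mid;         /* found */
        }
        return -1;                  /* not found */
    }

    int main(void)
    {
        int a[] = {13, 25, 30, 36, 49, 57, 80, 90};
        printf("%d\n", binsearch(a, 8, 30));  /* prints 2 */
        return 0;
    }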

2.2.3 Finding the Maximum and Minimum

The divide and conquer technique can be used to find the maximum and minimum of a set of n elements; we first consider the straightforward algorithm. In this algorithm the time complexity is dominated by element comparisons. When the n elements are polynomials, vectors, very large numbers or strings, the cost of an element comparison is higher than the cost of the other operations, so the number of element comparisons determines the cost.

The straight max-min algorithm finds the maximum and minimum elements in a set of n elements. Let q = (n, a[i], ..., a[j]) denote an arbitrary instance of the problem. Here n is the number of elements in the list a[i], ..., a[j], and the task is to find the max and min of the list. The algorithm has two cases: small(q) is true when n <= 2. In this case, for n = 1 and n = 2 the problem can be solved by making at most one


comparison. If the list has more than two elements, q is divided into smaller instances q1 = ([n/2], a[1], ..., a[[n/2]]) and q2 = (n-[n/2], a[[n/2]+1], ..., a[n]). After having divided q into two smaller subproblems, one can solve them by recursively invoking the same divide-and-conquer algorithm.

    Algorithm StraightMaxMin(a, n, max, min)
    // Set max to the maximum and min to the minimum of a[1:n].
    {
        max := min := a[1];
        for i := 2 to n do
        {
            if (a[i] > max) then max := a[i];
            if (a[i] < min) then min := a[i];
        }
    }

This algorithm requires 2(n-1) element comparisons in the best, average and worst cases. An improvement is possible: comparing a[i] with min is needed only when a[i] > max is false.

Best case - occurs when the elements are in increasing order; the number of element comparisons is n-1.
Worst case - occurs when the elements are in decreasing order; the number of element comparisons is 2(n-1).
Average case - the number of element comparisons is less than 2(n-1); on average a[i] is greater than max half the time, so the expected number of comparisons is 3n/2 - 1.

The MaxMin recursive algorithm finds the maximum and minimum of the set of elements {a(i), a(i+1), ..., a(j)}. There are two small cases: i = j, and i = j-1. When the set contains more than two elements, the midpoint is determined and two new subproblems are generated. The algorithm solves the subproblems, compares their max and min values, and combines them into the solution.

Algorithm for recursive max and min procedure

    Algorithm MaxMin(i, j, max, min)
    // a[1:n] is a global array. Parameters i and j are integers,
    // 1 <= i <= j <= n. The effect is to set max and min to the
    // largest and smallest values in a[i:j], respectively.
    {
        if (i = j) then max := min := a[i];       // one element
        else if (i = j - 1) then                  // two elements
        {
            if (a[i] < a[j]) then { max := a[j]; min := a[i]; }
            else { max := a[i]; min := a[j]; }
        }
        else
        {
            mid := [(i + j)/2];                   // divide
            MaxMin(i, mid, max, min);
            MaxMin(mid + 1, j, max1, min1);
            if (max < max1) then max := max1;     // combine
            if (min > min1) then min := min1;
        }
    }

The unsorted values of the list are given below:

    22, 13, -5, -8, 15, 60, 17, 31, 47

Figure 2.2: Tree of recursive calls to MaxMin

A good way of keeping track of recursive calls is to build a tree, adding a node each time a new call is made. For this algorithm each node contains four items of information: i, j, max and min. The root node contains 1, 9; that is, the values of i and j, with max and min initialized from the list. Execution of the procedure produces two new calls to MaxMin, where i and j have the values 1, 5 and 6, 9 respectively, splitting the set into two subsets of approximately equal size. In the figure, the circled numbers in the upper left corner of each node represent the order in which max and min values are assigned.
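A minimal runnable C version of the recursive MaxMin procedure above (illustrative; 0-indexed, with output parameters passed by pointer), applied to the example list:

    #include <stdio.h>

    /* Recursively set *max and *min to the largest and smallest
       values in a[i..j], using the divide-and-conquer scheme above. */
    void maxMin(const int a[], int i, int j, int *max, int *min)
    {
        if (i == j) {                            /* one element */
            *max = *min = a[i];
        } else if (i == j - 1) {                 /* two elements */
            if (a[i] < a[j]) { *max = a[j]; *min = a[i]; }
            else             { *max = a[i]; *min = a[j]; }
        } else {
            int max1, min1, mid = (i + j) / 2;
            maxMin(a, i, mid, max, min);         /* left half  */
            maxMin(a, mid + 1, j, &max1, &min1); /* right half */
            if (max1 > *max) *max = max1;        /* combine    */
            if (min1 < *min) *min = min1;
        }
    }

    int main(void)
    {
        int a[] = {22, 13, -5, -8, 15, 60, 17, 31, 47};
        int max, min;
        maxMin(a, 0, 8, &max, &min);
        printf("max=%d min=%d\n", max, min);  /* max=60 min=-8 */
        return 0;
    }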

2.2.4 Merge Sort

Merge sort is the most common algorithm used in external sorting, that is, for problems in which the data is stored on disks or magnetic tapes. Merge sort is an excellent sorting method. The technique sorts a file in the following way: divide the file into two equal-sized subfiles, sort the subfiles separately, and then merge the sorted subfiles into one.

    Algorithm MergeSort(l, h)
    // a[l:h] is a global array to be sorted.
    // Small(p) is true if there is only one element to sort;
    // in that case the list is already sorted.
    // If there is more than one element, divide p into subproblems.
    {
        if (l < h)


        {
            mid := [(l + h)/2];     // divide
            MergeSort(l, mid);      // solve the subproblems
            MergeSort(mid + 1, h);
            Merge(l, mid, h);       // combine the solutions
        }
    }

The array name is A; l, h and mid are indices into the array. The array is split into two halves, one ranging from l to mid and the other from mid+1 to h, which are sorted separately. Finally the sorted sublists l..mid and mid+1..h are merged. The merge subroutine is responsible for allocating any additional workspace needed.

[Diagram omitted: the range First..Last is split at [(First+Last)/2]; each half is sorted recursively by merge sort, and the two sorted halves are then merged.]

Figure 2.3: Merge strategy

The values of array A are: 25, 57, 49, 36, 13, 98, 80, 30. Merging proceeds in stages: adjacent one-element files are merged into sorted pairs, the pairs into sorted quadruples, and so on. The stages of the comparison operations are given below:

Original files: [25] [57] [49] [36] [13] [98] [80] [30]

Stage I:    [25 57] [36 49] [13 98] [30 80]

Stage II:   [25 36 49 57] [13 30 80 98]

Stage III:  [13 25 30 36 49 57 80 98]


Figure 2.4: Successive stages of merge sort

Divide the array into two equal-sized sublists and merge the adjacent (disjoint) pairs of sublists. Repeat the process until there is only one list remaining, of size n. The figure above illustrates the operation of merge sort; each individual list is contained in brackets.
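The Merge subroutine is only named in the algorithm above; the following self-contained C sketch (illustrative, with a temporary buffer as the additional workspace mentioned earlier) fills it in:

    #include <stdio.h>
    #include <string.h>

    /* Merge the sorted runs a[l..mid] and a[mid+1..h] using buffer tmp. */
    static void merge(int a[], int tmp[], int l, int mid, int h)
    {
        int i = l, j = mid + 1, k = l;
        while (i <= mid && j <= h)
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];   /* drain the left run  */
        while (j <= h)   tmp[k++] = a[j++];   /* drain the right run */
        memcpy(a + l, tmp + l, (h - l + 1) * sizeof(int));
    }

    /* Sort a[l..h]: divide at the midpoint, sort halves, merge. */
    static void mergeSort(int a[], int tmp[], int l, int h)
    {
        if (l < h) {
            int mid = (l + h) / 2;
            mergeSort(a, tmp, l, mid);
            mergeSort(a, tmp, mid + 1, h);
            merge(a, tmp, l, mid, h);
        }
    }

    int main(void)
    {
        int a[] = {25, 57, 49, 36, 13, 98, 80, 30}, tmp[8];
        mergeSort(a, tmp, 0, 7);
        for (int i = 0; i < 8; i++) printf("%d ", a[i]);
        printf("\n");  /* 13 25 30 36 49 57 80 98 */
        return 0;
    }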

2.2.5 Quick Sort

Quick sort is a widely used internal sorting method. It was invented by C.A.R. Hoare in 1960. Its popularity rests on its easy implementation, moderate use of resources and acceptable behavior over a variety of sorting cases.

The basis of quick sort is the divide and conquer strategy: divide the problem into subproblems until the subproblems are small enough to be solved directly.

In quick sort the array is divided into two subarrays, so that the sorted subarrays do not need to be merged later. The list is partitioned into lower and upper sublists. The method moves each data item in the correct direction just far enough for it to arrive at its final place in the array; it therefore reduces unnecessary swaps and can move an item a great distance in one move. The first item in the array is called the pivot. Partition the entries so that all keys less than the pivot come in one sublist and all keys greater than the pivot come in the other sublist. The procedure is applied recursively to the two parts of the array until the whole array is sorted. The mechanism is applied to an array A using the following steps.

1. Remove the first data item in the list, call it A1, and mark its position. Scan the array from right to left, comparing the data item values with A1. When the first smaller value is found, remove it from its current position and put it in position a(1).

2. Scan the line from left to right beginning with position a(2), comparing data item values with A1. When a value greater than A1 is found, extract it and store it in the position marked by the parentheses.

3. Beginning from right to left, scan the line looking for a value smaller than A1. When it is found, extract it and store it in the position marked by the parentheses.

4. Begin the scan from left to right; find a value greater than A1, remove it, mark its position and store it inside the parentheses.

5. Continue scanning from right to left; when no value smaller than A1 remains, put A1 into the marked position.

Finally, observe that A1 has been put in its final location: all the values to the left of A1 are less than A1 and all the values to the right are greater than A1. The process can now be applied recursively to the two segments of the array on the left and right of A1.

Suppose the array A initially appears as: 14, 21, 4, 7, 94, 11, 81, 16, 8, 54


    A(1)  A(2)  A(3)  A(4)  A(5)  A(6)  A(7)  A(8)  A(9)  A(10)

    (14)   21     4     7    94    11    81    16     8    54
      8    21     4     7    94    11    81    16    ()    54
      8    ()     4     7    94    11    81    16    21    54
      8    11     4     7    94    ()    81    16    21    54
      8    11     4     7    ()    94    81    16    21    54
      8    11     4     7    14    94    81    16    21    54

           array 1        correct position        array 2

The procedure for quick sort is given below (swap(a, l, r) exchanges a[l] and a[r]):

    quicksort(int a[], int x, int i)
    {   /* sort a[x..i]; the pivot is v = a[i] */
        int l, r, v;
        if (i > x)
        {
            v = a[i]; l = x - 1; r = i;
            for (;;)
            {
                while (a[++l] < v);           /* scan left to right */
                while (r > x && a[--r] > v);  /* scan right to left */
                if (l >= r) break;
                swap(a, l, r);
            }
            swap(a, l, i);                    /* pivot to its place */
            quicksort(a, x, l - 1);
            quicksort(a, l + 1, i);
        }
    }

The average running time of quick sort, O(n log2 n), is among the best achieved for sorting a large array of size n. In the worst case, when the array is already sorted, the efficiency of quick sort may drop to O(n2), because every right-to-left scan runs all the way to the left boundary. The performance can be improved by keeping in mind the following tips:

1. Switch to a faster sorting scheme like insertion sort when the sublist size becomes comparatively small.

2. Use a better dividing element (pivot) in the implementation.
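A self-contained, runnable C version of the quicksort procedure above, together with the swap helper it assumes (illustrative):

    #include <stdio.h>

    static void swap(int a[], int i, int j)
    {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    /* Sort a[x..i] in place; the pivot is the rightmost element a[i]. */
    static void quicksort(int a[], int x, int i)
    {
        int l, r, v;
        if (i > x) {
            v = a[i]; l = x - 1; r = i;
            for (;;) {
                while (a[++l] < v);            /* pivot stops this scan */
                while (r > x && a[--r] > v);   /* guard the left edge   */
                if (l >= r) break;
                swap(a, l, r);
            }
            swap(a, l, i);                     /* pivot to final place  */
            quicksort(a, x, l - 1);
            quicksort(a, l + 1, i);
        }
    }

    int main(void)
    {
        int a[] = {14, 21, 4, 7, 94, 11, 81, 16, 8, 54};
        quicksort(a, 0, 9);
        for (int i = 0; i < 10; i++) printf("%d ", a[i]);
        printf("\n");  /* 4 7 8 11 14 16 21 54 81 94 */
        return 0;
    }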


2.2.6 Heap Sort

The heap sort is based on a tree structure that reflects the pecking order in a corporate hierarchy. This algorithm sorts a contiguous list of length n with O(n log n) comparisons and movements of entries, even in its worst case. A heap is a complete binary tree, represented as an array, in which each node satisfies the heap condition.

Definition - a heap is a list in which each entry contains a key and, for all positions k in the list, the key at position k is at least as large as the keys in positions 2k and 2k+1, provided these positions exist in the list.

A complete binary tree is said to satisfy the heap condition if the key of each node is greater than or equal to the keys in its children; thus the root node holds the largest key value. This method, originally described by Floyd, has two phases.

Phase I - the array containing n data items is viewed as equivalent to a binary tree that is full at all levels except possibly the lowest.

To achieve the heap the following steps are used:

1. Process the node that is the parent of the rightmost node on the lowest level. If its value is less than the value of its largest child, swap the values; otherwise do nothing.

2. Move left on the same level. Compare the value of the parent node with the values of its children. If the parent is smaller than the largest child, swap them.

3. When the left end of this level is reached, move one level up and begin with the rightmost parent node. Repeat step 2, continuing to swap the original parent with the larger of its children until it is larger than its children. As a result the original parent walks down the tree in a fashion that ensures that the numbers are in increasing order along the path.

4. Repeat step (3) until all level-1 nodes have been processed. (Note that the root is at level zero.)

Phase 2 - in the second phase the algorithm finds the node with the largest value in the tree and cuts it from the tree. It then repeats to find the second largest value, which is also removed from the tree. The process continues until only two nodes are left in the tree; they are exchanged if necessary. This works because, in the second phase, the root of the tree always holds the largest key, which belongs at the end of the list. The precise steps for phase two are as follows:

1. Compare the root node with its children, swapping it with the largest child if the largest child is larger than the root.

2. If a swap occurred in step (1), then continue swapping the value which was originally in the root position until it is larger than its children. In effect this original root value is walked down a path in the tree to ensure that all paths retain values arranged in ascending order from leaf node to root node.

3. Swap the root node with the bottom rightmost child, then sever the new bottom rightmost child from the tree; it carries the largest value.


4. Repeat steps (1) through (3) until only two elements are left.

Both phase 1 and phase 2 use the same strategy of walking a parent down a path of the tree via a series of swaps with its children. The procedure for heap sort is given below:

    void siftDown(int numbers[], int root, int bottom);

    void heapSort(int numbers[], int array_size)
    {
        int i, temp;

        /* Phase 1: build the heap (bottom is the last valid index). */
        for (i = (array_size / 2) - 1; i >= 0; i--)
            siftDown(numbers, i, array_size - 1);

        /* Phase 2: repeatedly move the maximum to the end. */
        for (i = array_size - 1; i >= 1; i--)
        {
            temp = numbers[0];          /* swap root with last heap entry */
            numbers[0] = numbers[i];
            numbers[i] = temp;
            siftDown(numbers, 0, i - 1);
        }
    }

    /* Walk numbers[root] down the heap numbers[0..bottom] until the
       heap condition holds (children of node k are 2k+1 and 2k+2). */
    void siftDown(int numbers[], int root, int bottom)
    {
        int maxChild, temp;

        while (root * 2 + 1 <= bottom)
        {
            maxChild = root * 2 + 1;                   /* left child  */
            if (maxChild + 1 <= bottom &&
                numbers[maxChild + 1] > numbers[maxChild])
                maxChild = maxChild + 1;               /* right child */

            if (numbers[root] >= numbers[maxChild])
                return;                                /* heap is fine */

            temp = numbers[root];                      /* swap down    */
            numbers[root] = numbers[maxChild];
            numbers[maxChild] = temp;
            root = maxChild;
        }
    }
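A small driver (with illustrative values) exercising both phases of the procedure above:

    #include <stdio.h>

    /* heapSort and siftDown as defined above */

    int main(void)
    {
        int a[] = {11, 1, 7, 12, 5, 8, 6, 2, 17, 10, 4};
        heapSort(a, 11);
        for (int i = 0; i < 11; i++) printf("%d ", a[i]);
        printf("\n");  /* 1 2 4 5 6 7 8 10 11 12 17 */
        return 0;
    }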

[Diagrams omitted: successive states of the tree during heap sort on the values 11, 1, 7, 12, 5, 8, 6, 2, 17, 10, 4. The captions of the original diagrams read, in order: swap 6 & 10; swap 7 & 8; swap 5 & 17; swap 1 & 10; swap 1 & 6; Phase I activated; swap 11 & 17; swap 12 & 11; swap 2 & 17; swap 2 & 12; swap 2 & 11; swap 1 & 12; swap 1 & 5; swap 4 & 11; second phase activated.]

The diagrammatic representation of the heap sort is summarized above. In general, the heap sort does not perform better than quick sort. Only when the array is nearly sorted to begin with does the heap sort algorithm gain an advantage.

2.2.7 Strassen’s Matrix Multiplication

In Strassen's matrix multiplication, consider two matrices A and B, each of size n x n. The result matrix C is the product, i.e. C = AB. C is also an n x n matrix whose (i,j) element is formed from the ith row of A and the jth column of B, with k ranging from 1 to n in the formula below:

    C(i,j) = Σ (1<=k<=n) A(i,k) B(k,j)

Each element of C requires n multiplications; since the matrix has n^2 elements, the time for the conventional matrix multiplication algorithm is O(n^3).

The divide-and-conquer strategy suggests another way to compute the product of two n x n matrices. For simplicity assume that n is a power of 2; that is, there exists a non-negative integer k such that n = 2^k. In case n is not a power of two, enough rows and columns of zeros can be added to both A and B so that the resulting dimensions are a power of two. Imagine that A and B are each partitioned into four square submatrices, each of dimensions n/2 x n/2. Then the product AB can be computed by using the formulas below for the product of 2 x 2 matrices: if AB is

    | A11  A12 | | B11  B12 |   | C11  C12 |
    | A21  A22 | | B21  B22 | = | C21  C22 |            --(2.9)

then

    C11 = A11 B11 + A12 B21
    C12 = A11 B12 + A12 B22
    C21 = A21 B11 + A22 B21                             --(2.10)
    C22 = A21 B12 + A22 B22

If n = 2, then formulas (2.9) and (2.10) are computed using a multiplication operation for the elements of A and B. These elements are typically floating point numbers. For n > 2, the elements of C can be computed using matrix multiplication and addition operations applied to matrices of size n/2 x n/2. Since n is a power of 2, these matrix products can be recursively computed by the same algorithm being used for the n x n case. The algorithm keeps applying itself to smaller-sized submatrices until n becomes suitably small (n = 2), at which point the product is computed directly.

To compute AB using (2.10), we need to perform eight multiplications of n/2 x n/2 matrices and four additions of n/2 x n/2 matrices. Since two n/2 x n/2 matrices can be added in time c n^2 for some constant c, the overall computing time T(n) of the resulting divide-and-conquer algorithm is given by the recurrence

    T(n) = b                      n <= 2
    T(n) = 8 T(n/2) + c n^2       n > 2

where b and c are constants.

This recurrence can be solved in the same way as earlier recurrences to obtain T(n) = O(n^3). Hence no improvement over the conventional method has been made. Since matrix multiplications are more expensive than matrix additions (O(n^3) versus O(n^2)), one can attempt to reformulate the equations for Cij so as to have fewer multiplications and possibly more additions. Volker Strassen discovered a way to compute the Cij's of (2.10) using only seven multiplications and 18 additions or subtractions. His method involves first computing seven n/2 x n/2 matrices P, Q, R, S, T, U and V, which can be computed with seven matrix multiplications and 10 matrix additions or subtractions. The Cij's then require an additional 8 additions or subtractions.

    P = (A11 + A22)(B11 + B22)
    Q = (A21 + A22) B11
    R = A11 (B12 - B22)
    S = A22 (B21 - B11)                                 --(2.11)
    T = (A11 + A12) B22
    U = (A21 - A11)(B11 + B12)
    V = (A12 - A22)(B21 + B22)

    C11 = P + S - T + V
    C12 = R + T
    C21 = Q + S                                         --(2.12)
    C22 = P + R - Q + U

The resulting recurrence relation for T(n) is


    T(n) = b                      n <= 2
    T(n) = 7 T(n/2) + a n^2       n > 2                 --(2.13)

where a and b are constants. Working with this formula, we get

    T(n) = a n^2 [1 + 7/4 + (7/4)^2 + ... + (7/4)^(k-1)] + 7^k T(1)
        <= c n^2 (7/4)^(log2 n) + 7^(log2 n),   c a constant
         = c n^(log2 4 + log2 7 - log2 4) + n^(log2 7)
         = O(n^(log2 7)) ≈ O(n^2.81)
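As a concrete check of equations (2.11) and (2.12), the following minimal C sketch (illustrative) applies the seven-product scheme to a single 2 x 2 case; for n > 2 the same formulas would be applied recursively to n/2 x n/2 submatrices:

    #include <stdio.h>

    /* Strassen's seven products for 2x2 matrices of scalars.
       Names p..v mirror equations (2.11)-(2.12). */
    void strassen2x2(const int A[2][2], const int B[2][2], int C[2][2])
    {
        int p = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
        int q = (A[1][0] + A[1][1]) * B[0][0];
        int r = A[0][0] * (B[0][1] - B[1][1]);
        int s = A[1][1] * (B[1][0] - B[0][0]);
        int t = (A[0][0] + A[0][1]) * B[1][1];
        int u = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
        int v = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);
        C[0][0] = p + s - t + v;
        C[0][1] = r + t;
        C[1][0] = q + s;
        C[1][1] = p + r - q + u;
    }

    int main(void)
    {
        int A[2][2] = {{1, 2}, {3, 4}};
        int B[2][2] = {{5, 6}, {7, 8}};
        int C[2][2];
        strassen2x2(A, B, C);
        printf("%d %d\n%d %d\n", C[0][0], C[0][1], C[1][0], C[1][1]);
        return 0;  /* expected: 19 22 / 43 50 */
    }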

The Greedy Algorithm

Introduction

This chapter deals with the general greedy method, which is a straightforward design technique. In optimization problems the algorithm must make a sequence of choices whose overall result is best according to some limited, short-term criterion that is not too expensive to compute. This chapter presents the algorithm for finding a minimum spanning tree in an undirected graph due to R.C. Prim; a closely related algorithm for finding single-source shortest paths in directed and undirected graphs due to E.W. Dijkstra; and a second algorithm for finding a minimum spanning tree, due to J.B. Kruskal. All three algorithms use priority queues.

2.2.8 The General Method

This method underlies several algorithms, notably Dijkstra's and Kruskal's. At each individual stage a greedy algorithm selects the option that is "locally optimal" in some particular sense; such choices produce an overall optimal solution only because of special properties of the problem.

Feasible solution - some problems have n inputs and require us to obtain a subset that satisfies some constraints; any subset that satisfies these constraints is called a feasible solution.

Objective function - one needs to find a feasible solution that either maximizes or minimizes a given objective function.

Optimal solution - a feasible solution that does this is called an optimal solution.

One usually requires a feasible solution, but not necessarily an optimal one. In this method the algorithm's work is planned in stages, considering one input at a time. At


each stage a decision is made regarding whether a specific input is optimal or not. Considering the inputs in an order determined by some selection procedure does this.

The next input is included into the partially constructed optimal solution only if it does not result in an infeasible solution. The selection procedure itself is based on some optimization measure, which may be the objective function. In fact, several different optimization measures may be possible for a given problem and these will result in algorithms that generate suboptimal solutions. This version of the greedy technique is called the subset paradigm. Algorithm 3.1 explains the greedy method control abstraction for the subset paradigm.

The function Select selects an input from a[] and removes it. The selected input’s value is assigned to x. The function Feasible, a Boolean-valued function, determines whether x can be included into the solution vector. The function Union combines x with the solution and updates the objective function. The function Greedy describes the essential way that a greedy algorithm will look, once a particular problem is chosen and the functions Select, Feasible and Union are properly implemented. Each decision is made using an optimization criterion that can be computed using decisions already made.

    Algorithm Greedy(a, n)
    // a[1:n] contains the n inputs.
    {
        solution := 0;  // Initialize the solution.
        for i := 1 to n do
        {
            x := Select(a);
            if Feasible(solution, x) then
                solution := Union(solution, x);
        }
        return solution;
    }

Algorithm 2.1: Greedy method control abstraction for the subset paradigm

2.2.9 Knapsack Problem

The greedy method can be applied to solve the knapsack problem. Consider n objects and a knapsack of capacity m. Object i has a weight wi. If a fraction xi, 0 <= xi <= 1, of object i is placed into the knapsack, then a profit pi xi is earned. The objective is to obtain a filling of the knapsack that maximizes the total profit earned. Since the knapsack capacity is m, the total weight of all chosen objects must be at most m. Formally, the problem can be stated as

    maximize    Σ (1<=i<=n) pi xi                       --(2.1)

    subject to  Σ (1<=i<=n) wi xi <= m                  --(2.2)

    and         0 <= xi <= 1,  1 <= i <= n              --(2.3)

The profits and weights are positive numbers.

A feasible solution is any set(x1,…,xn) satisfying (2.2) and (2.3) above. An optimal solution is a feasible solution for which (2.1) is maximized.

Example 2.1:

In case the sum of all the weights is at most m, then xi = 1, 1 <= i <= n, is an optimal solution.

So let us assume the sum of weights exceeds m. Now all the x i’s cannot be 1. Another observation to make is:

Example 2.2:

All optimal solutions will fill the knapsack exactly.

Example 2.2 is true because one can always increase the contribution of some object i by a fractional amount until the total weight is exactly m.

First, one can try to fill the knapsack by including next the object with the largest profit. If an object under consideration does not fit, then a fraction of it is included to fill the knapsack. Thus each time an object is included into the knapsack, one obtains the largest possible increase in profit value. Note that if only a fraction of the last object is included, it may be possible to get a bigger increase by using a different object. For example, if two units of space are left and two objects with (pi=4, wi=4) and (pj=3, wj=2) remain, then using j is better than using half of i.

Since at each step this method chooses to introduce the object that most increases the objective function value, it is termed a greedy method; as the example shows, it does not necessarily yield an optimal solution.

One can formulate at least two other greedy approaches attempting to obtain optimal solutions. From the preceding example, we note that considering objects in order of nonincreasing profit values does not yield an optimal solution: even though the objective function value takes on large increases at each step, the number of steps is few because the knapsack capacity is used up at a rapid rate. This suggests considering the objects in order of nondecreasing weights wi, but that too yields a suboptimal solution.

Thus, our next attempt is an algorithm that strives to achieve a balance between the rate at which profit increases and the rate at which capacity is used. At each step we include the object that has the maximum profit per unit of capacity used. This means that objects are considered in order of the ratio pi/wi. If the objects have already been sorted into nonincreasing order of pi/wi, then the function GreedyKnapsack (Algorithm 2.2)


obtains solutions corresponding to this strategy. Note that solutions corresponding to the first two strategies can be obtained using this algorithm if the objects are initially in the appropriate order. Disregarding the time to initially sort the objects, each of the three strategies outlined above requires only O(n) time.

When one applies the greedy method to the solution of the knapsack problem, there are at least three different measures one can attempt to optimize when determining which object to include next. These measures are total profit, capacity used and the ratio of an accumulated profit to capacity used. Once an optimization measure has been chosen, the greedy method suggests choosing objects for inclusion into the solution in such a way that each choice optimizes the measure at that time. Thus a greedy method using profit as its measure will at each step choose an object that increases the profit the most. If the capacity measure is used, the next object included will increase this the least. Although greedy-based algorithms using the first two measures do not guarantee optimal solutions for the knapsack problem, Theorem 2.1 shows that a greedy algorithm using strategy three always obtains an optimal solution. This theorem is proved using the following technique.

Compare the greedy solution with any optimal solution. If the two solutions differ, then find the first xi at which they differ. Next, it is shown how to make the xi in the optimal solution equal to that in the greedy solution without any loss in total value. Repeated use of this transformation shows that the greedy solution is optimal.

    Algorithm GreedyKnapsack(m, n)
    // p[1:n] and w[1:n] contain the profits and weights
    // respectively of the n objects ordered such that
    // p[i]/w[i] >= p[i+1]/w[i+1]. m is the knapsack size
    // and x[1:n] is the solution vector.
    {
        for i := 1 to n do x[i] := 0.0;  // Initialize x.
        U := m;
        for i := 1 to n do
        {
            if (w[i] > U) then break;
            x[i] := 1.0;
            U := U - w[i];
        }
        if (i <= n) then x[i] := U/w[i];
    }

Algorithm 2.2: Algorithm for greedy strategies for the knapsack problem
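A runnable C sketch of Algorithm 2.2 (illustrative; arrays 0-indexed, and it assumes the objects are already sorted by nonincreasing p[i]/w[i]), applied to a small instance:

    #include <stdio.h>

    /* Fractional knapsack: fill x[0..n-1] greedily by profit density.
       Assumes p[i]/w[i] >= p[i+1]/w[i+1] for all i. */
    void greedyKnapsack(double m, int n,
                        const double p[], const double w[], double x[])
    {
        double U = m;                   /* remaining capacity */
        int i;
        for (i = 0; i < n; i++) x[i] = 0.0;
        for (i = 0; i < n; i++) {
            if (w[i] > U) break;        /* object no longer fits whole */
            x[i] = 1.0;
            U -= w[i];
        }
        if (i < n) x[i] = U / w[i];     /* take a fraction of object i */
    }

    int main(void)
    {
        /* n = 3, m = 20, profits (24,15,25) and weights (15,10,18),
           already ordered by p/w: 1.6 >= 1.5 >= 1.39 */
        double p[] = {24, 15, 25}, w[] = {15, 10, 18}, x[3], profit = 0;
        greedyKnapsack(20, 3, p, w, x);
        for (int i = 0; i < 3; i++) profit += p[i] * x[i];
        printf("x = (%.2f, %.2f, %.2f), profit = %.1f\n",
               x[0], x[1], x[2], profit);  /* (1.00, 0.50, 0.00), 31.5 */
        return 0;
    }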

Theorem 2.1:

If p1/w1 >= p2/w2 >= ... >= pn/wn, then GreedyKnapsack generates an optimal solution to the given instance of the knapsack problem.


Proof:

Let x = (x1,…,xn) be the solution generated by GreedyKnapsack. If all the xi equal one, then clearly the solution is optimal. So let j be the least index such that xj ≠ 1. From the algorithm it follows that xi = 1 for 1 <= i < j. Let y = (y1,…,yn) be an optimal solution. From Example 2.2, one can assume that Σ wi yi = m. Let k be the least index such that yk ≠ xk. Clearly, such a k must exist. It also follows that yk < xk. To see this, consider the three possibilities k < j, k = j, or k > j.

1. If k < j, then xk = 1. But yk ≠ xk, and so yk < xk.

2. If k = j, then since Σ wi xi = m and yi = xi for 1 <= i < j, it follows that either yk < xk or Σ wi yi > m (since xk ≠ yk).

3. If k > j, then Σ wi yi > m, and this is not possible.

Now suppose one increases yk to xk and decreases as many of (y(k+1),…,yn) as necessary so that the total capacity used is still m. This results in a new solution z = (z1,…,zn) with zi = xi for 1 <= i <= k, and

    Σ (k<i<=n) wi (yi - zi) = wk (zk - yk)

Then, for z we have

    Σ (1<=i<=n) pi zi = Σ (1<=i<=n) pi yi + (zk - yk) wk (pk/wk)
                        - Σ (k<i<=n) (yi - zi) wi (pi/wi)

                     >= Σ (1<=i<=n) pi yi
                        + [ (zk - yk) wk - Σ (k<i<=n) (yi - zi) wi ] (pk/wk)

                      = Σ (1<=i<=n) pi yi

If Σ pi zi > Σ pi yi, then y could not have been an optimal solution. If these sums are equal, then either z = x and x is optimal, or z ≠ x. In the latter case, repeated use of the above argument will either show that y is not optimal or transform y into x, thus showing that x too is optimal.

2.2.10 Minimum-Cost Spanning Trees

Definition: A spanning tree for a connected, undirected graph G = (V,E) is a subgraph of G that is an undirected tree and contains all the vertices of G.

In a weighted graph G =(V,E,W), the weight of a subgraph is the sum of the weights of the edges in the subgraph. A Minimum spanning tree for a weighted graph is a spanning tree with minimum weight.

Applications of Spanning tree


a. To find the cheapest way to connect a set of terminals (such as cities, computers or factories, using roads, wires or telephone lines), an MST is computed for the graph with an edge for each possible connection, weighted by the cost of that connection.

b. MST is used in routing algorithms, that is, for finding efficient paths through a graph that visits every vertex.

Example: Figure 2.5 shows the complete graph on four nodes together with three of its spanning trees.

Figure 2.5 An undirected graph and three of its spanning trees

Another application of spanning trees arises from the property that a spanning tree is a minimal subgraph G’ of G such that V(G’)=V(G) and G’ is connected. Any connected graph with n vertices must have at least n-1 edges and all connected graphs with n-1 edges are trees. If the nodes of G represent cities and the edges represent possible communication links connecting two cities, then the minimum number of links needed to connect the n cities is n-1. The spanning trees of G represent all feasible choices.

In practical situations, the edges have weights assigned to them. These weights may represent the cost of construction, the length of the link, and so on. Given such a weighted graph, one would then wish to select a set of links that connects all the cities with minimum total cost or minimum total length. In either case the links selected have to form a tree: if not, the selection contains a cycle, and removal of any one of the links on this cycle results in a link selection of lower cost that still connects all cities. One is therefore interested in finding a spanning tree of G with minimum cost. Figure 2.6 shows a graph and one of its minimum-cost spanning trees. Since the identification of a minimum-cost


spanning tree involves the selection of a subset of the edges, this problem fits the subset paradigm.

[Diagrams omitted: (a) a graph on seven vertices with edge costs 28, 10, 14, 16, 24, 25, 18, 12 and 22; (b) its minimum-cost spanning tree with edge costs 10, 25, 22, 12, 16 and 14.]

Figure 2.6 (a) and (b): A graph and its minimum cost spanning tree

Prim’s Algorithm

A Greedy method to obtain a minimum-cost spanning tree builds this tree edge by edge. The next edge to include is chosen according to some optimization criterion. The simplest such criterion is to choose an edge that results in a minimum increase in the sum of the costs of the edges so far included. There are two possible ways to interpret this criterion. In the first, the set of edges so far selected form a tree. The next edge (u,v) to be


included in A is a minimum-cost edge not in A with the property that A ∪ {(u,v)} is also a tree. The corresponding algorithm is known as Prim's algorithm.

Example

Figure (2.7) shows the working of Prim’s method on the graph of Figure (2.6(a)). The spanning tree obtained is shown in Figure (2.6(b)) and has a cost of 99.

Having seen how Prim's method works, let us obtain a pseudocode algorithm to find a minimum-cost spanning tree using this method. The algorithm starts with a tree that includes only a minimum-cost edge of G. Then, edges are added to this tree one by one. The next edge (i,j) to be added is such that i is a vertex already included in the tree, j is a vertex not yet included, and the cost of (i,j), cost[i,j], is minimum among all edges (k,l) such that vertex k is in the tree and vertex l is not. To determine this edge (i,j) efficiently, associate with each vertex j not yet included in the tree a value near[j]. The value near[j] is a vertex in the tree such that cost[j, near[j]] is minimum among all choices for near[j]. Define near[j] = 0 for all vertices j that are already in the tree. The next edge to include is defined by the vertex j such that near[j] ≠ 0 (j not already in the tree) and cost[j, near[j]] is minimum.

In function Prim (Algorithm 2.3), line 9 selects a minimum-cost edge. Lines 10 to 15 initialize the variables so as to represent a tree comprising only the edge (k,l). In the for loop of line 16 the remainder of the spanning tree is built up edge by edge. Lines 18 and 19 select (j, near[j]) as the next edge to include. Lines 23 to 25 update near[]. (The line numbers refer to the numbered listing in the original text.) The time required by algorithm Prim is O(n^2), where n is the number of vertices in the graph G. To see this, note that line 9 takes O(|E|) time and line 10 takes Θ(1) time. The for loop of line 12 takes Θ(n) time. Lines 18 and 19 and the for loop of line 23 require O(n) time. So, each iteration of the for loop of line 16 takes O(n) time. The total time for the for loop of line 16 is therefore O(n^2). Hence, Prim runs in O(n^2) time.

[Diagrams omitted - stages (a) through (f); the edge costs considered at each stage were:

(a) (1,2) = 28, (1,6) = 10; so (1,6) is chosen.
(b) (6,5) = 25; (6,5) is chosen.
(c) (5,7) = 24, (5,4) = 22; (5,4) is chosen.
(d) (4,3) = 12; (4,3) is chosen.
(e) (3,2) = 16; (3,2) is chosen.
(f) (2,7) = 14; (2,7) is chosen.]

Figure 2.7: Stages in Prim's algorithm

If one stores the nodes not yet included in the tree as a red-black tree, lines 18 and 19 take O(log n) time. The for loop of line 23 has to examine only the nodes adjacent to j, so its overall frequency is O(|E|). Updating in lines 24 and 25 also takes O(log n) time. Thus the overall run time is O((n + |E|) log n).

One can also start the algorithm with a tree consisting of any arbitrary vertex and no edge. Then edges can be added one by one. The changes needed are in lines 9 to 17. The following lines can replace these lines.

    mincost := 0;
    for i := 2 to n do near[i] := 1;


    near[1] := 0;  // Vertex 1 is initially in t.
    for i := 1 to n-1 do
    {   // Find n-1 edges for t.

    Algorithm Prim(E, cost, n, t)
    // E is the set of edges in G. cost[1:n,1:n] is the cost
    // adjacency matrix of an n-vertex graph such that cost[i,j] is
    // either a positive real number or infinity if no edge (i,j) exists.
    // A minimum spanning tree is computed and stored as a set of edges
    // in the array t[1:n-1,1:2]. (t[i,1],t[i,2]) is an edge in the
    // minimum-cost spanning tree. The final cost is returned.
    {
        Let (k,l) be an edge of minimum cost in E;
        mincost := cost[k,l];
        t[1,1] := k; t[1,2] := l;
        for i := 1 to n do  // Initialize near.
            if (cost[i,l] < cost[i,k]) then near[i] := l;
            else near[i] := k;
        near[k] := near[l] := 0;
        for i := 2 to n-1 do
        {   // Find n-2 additional edges for t.
            Let j be an index such that near[j] ≠ 0 and
                cost[j, near[j]] is minimum;
            t[i,1] := j; t[i,2] := near[j];
            mincost := mincost + cost[j, near[j]];
            near[j] := 0;
            for k := 1 to n do  // Update near[].
                if ((near[k] ≠ 0) and (cost[k, near[k]] > cost[k,j])) then
                    near[k] := j;
        }
        return mincost;
    }

Algorithm 2.3: Prim’s minimum-cost spanning tree algorithm
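A compact runnable C sketch of the O(n^2) scheme above (illustrative, not the text's listing; adjacency matrix with a large INF for missing edges, vertices 0-indexed, tree started at vertex 0 as in the variant just described), applied to the graph of Figure 2.6(a):

    #include <stdio.h>

    #define N 7
    #define INF 1000000

    /* Prim's algorithm on cost[0..n-1][0..n-1]; returns the tree cost.
       near[k] is the tree vertex closest to k, or -1 once k is in the tree. */
    int prim(int n, int cost[N][N])
    {
        int near[N], mincost = 0;
        near[0] = -1;                       /* start the tree at vertex 0 */
        for (int i = 1; i < n; i++) near[i] = 0;

        for (int e = 0; e < n - 1; e++) {   /* pick n-1 edges */
            int j = -1;
            for (int k = 0; k < n; k++)     /* cheapest fringe vertex */
                if (near[k] != -1 &&
                    (j == -1 || cost[k][near[k]] < cost[j][near[j]]))
                    j = k;
            mincost += cost[j][near[j]];
            near[j] = -1;                   /* j joins the tree */
            for (int k = 0; k < n; k++)     /* update near[] */
                if (near[k] != -1 && cost[k][j] < cost[k][near[k]])
                    near[k] = j;
        }
        return mincost;
    }

    int main(void)
    {
        /* Edge costs of Figure 2.6(a), vertices 1..7 mapped to 0..6. */
        int c[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) c[i][j] = (i == j) ? 0 : INF;
        c[0][1]=c[1][0]=28; c[0][5]=c[5][0]=10; c[1][2]=c[2][1]=16;
        c[1][6]=c[6][1]=14; c[2][3]=c[3][2]=12; c[3][4]=c[4][3]=22;
        c[3][6]=c[6][3]=18; c[4][5]=c[5][4]=25; c[4][6]=c[6][4]=24;
        printf("%d\n", prim(N, c));         /* prints 99 */
        return 0;
    }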

KrusKal’s Algorithm

The general outline of Krukal’s algorithm isi. At each step it chooses the lowest weighted edge from anywhere in

the graph.ii. Any edge forming a cycle with already chosen one is discarded.

iii. Edges chosen at any time will form a forest.iv. It terminates when all edges have been processed.


Example:

[Diagrams omitted: a weighted graph on vertices 1-6 (edge costs include 5, 6, 10, 11, 14, 16 and 18) and the spanning tree produced for it.]

The edges of the graph are considered for inclusion in the minimum-cost spanning tree in the order (2,3), (2,4), (3,4), (2,6), (4,6), (1,2), (4,5), (1,5), (1,6) and (5,6), that is, in ascending order of their associated costs. The tree so constructed is called T, which is built edge by edge. The initial collection of nodes is {1}, {2}, {3}, {4}, {5}, {6}.

    Edge    Cost   Accept/Reject       Forest after the step
    -       -      -                   {1} {2} {3} {4} {5} {6}
    (2,3)   5      Accepted            {1} {2,3} {4} {5} {6}
    (2,4)   6      Accepted            {1} {2,3,4} {5} {6}
    (3,4)   10     Rejected (cycle)    {1} {2,3,4} {5} {6}
    (2,6)   11     Accepted            {1} {2,3,4,6} {5}
    (4,6)   14     Rejected (cycle)    {1} {2,3,4,6} {5}
    (1,2)   16     Accepted            {1,2,3,4,6} {5}
    (4,5)   18     Accepted            {1,2,3,4,5,6}

(1, 5) and (1, 6) are also rejected because they too will form a cycle.

    t := Ø;
    while ((t has fewer than n-1 edges) and (E ≠ Ø)) do
    {
        Choose an edge (v,w) from E of lowest cost;
        Delete (v,w) from E;
        if ((v,w) does not create a cycle in t) then add (v,w) to t;
        else discard (v,w);
    }

Algorithm 2.4: Early form of minimum-cost spanning tree algorithm due to Kruskal

    Algorithm Kruskal(E, cost, n, t)
    // E is the set of edges in G. G has n vertices. cost[u,v] is the
    // cost of edge (u,v). t is the set of edges in the minimum-cost
    // spanning tree. The final cost is returned.
    {
        Construct a heap out of the edge costs using Heapify;
        for i := 1 to n do parent[i] := -1;  // Each vertex is in a different set.
        i := 0; mincost := 0.0;
        while ((i < n-1) and (heap not empty)) do
        {
            Delete a minimum-cost edge (u,v) from the heap
                and reheapify using Adjust;
            j := Find(u); k := Find(v);
            if (j ≠ k) then
            {
                i := i + 1;
                t[i,1] := u; t[i,2] := v;
                mincost := mincost + cost[u,v];
                Union(j,k);
            }
        }
        if (i ≠ n-1) then write ("No spanning tree");
        else return mincost;
    }

Algorithm 2.5: Kruskal's algorithm
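Kruskal's algorithm relies on the Find and Union operations over the parent[] array. A minimal C sketch of one common realization (with path compression and union by size - an assumption, since the text does not fix a particular variant):

    #include <stdio.h>

    #define MAXV 100
    int parent[MAXV];  /* parent[i] < 0: i is a root of a set of size -parent[i] */

    void initSets(int n)
    {
        for (int i = 0; i < n; i++) parent[i] = -1;
    }

    /* Find the root of i's set, compressing the path as we go. */
    int find(int i)
    {
        int root = i;
        while (parent[root] >= 0) root = parent[root];
        while (parent[i] >= 0) {        /* path compression */
            int next = parent[i];
            parent[i] = root;
            i = next;
        }
        return root;
    }

    /* Union by size: attach the smaller tree under the larger root. */
    void unite(int j, int k)
    {
        if (parent[j] <= parent[k]) {   /* set j is at least as large */
            parent[j] += parent[k];
            parent[k] = j;
        } else {
            parent[k] += parent[j];
            parent[j] = k;
        }
    }

    int main(void)
    {
        initSets(6);
        unite(find(0), find(1));
        unite(find(2), find(3));
        printf("%s\n", find(1) == find(0) ? "same set" : "different");
        printf("%s\n", find(1) == find(2) ? "same set" : "different");
        return 0;
    }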

Theorem:

Kruskal's algorithm generates a minimum-cost spanning tree for every connected undirected graph G.

An Optimal Randomized Algorithm

Any algorithm for finding the minimum-cost spanning tree of a given graph G(V,E) will have to spend Ω(|V|+|E|) time in the worst case, since it has to examine each node and each edge at least once before determining the correct answer. A randomized Las Vegas algorithm that runs in time Õ(|V|+|E|) can be devised as follows: (1) Randomly sample m edges from G (for some suitable m). (2) Let G' be the induced subgraph; that is, G' has V as its node set and the sampled edges as its edge set. The subgraph G' need not be connected. Recursively find a minimum-cost spanning tree for each component of G'. Let F be the resultant minimum-cost spanning forest of G'. (3) Using F, eliminate certain edges (called the F-heavy edges) of G that cannot possibly be in a minimum-cost spanning tree. Let G'' be the graph that results from G after elimination of the F-heavy edges. (4) Recursively find a minimum-cost spanning tree for G''. This will also be a minimum-cost spanning tree for G.

Steps 1 to 3 are useful in reducing the number of edges in G. The algorithm can be sped up further if one can also reduce the number of nodes in the input graph. Such node elimination can be effected using Boruvka steps. In a Boruvka step, for each node, an incident edge with minimum weight is chosen. For example in Figure 2.8, the edge (1,3) is chosen for node 1, the edge (6,7) is chosen for node 7, and so on. All the chosen edges are shown with thick lines. The connected components of the induced graph are found. In the example of Figure 2.8, the nodes 1, 2 and 3 form one component, the nodes 4 and 5 form a second component and the nodes 6 and 7 form another component. Each component is replaced with a single node: the component with nodes 1, 2 and 3 is replaced with the node a, and the other two components are replaced with the nodes b and c, respectively. Edges within the individual components are thrown away. The resultant


graph is shown in Figure 2.8(b). In this graph only an edge of minimum weight is kept between any two nodes, and any isolated nodes are deleted.

Since an edge is chosen for every node, the number of nodes after one Boruvka step reduces by a factor of at least two. A minimum-cost spanning tree for the reduced graph can be extended easily to get a minimum-cost spanning tree for the original graph. If E' is the set of edges in the minimum-cost spanning tree of the reduced graph, simply include into E' the edges chosen in the Boruvka step to obtain the minimum-cost spanning tree edges for the original graph. In the example of Figure 2.8, a minimum-cost spanning tree for (c) will consist of the edges (a,b) and (b,c). Thus a minimum-cost spanning tree for the graph of (a) will have the edges (1,3), (3,2), (4,5), (6,7), (3,4) and (2,6). More details are given in Figure 2.8.

Definition:

Let F be a forest that forms a subgraph of a given weighted graph G(V,E). If u and v are any two nodes in F, let F(u,v) denote the path (if any) connecting u and v in F, and let Fcost(u,v) denote the maximum weight of any edge in the path F(u,v). If there is no path between u and v in F, Fcost(u,v) is taken to be ∞. Any edge (x,y) of G is said to be F-heavy if cost[x,y] > Fcost(x,y), and F-light otherwise.

Theorem:

A minimum-weight spanning tree for any given weighted graph can be computed in time O(|V|+|E|).

[Diagrams omitted - (a) a weighted graph on nodes 1-7 with the minimum-weight edge incident to each node drawn with thick lines; (b) the graph after the components {1,2,3}, {4,5} and {6,7} are replaced by the nodes a, b and c; (c) the reduced graph keeping only a minimum-weight edge between any two nodes.]

Figure 2.8: A Boruvka step

2.2.11 Optimal Storage on Tapes

There are n programs that are to be stored on a computer tape of length l. Associated with each program i is a length li, 1 <= i <= n. Clearly, all programs can be stored on the tape if and only if the sum of the lengths of the programs is at most l. Assume that whenever a program is to be retrieved from this tape, the tape is initially positioned at the front. Hence, if the programs are stored in the order I = i1, i2, …, in, the time tj needed to retrieve program ij is proportional to Σ (1<=k<=j) l(ik).

If all programs are retrieved equally often, then the expected or mean retrieval time (MRT) is (1/n) Σ (1<=j<=n) tj. In the optimal storage on tapes problem, it is required to find a permutation of the n programs so that when they are stored on the tape in this order the MRT is minimized. This problem fits the ordering paradigm. Minimizing the MRT is equivalent to minimizing d(I) = Σ (1<=j<=n) Σ (1<=k<=j) l(ik). The following theorem shows that the MRT is minimized when the programs are stored in nondecreasing order of their lengths.
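A small worked instance (illustrative): let n = 3 and (l1, l2, l3) = (5, 10, 3). For the ordering I = 3, 1, 2 (nondecreasing lengths 3 <= 5 <= 10):

    d(I) = l3 + (l3 + l1) + (l3 + l1 + l2) = 3 + 8 + 18 = 29

which is the minimum over all six orderings. By contrast, the ordering I = 2, 1, 3 gives d(I) = 10 + 15 + 18 = 43.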

Theorem:

If l1 <= l2 <= … <= ln, then the ordering ij = j, 1 <= j <= n, minimizes

    Σ (k=1 to n) Σ (j=1 to k) l(ij)

over all possible permutations of the ij.

Proof:

Let I = i1, i2, …, in be any permutation of the index set {1, 2, …, n}. Then

    d(I) = Σ (k=1 to n) Σ (j=1 to k) l(ij) = Σ (k=1 to n) (n-k+1) l(ik)

If there exist a and b such that a < b and l(ia) > l(ib), then interchanging ia and ib results in a permutation I' with

    d(I') = Σ (k ≠ a, k ≠ b) (n-k+1) l(ik) + (n-a+1) l(ib) + (n-b+1) l(ia)

Subtracting d(I') from d(I), one obtains

    d(I) - d(I') = (n-a+1)(l(ia) - l(ib)) + (n-b+1)(l(ib) - l(ia))
                 = (b-a)(l(ia) - l(ib)) > 0

Hence, no permutation that is not in nondecreasing order of the li's can have minimum d. It is easy to see that all permutations in nondecreasing order of the li's have the same d value. Hence, the ordering defined by ij = j, 1 <= j <= n, minimizes the d value.

The tape storage problem can be extended to several tapes. If there are m > 1 tapes, T0, …, T(m-1), then the programs are to be distributed over these tapes. For each tape a storage permutation is to be provided. If Ij is the storage permutation for the subset of programs on tape j, then d(Ij) is as defined earlier. The total retrieval time is TD = Σ (0<=j<=m-1) d(Ij). The objective is to store the programs in such a way as to minimize TD.

The obvious generalization of the solution for the one-tape case is to consider the programs in nondecreasing order of the li's. The program currently being considered is placed on the tape that results in the minimum increase in TD. This tape will be the one with the least amount of tape used so far. If there is more than one tape with this property, then the one with the smallest index can be used. If the jobs are initially ordered so that l1 <= l2 <= … <= ln, then the first m programs are assigned to tapes T0, …, T(m-1) respectively, the next m programs to the same tapes in the same order, and so on; program i goes to tape T((i-1) mod m). On any given tape the programs are stored in nondecreasing order of their lengths. Algorithm 2.6 presents this rule in pseudocode. It assumes that the programs are ordered; it has a computing time of Θ(n) and does not even need to know the program lengths. The following theorem proves that the resulting storage pattern is optimal.

    Algorithm Store(n, m)
    // n is the number of programs and m the number of tapes.
    {
        j := 0;  // Next tape to store on
        for i := 1 to n do
        {
            write ("append program", i, "to permutation for tape", j);
            j := (j + 1) mod m;
        }
    }

Algorithm 2.6: Assigning programs to tapes
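For example, with n = 7 programs (already ordered by length) and m = 3 tapes, Algorithm 2.6 appends programs 1, 4 and 7 to the permutation for tape 0, programs 2 and 5 to tape 1, and programs 3 and 6 to tape 2.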

Theorem:

If l1 <= l2 <= … <= ln, then Algorithm 2.6 generates an optimal storage pattern for m tapes.

Proof:

In any storage pattern for m tapes, let ri be one greater than the number of programs following program i on its tape. Then the total retrieval time TD is given by

    TD = Σ (i=1 to n) ri li

In any given storage pattern, for any given j, there can be at most m programs for which ri = j. From the previous theorem it follows that TD is minimized if the m longest programs have ri = 1, the next m longest programs have ri = 2, and so on. When programs are ordered by length, that is l1 <= l2 <= … <= ln, this minimization criterion is satisfied if ri = ⌈(n-i+1)/m⌉. Observe that Algorithm 2.6 results in a storage pattern with these ri's.

The proof of the theorem shows that there are many storage patterns that minimize TD. If we compute ri = ⌈(n-i+1)/m⌉ for each program i, then so long as all programs with the same ri are stored on different tapes and have ri - 1 programs following them, TD is the same. If n is a multiple of m, then there are at least (m!)^(n/m) storage patterns that minimize TD.

2.2.12 Single-Source Shortest Paths

Graphs can be used to represent the highway structure of a state or country, with vertices representing cities and edges representing sections of highway. The edges can then be assigned weights, which may be either the distance between the two cities connected by the edge or the average time to drive along that section of highway. A motorist wishing to drive from city A to B would be interested in answers to the following questions:

Is there a path from A to B? If there is more than one path from A to B, which is the shortest path?

The problems defined by these questions are special cases of the path problem studied in this section. The length of a path is defined to be the sum of the weights of the edges on that path. The starting vertex of the path is referred to as the source and the last vertex the destination. Consider a directed graph G = (V,E), a weighting function cost for the edges of G and a source vertex v0. The problem is to


determine the shortest paths from v0 to all the remaining vertices of G. It is assumed that all the weights are positive. The shortest path between v0 and some other node v is an ordering among a subset of the edges; hence this problem fits the ordering paradigm.

Example:

Consider the directed graph of Figure 2.9(a). The numbers on the edges are the weights. If node 1 is the source vertex, then the shortest path from 1 to 2 is 1, 4, 5, 2. The length of this path is 10 + 15 + 20 = 45. Even though there are three edges on this path, it is shorter than the path 1, 2, which is of length 50. There is no path from 1 to 6. Figure 2.9(b) lists the shortest paths from node 1 to nodes 4, 5, 2 and 3, respectively. The paths have been listed in nondecreasing order of path length.

[Diagram omitted: the directed graph of Figure 2.9(a) on vertices 1-6; edge weights include 45, 50, 10, 30, 15, 35 and 20.]

    Path           Length
    1) 1,4         10
    2) 1,4,5       25
    3) 1,4,5,2     45
    4) 1,3         45

    (b) Shortest paths from 1

Figure 2.9: Graph and shortest paths from vertex 1 to all destinations

To formulate a greedy-based algorithm to generate the shortest paths, one must

conceive of a multistage solution to the problem and also of an optimization measure. One possibility is to build the shortest paths one by one. As an optimization measure we can use the sum of the lengths of all paths so far generated. For this measure to be minimized, each individual path must be of minimum length. If one has already constructed i shortest paths, then using this optimization measure the next path to be constructed should be the next shortest path. The greedy way to generate the shortest paths from v0 to the remaining vertices is to generate these paths in nondecreasing order of path length. First, a shortest path to the nearest vertex is generated. Then a shortest path to the second nearest vertex is generated, and so on. For the graph of Figure 2.9(a) the nearest vertex to v0 = 1 is 4 (cost[1,4] = 10). The path 1,4 is the first path generated. The second nearest vertex to node 1 is 5, and the distance between 1 and 5 is 25. The path 1,4,5 is the next path generated. In order to generate the

Page 73

1 2

4 5

3

6

Page 38: UNIT - II

Data Structures and Algorithms

shortest paths in this order we need to determine (1) the next vertex to which a shortest path must be generated and (2) a shortest path to this vertex. Let S denote the set of vertices (including vo) to which the shortest paths have already been generated. For w not in S, let dist[w] be the length of the shortest path starting from vo, going through only those vertices that are in S, and ending at w. We observe that:

1. If the next shortest path is to vertex u, then the path begins at v0, ends at u, and goes through only those vertices that are in S. To prove this, we show that all the intermediate vertices on the shortest path to u are in S. Assume there is a vertex w on this path that is not in S. Then the v0 to u path also contains a path from v0 to w that is of length less than that of the v0 to u path. By assumption the shortest paths are being generated in nondecreasing order of path length, so the shorter path from v0 to w must already have been generated. Hence, there can be no intermediate vertex that is not in S.

2. The destination of the next path generated must be the vertex u that has the minimum distance, dist[u], among all vertices not in S. This follows from the definition of dist and observation 1. If there are several vertices not in S with the same dist, then any of these may be selected.

3. Having selected a vertex u as in observation 2 and generated the shortest v0 to u path, vertex u becomes a member of S. At this point the length of the shortest path starting at v0, going through vertices only in S, and ending at a vertex w not in S may decrease; that is, the value of dist[w] may change. If it does change, then it must be due to a shorter path starting at v0, going to u, and then to w. The intermediate vertices on the v0 to u path and the u to w path must all be in S. Further, the v0 to u path must be the shortest such path; otherwise dist[w] is not defined properly. Also, the u to w path can be chosen so as not to contain any intermediate vertices. Therefore, we can conclude that if dist[w] is to change (i.e., decrease), then it is because of a path from v0 to u to w, where the path from v0 to u is the shortest such path and the path from u to w is the edge ⟨u,w⟩. The length of this path is dist[u] + cost[u,w].

The above observations lead to a simple algorithm (Algorithm 2.7) for the single-source shortest path problem. This algorithm (known as Dijkstra's algorithm) determines only the lengths of the shortest paths from v0 to all other vertices in G. The generation of the paths themselves requires a minor extension to this algorithm and is left as an exercise.

In the function ShortestPaths (Algorithm 2.7) it is assumed that the n vertices of G are numbered 1 through n. The set S is maintained as a bit array with S[i] = 0 if vertex i is not in S and S[i] = 1 if it is. It is assumed that the graph itself is represented by its cost adjacency matrix, with cost[i,j] being the weight of the edge ⟨i,j⟩. The weight cost[i,j] is set to some large number, ∞, in case the edge ⟨i,j⟩ is not in E(G). For i = j, cost[i,j] can be set to any nonnegative number without affecting the outcome of the algorithm.

From our earlier discussion, it is easy to see that the algorithm is correct. The time taken by the algorithm on a graph with n vertices is O(n²). To see this, note that the for loop of line 7 in Algorithm 2.7 takes Θ(n) time. The for loop of line 12 is executed n-2 times. Each execution of this loop requires O(n) time at lines 15 and 16 to select the next vertex and again at the for loop of line 18 to update dist. So the total time for this loop is O(n²). In case a list t of vertices currently not in S is maintained, the number of nodes on this list would at any time be n - num. This would speed up lines 15 and 16 and the for loop of line 18, but the asymptotic time would remain O(n²).

Any shortest path algorithm must examine each edge in the graph at least once, since any of the edges could be in a shortest path. Hence the minimum possible time for such an algorithm would be Ω(|E|). Since cost adjacency matrices were used to represent the graph, it takes O(n²) time just to determine which edges are in G, and so any shortest path algorithm using this representation must take Ω(n²) time. For this representation, then, algorithm ShortestPaths is optimal to within a constant factor. If a change to adjacency lists is made, the overall frequency of the for loop of line 18 can be brought down to O(|E|). If V - S is maintained as a red-black tree, each execution of lines 15 and 16 takes O(log n) time, as do insert, delete (an arbitrary element), find-min, and search (for an arbitrary element). Each update in line 21 takes O(log n) time. Thus the overall run time is O((n + |E|) log n).
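Before the formal listing of Algorithm 2.7 below, the adjacency-list variant just described can be sketched in runnable form. This is only an illustration, not the text's code: it uses Python's heapq module (a binary heap standing in for the red-black tree) and assumes the graph is given as a dictionary of adjacency lists.

    import heapq

    def dijkstra(adj, v0):
        # adj: dict mapping each vertex to a list of (neighbor, cost) pairs.
        # Returns dist, where dist[w] is the length of a shortest v0-to-w path.
        dist = {u: float('inf') for u in adj}
        dist[v0] = 0
        heap = [(0, v0)]                      # (tentative distance, vertex)
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:                   # stale entry: u already finalized
                continue
            for w, c in adj[u]:
                if dist[u] + c < dist[w]:     # the update of observation 3
                    dist[w] = dist[u] + c
                    heapq.heappush(heap, (dist[w], w))
        return dist

Each edge is examined once when its tail vertex is finalized, and each heap operation costs O(log n), in line with the O((n + |E|) log n) bound quoted above.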

1   Algorithm ShortestPaths(v, cost, dist, n)
2   // dist[j], 1 <= j <= n, is set to the length of the shortest
3   // path from vertex v to vertex j in a digraph G with n
4   // vertices. dist[v] is set to zero. G is represented by its
5   // cost adjacency matrix cost[1:n, 1:n].
6   {
7       for i := 1 to n do
8       {   // Initialize S.
9           S[i] := false; dist[i] := cost[v,i];
10      }
11      S[v] := true; dist[v] := 0.0;   // Put v in S.
12      for num := 2 to n-1 do
13      {
14          // Determine n-1 paths from v.
15          Choose u from among those vertices not
16          in S such that dist[u] is minimum;
17          S[u] := true;   // Put u in S.
18          for (each w adjacent to u with S[w] = false) do
19              // Update distances.
20              if (dist[w] > dist[u] + cost[u,w]) then
21                  dist[w] := dist[u] + cost[u,w];
22      }
23  }

Algorithm 2.7: Greedy algorithm to generate shortest paths

Example:

Consider the eight-vertex digraph of Figure (2.10(a)) with cost adjacency matrix as in Figure (2.10(b)). The values of dist and the vertices selected at each iteration of the for loop of line 12 in Algorithm 2.7, for finding all the shortest paths from Boston, are shown in Figure (2.11). To begin with, S contains only Boston. In the first iteration of the for loop (that is, for num = 2), the city u that is not in S and whose dist[u] is minimum is identified to be New York. New York enters the set S. Also the dist[] values of Chicago, Miami, and New Orleans get altered since there are shorter paths to these cities via New York. In the next iteration of the for loop, the city that enters S is Miami, since it has the smallest dist[] value from among all the nodes not in S. None of the dist[] values are altered. The algorithm continues in a similar fashion and terminates when only seven of the eight vertices are in S. By the definition of dist, the distance of the last vertex, in this case Los Angeles, is correct, as the shortest path from Boston to Los Angeles can go through only the remaining six vertices.

One can easily verify that the edges on the shortest paths from a vertex v to all remaining vertices in a connected undirected graph G form a spanning tree of G. This spanning tree is called a shortest-path spanning tree. Clearly, this spanning tree may be different for different root vertices v. Figure (2.12) shows a graph G, its minimum-cost spanning tree, and a shortest-path spanning tree from vertex 1.

(a) Digraph [figure not reproduced: a digraph connecting Los Angeles, San Francisco, Denver, Chicago, Boston, New York, Miami, and New Orleans, with edge lengths as given in the matrix of part (b)]

        1     2     3     4     5     6     7     8
1       0
2     300     0
3    1000   800     0
4                ройd1200     0
5                      1500     0   250
6                      1000           0   900  1400
7                                          0  1000
8    1700                                        0

(blank entries denote ∞, i.e., the corresponding edge is absent; the entry in row i, column j is the length of edge ⟨i,j⟩)

(b) Length-adjacency matrix


Figure 2.10: Digraph and length-adjacency matrix for the example above

Iteration   S               Vertex     Distance
                            selected   LA     SF     DEN    CHI    BOST   NY     MIA    NO
                                       [1]    [2]    [3]    [4]    [5]    [6]    [7]    [8]
Initial     --              --         +∞     +∞     +∞     1500   0      250    +∞     +∞
1           {5}             6          +∞     +∞     +∞     1250   0      250    1150   1650
2           {5,6}           7          +∞     +∞     +∞     1250   0      250    1150   1650
3           {5,6,7}         4          +∞     +∞     2450   1250   0      250    1150   1650
4           {5,6,7,4}       8          3350   +∞     2450   1250   0      250    1150   1650
5           {5,6,7,4,8}     3          3350   3250   2450   1250   0      250    1150   1650
6           {5,6,7,4,8,3}   2          3350   3250   2450   1250   0      250    1150   1650

Figure 2.11: Action of ShortestPaths
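As a cross-check on Figure 2.11, the following sketch is a direct transcription of Algorithm 2.7 into runnable form (an illustration only: 0-based vertex numbering, so Boston is vertex 4, and a float INF in place of ∞). Run on the length-adjacency matrix of Figure 2.10(b), it reproduces the final row of the table above.

    INF = float('inf')

    def shortest_paths(v, cost, n):
        # S as a boolean array; dist initialized from the cost matrix.
        S = [False] * n
        dist = [cost[v][i] for i in range(n)]
        S[v] = True
        dist[v] = 0
        for _ in range(n - 2):                # the loop "for num := 2 to n-1"
            # lines 15-16: choose u not in S with dist[u] minimum
            u = min((i for i in range(n) if not S[i]), key=lambda i: dist[i])
            S[u] = True
            for w in range(n):                # line 18: update distances
                if not S[w] and dist[u] + cost[u][w] < dist[w]:
                    dist[w] = dist[u] + cost[u][w]
        return dist

    # Matrix of Figure 2.10(b), 0-based: 0=LA, 1=SF, 2=DEN, 3=CHI,
    # 4=BOST, 5=NY, 6=MIA, 7=NO.
    cost = [[INF] * 8 for _ in range(8)]
    for i in range(8):
        cost[i][i] = 0
    for (i, j), w in {(1, 0): 300, (2, 0): 1000, (2, 1): 800, (3, 2): 1200,
                      (4, 3): 1500, (4, 5): 250, (5, 3): 1000, (5, 6): 900,
                      (5, 7): 1400, (6, 7): 1000, (7, 0): 1700}.items():
        cost[i][j] = w

    print(shortest_paths(4, cost, 8))
    # [3350, 3250, 2450, 1250, 0, 250, 1150, 1650], the last row of Figure 2.11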

(a) A Graph [figure not reproduced: an undirected graph on vertices 1 to 8 with weighted edges]

(b) Minimum-cost spanning tree [figure not reproduced]


(c) Shortest-path spanning tree from vertex 1 [figure not reproduced]

Figure 2.12: Graphs and Spanning Trees

2.3 Revision Points

Sorting

The term sorting refers to the operation of arranging data in some given order, such as increasing or decreasing order for numerical data, or alphabetical order for character data.

Merge Sort

Merge sort is an excellent sorting method. The technique sorts a file as follows: divide the file into two subfiles of equal size, sort the subfiles separately, and then merge the two sorted subfiles into a single file. The merge subroutine is responsible for allocating the additional workspace needed.
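A minimal runnable sketch of this scheme (illustrative names; the list slicing plays the role of the additional workspace mentioned):

    def merge_sort(a):
        # Divide: split the list into two halves and sort each recursively.
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        # Combine: merge the two sorted sublists into one sorted list.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:])
        out.extend(right[j:])
        return out

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]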

Binary Search

If items are placed in an array sorted in either ascending or descending order on the key, much better search performance can be obtained with an extremely efficient searching algorithm known as binary search.
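A minimal sketch for an array sorted in ascending order (the descending case simply mirrors the comparison):

    def binary_search(a, key):
        # a is sorted in ascending order; returns an index of key, or -1.
        low, high = 0, len(a) - 1
        while low <= high:
            mid = (low + high) // 2
            if a[mid] == key:
                return mid
            elif a[mid] < key:
                low = mid + 1        # key can only lie in the upper half
            else:
                high = mid - 1       # key can only lie in the lower half
        return -1

    print(binary_search([3, 9, 10, 27, 38, 43, 82], 27))   # 3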


2.4 Intext Questions

1. Describe the general method for Divide and Conquer in detail.

2. Explain the binary search in detail.

3. Describe the recursive version of MaxMin as compared with the Straight MaxMin method, and count the comparisons.

4. Explain the Merge Sort algorithm in detail with suitable example.

5. How is Strassen’s Matrix Multiplication done?

6. Define feasible solution.

7. Define optimal solution.

8. What is Knapsack problem?

9. Define Prim’s algorithm.

10. Explain Kruskal's algorithm.

11. Define the shortest-path spanning tree.

12. How is the Optimal Storage on Tapes done?

13. Explain the method to generate Shortest Paths.

14. Define the Spanning tree.

15. Define MRT.

2.5 Summary

The Divide and Conquer algorithm divides the problem into several smaller instances of the same problem, solves the smaller instances recursively, and finally combines the solutions to obtain a solution to the original input.

Binary search - When the data in an array are sorted in increasing numerical order (or, equivalently, alphabetically), an extremely efficient search can be used to find the location of a given item.

Merge sort - In this method, the list is split into two sublists of equal size, the sublists are sorted separately, and finally the two sorted sublists are merged into a single sorted list.


Quick sort - This method rearranges the elements to be sorted about a pivot. It sorts the two subranges of "small" and "large" keys recursively, with the result that the entire array is sorted.
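A minimal in-place sketch (this one uses Lomuto partitioning for brevity; the unit's own procedure may partition differently). It is shown sorting the second key sequence from the terminal exercises:

    def quick_sort(a, lo=0, hi=None):
        # Rearrange around a pivot so "small" keys precede it and "large"
        # keys follow, then sort the two subranges recursively.
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]    # pivot lands in its final position
        quick_sort(a, lo, i - 1)
        quick_sort(a, i + 1, hi)

    keys = [5, 5, 8, 3, 4, 3, 2]
    quick_sort(keys)
    print(keys)                      # [2, 3, 3, 4, 5, 5, 8]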

A heap is a complete binary tree in which each node satisfies the heap condition; it is represented as an array.
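A minimal sketch of the array representation and the (max-)heap condition, using 0-based indexing (figures in the unit may number nodes from 1):

    def is_max_heap(a):
        # In the array representation, the children of a[i] sit at
        # a[2*i + 1] and a[2*i + 2]; equivalently, every a[i] for i >= 1
        # must not exceed its parent a[(i - 1) // 2].
        return all(a[(i - 1) // 2] >= a[i] for i in range(1, len(a)))

    print(is_max_heap([82, 43, 38, 27, 9, 10, 3]))   # True
    print(is_max_heap([3, 9, 10, 27, 38, 43, 82]))   # False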

Feasible solution - some problems have n inputs and require us to obtain a subset that satisfies certain constraints; any subset that satisfies these constraints is called a feasible solution.

An objective function is needed to measure feasible solutions: the goal is to find a feasible solution that either maximizes or minimizes this function.

An optimal solution is a feasible solution that either maximizes or minimizes the objective function.

Compare the greedy solution with any optimal solution. If the two solutions differ, find the first xi at which they differ. Then show how to make the xi in the optimal solution equal to that in the greedy solution without any loss in total value. Repeated use of this transformation shows that the greedy solution is optimal.

The knapsack problem calls for selecting a subset of the objects and hence fits the subset paradigm; the solution also assigns a fraction xi to each object.
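A minimal sketch of the greedy rule for the fractional knapsack, assuming objects are considered in nonincreasing profit-to-weight order (illustrative names and sample data, not from the text):

    def fractional_knapsack(profits, weights, capacity):
        # Take each whole object while it fits, then a fraction x_i of
        # the first object that does not.
        order = sorted(range(len(profits)),
                       key=lambda i: profits[i] / weights[i], reverse=True)
        x = [0.0] * len(profits)     # x[i] = fraction of object i selected
        for i in order:
            if weights[i] <= capacity:
                x[i] = 1.0
                capacity -= weights[i]
            else:
                x[i] = capacity / weights[i]
                break
        return x

    print(fractional_knapsack([25, 24, 15], [18, 15, 10], 20))
    # [0.0, 1.0, 0.5] -- total profit 24 + 7.5 = 31.5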

In the first method, the set of edges selected so far forms a tree. The next edge (u,v) to be included in A is a minimum-cost edge not in A with the property that A ∪ {(u,v)} is also a tree. The corresponding algorithm is known as Prim's algorithm.
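A minimal adjacency-matrix sketch of this rule (illustrative names; entries of float('inf') would mark absent edges, and a connected graph is assumed):

    def prim(cost, n):
        # Grow the tree from vertex 0, always adding a minimum-cost edge
        # (u, v) that keeps the selected edge set a tree.
        in_tree = [False] * n
        in_tree[0] = True
        near = [(cost[0][v], 0) for v in range(n)]   # cheapest link into the tree
        tree = []
        for _ in range(n - 1):
            v = min((i for i in range(n) if not in_tree[i]),
                    key=lambda i: near[i][0])
            c, u = near[v]
            tree.append((u, v, c))
            in_tree[v] = True
            for w in range(n):                       # v may offer cheaper links
                if not in_tree[w] and cost[v][w] < near[w][0]:
                    near[w] = (cost[v][w], v)
        return tree       # n-1 edges of a minimum-cost spanning tree

    cost = [[0, 2, 3], [2, 0, 1], [3, 1, 0]]
    print(prim(cost, 3))  # [(0, 1, 2), (1, 2, 1)]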

Any shortest path algorithm must examine each edge in the graph at least once, since any of the edges could be in a shortest path. Hence the minimum possible time for such an algorithm is Ω(|E|).

One can easily verify that the edges on the shortest paths from a vertex v to all remaining vertices in a connected undirected graph G form a spanning tree of G. This spanning tree is called a shortest-path spanning tree.
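The path-generation extension mentioned earlier (and left as an exercise) amounts to recording a predecessor each time dist[w] improves; the edges (pred[w], w) are then exactly the edges of this shortest-path spanning tree. A minimal sketch, reusing the adjacency-list encoding assumed in the earlier Dijkstra sketch:

    import heapq

    def shortest_path_tree(adj, v0):
        # Dijkstra with a predecessor map: pred[w] is the vertex from which
        # the shortest known path reaches w.
        dist = {u: float('inf') for u in adj}
        pred = {u: None for u in adj}
        dist[v0] = 0
        heap = [(0, v0)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for w, c in adj[u]:
                if dist[u] + c < dist[w]:
                    dist[w] = dist[u] + c
                    pred[w] = u
                    heapq.heappush(heap, (dist[w], w))
        return dist, pred   # edges (pred[w], w), w != v0, form the tree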

2.6 Terminal Exercises

1. Prove that Prim's method generates minimum-cost spanning trees.

2. Explain the heap sort in detail, with a suitable example in pictorial representation.

3. Explain the algorithm for the Greedy method control abstraction for the subset paradigm.


4. Show how Quick Sort sorts the following sequences of keys: 1,1,1,1,1,1,1 and 5,5,8,3,4,3,2. Also write the procedure for this sort.

2.7 Supplementary Materials

1. Ellis Horowitz and Sartaj Sahni, "Fundamentals of Computer Algorithms", Galgotia Publications, 1997.

2. Aho, Hopcroft and Ullman, "Data Structures and Algorithms", Addison-Wesley, 1987.

3. Jean-Paul Tremblay and Paul G. Sorenson, "An Introduction to Data Structures with Applications", McGraw-Hill, 1984.

2.8 Assignments

1. Analyze how searching is important and necessary for real time systems.

2. Discuss in detail about shortest path algorithm.

2.9 Suggested Reading/Reference Books/Set Books

1. Mark Allen Weiss, "Data Structures and Algorithm Analysis in C++", Addison-Wesley, 1999.

2. Yedidyah Langsam, Moshe J. Augenstein and Aaron M. Tenenbaum, "Data Structures Using C and C++", Prentice-Hall, 1997.

2.10 Learning Activities

1. Collect information on sorting techniques from the Internet.

2. Collect research reports and information on Kruskal's algorithm.

2.11 Keywords

Binary Search

Sorting

Quick Sort

Merge Sort

Spanning Tree
