Final Algorithm Lab Manual


Post on 02-Jan-2016








1. Algorithm InsertionSort(a, n)

// a[1:n] is a global array containing the n elements to be sorted.
{
	for j := 2 to n do
	{
		key := a[j];
		// Insert a[j] into the sorted sequence a[1:j-1].
		i := j - 1;
		while ((i > 0) and (a[i] > key)) do
		{
			a[i+1] := a[i]; i := i - 1;
		}
		a[i+1] := key;
	}
}
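The pseudocode translates almost line for line into Python. A minimal sketch (0-indexed, so the outer loop starts at position 1; the name `insertion_sort` is our own):

```python
def insertion_sort(a):
    """Sort the list a in place in nondecreasing order."""
    for j in range(1, len(a)):       # a[0:j] is already sorted
        key = a[j]
        i = j - 1
        # Shift elements larger than key one position to the right.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key               # Drop key into the gap that opened up.
    return a
```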





2. Algorithm Tower_of_Hanoi(A, B, C, N)

// This algorithm finds the moves that solve the Tower of Hanoi problem for N discs,
// moving them from peg A to peg C using peg B as auxiliary. N is the total number of discs.


{
	if (N = 1) then
	{
		Move top disc from A to C;
	}
	else
	{
		Tower_of_Hanoi(A, C, B, N-1); // Move N-1 discs from A to B, using C.
		Tower_of_Hanoi(A, B, C, 1);   // Move the largest disc from A to C.
		Tower_of_Hanoi(B, A, C, N-1); // Move the N-1 discs from B to C, using A.
	}
}


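The recursion can be sketched in Python, collecting the moves in a list instead of printing them (the function name and the idea of returning a move list are our own choices):

```python
def tower_of_hanoi(n, source, spare, target, moves=None):
    """Return the list of (from_peg, to_peg) moves that transfers n discs
    from source to target, using spare as the auxiliary peg."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))  # A single disc moves directly.
    else:
        tower_of_hanoi(n - 1, source, target, spare, moves)  # Park n-1 discs on the spare peg.
        moves.append((source, target))                       # Move the largest disc.
        tower_of_hanoi(n - 1, spare, source, target, moves)  # Bring the n-1 discs back on top.
    return moves
```

For N discs this produces exactly 2^N - 1 moves, matching the trace below.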

Number of discs is 3.

TOH(A,B,C,3) TOH(A,C,B,2) TOH(A,B,C,1) TOH(B,A,C,2)


TOH(A,B,C,1) TOH(A,C,B,1) TOH(C,A,B,1) TOH(B,C,A,1) TOH(B,A,C,1) TOH(A,B,C,1)

Moves: A->C, A->B, C->B, A->C, B->A, B->C, A->C

3. HEAP SORT:

Heap: A max (min) heap is a complete binary tree with the property that the value at each node is at least as large as (as small as) the values at its children (if they exist).

Algorithm HeapSort(a,n)

// a[1:n] contains the n elements to be sorted. HeapSort rearranges them in place into

// nondecreasing order.


{
	Heapify(a, n); // Transform the array into a heap.
	// Interchange the new maximum with the element at the end of the array.
	for i := n to 2 step -1 do
	{
		t := a[i]; a[i] := a[1]; a[1] := t;
		Adjust(a, 1, i-1); // Restore the heap property on a[1:i-1].
	}
}




Algorithm Heapify(a,n)

// Readjust the elements in a[1:n] to form a heap.


{
	for i := ⌊n/2⌋ to 1 step -1 do Adjust(a, i, n);
}

Algorithm Adjust(a,i,n)

// The complete binary trees with roots 2i and 2i+1 are combined with node i to form a
// heap rooted at i. No node has an address greater than n or less than 1.


{
	j := 2i; item := a[i];
	while (j ≤ n) do
	{
		if ((j < n) and (a[j] < a[j+1])) then j := j+1; // Let j point to the larger child.
		if (item ≥ a[j]) then break; // A position for item is found.
		a[⌊j/2⌋] := a[j]; j := 2j;   // Move the larger child up a level.
	}
	a[⌊j/2⌋] := item;
}
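HeapSort, Heapify, and Adjust together can be sketched in Python. To keep the pseudocode's 1-based indexing, this sketch (names are ours) leaves slot 0 of the array unused:

```python
def adjust(a, i, n):
    """Sift a[i] down so the subtree rooted at i (within a[1:n]) is a max heap."""
    j, item = 2 * i, a[i]
    while j <= n:
        if j < n and a[j] < a[j + 1]:  # Let j point to the larger child.
            j += 1
        if item >= a[j]:               # A position for item is found.
            break
        a[j // 2] = a[j]               # Move the larger child up a level.
        j *= 2
    a[j // 2] = item

def heap_sort(a):
    """Sort a[1:] in nondecreasing order; a[0] is an unused slot."""
    n = len(a) - 1
    for i in range(n // 2, 0, -1):     # Heapify: adjust every internal node.
        adjust(a, i, n)
    for i in range(n, 1, -1):          # Swap the maximum to the end, shrink the heap.
        a[i], a[1] = a[1], a[i]
        adjust(a, 1, i - 1)
    return a[1:]
```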

4. BINARY SEARCH:

Algorithm BinSearch(a, n, x)

// Given an array a[1:n] of elements in nondecreasing order, n ≥ 0, determine whether
// x is present, and if so return j such that x = a[j]; else return 0.
{


low:=1; high:=n;

while (low ≤ high) do
{
	mid := ⌊(low + high)/2⌋;
	if (x < a[mid]) then high := mid - 1;
	else if (x > a[mid]) then low := mid + 1;
	else return mid;
}
return 0;
}
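A Python sketch of the same loop (0-indexed, so it returns -1 rather than 0 when x is absent, since 0 is a valid index here; the name `bin_search` is ours):

```python
def bin_search(a, x):
    """Return an index j with a[j] == x in the sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1      # x can only lie in the left half.
        elif x > a[mid]:
            low = mid + 1       # x can only lie in the right half.
        else:
            return mid
    return -1
```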


Divide-and-Conquer: These algorithms have the following outline: to solve a problem, divide it into subproblems, recursively solve the subproblems, and finally glue the resulting solutions together to obtain the solution to the original problem.
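This outline can be illustrated with merge sort, the canonical divide-and-conquer algorithm. The sketch below returns a new sorted list; the Merge pseudocode in the next section instead works in place on a global array:

```python
def merge_sort(a):
    """Return a sorted copy of a: split, recursively sort, then merge."""
    if len(a) <= 1:
        return a                        # 0 or 1 elements: already sorted.
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Glue step: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # Append whichever half has leftovers.
```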

5. MERGE SORT: Merging two sorted subarrays using the Merge algorithm.

Algorithm Merge(low, mid, high)

// a[low:high] is a global array containing two sorted subsets in a[low:mid] and in
// a[mid+1:high]. The goal is to merge these two sets into a single set residing in
// a[low:high]. b[] is an auxiliary global array.
{


	h := low; i := low; j := mid + 1;
	while ((h ≤ mid) and (j ≤ high)) do
	{
		if (a[h] ≤ a[j]) then { b[i] := a[h]; h := h + 1; }
		else { b[i] := a[j]; j := j + 1; }
		i := i + 1;
	}
	if (h > mid) then
		for k := j to high do { b[i] := a[k]; i := i + 1; }
	else
		for k := h to mid do { b[i] := a[k]; i := i + 1; }
	for k := low to high do a[k] := b[k];
}

JOB SEQUENCING WITH DEADLINES (Greedy method): We are given a set of n jobs. Associated with job i is an integer deadline di ≥ 0 and a profit pi > 0. For any job i the profit pi is earned iff the job is completed by its deadline. To complete a job, one has to process the job on a machine for one unit of time. Only one machine is available for processing jobs. A feasible solution for this problem is a subset J of jobs such that each job in this subset can be completed by its deadline. The value of a feasible solution J is the sum of the profits of the jobs in J, or ∑i∈J pi.

Algorithm JS(d, j, n)

// d[i] ≥ 1, 1 ≤ i ≤ n are the deadlines, n ≥ 1. The jobs are ordered such that
// p[1] ≥ p[2] ≥ ... ≥ p[n]. J[i] is the ith job in the optimal solution, 1 ≤ i ≤ k.
// Also, at termination d[J[i]] ≤ d[J[i+1]], 1 ≤ i < k.
{


d[0] := J[0] := 0; // Initialize.
J[1] := 1; k := 1; // Include job 1.
for i := 2 to n do

{ // Consider jobs in nonincreasing order of p[i]. Find a position for i and
  // check the feasibility of insertion.
	r := k;
	while ((d[J[r]] > d[i]) and (d[J[r]] ≠ r)) do r := r - 1;
	if ((d[J[r]] ≤ d[i]) and (d[i] > r)) then
	{
		// Insert i into J[].
		for q := k to (r+1) step -1 do J[q+1] := J[q];
		J[r+1] := i; k := k + 1;
	}
}
return k;
}
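The same greedy rule can be sketched in Python. For clarity this version (names and 0-based job indices are our choices) re-sorts the tentative schedule on each insertion instead of shifting entries in place as JS does, which costs O(n²) overall but uses the identical feasibility test:

```python
def job_sequencing(deadlines, profits):
    """Greedy job sequencing: consider jobs in nonincreasing profit order and
    keep a job if it can still meet its deadline. Each job takes one unit of
    time. Returns (scheduled job indices in processing order, total profit)."""
    order = sorted(range(len(profits)), key=lambda i: -profits[i])
    schedule = []                       # Kept sorted by deadline, like J[] above.
    for i in order:
        # Tentatively insert job i; the schedule is feasible iff the job in
        # slot t (1-based) has deadline >= t for every slot.
        trial = sorted(schedule + [i], key=lambda k: deadlines[k])
        if all(deadlines[k] >= t for t, k in enumerate(trial, start=1)):
            schedule = trial
    return schedule, sum(profits[k] for k in schedule)
```

On the classic instance with profits (100, 10, 15, 27) and deadlines (2, 1, 2, 1), the greedy choice keeps jobs 1 and 4 for a total profit of 127.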


DYNAMIC PROGRAMMING

Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. (Programming in this context refers to a tabular method, not to writing computer code.) Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. In this context, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subsubproblems. A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.

Dynamic programming is typically applied to optimization problems. In such problems there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.

15. MATRIX-CHAIN MULTIPLICATION: Matrix-chain multiplication is a classic problem solved by dynamic programming. We are given a sequence (chain) A1, A2, . . . , An of n matrices to be multiplied, and we wish to compute the product A1 A2 · · · An. We can evaluate the expression using the standard algorithm for multiplying pairs of matrices as a subroutine once we have parenthesized it to resolve all ambiguities in how the matrices are multiplied together. A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses. Matrix multiplication is associative, and so all parenthesizations yield the same product.
For example, if the chain of matrices is A1, A2, A3, A4, the product A1 A2 A3 A4 can be fully parenthesized in five distinct ways:

(A1(A2(A3 A4))) ,

(A1((A2 A3)A4)) ,

((A1 A2)(A3 A4)) ,

((A1(A2 A3))A4) ,

(((A1 A2)A3)A4) .

Algorithm Matrix-Chain-Order(p)


{
	n := length[p] - 1;
	for i := 1 to n do
		m[i, i] := 0;
	for l := 2 to n do // l is the chain length.
	{
		for i := 1 to n - l + 1 do
		{
			j := i + l - 1; m[i, j] := ∞;
			for k := i to j - 1 do
			{
				q := m[i, k] + m[k+1, j] + p[i-1] * p[k] * p[j];
				if q < m[i, j] then
				{
					m[i, j] := q; s[i, j] := k;
				}
			}
		}
	}
	return m and s;
}

Algorithm PRINT-OPTIMAL-PARENS(s, i, j)
{

	if i = j then print "A"i;
	else
	{
		print "(";
		PRINT-OPTIMAL-PARENS(s, i, s[i, j]);
		PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j);
		print ")";
	}
}
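Both procedures can be sketched in Python (1-based tables padded with an unused row and column 0, so the indices match the pseudocode; returning the parenthesization as a string instead of printing is our choice):

```python
def matrix_chain_order(p):
    """p[i-1] x p[i] is the dimension of matrix A_i, i = 1..n.
    Returns (m, s): m[i][j] = minimum scalar multiplications for A_i..A_j,
    s[i][j] = the split index k achieving that minimum."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # l is the chain length.
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float('inf')
            for k in range(i, j):          # Try every split point.
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def print_optimal_parens(s, i, j):
    """Return the optimal parenthesization of A_i..A_j as a string."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return f"({print_optimal_parens(s, i, k)}{print_optimal_parens(s, k + 1, j)})"
```

For the chain with dimensions p = (30, 35, 15, 5, 10, 20, 25), the minimum cost m[1][6] is 15125 and the optimal parenthesization is ((A1(A2A3))((A4A5)A6)).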



16. THE FLOYD-WARSHALL ALGORITHM: We shall use a different dynamic-programming formulation to solve the all-pairs shortest-paths problem on a directed graph G = (V, E). The resulting algorithm, known as the Floyd-Warshall algorithm, runs in Θ(V³) time. As before, negative-weight edges may be present, but we assume that there are no negative-weight cycles. We shall follow the dynamic-programming process to develop the algorithm. After studying the resulting algorithm, we shall present a similar method for finding the transitive closure of a directed graph.

Let dij(k) be the weight of a shortest path from vertex i to vertex j for which all intermediate vertices are in the set {1, 2, . . . , k}. When k = 0, a path from vertex i to vertex j with no intermediate vertex numbered higher than 0 has no intermediate vertices at all. Such a path has at most one edge, and hence dij(0) = wij. A recursive definition following the above discussion is given by

dij(k) = wij                                      if k = 0;
dij(k) = min{ dij(k-1), dik(k-1) + dkj(k-1) }     if k ≥ 1.

The procedure returns the matrix D(n) of shortest-path weights.

FLOYD-WARSHALL(W)


{
	n := rows[W];
	D(0) := W;


for k := 1 to n do

for i := 1 to n do

for j := 1 to n do

dij(k) := min{ dij(k-1), dik(k-1)+ dkj(k-1)}

	return D(n);
}

Constructing a shortest path

We can compute the predecessor matrix Π on-line just as the Floyd-Warshall algorithm computes the matrices D(k). Specifically, we compute a sequence of matrices Π(0), Π(1), . . . , Π(n), where Π = Π(n).
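The procedure, extended with a predecessor matrix as just described, can be sketched in Python (0-indexed vertices; a single `pred` matrix is updated in place rather than kept as the separate Π(k) sequence, and the name `floyd_warshall` is ours):

```python
INF = float('inf')

def floyd_warshall(w):
    """w is the n x n weight matrix (w[i][i] = 0, INF where there is no edge).
    Returns (d, pred): d[i][j] is the shortest-path weight from i to j, and
    pred[i][j] is the predecessor of j on a shortest i-to-j path (None if none)."""
    n = len(w)
    d = [row[:] for row in w]
    # Initially the predecessor of j on the path from i is i itself, if an edge exists.
    pred = [[i if i != j and w[i][j] < INF else None for j in range(n)]
            for i in range(n)]
    for k in range(n):                       # Allow k as an intermediate vertex.
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    pred[i][j] = pred[k][j]  # The path to j now arrives via k's path.
    return d, pred
```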