# Algorithm Lab Manual

Post on 02-Jan-2016


1. Algorithm InsertionSort(a, n)

// a is a global array containing the n elements to be sorted.
{
    for j := 2 to n do
    {
        key := a[j];
        i := j - 1;
        while ((i > 0) and (a[i] > key)) do
        {
            a[i+1] := a[i]; i := i - 1;
        }
        a[i+1] := key;
    }
}
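A direct 0-indexed Python translation of the pseudocode above (the function name is ours):

```python
def insertion_sort(a):
    """Sort list a in place into nondecreasing order; returns a for convenience."""
    for j in range(1, len(a)):        # pseudocode's j := 2 to n
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:  # shift larger elements one slot right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                # drop key into the vacated slot
    return a
```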

2. Algorithm Tower_of_Hanoi(A, B, C, N)

// This algorithm prints the moves that solve the Tower of Hanoi problem,
// moving N discs from peg A to peg C using peg B. N is the total number of discs.
{
    if (N = 1) then
    {
        Move top disc from A to C;
        return;
    }
    else
    {
        Tower_of_Hanoi(A, C, B, N-1);
        Tower_of_Hanoi(A, B, C, 1);
        Tower_of_Hanoi(B, A, C, N-1);
    }
}
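A Python sketch of the recursion, collecting the moves in a list instead of printing them (assumes n ≥ 1; names are ours):

```python
def tower_of_hanoi(src, aux, dst, n, moves=None):
    """Return the list of (from_peg, to_peg) moves for n >= 1 discs src -> dst via aux."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((src, dst))
        return moves
    tower_of_hanoi(src, dst, aux, n - 1, moves)  # move n-1 discs src -> aux
    moves.append((src, dst))                     # move the largest disc src -> dst
    tower_of_hanoi(aux, src, dst, n - 1, moves)  # move n-1 discs aux -> dst
    return moves
```

For 3 discs this produces the seven moves traced below; in general it produces 2^n - 1 moves.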

Number of discs is 3. Trace (TOH abbreviates Tower_of_Hanoi; each size-1 call performs the move shown):

TOH(A,B,C,3)
    TOH(A,C,B,2)
        TOH(A,B,C,1)   A->C
        TOH(A,C,B,1)   A->B
        TOH(C,A,B,1)   C->B
    TOH(A,B,C,1)       A->C
    TOH(B,A,C,2)
        TOH(B,C,A,1)   B->A
        TOH(B,A,C,1)   B->C
        TOH(A,B,C,1)   A->C

3. HEAP SORT:

Heap: A max (min) heap is a complete binary tree with the property that the value at each node is at least as large as (as small as) the values at its children (if they exist).

Algorithm HeapSort(a, n)

// a[1:n] contains n elements to be sorted. HeapSort rearranges them in place into
// nondecreasing order.
{
    Heapify(a, n); // Transform the array into a heap.
    // Interchange the new maximum with the element at the end of the array.
    for i := n to 2 step -1 do
    {
        t := a[i]; a[i] := a[1]; a[1] := t;
        Adjust(a, 1, i-1);
    }
}

Algorithm Heapify(a, n)

// Readjust the elements in a[1:n] to form a heap.
{
    for i := ⌊n/2⌋ to 1 step -1 do
        Adjust(a, i, n);
}
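HeapSort, Heapify, and the sift-down step (Adjust, given next) can be combined into a runnable 0-indexed Python sketch (the index arithmetic changes accordingly; function names are ours):

```python
def adjust(a, i, n):
    """Sift a[i] down so the subtree rooted at index i within a[0:n] is a max-heap,
    assuming both subtrees of i are already max-heaps."""
    item = a[i]
    j = 2 * i + 1                        # left child of i (0-indexed)
    while j < n:
        if j + 1 < n and a[j] < a[j + 1]:
            j += 1                       # j points to the larger child
        if item >= a[j]:
            break                        # a position for item is found
        a[(j - 1) // 2] = a[j]           # move the larger child up a level
        j = 2 * j + 1
    a[(j - 1) // 2] = item

def heap_sort(a):
    """Sort list a in place into nondecreasing order; returns a for convenience."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # Heapify: sift down each internal node
        adjust(a, i, n)
    for i in range(n - 1, 0, -1):        # repeatedly swap the maximum to the end
        a[0], a[i] = a[i], a[0]
        adjust(a, 0, i)
    return a
```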

Algorithm Adjust(a, i, n)

// The complete binary trees with roots 2i and 2i+1 are combined with node i to form a
// heap rooted at i. No node has an address greater than n or less than 1.
{
    j := 2i; item := a[i];
    while (j ≤ n) do
    {
        if ((j < n) and (a[j] < a[j+1])) then j := j + 1; // Let j point to the larger child.
        if (item ≥ a[j]) then break; // A position for item is found.
        a[⌊j/2⌋] := a[j]; // Move the larger child up a level.
        j := 2j;
    }
    a[⌊j/2⌋] := item;
}

4. BINARY SEARCH:

Algorithm BinSearch(a, n, x)

// Given an array a[1:n] of elements in nondecreasing order, n ≥ 0, determine whether
// x is present, and if so return j such that x = a[j]; else return 0.

{
    low := 1; high := n;
    while (low ≤ high) do
    {
        mid := ⌊(low + high)/2⌋;
        if (x < a[mid]) then high := mid - 1;
        else if (x > a[mid]) then low := mid + 1;
        else return mid;
    }
    return 0;
}
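A 0-indexed Python translation of BinSearch; since 0 is a valid Python index, it returns -1 (a convention change of ours) when x is absent:

```python
def bin_search(a, x):
    """Return an index j with a[j] == x in the sorted list a, or -1 if x is absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1   # x can only lie in the left half
        elif x > a[mid]:
            low = mid + 1    # x can only lie in the right half
        else:
            return mid
    return -1
```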

Divide-and-Conquer: These algorithms have the following outline: to solve a problem, divide it into subproblems, recursively solve the subproblems, and finally glue the resulting solutions together to obtain the solution to the original problem.
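As a concrete instance of this outline, merge sort (the subject of the next section) can be sketched in Python (a non-in-place variant of ours, returning a new list):

```python
def merge_sort(a):
    """Return a new list with the elements of a in nondecreasing order."""
    if len(a) <= 1:
        return a                         # base case: already sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])           # recursively solve the subproblems
    right = merge_sort(a[mid:])
    merged, h, j = [], 0, 0
    while h < len(left) and j < len(right):   # glue step: merge two sorted halves
        if left[h] <= right[j]:
            merged.append(left[h]); h += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[h:] + right[j:]      # append whichever half remains
```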

5. MERGE SORT: Merging two sorted subarrays using the Merge algorithm.

Algorithm Merge(low, mid, high)

// a[low:high] is a global array containing two sorted subsets in a[low:mid] and in
// a[mid+1:high]. The goal is to merge these two sets into a single set residing in
// a[low:high]. b[] is an auxiliary global array.
{
    h := low; i := low; j := mid + 1;
    while ((h ≤ mid) and (j ≤ high)) do
    {
        if (a[h] ≤ a[j]) then { b[i] := a[h]; h := h + 1; }
        else { b[i] := a[j]; j := j + 1; }
        i := i + 1;
    }
    if (h > mid) then
        for k := j to high do { b[i] := a[k]; i := i + 1; }
    else
        for k := h to mid do { b[i] := a[k]; i := i + 1; }
    for k := low to high do a[k] := b[k];
}

JOB SEQUENCING WITH DEADLINES: We are given a set of n jobs. Associated with job i is an integer deadline di ≥ 0 and a profit pi > 0. For any job i the profit pi is earned iff the job is completed by its deadline. To complete a job, one has to process the job on a machine for one unit of time. Only one machine is available for processing jobs. A feasible solution for this problem is a subset J of jobs such that each job in this subset can be completed by its deadline. The value of a feasible solution J is the sum of the profits of the jobs in J, or Σi∈J pi.

Algorithm JS(d, j, n)

// d[i] ≥ 1, 1 ≤ i ≤ n, are the deadlines, n ≥ 1. The jobs are ordered such that
// p[1] ≥ p[2] ≥ ... ≥ p[n]. J[i] is the ith job in the optimal solution, 1 ≤ i ≤ k.
// Also, at termination d[J[i]] ≤ d[J[i+1]], 1 ≤ i ≤ k.

{
    d[0] := J[0] := 0; // Initialize.
    J[1] := 1; k := 1; // Include job 1.
    for i := 2 to n do
    { // Consider jobs in nonincreasing order of p[i]. Find position for i and
      // check feasibility of insertion.
        r := k;
        while ((d[J[r]] > d[i]) and (d[J[r]] ≠ r)) do r := r - 1;
        if ((d[J[r]] ≤ d[i]) and (d[i] > r)) then
        {
            // Insert i into J[].
            for q := k to (r+1) step -1 do J[q+1] := J[q];
            J[r+1] := i; k := k + 1;
        }
    }
    return k;
}
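The greedy selection can be sketched in Python. This version (an adaptation of ours, not the manual's exact procedure) fills an explicit time-slot array from the latest feasible slot backwards, which selects the same feasible set as the ordered J[] insertion above:

```python
def job_sequencing(jobs):
    """Greedy job sequencing with deadlines.
    jobs: list of (profit, deadline) pairs in any order.
    Returns (total_profit, slot) where slot[t] is the job index run in unit time t+1,
    or None if that slot is idle."""
    order = sorted(range(len(jobs)), key=lambda i: -jobs[i][0])  # nonincreasing profit
    max_d = max((d for _, d in jobs), default=0)
    slot = [None] * max_d
    total = 0
    for i in order:
        p, d = jobs[i]
        # place job i in the latest free slot no later than its deadline, if any
        for t in range(min(d, max_d) - 1, -1, -1):
            if slot[t] is None:
                slot[t] = i
                total += p
                break
    return total, slot
```

For the classic instance n = 4, p = (100, 10, 15, 27), d = (2, 1, 2, 1), the optimal value is 127, earned by jobs 1 and 4.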

DYNAMIC PROGRAMMING: Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. ("Programming" in this context refers to a tabular method, not to writing computer code.) Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. In this context a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subsubproblems. A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.

Dynamic programming is typically applied to optimization problems. In such problems there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.

15. MATRIX-CHAIN MULTIPLICATION: Matrix-chain multiplication is a classic problem solved by dynamic programming. We are given a sequence (chain) A1, A2, . . . , An of n matrices to be multiplied, and we wish to compute the product A1 A2 · · · An. We can evaluate the expression using the standard algorithm for multiplying pairs of matrices as a subroutine once we have parenthesized it to resolve all ambiguities in how the matrices are multiplied together. A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses. Matrix multiplication is associative, and so all parenthesizations yield the same product.

For example, if the chain of matrices is A1, A2, A3, A4, the product A1 A2 A3 A4 can be fully parenthesized in five distinct ways:

(A1(A2(A3 A4))) ,

(A1((A2 A3)A4)) ,

((A1 A2)(A3 A4)) ,

((A1(A2 A3))A4) ,

(((A1 A2)A3)A4) .

Algorithm Matrix-Chain-Order(p)
{
    n := length[p] - 1;
    for i := 1 to n do
        m[i, i] := 0;
    for l := 2 to n do // l is the chain length.
    {
        for i := 1 to n - l + 1 do
        {
            j := i + l - 1; m[i, j] := ∞;
            for k := i to j - 1 do
            {
                q := m[i, k] + m[k+1, j] + p[i-1]·p[k]·p[j];
                if (q < m[i, j]) then
                {
                    m[i, j] := q; s[i, j] := k;
                }
            }
        }
    }
    return m and s;
}

Algorithm PRINT-OPTIMAL-PARENS(s, i, j)
{

    if (i = j) then print "A"i;
    else
    {
        print "(";
        PRINT-OPTIMAL-PARENS(s, i, s[i, j]);
        PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j);
        print ")";
    }
}
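Both procedures can be sketched in Python, keeping the pseudocode's 1-indexed tables (row and column 0 are padding) and returning the parenthesization as a string instead of printing it:

```python
def matrix_chain_order(p):
    """p: dimension list; matrix A_i is p[i-1] x p[i], for i = 1..n.
    Returns (m, s): m[i][j] is the minimum scalar-multiplication count for
    A_i..A_j, and s[i][j] the optimal split point k."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):            # l is the chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float('inf')
            for k in range(i, j):        # try every split A_i..A_k | A_k+1..A_j
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

def optimal_parens(s, i, j):
    """Return the optimal full parenthesization of A_i..A_j as a string."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"
```

For instance, with dimensions p = [10, 100, 5, 50] the optimal cost is 7500, achieved by ((A1A2)A3).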

16. THE FLOYD-WARSHALL ALGORITHM: We shall use a different dynamic-programming formulation to solve the all-pairs shortest-paths problem on a directed graph G = (V, E). The resulting algorithm, known as the Floyd-Warshall algorithm, runs in Θ(V³) time. As before, negative-weight edges may be present, but we assume that there are no negative-weight cycles. We shall follow the dynamic-programming process to develop the algorithm. After studying the resulting algorithm, we shall present a similar method for finding the transitive closure of a directed graph.

Let dij(k) be the weight of a shortest path from vertex i to vertex j for which all intermediate vertices are in the set {1, 2, . . . , k}. When k = 0, a path from vertex i to vertex j with no intermediate vertex numbered higher than 0 has no intermediate vertices at all. Such a path has at most one edge, and hence dij(0) = wij. A recursive definition following the above discussion is given by

dij(k) = wij                                      if k = 0;
dij(k) = min{ dij(k-1), dik(k-1) + dkj(k-1) }     if k ≥ 1.

The procedure returns the matrix D(n) of shortest-path weights.

FLOYD-WARSHALL(W)

{
    n := rows[W];
    D(0) := W;
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                dij(k) := min{ dij(k-1), dik(k-1) + dkj(k-1) };
    return D(n);
}

Constructing a shortest path

We can compute the predecessor matrix Π on-line just as the Floyd-Warshall algorithm computes the matrices D(k). Specifically, we compute a sequence of matrices Π(0), Π(1), . . . , Π(n), where Π = Π(n) and πij(k) is defined to be the predecessor of vertex j on a shortest path from vertex i with all intermediate vertices in the set {1, 2, . . . , k}.

We can give a recursive formulation of πij(k). When k = 0, a shortest path from i to j has no intermediate vertices at all. Thus,

πij(0) = NIL   if i = j or wij = ∞;
πij(0) = i     if i ≠ j and wij < ∞.

For k ≥ 1, if we take the path i ~> k ~> j, where k ≠ j, then the predecessor of j we choose is the same as the predecessor of j we chose on a shortest path from k with all intermediate vertices in the set {1, 2, . . . , k-1}. Otherwise, we choose the same predecessor of j that we chose on a shortest path from i with all intermediate vertices in the set {1, 2, . . . , k-1}. Formally, for k ≥ 1,

πij(k) = πij(k-1)   if dij(k-1) ≤ dik(k-1) + dkj(k-1);
πij(k) = πkj(k-1)   if dij(k-1) > dik(k-1) + dkj(k-1).
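The algorithm and the predecessor recurrence can be sketched together in Python (0-indexed vertices; float('inf') plays the role of ∞, and None plays NIL):

```python
INF = float('inf')

def floyd_warshall(w):
    """w: n x n weight matrix (0 on the diagonal, INF where there is no edge).
    Returns (d, pred): shortest-path weights and the predecessor matrix."""
    n = len(w)
    d = [row[:] for row in w]                 # D(0) := W
    pred = [[None if i == j or w[i][j] == INF else i
             for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # relax the path i ~> k ~> j against the current best i ~> j
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    pred[i][j] = pred[k][j]
    return d, pred
```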

BACKTRACKING: In the search for fundamental principles of algorithm design, backtracking represents one of the most general techniques. Many problems which deal with searching for a set of solutions, or which ask for an optimal solution satisfying some constraints, can be solved using the backtracking formulation.

In many applications of the backtrack method, the desired solution is expressible as an n-tuple (x1, . . . , xn), where the xi are chosen from some finite set Si.

17. EIGHT QUEENS PROBLEM: A classic combinatorial problem is to place eight queens on an 8x8 chessboard so that no two attack; that is, so that no two of them are on the same row, column, or diagonal. Let us number the rows and columns of the chessboard 1 through 8.

[Figure: an 8x8 board, rows and columns numbered 1 through 8, showing one placement of eight nonattacking queens with one Q in each row; the column positions are not legible in this copy.]

Since each queen must be on a different row, we can without loss of generality assume queen i is to be placed on row i. All solutions to the 8-queens problem can therefore be represented as 8-tuples (x1, . . . , x8), where xi is the column on which queen i is placed.

Algorithm NQueens(k, n)

// Using backtracking, this procedure prints all possible placements of n queens on
// an n x n chessboard so that they are nonattacking.
{
    for i := 1 to n do
    {
        if (Place(k, i)) then
        {
            x[k] := i;
            if (k = n) then write(x[1:n]);
            else NQueens(k+1, n);
        }
    }
}

Algorithm Place(k,i)

// Returns true if a queen can be placed in kth row and ith column. Otherwise it

// returns false. x[] is a global array whose first (k-1) values have been set. Abs(r)

// returns the absolute value of r.

{

for j:= 1 to k-1 do

if((x[j]=i) //Two in the same column

or(Abs(x[j]-i) = Abs(j-k))) // or in the same diagonal

then return false;

return true;

}
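NQueens and Place can be combined into a runnable 0-indexed Python sketch (collecting the solutions in a list instead of printing; names are ours):

```python
def place_ok(x, k, i):
    """True if column i is safe for the queen in row k, given rows 0..k-1 in x."""
    for j in range(k):
        if x[j] == i or abs(x[j] - i) == abs(j - k):  # same column or same diagonal
            return False
    return True

def n_queens(n):
    """Return all solutions as tuples of column indices, one entry per row."""
    solutions = []
    def solve(k, x):
        for i in range(n):
            if place_ok(x, k, i):
                x.append(i)
                if k == n - 1:
                    solutions.append(tuple(x))
                else:
                    solve(k + 1, x)
                x.pop()          # backtrack: undo the placement for row k
    solve(0, [])
    return solutions
```

For n = 4 there are exactly two solutions; for n = 8 there are 92.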

18. GRAPH COLORING: Let G be a graph and m be a given positive integer. We want to discover whether the nodes of G can be colored in such a way that no two adjacent nodes have the same color, yet only m colors are used. Note that if d is the degree of the given graph, then it can be colored with d+1 colors. The m-colorability optimization problem asks for the smallest integer m for which the graph G can be colored. This integer is referred to as the chromatic number of the graph.

[Figure: an example graph and its coloring, using colors 1, 2, and 3.]

Algorithm mColoring(k)

// This algorithm was formed using the recursive backtracking schema. The graph is
// represented by its Boolean adjacency matrix G[1:n, 1:n]. All assignments of 1, 2, ..., m
// to the vertices of the graph such that adjacent vertices are assigned distinct integers
// are printed. k is the index of the next vertex to color.
{
    repeat
    { // Generate all legal assignments for x[k].
        NextValue(k); // Assign to x[k] a legal color.
        if (x[k] = 0) then return; // No new color possible.
        if (k = n) then // At most m colors have been used to color the n vertices.
            write(x[1:n]);
        else
            mColoring(k+1);
    } until (false);
}

Algorithm NextValue(k)

// x[1], ..., x[k-1] have been assigned integer values in the range [1, m] such that
// adjacent vertices have distinct integers. A value for x[k] is determined in the
// range [0, m]. x[k] is assigned the next highest numbered color while maintaining
// distinctness from the adjacent vertices of vertex k. If no such color exists, then
// x[k] is 0.
{
    repeat
    {
        x[k] := (x[k] + 1) mod (m+1); // Next highest color.
        if (x[k] = 0) then return; // All colors have been used.
        for j := 1 to n do
        {
            // Check if this color is distinct from adjacent colors.
            if ((G[k, j] ≠ 0) and (x[k] = x[j]))
            // If (k, j) is an edge and adjacent vertices have the same color.
                then break;
        }
        if (j = n+1) then return; // New color found.
    } until (false); // Otherwise try to find another color.
}
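mColoring and NextValue can be sketched together in Python (0-indexed vertices, colors 1..m, collecting the assignments instead of printing; an adaptation of ours, not the manual's exact schema):

```python
def m_coloring(adj, m):
    """All assignments of colors 1..m to vertices so adjacent vertices differ.
    adj: n x n Boolean (or 0/1) adjacency matrix. Returns a list of color tuples."""
    n = len(adj)
    x = [0] * n                   # x[k] = 0 means vertex k is uncolored
    out = []
    def color(k):
        for c in range(1, m + 1):
            # the color must be distinct from all already-colored neighbours
            if all(not adj[k][j] or x[j] != c for j in range(k)):
                x[k] = c
                if k == n - 1:
                    out.append(tuple(x))
                else:
                    color(k + 1)
                x[k] = 0          # backtrack and try the next color
    color(0)
    return out
```

For a triangle and m = 3 this yields all 3! = 6 proper colorings; with m = 2 it yields none.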

19. HAMILTONIAN CYCLES: Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle is a round-trip path along n edges of G that visits every vertex once and returns to its starting position. In other words, if a Hamiltonian cycle begins at some vertex v1 ∈ G and the vertices of G are visited in the order v1, v2, . . . , vn+1, then the edges (vi, vi+1) are in E, 1 ≤ i ≤ n, and the vi are distinct except for v1 and vn+1, which are equal.

Algorithm...